quimb.tensor

Tensor and tensor network functionality.

Submodules

Attributes

Classes

Circuit

Class for simulating quantum circuits using tensor networks. The class

CircuitDense

Quantum circuit simulation keeping the state in full dense form.

CircuitMPS

Quantum circuit simulation keeping the state always in an MPS form. If

CircuitPermMPS

Quantum circuit simulation keeping the state always in an MPS form, but

Gate

A simple class for storing the details of a quantum circuit gate.

TNOptimizer

Globally optimize tensors within a tensor network with respect to any

Dense1D

Mimics other 1D tensor network structures, but really just keeps the

MatrixProductOperator

Initialise a matrix product operator, with auto labelling and tagging.

MatrixProductState

Initialise a matrix product state, with auto labelling and tagging.

SuperOperator1D

A 1D tensor network super-operator class:

TensorNetwork1D

Base class for tensor networks with a one-dimensional structure.

TNLinearOperator1D

A 1D tensor network linear operator like:

TEBD

Class implementing Time Evolving Block Decimation (TEBD) [1].

LocalHam1D

A simple interacting Hamiltonian object used, for instance, in TEBD.

PEPO

Projected Entangled Pair Operator object:

PEPS

Projected Entangled Pair States object (2D):

TensorNetwork2D

Mixin class for tensor networks with a square lattice two-dimensional

TEBD2D

Generic class for performing two dimensional time evolving block

FullUpdate

Implements the 'Full Update' version of 2D imaginary time evolution,

LocalHam2D

A 2D Hamiltonian represented as local terms. This combines all two site

SimpleUpdate

A simple subclass of TEBD2D that overrides two key methods in

PEPS3D

Projected Entangled Pair States object (3D).

TensorNetwork3D

Mixin class for tensor networks with a cubic lattice three-dimensional

LocalHam3D

Representation of a local hamiltonian defined on a general graph. This

LocalHamGen

Representation of a local hamiltonian defined on a general graph. This

SimpleUpdateGen

Simple update for arbitrary geometry hamiltonians.

TEBDGen

Generic class for performing time evolving block decimation on an

SpinHam1D

Class for easily building custom spin hamiltonians in MPO or LocalHam1D

IsoTensor

A Tensor subclass which keeps its left_inds by default even

PTensor

A tensor whose data array is lazily generated from a set of parameters

Tensor

A labelled, tagged n-dimensional array. The index labels are used

TensorNetwork

A collection of (as yet uncontracted) Tensors.

oset

An ordered set which stores elements as the keys of dict (ordered as of

DMRG

Density Matrix Renormalization Group variational groundstate search.

DMRG1

Simple alias of one site DMRG.

DMRG2

Simple alias of two site DMRG.

DMRGX

Class implementing DMRG-X [1], whereby local effective energy eigenstates

MovingEnvironment

Helper class for efficiently moving the effective 'environment' of a

MERA

The Multi-scale Entanglement Renormalization Ansatz (MERA) state:

Functions

circ_a2a_rand(n, depth[, seed, gate2])

circ_ansatz_1D_brickwork(n, depth[, cyclic, gate2, seed])

A 1D circuit ansatz with odd and even layers of entangling

circ_ansatz_1D_rand(n, depth[, seed, cyclic, gate2, ...])

A 1D circuit ansatz with randomly placed entangling gates interleaved

circ_ansatz_1D_zigzag(n, depth[, gate2, seed])

A 1D circuit ansatz with forward and backward layers of entangling

circ_qaoa(terms, depth, gammas, betas, **circuit_opts)

Generate the QAOA circuit for the weighted graph described by terms.

array_contract(arrays, inputs[, output, optimize, backend])

contract_backend(backend[, set_globally])

A context manager to temporarily set the default backend used for tensor

contract_strategy(strategy[, set_globally])

A context manager to temporarily set the default contraction strategy

get_contract_backend()

Get the default backend used for tensor contractions, via 'cotengra'.

get_contract_strategy()

Get the default contraction strategy - the option supplied as

get_tensor_linop_backend()

Get the default backend used for tensor network linear operators, via

inds_to_eq(inputs[, output])

Turn input and output indices of any sort into a single 'equation'

set_contract_backend(backend)

Set the default backend used for tensor contractions, via 'cotengra'.

set_contract_strategy(strategy)

Set the default contraction strategy - the option supplied as

set_tensor_linop_backend(backend)

Set the default backend used for tensor network linear operators, via

tensor_linop_backend(backend[, set_globally])

A context manager to temporarily set the default backend used for tensor

edges_1d_chain(L[, cyclic])

Return the graph edges of a finite 1D chain lattice.

edges_2d_hexagonal(Lx, Ly[, cyclic, cells])

Return the graph edges of a finite 2D hexagonal lattice. There are two

edges_2d_kagome(Lx, Ly[, cyclic, cells])

Return the graph edges of a finite 2D kagome lattice. There are

edges_2d_square(Lx, Ly[, cyclic, cells])

Return the graph edges of a finite 2D square lattice. The nodes

edges_2d_triangular(Lx, Ly[, cyclic, cells])

Return the graph edges of a finite 2D triangular lattice. There is a

edges_2d_triangular_rectangular(Lx, Ly[, cyclic, cells])

Return the graph edges of a finite 2D triangular lattice tiled in a

edges_3d_cubic(Lx, Ly, Lz[, cyclic, cells])

Return the graph edges of a finite 3D cubic lattice. The nodes

edges_3d_diamond(Lx, Ly, Lz[, cyclic, cells])

Return the graph edges of a finite 3D diamond lattice. There are

edges_3d_diamond_cubic(Lx, Ly, Lz[, cyclic, cells])

Return the graph edges of a finite 3D diamond lattice tiled in a cubic

edges_3d_pyrochlore(Lx, Ly, Lz[, cyclic, cells])

Return the graph edges of a finite 3D pyrochlore lattice. There are

edges_tree_rand(n[, max_degree, seed])

Return a random tree with n nodes. This is a convenience function for

jax_register_pytree()

pack(obj)

Take a tensor or tensor network like object and return a skeleton needed

unpack(params, skeleton)

Take a skeleton of a tensor or tensor network like object and a pytree

expec_TN_1D(*tns[, compress, eps])

Compute the expectation of several 1D TNs, using transfer matrix

gate_TN_1D(tn, G, where[, contract, tags, ...])

Act with the gate G on sites where, maintaining the outer

superop_TN_1D(tn_super, tn_op[, upper_ind_id, ...])

Take a tensor network superoperator and act with it on a

enforce_1d_like(tn[, site_tags, fix_bonds, inplace])

Check that tn is 1D-like with OBC, i.e. 1) that each tensor has

tensor_network_1d_compress(tn[, max_bond, cutoff, ...])

Compress a 1D-like tensor network using the specified method.

gen_2d_bonds(Lx, Ly[, steppers, coo_filter, cyclic])

Convenience function for tiling pairs of bond coordinates on a 2D

gen_3d_bonds(Lx, Ly, Lz[, steppers, coo_filter, cyclic])

Convenience function for tiling pairs of bond coordinates on a 3D

tensor_network_align(*tns[, ind_ids, trace, inplace])

Align an arbitrary number of tensor networks in a stack-like geometry:

tensor_network_apply_op_op(A, B[, which_A, which_B, ...])

Apply the operator (has upper and lower site inds) represented by tensor

tensor_network_apply_op_vec(A, x[, which_A, contract, ...])

Apply a general tensor network representing an operator (has

edge_coloring(edges[, strategy, interchange, group])

Generate an edge coloring for the graph given by edges, using

MPS_COPY(L[, phys_dim, dtype])

Build a matrix product state representation of the COPY tensor.

HTN2D_classical_ising_partition_function(Lx, Ly, beta)

Hyper tensor network representation of the 2D classical ising model

HTN3D_classical_ising_partition_function(Lx, Ly, Lz, beta)

Hyper tensor network representation of the 3D classical ising model

HTN_classical_partition_function_from_edges(edges, beta)

Build a hyper tensor network representation of a classical ising model

HTN_CP_from_sites_and_fill_fn(fill_fn, sites, D[, ...])

Create a CP-decomposition structured hyper tensor network state from a

HTN_dual_from_edges_and_fill_fn(fill_fn, edges, D[, ...])

Create a hyper tensor network with a tensor on each bond and a hyper

HTN_from_clauses(clauses[, weights, mode, dtype, ...])

Given a list of clauses, create a hyper tensor network, with a single

HTN_from_cnf(fname[, mode, dtype, clause_tag_id, ...])

Create a hyper tensor network from a '.cnf' or '.wcnf' file - i.e. a

HTN_random_ksat(k, num_variables[, num_clauses, ...])

Create a random k-SAT instance encoded as a hyper tensor network.

MPO_ham_heis(L[, j, bz, S, cyclic])

Heisenberg Hamiltonian in MPO form.

MPO_ham_ising(L[, j, bx, S, cyclic])

Ising Hamiltonian in MPO form.

MPO_ham_mbl(L, dh[, j, seed, S, cyclic, dh_dist, ...])

The many-body-localized spin hamiltonian in MPO form.

MPO_ham_XY(L[, j, bz, S, cyclic])

XY-Hamiltonian in MPO form.

MPO_identity(L[, sites, phys_dim, dtype, cyclic])

Generate an identity MPO of size L.

MPO_identity_like(mpo, **mpo_opts)

Return an identity matrix operator with the same physical index and

MPO_product_operator(arrays[, cyclic])

Return an MPO of bond dimension 1 representing the product of raw

MPO_rand(L, bond_dim[, phys_dim, normalize, cyclic, ...])

Generate a random matrix product operator.

MPO_rand_herm(L, bond_dim[, phys_dim, normalize, dtype])

Generate a random hermitian matrix product operator.

MPO_zeros(L[, phys_dim, dtype, cyclic])

Generate a zeros MPO of size L.

MPO_zeros_like(mpo, **mpo_opts)

Return a zeros matrix product operator with the same physical index and

MPS_computational_state(binary[, dtype, cyclic])

A computational basis state in Matrix Product State form.

MPS_ghz_state(L[, dtype])

Build the chi=2 OBC MPS representation of the GHZ state.

MPS_neel_state(L[, down_first, dtype])

Generate the Neel state in Matrix Product State form.

MPS_product_state(arrays[, cyclic])

Generate a product state in MatrixProductState form, i.e.

MPS_rand_computational_state(L[, dtype])

Generate a random computational basis state, like '01101001010'.

MPS_rand_state(L, bond_dim[, phys_dim, normalize, ...])

Generate a random matrix product state.

MPS_sampler(L[, dtype, squeeze])

A product state for sampling tensor network traces. Seen as a vector it

MPS_w_state(L[, dtype])

Build the chi=2 OBC MPS representation of the W state.

MPS_zero_state(L[, bond_dim, phys_dim, cyclic, dtype])

The all-zeros MPS state, of given bond-dimension.

TN2D_classical_ising_partition_function(Lx, Ly, beta)

The tensor network representation of the 2D classical ising model

TN2D_corner_double_line(Lx, Ly[, line_dim, tiling, ...])

Build a 2D 'corner double line' (CDL) tensor network. Each plaquette

TN2D_embedded_classical_ising_partition_function(Jij, beta)

Construct a (triangular) '2D' tensor network representation of the

TN2D_empty(Lx, Ly, D[, cyclic, site_tag_id, x_tag_id, ...])

A scalar 2D lattice tensor network initialized with empty tensors.

TN2D_from_fill_fn(fill_fn, Lx, Ly, D[, cyclic, ...])

A scalar 2D lattice tensor network with tensors filled by a function.

TN2D_rand(Lx, Ly, D[, cyclic, site_tag_id, x_tag_id, ...])

A random scalar 2D lattice tensor network.

TN2D_rand_hidden_loop(Lx, Ly, *[, cyclic, line_dim, ...])

TN2D_rand_symmetric(Lx, Ly, D[, cyclic, site_tag_id, ...])

Create a random 2D lattice tensor network where every tensor is

TN2D_with_value(value, Lx, Ly, D[, cyclic, ...])

A scalar 2D lattice tensor network with every element set to value.

TN3D_classical_ising_partition_function(Lx, Ly, Lz, beta)

Tensor network representation of the 3D classical ising model

TN3D_corner_double_line(Lx, Ly, Lz[, line_dim, ...])

TN3D_empty(Lx, Ly, Lz, D[, cyclic, site_tag_id, ...])

A scalar 3D lattice tensor network initialized with empty tensors.

TN3D_from_fill_fn(fill_fn, Lx, Ly, Lz, D[, cyclic, ...])

A scalar 3D lattice tensor network with tensors filled by a function.

TN3D_rand(Lx, Ly, Lz, D[, cyclic, site_tag_id, ...])

A random scalar 3D lattice tensor network.

TN3D_rand_hidden_loop(Lx, Ly, Lz, *[, cyclic, ...])

TN3D_with_value(value, Lx, Ly, Lz, D[, cyclic, ...])

A scalar 3D lattice tensor network with every element set to value.

TN_classical_partition_function_from_edges(edges, beta)

Build a regular tensor network representation of a classical ising model

TN_dimer_covering_from_edges(edges[, cover_count, ...])

Make a tensor network from sequence of graph edges that counts the

TN_from_edges_and_fill_fn(fill_fn, edges, D[, ...])

Create a tensor network from a sequence of edges defining a graph,

TN_from_edges_empty(edges, D[, phys_dim, site_tag_id, ...])

Create a tensor network from a sequence of edges defining a graph,

TN_from_edges_rand(edges, D[, phys_dim, seed, dtype, ...])

Create a random tensor network with geometry defined from a sequence

TN_from_edges_with_value(value, edges, D[, phys_dim, ...])

Create a tensor network from a sequence of edges defining a graph,

TN_from_sites_computational_state(site_map[, ...])

A computational basis state in general tensor network form.

TN_from_sites_product_state(site_map[, site_tag_id, ...])

A product state in general tensor network form.

TN_from_strings(strings[, fill_fn, line_dim, ...])

TN_matching(tn, max_bond[, site_tags, fill_fn, dtype])

Create a tensor network with the same outer indices as tn but

TN_rand_reg(n, reg, D[, phys_dim, seed, dtype, ...])

Create a random regular tensor network.

TN_rand_tree(n, D[, phys_dim, max_degree, seed, ...])

Create a random tree tensor network.

cnf_file_parse(fname)

Parse a DIMACS style 'cnf' file into a list of clauses, and possibly a

convert_to_2d(tn[, Lx, Ly, site_tag_id, x_tag_id, ...])

Convert tn to a TensorNetwork2D,

convert_to_3d(tn[, Lx, Ly, Lz, site_tag_id, x_tag_id, ...])

Convert tn to a TensorNetwork3D,

ham_1d_heis([L, j, bz, S, cyclic])

Heisenberg Hamiltonian in

ham_1d_ising([L, j, bx, S, cyclic])

Ising Hamiltonian in

ham_1d_mbl(L, dh[, j, seed, S, cyclic, dh_dist, ...])

The many-body-localized spin hamiltonian in

ham_1d_XY([L, j, bz, S, cyclic])

XY-Hamiltonian in

ham_2d_heis(Lx, Ly[, j, bz])

Heisenberg Hamiltonian in

ham_2d_ising(Lx, Ly[, j, bx])

Ising Hamiltonian in

ham_2d_j1j2(Lx, Ly[, j1, j2, bz])

Heisenberg Hamiltonian in

ham_3d_heis(Lx, Ly, Lz[, j, bz])

Heisenberg Hamiltonian in

rand_phased(shape, inds[, tags, dtype])

Generate a random tensor with specified shape and inds, and randomly

rand_tensor(shape, inds[, tags, dtype, dist, scale, ...])

Generate a random tensor with specified shape and inds.

random_ksat_instance(k, num_variables[, num_clauses, ...])

Create a random k-SAT instance.

COPY_tensor(d, inds[, tags, dtype])

Get the tensor representing the COPY operation with dimension size

bonds(t1, t2)

Get any indices connecting the Tensor(s) or TensorNetwork(s) t1

bonds_size(t1, t2)

Get the size of the bonds linking tensors or tensor networks t1 and

connect(t1, t2, ax1, ax2)

Connect two tensors by setting a shared index for the specified

group_inds(t1, t2)

Group bonds into left only, shared, and right only. If t1 or t2

new_bond(T1, T2[, size, name, axis1, axis2])

Inplace addition of a new bond between tensors T1 and T2. The

rand_uuid([base])

Return a guaranteed unique, shortish identifier, optionally appended

tensor_balance_bond(t1, t2[, smudge])

Gauge the bond between two tensors such that the norm of the 'columns'

tensor_canonize_bond(T1, T2[, absorb, gauges, ...])

Inplace 'canonization' of two tensors. This gauges the bond between

tensor_compress_bond(T1, T2[, reduced, absorb, ...])

Inplace compress between the two single tensors. It follows the

tensor_contract(*tensors[, output_inds, optimize, ...])

Contract a collection of tensors into a scalar or tensor, automatically

tensor_direct_product(T1, T2[, sum_inds, inplace])

Direct product of two Tensors. Any axes included in sum_inds must be

tensor_fuse_squeeze(t1, t2[, squeeze, gauges])

If t1 and t2 share more than one bond fuse it, and if the size

tensor_network_distance(tnA, tnB[, xAA, xAB, xBB, ...])

Compute the Frobenius norm distance between two tensor networks:

tensor_network_fit_als(tn, tn_target[, tags, steps, ...])

Optimize the fit of tn with respect to tn_target using

tensor_network_fit_autodiff(tn, tn_target[, steps, ...])

Optimize the fit of tn with respect to tn_target using

tensor_network_gate_inds(self, G, inds[, contract, ...])

Apply the 'gate' G to indices inds, propagating them to the

tensor_network_sum(tnA, tnB[, inplace])

Sum of two tensor networks, whose indices should match exactly, using

tensor_split(T, left_inds[, method, get, absorb, ...])

Decompose this tensor into two tensors.

Package Contents

class quimb.tensor.Circuit(N=None, psi0=None, gate_opts=None, gate_contract='auto-split-gate', gate_propagate_tags='register', tags=None, psi0_dtype='complex128', psi0_tag='PSI0', tag_gate_numbers=True, gate_tag_id='GATE_{}', tag_gate_rounds=True, round_tag_id='ROUND_{}', tag_gate_labels=True, bra_site_ind_id='b{}', to_backend=None)[source]

Class for simulating quantum circuits using tensor networks. The class keeps a list of Gate objects in sync with a tensor network representing the current state of the circuit.

Parameters:
  • N (int, optional) – The number of qubits.

  • psi0 (TensorNetwork1DVector, optional) – The initial state, assumed to be |00000....0> if not given. The state is always copied and the tag PSI0 added.

  • gate_opts (dict_like, optional) – Default keyword arguments to supply to each gate_TN_1D() call during the circuit.

  • gate_contract (str, optional) – Shortcut for setting the default ‘contract’ option in gate_opts.

  • gate_propagate_tags (str, optional) – Shortcut for setting the default ‘propagate_tags’ option in gate_opts.

  • tags (str or sequence of str, optional) – Tag(s) to add to the initial wavefunction tensors (whether these are propagated to the rest of the circuit’s tensors depends on gate_opts).

  • psi0_dtype (str, optional) – Ensure the initial state has this dtype.

  • psi0_tag (str, optional) – Ensure the initial state has this tag.

  • tag_gate_numbers (bool, optional) – Whether to tag each gate tensor with its number in the circuit, like "GATE_{g}". This is required for updating the circuit parameters.

  • gate_tag_id (str, optional) – The format string for tagging each gate tensor, by default e.g. "GATE_{g}".

  • tag_gate_rounds (bool, optional) – Whether to tag each gate tensor with its round in the circuit, like "ROUND_{r}".

  • round_tag_id (str, optional) – The format string for tagging each round of gates, by default e.g. "ROUND_{r}".

  • tag_gate_labels (bool, optional) – Whether to tag each gate tensor with its gate type label, e.g. {"X_1/2", "ISWAP", "CCX", ...}.

  • bra_site_ind_id (str, optional) – Use this to label ‘bra’ site indices when creating certain (mostly internal) intermediate tensor networks.

psi

The current circuit wavefunction as a tensor network.

Type:

TensorNetwork1DVector

uni

The current circuit unitary operator as a tensor network.

Type:

TensorNetwork1DOperator

gates

The gates in the circuit.

Type:

tuple[Gate]

Examples

Create a 3-qubit GHZ state:

>>> qc = qtn.Circuit(3)
>>> gates = [
        ('H', 0),
        ('H', 1),
        ('CNOT', 1, 2),
        ('CNOT', 0, 2),
        ('H', 0),
        ('H', 1),
        ('H', 2),
    ]
>>> qc.apply_gates(gates)
>>> qc.psi
<TensorNetwork1DVector(tensors=12, indices=14, L=3, max_bond=2)>
>>> qc.psi.to_dense().round(4)
qarray([[ 0.7071+0.j],
        [ 0.    +0.j],
        [ 0.    +0.j],
        [-0.    +0.j],
        [-0.    +0.j],
        [ 0.    +0.j],
        [ 0.    +0.j],
        [ 0.7071+0.j]])
>>> for b in qc.sample(10):
...     print(b)
000
000
111
000
111
111
000
111
000
000

See also

Gate

tag_gate_numbers
tag_gate_rounds
tag_gate_labels
to_backend
gate_opts
_gates = []
_ket_site_ind_id
_bra_site_ind_id
_gate_tag_id
_round_tag_id
_sample_n_gates
_storage
_sampled_conditionals
copy()[source]

Copy the circuit and its state.

apply_to_arrays(fn)[source]

Apply a function to all the arrays in the circuit.

get_params()[source]

Get a pytree - in this case a dict - of all the parameters in the circuit.

Returns:

A dictionary mapping gate numbers to their parameters.

Return type:

dict[int, tuple]

set_params(params)[source]

Set the parameters of the circuit.

Parameters:

params (dict) – A dictionary mapping gate numbers to the new parameters.
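
As a minimal sketch of the get_params() / set_params() round trip (assuming a single parametrized gate; the exact contents of the returned dict depend on which gates carry parameters):

>>> import quimb.tensor as qtn
>>> circ = qtn.Circuit(1)
>>> circ.rx(0.1, 0, parametrize=True)
>>> params = circ.get_params()    # dict mapping gate numbers to parameter tuples
>>> params[0] = (0.2,)            # update the rotation angle of gate number 0
>>> circ.set_params(params)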

classmethod from_qsim_str(contents, **circuit_opts)[source]

Generate a Circuit instance from a ‘qsim’ string.

classmethod from_qsim_file(fname, **circuit_opts)[source]

Generate a Circuit instance from a ‘qsim’ file.

The qsim file format is described here: https://quantumai.google/qsim/input_format.

classmethod from_qsim_url(url, **circuit_opts)[source]

Generate a Circuit instance from a ‘qsim’ url.

from_qasm[source]
from_qasm_file[source]
from_qasm_url[source]
classmethod from_openqasm2_str(contents, **circuit_opts)[source]

Generate a Circuit instance from an OpenQASM 2.0 string.

classmethod from_openqasm2_file(fname, **circuit_opts)[source]

Generate a Circuit instance from an OpenQASM 2.0 file.

classmethod from_openqasm2_url(url, **circuit_opts)[source]

Generate a Circuit instance from an OpenQASM 2.0 url.

classmethod from_gates(gates, N=None, progbar=False, **kwargs)[source]

Generate a Circuit instance from a sequence of gates.

Parameters:
  • gates (sequence[Gate] or sequence[tuple]) – The sequence of gates to apply.

  • N (int, optional) – The number of qubits. If not given, will be inferred from the gates.

  • progbar (bool, optional) – Whether to show a progress bar.

  • kwargs – Supplied to the Circuit constructor.

property gates
property num_gates
ket_site_ind(i)[source]

Get the site index for the given qubit.

bra_site_ind(i)[source]

Get the ‘bra’ site index for the given qubit, if forming an operator.

gate_tag(g)[source]

Get the tag for the given gate, indexed linearly.

round_tag(r)[source]

Get the tag for the given round (/layer).

_init_state(N, dtype='complex128')[source]
_apply_gate(gate, tags=None, **gate_opts)[source]

Apply a Gate to this Circuit. This is the main method that all calls to apply a gate should go through.

Parameters:
  • gate (Gate) – The gate to apply.

  • tags (str or sequence of str, optional) – Tags to add to the gate tensor(s).

apply_gate(gate_id, *gate_args, params=None, qubits=None, controls=None, gate_round=None, parametrize=None, **gate_opts)[source]

Apply a single gate to this tensor network quantum circuit. If gate_round is supplied the tensor(s) added will be tagged with 'ROUND_{gate_round}'. Alternatively, putting an integer first like so:

circuit.apply_gate(10, 'H', 7)

is automatically translated to:

circuit.apply_gate('H', 7, gate_round=10)
Parameters:
  • gate_id (Gate, str, or array_like) –

    Which gate to apply. This can be:

    • A Gate instance, i.e. with parameters and qubits already specified.

    • A string, e.g. 'H', 'U3', etc. in which case gate_args should be supplied with (*params, *qubits).

    • A raw array, in which case gate_args should be supplied with (*qubits,).

  • gate_args (list[str]) – The arguments to supply to it.

  • gate_round (int, optional) – The gate round. If gate_id is integer-like, the round is taken from it instead, and the gate is then parsed as gate_id, gate_args = gate_args[0], gate_args[1:].

  • gate_opts – Supplied to the gate function, options here will override the default gate_opts.
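
For illustration, a minimal sketch of the accepted forms described above (named gates, parametrized gates, raw arrays, and the integer-first round shortcut):

>>> import numpy as np
>>> import quimb.tensor as qtn
>>> circ = qtn.Circuit(2)
>>> circ.apply_gate('H', 0)                  # named gate, no parameters
>>> circ.apply_gate('RX', 0.5, 1)            # parametrized gate: (*params, *qubits)
>>> U = np.array([[1, 1], [1, -1]]) / 2**0.5
>>> circ.apply_gate(U, 0)                    # raw array gate: (*qubits,)
>>> circ.apply_gate(3, 'CNOT', 0, 1)         # integer first -> gate_round=3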

apply_gate_raw(U, where, controls=None, gate_round=None, **gate_opts)[source]

Apply the raw array U as a gate on qubits in where. It will be assumed to be unitary for the sake of computing reverse lightcones.

apply_gates(gates, progbar=False, **gate_opts)[source]

Apply a sequence of gates to this tensor network quantum circuit.

Parameters:
  • gates (Sequence[Gate] or Sequence[Tuple]) – The sequence of gates to apply.

  • gate_opts – Supplied to apply_gate().

h(i, gate_round=None, **kwargs)[source]
x(i, gate_round=None, **kwargs)[source]
y(i, gate_round=None, **kwargs)[source]
z(i, gate_round=None, **kwargs)[source]
s(i, gate_round=None, **kwargs)[source]
sdg(i, gate_round=None, **kwargs)[source]
t(i, gate_round=None, **kwargs)[source]
tdg(i, gate_round=None, **kwargs)[source]
x_1_2(i, gate_round=None, **kwargs)[source]
y_1_2(i, gate_round=None, **kwargs)[source]
z_1_2(i, gate_round=None, **kwargs)[source]
w_1_2(i, gate_round=None, **kwargs)[source]
hz_1_2(i, gate_round=None, **kwargs)[source]
cnot(i, j, gate_round=None, **kwargs)[source]
cx(i, j, gate_round=None, **kwargs)[source]
cy(i, j, gate_round=None, **kwargs)[source]
cz(i, j, gate_round=None, **kwargs)[source]
iswap(i, j, gate_round=None, **kwargs)[source]
iden(i, gate_round=None)[source]
swap(i, j, gate_round=None, **kwargs)[source]
rx(theta, i, gate_round=None, parametrize=False, **kwargs)[source]
ry(theta, i, gate_round=None, parametrize=False, **kwargs)[source]
rz(theta, i, gate_round=None, parametrize=False, **kwargs)[source]
u3(theta, phi, lamda, i, gate_round=None, parametrize=False, **kwargs)[source]
u2(phi, lamda, i, gate_round=None, parametrize=False, **kwargs)[source]
u1(lamda, i, gate_round=None, parametrize=False, **kwargs)[source]
phase(lamda, i, gate_round=None, parametrize=False, **kwargs)[source]
cu3(theta, phi, lamda, i, j, gate_round=None, parametrize=False, **kwargs)[source]
cu2(phi, lamda, i, j, gate_round=None, parametrize=False, **kwargs)[source]
cu1(lamda, i, j, gate_round=None, parametrize=False, **kwargs)[source]
cphase(lamda, i, j, gate_round=None, parametrize=False, **kwargs)[source]
fsim(theta, phi, i, j, gate_round=None, parametrize=False, **kwargs)[source]
fsimg(theta, zeta, chi, gamma, phi, i, j, gate_round=None, parametrize=False, **kwargs)[source]
givens(theta, i, j, gate_round=None, parametrize=False, **kwargs)[source]
givens2(theta, phi, i, j, gate_round=None, parametrize=False, **kwargs)[source]
rxx(theta, i, j, gate_round=None, parametrize=False, **kwargs)[source]
ryy(theta, i, j, gate_round=None, parametrize=False, **kwargs)[source]
rzz(theta, i, j, gate_round=None, parametrize=False, **kwargs)[source]
crx(theta, i, j, gate_round=None, parametrize=False, **kwargs)[source]
cry(theta, i, j, gate_round=None, parametrize=False, **kwargs)[source]
crz(theta, i, j, gate_round=None, parametrize=False, **kwargs)[source]
su4(theta1, phi1, lamda1, theta2, phi2, lamda2, theta3, phi3, lamda3, theta4, phi4, lamda4, t1, t2, t3, i, j, gate_round=None, parametrize=False, **kwargs)[source]
ccx(i, j, k, gate_round=None, **kwargs)[source]
ccnot(i, j, k, gate_round=None, **kwargs)[source]
toffoli(i, j, k, gate_round=None, **kwargs)[source]
ccy(i, j, k, gate_round=None, **kwargs)[source]
ccz(i, j, k, gate_round=None, **kwargs)[source]
cswap(i, j, k, gate_round=None, **kwargs)[source]
fredkin(i, j, k, gate_round=None, **kwargs)[source]
property psi

Tensor network representation of the wavefunction.

get_uni(transposed=False)[source]

Tensor network representation of the unitary operator (i.e. with the initial state removed).

property uni
get_reverse_lightcone_tags(where)[source]

Get the tags of gates in this circuit corresponding to the ‘reverse’ lightcone propagating backwards from registers in where.

Parameters:

where (int or sequence of int) – The register or registers to get the reverse lightcone of.

Returns:

The sequence of gate tags (GATE_{i}, …) corresponding to the lightcone.

Return type:

tuple[str]

get_psi_reverse_lightcone(where, keep_psi0=False)[source]

Get just the bit of the wavefunction in the reverse lightcone of sites in where - i.e. causally linked.

Parameters:
  • where (int, or sequence of int) – The sites to propagate the lightcone back from, supplied to get_reverse_lightcone_tags().

  • keep_psi0 (bool, optional) – Keep the tensors corresponding to the initial wavefunction regardless of whether they are outside of the lightcone.

Returns:

psi_lc

Return type:

TensorNetwork1DVector

clear_storage()[source]

Clear all cached data.

_maybe_init_storage()[source]
get_psi_simplified(seq='ADCRS', atol=1e-12, equalize_norms=False)[source]

Get the full wavefunction post local tensor network simplification.

Parameters:
  • seq (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

Returns:

psi

Return type:

TensorNetwork1DVector

get_rdm_lightcone_simplified(where, seq='ADCRS', atol=1e-12, equalize_norms=False)[source]

Get a simplified TN of the norm of the wavefunction, with gates outside reverse lightcone of where cancelled, and physical indices within where preserved so that they can be fixed (sliced) or used as output indices.

Parameters:
  • where (int or sequence of int) – The region assumed to be the target density matrix essentially. Supplied to get_reverse_lightcone_tags().

  • seq (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

Return type:

TensorNetwork

amplitude(b, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=True, backend=None, dtype='complex128', rehearse=False)[source]

Get the amplitude coefficient of bitstring b.

\[c_b = \langle b | \psi \rangle\]
Parameters:
  • b (str or sequence of int) – The bitstring to compute the transition amplitude for.

  • optimize (str, optional) – Contraction path optimizer to use for the amplitude, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to cotengra.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • rehearse (bool or "tn", optional) – If True, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. Returns a dict with keys "tn" and 'tree' with the tensor network that will be contracted and the corresponding contraction tree if so.
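
For example, a minimal sketch using a small GHZ-style circuit, for which the amplitude of '000' is 1/sqrt(2):

>>> import quimb.tensor as qtn
>>> circ = qtn.Circuit(3)
>>> circ.h(0)
>>> circ.cx(0, 1)
>>> circ.cx(1, 2)
>>> circ.amplitude('000')    # approximately 0.7071 + 0j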

amplitude_rehearse(b='random', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=True, optimize='auto-hq', dtype='complex128', rehearse=True)[source]

Perform just the tensor network simplifications and contraction tree finding associated with computing a single amplitude (caching the results) but don’t perform the actual contraction.

Parameters:
  • b ('random', str or sequence of int) – The bitstring to rehearse computing the transition amplitude for, if 'random' (the default) a random bitstring will be used.

  • optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to cotengra.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

Return type:

dict

amplitude_tn[source]
partial_trace(keep, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=True, backend=None, dtype='complex128', rehearse=False)[source]

Perform the partial trace on the circuit wavefunction, retaining only qubits in keep, and making use of reverse lightcone cancellation:

\[\rho_{\bar{q}} = Tr_{\bar{p}} |\psi_{\bar{q}} \rangle \langle \psi_{\bar{q}}|\]

Where \(\bar{q}\) is the set of qubits to keep, \(\psi_{\bar{q}}\) is the circuit wavefunction only with gates in the causal cone of this set, and \(\bar{p}\) is the remaining qubits.

Parameters:
  • keep (int or sequence of int) – The qubit(s) to keep as we trace out the rest.

  • optimize (str, optional) – Contraction path optimizer to use for the reduced density matrix, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to cotengra.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • rehearse (bool or "tn", optional) – If True, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. Returns a dict with keys "tn" and 'tree' with the tensor network that will be contracted and the corresponding contraction tree if so.

Return type:

array or dict
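
For example (a minimal sketch, reusing a small circuit circ such as the GHZ example above):

>>> rho = circ.partial_trace(keep=(0, 1))    # reduced density matrix of qubits 0 and 1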

partial_trace_rehearse[source]
partial_trace_tn[source]
local_expectation(G, where, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=True, backend=None, dtype='complex128', rehearse=False)[source]

Compute a single expectation value of the operator G, acting on sites where, making use of reverse lightcone cancellation.

\[\langle \psi_{\bar{q}} | G_{\bar{q}} | \psi_{\bar{q}} \rangle\]

where \(\bar{q}\) is the set of qubits \(G\) acts on and \(\psi_{\bar{q}}\) is the circuit wavefunction only with gates in the causal cone of this set. If you supply a tuple or list of gates then the expectations will be computed simultaneously.

Parameters:
  • G (array or sequence[array]) – The raw operator(s) to find the expectation of.

  • where (int or sequence of int) – Which qubits the operator acts on.

  • optimize (str, optional) – Contraction path optimizer to use for the local expectation, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to cotengra.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • gate_opts (None or dict_like) – Options to use when applying G to the wavefunction.

  • rehearse (bool or "tn", optional) – If True, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. Returns a dict with keys 'tn' and 'tree' with the tensor network that will be contracted and the corresponding contraction tree if so.

Return type:

scalar, tuple[scalar] or dict
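
For example (a minimal sketch, reusing a small circuit circ as above; qu.pauli() and the kron operator & are from the main quimb namespace):

>>> import quimb as qu
>>> circ.local_expectation(qu.pauli('Z'), 0)                        # <Z_0>
>>> circ.local_expectation(qu.pauli('Z') & qu.pauli('Z'), (0, 1))   # <Z_0 Z_1>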

local_expectation_rehearse[source]
local_expectation_tn[source]
compute_marginal(where, fix=None, optimize='auto-hq', backend=None, dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=True, rehearse=False)[source]

Compute the probability tensor of qubits in where, given possibly fixed qubits in fix, tracing out everything else and having removed redundant unitary gates.

Parameters:
  • where (sequence of int) – The qubits to compute the marginal probability distribution of.

  • fix (None or dict[int, str], optional) – Measurement results on other qubits to fix.

  • optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to cotengra.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • rehearse (bool or "tn", optional) – Whether to perform the marginal contraction or just return the associated TN and contraction tree.
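
For example (a minimal sketch on a 3-qubit circuit circ as above):

>>> p2 = circ.compute_marginal(where=(2,), fix={0: '0', 1: '0'})
>>> # probability tensor for qubit 2, with qubits 0 and 1 fixed to '0'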

compute_marginal_rehearse[source]
compute_marginal_tn[source]
calc_qubit_ordering(qubits=None, method='greedy-lightcone')[source]

Get an order to measure qubits in, by greedily choosing whichever has the smallest reverse lightcone followed by whichever expands this lightcone least.

Parameters:

qubits (None or sequence of int) – The qubits to generate a lightcone ordering for, if None, assume all qubits.

Returns:

The order to ‘measure’ qubits in.

Return type:

tuple[int]

_parse_qubits_order(qubits=None, order=None)[source]

Simply initializes the default of measuring all qubits, and the default order, or checks that order is a permutation of qubits.

_group_order(order, group_size=1)[source]

Take the qubit ordering order and batch it in groups of size group_size, sorting the qubits (for caching reasons) within each group.

get_qubit_distances(method='dijkstra', alpha=2)[source]

Get a nested dictionary of qubit distances. This is computed from a graph representing qubit interactions. The graph has an edge between qubits if they are acted on by the same gate, and the distance-weight of the edge is exponentially small in the number of gates between them.

Parameters:
  • method ({'dijkstra', 'resistance'}, optional) – The method to use to compute the qubit distances. See networkx.all_pairs_dijkstra_path_length() and networkx.resistance_distance().

  • alpha (float, optional) – The distance weight between qubits is alpha**(num_gates - 1).

Returns:

The distance between each pair of qubits, accessed like distances[q1][q2]. If two qubits are not connected, the distance is missing.

Return type:

dict[int, dict[int, float]]

reordered_gates_dfs_clustered()[source]

Get the gates reordered by a depth first search traversal of the multi-qubit gate graph that greedily selects successive gates which are ‘close’ in graph distance, and shifts single qubit gates to be adjacent to multi-qubit gates where possible.

sample(C, qubits=None, order=None, group_size=10, max_marginal_storage=2**20, seed=None, optimize='auto-hq', backend=None, dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=True)[source]

Sample the circuit C times, using lightcone cancelling and caching of marginal distribution results. This is a generator, which proceeds as a chain of marginal computations.

Assuming we have group_size=1, and some ordering of the qubits, \(\{q_0, q_1, q_2, q_3, \ldots\}\) we first compute:

\[p(q_0) = \mathrm{diag} \mathrm{Tr}_{1, 2, 3,\ldots} | \psi_{0} \rangle \langle \psi_{0} |\]

I.e. simply the probability distribution on a single qubit, conditioned on nothing. The subscript on \(\psi\) refers to the fact that we only need gates from the causal cone of qubit 0. From this we can sample an outcome, either 0 or 1, if we call this \(r_0\) we can then move on to the next marginal:

\[p(q_1 | r_0) = \mathrm{diag} \mathrm{Tr}_{2, 3,\ldots} \langle r_0 | \psi_{0, 1} \rangle \langle \psi_{0, 1} | r_0 \rangle\]

I.e. the probability distribution of the next qubit, given our prior result. We can sample from this to get \(r_1\). Then we compute:

\[p(q_2 | r_0 r_1) = \mathrm{diag} \mathrm{Tr}_{3,\ldots} \langle r_0 r_1 | \psi_{0, 1, 2} \rangle \langle \psi_{0, 1, 2} | r_0 r_1 \rangle\]

Eventually we will reach the ‘final marginal’, which we can compute as

\[|\langle r_0 r_1 r_2 r_3 \ldots | \psi \rangle|^2\]

since there is nothing left to trace out.

Parameters:
  • C (int) – The number of times to sample.

  • qubits (None or sequence of int, optional) – Which qubits to measure, defaults (None) to all qubits.

  • order (None or sequence of int, optional) – Which order to measure the qubits in, defaults (None) to an order based on greedily expanding the smallest reverse lightcone. If specified it should be a permutation of qubits.

  • group_size (int, optional) – How many qubits to group together into marginals, the larger this is the fewer marginals need to be computed, which can be faster at the cost of higher memory. The marginals themselves will each be of size 2**group_size.

  • max_marginal_storage (int, optional) – The total cumulative number of marginal probabilities to cache; once this is exceeded, caching will be turned off.

  • seed (None or int, optional) – A random seed, passed to numpy.random.seed if given.

  • optimize (str, optional) – Contraction path optimizer to use for the marginals, shouldn’t be a non-reusable path optimizer as called on many different TNs. Passed to cotengra.array_contract_tree().

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to cotengra.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

Yields:

bitstrings (sequence of str)
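
For example, collecting samples into counts (a minimal sketch on a small GHZ-style circuit circ as above):

>>> from collections import Counter
>>> counts = Counter(circ.sample(1000, seed=42))
>>> # roughly equal counts of '000' and '111' for a GHZ-style circuit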

sample_rehearse(qubits=None, order=None, group_size=10, result=None, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=True, rehearse=True, progbar=False)[source]

Perform the preparations and contraction tree findings for sample(), caching various intermediate objects, but don't perform the main contractions.

Parameters:
  • qubits (None or sequence of int, optional) – Which qubits to measure, defaults (None) to all qubits.

  • order (None or sequence of int, optional) – Which order to measure the qubits in, defaults (None) to an order based on greedily expanding the smallest reverse lightcone.

  • group_size (int, optional) – How many qubits to group together into marginals, the larger this is the fewer marginals need to be computed, which can be faster at the cost of higher memory. The marginal’s size itself is exponential in group_size.

  • result (None or dict[int, str], optional) – Explicitly check the computational cost of this result, assumed to be all zeros if not given.

  • optimize (str, optional) – Contraction path optimizer to use for the marginals, shouldn’t be a non-reusable path optimizer as called on many different TNs. Passed to cotengra.array_contract_tree().

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • progbar (bool, optional) – Whether to show the progress of finding each contraction tree.

Returns:

One contraction tree object per grouped marginal computation. The keys of the dict are the qubits the marginal is computed for, the values are a dict containing a representative simplified tensor network (key: ‘tn’) and the main contraction tree (key: ‘tree’).

Return type:

dict[tuple[int], dict]

sample_tns[source]
sample_chaotic(C, marginal_qubits, fix=None, max_marginal_storage=2**20, seed=None, optimize='auto-hq', backend=None, dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=True)[source]

Sample from this circuit, assuming it to be chaotic. That is, only compute and sample correctly from the final marginal, assuming that the distribution on the other qubits is uniform. Given marginal_qubits=5 for instance, for each sample a random bit-string \(r_0 r_1 r_2 \ldots r_{N - 6}\) for the remaining \(N - 5\) qubits will be chosen, then the final marginal will be computed as

\[p(q_{N-5}q_{N-4}q_{N-3}q_{N-2}q_{N-1} | r_0 r_1 r_2 \ldots r_{N-6}) = |\langle r_0 r_1 r_2 \ldots r_{N - 6} | \psi \rangle|^2\]

and then sampled from. Note the expression on the right hand side has 5 open indices here and so is a tensor, however if marginal_qubits is not too big then the cost of contracting this is very similar to a single amplitude.

Note

This method assumes the circuit is chaotic; if it is not, the samples produced will not be an accurate representation of the probability distribution.

Parameters:
  • C (int) – The number of times to sample.

  • marginal_qubits (int or sequence of int) – The number of qubits to treat as marginal, or the actual qubits. If an int is given then the qubits treated as marginal will be circuit.calc_qubit_ordering()[:marginal_qubits].

  • fix (None or dict[int, str], optional) – Measurement results on other qubits to fix. These will be randomly sampled if fix is not given or a qubit is missing.

  • seed (None or int, optional) – A random seed, passed to numpy.random.seed if given.

  • optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to cotengra.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

Yields:

str
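
For example (a minimal sketch; note the caveat above that the samples are only faithful for genuinely chaotic circuits):

>>> samples = list(circ.sample_chaotic(4, marginal_qubits=2, seed=42))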

sample_chaotic_rehearse(marginal_qubits, result=None, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=True, dtype='complex64', rehearse=True)[source]

Rehearse chaotic sampling (perform just the TN simplifications and contraction tree finding).

Parameters:
  • marginal_qubits (int or sequence of int) – The number of qubits to treat as marginal, or the actual qubits. If an int is given then the qubits treated as marginal will be circuit.calc_qubit_ordering()[:marginal_qubits].

  • result (None or dict[int, str], optional) – Explicitly check the computational cost of this result, assumed to be all zeros if not given.

  • optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

Returns:

The contraction path information for the main computation, the key is the qubits that formed the final marginal. The value is itself a dict with keys 'tn' - a representative tensor network - and 'tree' - the contraction tree.

Return type:

dict[tuple[int], dict]

sample_chaotic_tn[source]
get_gate_by_gate_circuits(group_size=10)[source]

Get a sequence of circuits by partitioning the gates into groups such that circuit i + 1 acts on at most group_size new qubits compared to circuit i.

Parameters:

group_size (int, optional) – The maximum number of new qubits that can be acted on by a circuit compared to its predecessor.

Returns:

A sequence of dicts, each with keys 'circuit' and 'where', where the former is a Circuit and the latter the tuple of new qubits that it acts on compared to the previous circuit.

Return type:

Sequence[dict]

sample_gate_by_gate(C, group_size=10, seed=None, max_marginal_storage=2**20, optimize='auto-hq', backend=None, dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=True)[source]

Sample this circuit using the gate-by-gate method, where we ‘evolve’ a result bitstring by sequentially including more and more gates, at each step updating the result by computing a full conditional marginal. See “How to simulate quantum measurement without computing marginals” by Sergey Bravyi, David Gosset, Yinchen Liu (https://arxiv.org/abs/2112.08499). The overall complexity of this is guaranteed to be similar to that of computing a single amplitude, which can be much better than the naive “qubit-by-qubit” (.sample) method. However, it requires evaluating a number of tensor networks that scales linearly with the number of gates, which can offset any practical advantage for shallow circuits, for example.

Parameters:
  • C (int) – The number of samples to generate.

  • group_size (int, optional) – The maximum number of qubits that can be acted on by a circuit compared to its predecessor. This will be the dimension of the marginal computed at each step.

  • seed (None or int, optional) – A random seed, passed to numpy.random.seed if given.

  • max_marginal_storage (int, optional) – The total cumulative number of marginal probabilities to cache; once this is exceeded, caching will be turned off.

  • optimize (str, optional) – Contraction path optimizer to use for the marginals, shouldn’t be a non-reusable path optimizer as called on many different TNs. Passed to cotengra.array_contract_tree().

  • backend (str, optional) – Backend to perform the marginal contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to cotengra.

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • rehearse (bool, optional) – If True, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. Returns a dict with keys 'tn' and 'tree' with the tensor network that will be contracted and the corresponding contraction tree if so.

Yields:

str
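
For example (a minimal sketch on a small circuit circ as above):

>>> samples = list(circ.sample_gate_by_gate(8, group_size=4, seed=42))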

sample_gate_by_gate_rehearse(group_size=10, optimize='auto-hq', dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=True, rehearse=True, progbar=False)[source]

Perform the preparations and contraction tree findings for sample_gate_by_gate(), caching various intermediate objects, but don't perform the main contractions.

Parameters:
  • group_size (int, optional) – The maximum number of qubits that can be acted on by a circuit compared to its predecessor. This will be the dimension of the marginal computed at each step.

  • optimize (str, optional) – Contraction path optimizer to use for the marginals, shouldn’t be a non-reusable path optimizer as called on many different TNs. Passed to cotengra.array_contract_tree().

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • rehearse (True or "tn", optional) – If True, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. If “tn”, only generate the simplified tensor networks.

Return type:

Sequence[dict] or Sequence[TensorNetwork]

sample_gate_by_gate_tns[source]
to_dense(reverse=False, optimize='auto-hq', simplify_sequence='R', simplify_atol=1e-12, simplify_equalize_norms=True, backend=None, dtype=None, rehearse=False)[source]

Generate the dense representation of the final wavefunction.

Parameters:
  • reverse (bool, optional) – Whether to reverse the order of the subsystems, to match the convention of qiskit for example.

  • optimize (str, optional) – Contraction path optimizer to use for the contraction, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).

  • dtype (str, optional) – If given, convert the tensors to this dtype prior to contraction.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.

  • backend (str, optional) – Backend to perform the contraction with, e.g. 'numpy', 'cupy' or 'jax'. Passed to cotengra.

  • rehearse (bool, optional) – If True, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. Returns a dict with keys 'tn' and 'tree' with the tensor network that will be contracted and the corresponding contraction tree if so.

Returns:

psi – The densely represented wavefunction with dtype data.

Return type:

qarray
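
For example, for a 3-qubit circuit circ the result is an 8 x 1 column vector, as in the class-level example above:

>>> psi = circ.to_dense()
>>> psi.shape
(8, 1)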

to_dense_rehearse[source]
to_dense_tn[source]
simulate_counts(C, seed=None, reverse=False, **to_dense_opts)[source]

Simulate measuring all qubits in the computational basis many times. Unlike sample(), this generates all the samples simultaneously, using the full wavefunction constructed from to_dense() and then calling the dense simulate_counts() function on it.

Warning

Because this constructs the full wavefunction it always requires exponential memory in the number of qubits, regardless of circuit depth and structure.

Parameters:
  • C (int) – The number of ‘experimental runs’, i.e. total counts.

  • seed (int, optional) – A seed for reproducibility.

  • reverse (bool, optional) – Whether to reverse the order of the subsystems, to match the convention of qiskit for example.

  • to_dense_opts – Supplied to to_dense().

Returns:

results – The number of recorded counts for each bitstring.

Return type:

dict[str, int]
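
For example (a minimal sketch on a small GHZ-style circuit circ as above):

>>> counts = circ.simulate_counts(1024, seed=42)
>>> # e.g. {'000': ..., '111': ...} for a GHZ-style circuit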

schrodinger_contract(*args, **contract_opts)[source]
xeb(samples_or_counts, cache=None, cache_maxsize=2**20, progbar=False, **amplitude_opts)[source]

Compute the linear cross entropy benchmark (XEB) for samples or counts, amplitude per amplitude.

Parameters:
  • samples_or_counts (Iterable[str] or Dict[str, int]) – Either the raw bitstring samples or a dict mapping bitstrings to the number of counts observed.

  • cache (dict, optional) – A dictionary to store the probabilities in, if not supplied quimb.utils.LRU(cache_maxsize) will be used.

  • cache_maxsize (int, optional) – The maximum size of the cache to be used.

  • progbar (bool, optional) – Whether to show progress as the bitstrings are iterated over.

  • amplitude_opts – Supplied to amplitude().
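
For example, estimating the XEB from samples drawn from the circuit itself (a minimal sketch):

>>> samples = list(circ.sample(100, seed=42))
>>> f = circ.xeb(samples)    # linear XEB estimate, computed amplitude by amplitude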

xeb_ex(optimize='auto-hq', simplify_sequence='R', simplify_atol=1e-12, simplify_equalize_norms=True, dtype=None, backend=None, autojit=False, progbar=False, **contract_opts)[source]

Compute the exactly expected XEB for this circuit. The main feature here is that if you supply a cotengra optimizer that searches for sliced indices then the XEB will be computed without constructing the full wavefunction.

Parameters:
  • optimize (str or PathOptimizer, optional) – Contraction path optimizer.

  • simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see full_simplify().

  • simplify_atol (float, optional) – The tolerance with which to compare to zero when applying full_simplify().

  • dtype (str, optional) – Data type to cast the TN to before contraction.

  • backend (str, optional) – Convert tensors to, and then use contractions from, this library.

  • autojit (bool, optional) – Apply autoray.autojit to the contraction and map-reduce.

  • progbar (bool, optional) – Show progress in terms of number of wavefunction chunks processed.

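A hedged sketch of supplying a slicing cotengra optimizer (assuming cotengra is installed and circ is a Circuit):

import cotengra as ctg

# search for contraction trees with sliced indices, so that the full
# wavefunction is never formed in one piece
opt = ctg.ReusableHyperOptimizer(
    slicing_reconf_opts={"target_size": 2**28},
    progbar=True,
)
f_expected = circ.xeb_ex(optimize=opt, progbar=True)
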
update_params_from(tn)[source]

Assuming tn is a tensor network with tensors tagged GATE_{i} corresponding to this circuit (e.g. from circ.psi or circ.uni) but with updated parameters, update the current circuit parameters and tensors with those values.

This is an inplace modification of the Circuit.

Parameters:

tn (TensorNetwork) – The tensor network to find the updated parameters from.

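A typical (hedged) pattern is to optimize the parametrized gate tensors of circ.psi externally and then pull the new parameters back into the circuit; here loss is some hypothetical differentiable function of the wavefunction tensor network:

import quimb.tensor as qtn

tnopt = qtn.TNOptimizer(
    circ.psi,
    loss_fn=loss,
    constant_tags=["PSI0"],       # don't optimize the initial state tensors
    autodiff_backend="autograd",
)
psi_opt = tnopt.optimize(100)

# write the optimized parameters back into this circuit, inplace
circ.update_params_from(psi_opt)
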
draw(figsize=None, radius=1 / 3, drawcolor=(0.5, 0.5, 0.5), linewidth=1)[source]

Draw a simple linear schematic of the circuit.

Parameters:
  • figsize (tuple, optional) – The size of the figure, if not given will be set based on the number of gates and qubits.

  • radius (float, optional) – The radius of the gates.

  • drawcolor (tuple, optional) – The color of the wires.

  • linewidth (float, optional) – The linewidth of the wires.

Returns:

  • fig (matplotlib.Figure) – The figure object.

  • ax (matplotlib.Axes) – The axis object.

__repr__()[source]
class quimb.tensor.CircuitDense(N=None, psi0=None, gate_opts=None, gate_contract=True, tags=None)[source]

Bases: Circuit

Quantum circuit simulation keeping the state in full dense form.

property psi

Tensor network representation of the wavefunction.

property uni
calc_qubit_ordering(qubits=None)[source]

Qubit ordering doesn’t matter for a dense wavefunction.

get_psi_reverse_lightcone(where, keep_psi0=False)[source]

Override get_psi_reverse_lightcone as for a dense wavefunction the lightcone is not meaningful.

class quimb.tensor.CircuitMPS(N=None, *, psi0=None, max_bond=None, cutoff=1e-10, gate_opts=None, gate_contract='auto-mps', **circuit_opts)[source]

Bases: Circuit

Quantum circuit simulation keeping the state always in a MPS form. If you think the circuit will not build up much entanglement, or you just want to keep a rigorous handle on how much entanglement is present, this can be useful.

Parameters:
  • N (int, optional) – The number of qubits in the circuit.

  • psi0 (TensorNetwork1DVector, optional) – The initial state, assumed to be |00000....0> if not given. The state is always copied and the tag PSI0 added.

  • max_bond (int, optional) – The maximum bond dimension to truncate to when applying gates, if any. This is simply a shortcut for setting gate_opts['max_bond'].

  • cutoff (float, optional) – The singular value cutoff to use when truncating the state. This is simply a shortcut for setting gate_opts['cutoff'].

  • gate_opts (dict, optional) – Default options to pass to each gate, for example, “max_bond” and “cutoff” etc.

  • gate_contract (str, optional) –

    The default method for applying gates. Relevant MPS options are:

    • 'auto-mps': automatically choose a method that maintains the MPS form (default). This uses 'swap+split' for 2-qubit gates and 'nonlocal' for 3+ qubit gates.

    • 'swap+split': swap nonlocal qubits to be next to each other, before applying the gate, then swapping them back

    • 'nonlocal': turn the gate into a potentially nonlocal (sub) MPO and apply it directly. See tensor_network_1d_compress().

  • circuit_opts – Supplied to Circuit.

psi

The current state of the circuit, always in MPS form.

Type:

MatrixProductState

Examples

Create a circuit object that always uses the “nonlocal” method for contracting in gates, and the “dm” compression method within that, using a large cutoff and maximum bond dimension:

circ = qtn.CircuitMPS(
    N=56,
    gate_opts=dict(
        contract="nonlocal",
        method="dm",
        max_bond=1024,
        cutoff=1e-3,
    )
)
_init_state(N, dtype='complex128')[source]
apply_gates(gates, progbar=False, **gate_opts)[source]

Apply a sequence of gates to this tensor network quantum circuit.

Parameters:
  • gates (Sequence[Gate] or Sequence[Tuple]) – The sequence of gates to apply.

  • gate_opts – Supplied to apply_gate().

property psi

Tensor network representation of the wavefunction.

property uni
calc_qubit_ordering(qubits=None)[source]

MPS already has a natural ordering.

get_psi_reverse_lightcone(where, keep_psi0=False)[source]

Override get_psi_reverse_lightcone as for an MPS the lightcone is not meaningful.

sample(C, seed=None)[source]

Sample the MPS circuit C times.

Parameters:
  • C (int) – The number of samples to generate.

  • seed (None, int, or generator, optional) – A random seed or generator to use for reproducibility.

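A short sketch, assuming circ is a CircuitMPS and that samples are yielded lazily as bitstrings:

for b in circ.sample(100, seed=42):
    print(b)
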
fidelity_estimate()[source]

Estimate the fidelity of the current state based on its norm, which tracks how much the state has been truncated:

\[\tilde{F} = \left| \langle \psi | \psi \rangle \right|^2 \approx \left|\langle \psi_\mathrm{ideal} | \psi \rangle\right|^2\]

See also

error_estimate

error_estimate()[source]

Estimate the error in the current state based on the norm of the discarded part of the state:

\[\epsilon = 1 - \tilde{F}\]
local_expectation(G, where, normalized=False, **contract_opts)[source]

Compute the local expectation value of a local operator at where (via forming the reduced density matrix). Note this moves the orthogonality around inplace, and records it in info.

Parameters:
  • G (Tensor) – The local operator tensor.

  • where (int) – The qubit to compute the expectation value at.

Return type:

float

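A hedged example, assuming a dense 2x2 operator array is accepted for G:

import quimb as qu

# <Z> on qubit 4 of the current MPS state
ez = circ.local_expectation(qu.pauli("Z"), 4, normalized=True)
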
class quimb.tensor.CircuitPermMPS(N=None, psi0=None, gate_opts=None, gate_contract='swap+split', **circuit_opts)[source]

Bases: CircuitMPS

Quantum circuit simulation keeping the state always in an MPS form, but lazily tracking the qubit ordering rather than ‘swapping back’ qubits after applying non-local gates. This can be useful for circuits with no expectation of locality. The qubit ordering is always tracked in the attribute qubits. The psi attribute returns the TN with the sites reindexed and retagged according to the current qubit ordering, meaning it is no longer an MPS. Use circ.get_psi_unordered() to get the unpermuted MPS and use circ.qubits to get the current qubit ordering if you prefer.

qubits
_apply_gate(gate, tags=None, **gate_opts)[source]

Apply a Gate to this Circuit. This is the main method that all calls to apply a gate should go through.

Parameters:
  • gate (Gate) – The gate to apply.

  • tags (str or sequence of str, optional) – Tags to add to the gate tensor(s).

calc_qubit_ordering(qubits=None)[source]

Given by the current qubit permutation.

get_psi_unordered()[source]

Return the MPS representing the state but without reordering the sites.

sample(C, seed=None)[source]

Sample the PermMPS circuit C times.

Parameters:
  • C (int) – The number of samples to generate.

  • seed (None, int, or generator, optional) – A random seed or generator to use for reproducibility.

Yields:

str – The next sample bitstring.

property psi

Tensor network representation of the wavefunction.

class quimb.tensor.Gate(label, params, qubits=None, controls=None, round=None, parametrize=False)[source]

A simple class for storing the details of a quantum circuit gate.

Parameters:
  • label (str) – The name or ‘identifier’ of the gate.

  • params (Iterable[float]) – The parameters of the gate.

  • qubits (Iterable[int], optional) – Which qubits the gate acts on.

  • controls (Iterable[int], optional) – Which qubits are the controls.

  • round (int, optional) – If given, which round or layer the gate is part of.

  • parametrize (bool, optional) – Whether the gate will correspond to a parametrized tensor.

__slots__ = ('_label', '_params', '_qubits', '_controls', '_round', '_parametrize', '_tag', '_special',...
_label
_params
_round
_parametrize
_tag
_special
_constant
_array = None
classmethod from_raw(U, qubits=None, controls=None, round=None)[source]
copy()[source]
property label
property params
property qubits
property total_qubit_count
property controls
property round
property special
property parametrize
property tag
copy_with(**kwargs)[source]

Take a copy of this gate but with some attributes changed.

build_array()[source]

Build the array representation of the gate. For controlled gates this excludes the control qubits.

property array
build_mpo(L=None, **kwargs)[source]

Build an MPO representation of this gate.

__repr__()[source]
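
A small construction sketch (the gate label and parameter values are illustrative):

import quimb.tensor as qtn

# a two-qubit rotation in circuit layer ('round') 3
g = qtn.Gate("RZZ", params=(0.25,), qubits=(0, 1), round=3)
U = g.array          # dense array representation (excludes any control qubits)
mpo = g.build_mpo()  # MPO representation of the same gate
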
quimb.tensor.circ_a2a_rand(n, depth, seed=None, gate2='cz')[source]
quimb.tensor.circ_ansatz_1D_brickwork(n, depth, cyclic=False, gate2='cz', seed=None, **circuit_opts)[source]

A 1D circuit ansatz with odd and even layers of entangling gates interleaved with U3 single qubit unitaries:

|  |  |  |  |
|  u  u  u  u
u  o++o  o++o
|  u  u  u  |
o++o  o++o  u
|  u  u  u  |
u  o++o  o++o
|  u  u  u  |
o++o  o++o  u
|  u  u  u  u
u  o++o  o++o
|  u  u  u  |
o++o  o++o  u
u  u  u  u  |
|  |  |  |  |
Parameters:
  • n (int) – The number of qubits.

  • depth (int) – The number of entangling gates per pair.

  • cyclic (bool, optional) – Whether to add entangling gates between qubits 0 and n - 1.

  • gate2 ({'cx', 'cy', 'cz', 'iswap', ..., str}, optional) – The gate to use for the entangling pairs.

  • seed (int, optional) – Random seed for parameters.

  • opts – Supplied to gates_to_param_circuit().

Return type:

Circuit

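For example (a minimal sketch):

import quimb.tensor as qtn

# 6 qubits, 4 entangling layers of CZs, parameters seeded for reproducibility
circ = qtn.circ_ansatz_1D_brickwork(6, 4, gate2="cz", seed=42)
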
quimb.tensor.circ_ansatz_1D_rand(n, depth, seed=None, cyclic=False, gate2='cz', avoid_doubling=True, **circuit_opts)[source]

A 1D circuit ansatz with randomly placed entangling gates interleaved with U3 single qubit unitaries.

Parameters:
  • n (int) – The number of qubits.

  • depth (int) – The number of entangling gates per pair.

  • seed (int, optional) – Random seed.

  • cyclic (bool, optional) – Whether to add entangling gates between qubits 0 and n - 1.

  • gate2 ({'cx', 'cy', 'cz', 'iswap', ..., str}, optional) – The gate to use for the entangling pairs.

  • avoid_doubling (bool, optional) – Whether to avoid placing an entangling gate directly above the same entangling gate (there will still be single qubit gates interleaved).

  • opts – Supplied to gates_to_param_circuit().

Return type:

Circuit

quimb.tensor.circ_ansatz_1D_zigzag(n, depth, gate2='cz', seed=None, **circuit_opts)[source]

A 1D circuit ansatz with forward and backward layers of entangling gates interleaved with U3 single qubit unitaries:

|  |  |  |
u  u  |  |
o++o  u  |
|  |  |  u
|  o++o  |
|  |  u  |
|  |  o++o
u  u  u  u
|  |  o++o
|  |  u  |
|  o++o  |
|  u  |  u
o++o  u  |
u  u  |  |
|  |  |  |
Parameters:
  • n (int) – The number of qubits.

  • depth (int) – The number of entangling gates per pair.

  • gate2 ({'cx', 'cy', 'cz', 'iswap', ..., str}, optional) – The gate to use for the entangling pairs.

  • seed (int, optional) – Random seed for parameters.

  • opts – Supplied to gates_to_param_circuit().

Return type:

Circuit

quimb.tensor.circ_qaoa(terms, depth, gammas, betas, **circuit_opts)[source]

Generate the QAOA circuit for weighted graph described by terms.

\[|{\bar{\gamma}, \bar{\beta}}\rangle = U_B (\beta _p) U_C (\gamma _p) \cdots U_B (\beta _1) U_C (\gamma _1) |{+}\rangle\]

with

\[U_C (\gamma) = e^{-i \gamma \mathcal{C}} = \prod \limits_{i, j \in E(G)} e^{-i \gamma w_{i j} Z_i Z_j}\]

and

\[U_B (\beta) = \prod \limits_{i \in G} e^{-i \beta X_i}\]
Parameters:
  • terms (dict[tuple[int], float]) – The mapping of integer pair keys (i, j) to the edge weight values, w_{ij}. The integers should be a contiguous range enumerated from zero, with the total number of qubits being inferred from this.

  • depth (int) – The number of layers of gates to apply, p above.

  • gammas (iterable of float) – The interaction angles for each layer.

  • betas (iterable of float) – The rotation angles for each layer.

  • circuit_opts – Supplied to Circuit. Note gate_opts={'contract': False} is set by default (it can be overridden) since the RZZ gate, even though it has a rank-2 decomposition, is also diagonal.

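A minimal sketch for a weighted triangle graph at depth p=2 (the weights and angles are illustrative):

import quimb.tensor as qtn

terms = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -0.5}
circ = qtn.circ_qaoa(terms, depth=2, gammas=[0.1, 0.2], betas=[0.3, 0.4])
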
quimb.tensor.array_contract(arrays, inputs, output=None, optimize=None, backend=None, **kwargs)[source]
quimb.tensor.contract_backend(backend, set_globally=False)[source]

A context manager to temporarily set the default backend used for tensor contractions, via ‘cotengra’. By default, this only sets the contract backend for the current thread.

Parameters:

set_globally (bool, optional) – Whether to set the backend just for this thread, or for all threads. If you are entering the context, then using multithreading, you might want True.

quimb.tensor.contract_strategy(strategy, set_globally=False)[source]

A context manager to temporarily set the default contraction strategy supplied as optimize to cotengra. By default, this only sets the contract strategy for the current thread.

Parameters:

set_globally (bool, optional) – Whether to set the strategy just for this thread, or for all threads. If you are entering the context, then using multithreading, you might want True.

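A hedged usage sketch, assuming tn is some TensorNetwork and that the alternative backend (here 'cupy') is installed:

import quimb.tensor as qtn

# temporarily change both the path-finding strategy and the numeric backend
with qtn.contract_strategy("auto-hq"), qtn.contract_backend("cupy"):
    result = tn.contract(...)
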
quimb.tensor.get_contract_backend()[source]

Get the default backend used for tensor contractions, via ‘cotengra’.

quimb.tensor.get_contract_strategy()[source]

Get the default contraction strategy - the option supplied as optimize to cotengra.

quimb.tensor.get_symbol[source]
quimb.tensor.get_tensor_linop_backend()[source]

Get the default backend used for tensor network linear operators, via ‘cotengra’. This is different from the default contraction backend as the contractions are likely repeatedly called many times.

quimb.tensor.inds_to_eq(inputs, output=None)[source]

Turn input and output indices of any sort into a single ‘equation’ string where each index is a single ‘symbol’ (unicode character).

Parameters:
  • inputs (sequence of sequence of hashable) – The input indices per tensor.

  • output (sequence of hashable) – The output indices.

Returns:

eq – The string to feed to einsum/contract.

Return type:

str

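For example (the exact symbols chosen are an implementation detail):

import quimb.tensor as qtn

eq = qtn.inds_to_eq([("i", "j"), ("j", "k")], ("i", "k"))
# eq is now something like 'ab,bc->ac'
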
quimb.tensor.set_contract_backend(backend)[source]

Set the default backend used for tensor contractions, via ‘cotengra’.

quimb.tensor.set_contract_strategy(strategy)[source]

Set the default contraction strategy - the option supplied as optimize to cotengra.

quimb.tensor.set_tensor_linop_backend(backend)[source]

Set the default backend used for tensor network linear operators, via ‘cotengra’. This is different from the default contraction backend as the contractions are likely repeatedly called many times.

quimb.tensor.tensor_linop_backend(backend, set_globally=False)[source]

A context manager to temporarily set the default backend used for tensor network linear operators, via ‘cotengra’. By default, this only sets the contract backend for the current thread.

Parameters:

set_globally (bool, optional) – Whether to set the backend just for this thread, or for all threads. If you are entering the context, then using multithreading, you might want True.

quimb.tensor.edges_1d_chain(L, cyclic=False)[source]

Return the graph edges of a finite 1D chain lattice.

Parameters:
  • L (int) – The number of cells.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

Returns:

edges

Return type:

list[(int, int)]

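For example:

import quimb.tensor as qtn

qtn.edges_1d_chain(4)
# e.g. [(0, 1), (1, 2), (2, 3)]
qtn.edges_1d_chain(4, cyclic=True)
# additionally includes the wrap-around edge between sites 3 and 0
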
quimb.tensor.edges_2d_hexagonal(Lx, Ly, cyclic=False, cells=None)[source]

Return the graph edges of a finite 2D hexagonal lattice. There are two sites per cell, and note the cells do not form a square tiling. The nodes (sites) are labelled like (i, j, s) for s in 'AB'.

Parameters:
  • Lx (int) – The number of cells along the x-direction.

  • Ly (int) – The number of cells along the y-direction.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

  • cells (list, optional) – A list of cells to use. If not given the cells used are itertools.product(range(Lx), range(Ly)).

Returns:

edges

Return type:

list[((int, int, str), (int, int, str))]

quimb.tensor.edges_2d_kagome(Lx, Ly, cyclic=False, cells=None)[source]

Return the graph edges of a finite 2D kagome lattice. There are three sites per cell, and note the cells do not form a square tiling. The nodes (sites) are labelled like (i, j, s) for s in 'ABC'.

Parameters:
  • Lx (int) – The number of cells along the x-direction.

  • Ly (int) – The number of cells along the y-direction.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

  • cells (list, optional) – A list of cells to use. If not given the cells used are itertools.product(range(Lx), range(Ly)).

Returns:

edges

Return type:

list[((int, int, str), (int, int, str))]

quimb.tensor.edges_2d_square(Lx, Ly, cyclic=False, cells=None)[source]

Return the graph edges of a finite 2D square lattice. The nodes (sites) are labelled like (i, j).

Parameters:
  • Lx (int) – The number of cells along the x-direction.

  • Ly (int) – The number of cells along the y-direction.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

  • cells (list, optional) – A list of cells to use. If not given the cells used are itertools.product(range(Lx), range(Ly)).

Returns:

edges

Return type:

list[((int, int), (int, int))]

quimb.tensor.edges_2d_triangular(Lx, Ly, cyclic=False, cells=None)[source]

Return the graph edges of a finite 2D triangular lattice. There is a single site per cell, and note the cells do not form a square tiling. The nodes (sites) are labelled like (i, j).

Parameters:
  • Lx (int) – The number of cells along the x-direction.

  • Ly (int) – The number of cells along the y-direction.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

  • cells (list, optional) – A list of cells to use. If not given the cells used are itertools.product(range(Lx), range(Ly)).

Returns:

edges

Return type:

list[((int, int), (int, int))]

quimb.tensor.edges_2d_triangular_rectangular(Lx, Ly, cyclic=False, cells=None)[source]

Return the graph edges of a finite 2D triangular lattice tiled in a rectangular geometry. There are two sites per rectangular cell. The nodes (sites) are labelled like (i, j, s) for s in 'AB'.

Parameters:
  • Lx (int) – The number of cells along the x-direction.

  • Ly (int) – The number of cells along the y-direction.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

  • cells (list, optional) – A list of cells to use. If not given the cells used are itertools.product(range(Lx), range(Ly)).

Returns:

edges

Return type:

list[((int, int, str), (int, int, str))]

quimb.tensor.edges_3d_cubic(Lx, Ly, Lz, cyclic=False, cells=None)[source]

Return the graph edges of a finite 3D cubic lattice. The nodes (sites) are labelled like (i, j, k).

Parameters:
  • Lx (int) – The number of cells along the x-direction.

  • Ly (int) – The number of cells along the y-direction.

  • Lz (int) – The number of cells along the z-direction.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

  • cells (list, optional) – A list of cells to use. If not given the cells used are itertools.product(range(Lx), range(Ly), range(Lz)).

Returns:

edges

Return type:

list[((int, int, int), (int, int, int))]

quimb.tensor.edges_3d_diamond(Lx, Ly, Lz, cyclic=False, cells=None)[source]

Return the graph edges of a finite 3D diamond lattice. There are two sites per cell, and note the cells do not form a cubic tiling. The nodes (sites) are labelled like (i, j, k, s) for s in 'AB'.

Parameters:
  • Lx (int) – The number of cells along the x-direction.

  • Ly (int) – The number of cells along the y-direction.

  • Lz (int) – The number of cells along the z-direction.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

  • cells (list, optional) – A list of cells to use. If not given the cells used are itertools.product(range(Lx), range(Ly), range(Lz)).

Returns:

edges

Return type:

list[((int, int, int, str), (int, int, int, str))]

quimb.tensor.edges_3d_diamond_cubic(Lx, Ly, Lz, cyclic=False, cells=None)[source]

Return the graph edges of a finite 3D diamond lattice tiled in a cubic geometry. There are eight sites per cubic cell. The nodes (sites) are labelled like (i, j, k, s) for s in 'ABCDEFGH'.

Parameters:
  • Lx (int) – The number of cells along the x-direction.

  • Ly (int) – The number of cells along the y-direction.

  • Lz (int) – The number of cells along the z-direction.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

  • cells (list, optional) – A list of cells to use. If not given the cells used are itertools.product(range(Lx), range(Ly), range(Lz)).

Returns:

edges

Return type:

list[((int, int, int, str), (int, int, int, str))]

quimb.tensor.edges_3d_pyrochlore(Lx, Ly, Lz, cyclic=False, cells=None)[source]

Return the graph edges of a finite 3D pyrochlore lattice. There are four sites per cell, and note the cells do not form a cubic tiling. The nodes (sites) are labelled like (i, j, k, s) for s in 'ABCD'.

Parameters:
  • Lx (int) – The number of cells along the x-direction.

  • Ly (int) – The number of cells along the y-direction.

  • Lz (int) – The number of cells along the z-direction.

  • cyclic (bool, optional) – Whether to use periodic boundary conditions.

  • cells (list, optional) – A list of cells to use. If not given the cells used are itertools.product(range(Lx), range(Ly), range(Lz)).

Returns:

edges

Return type:

list[((int, int, int, str), (int, int, int, str))]

quimb.tensor.edges_tree_rand(n, max_degree=None, seed=None)[source]

Return a random tree with n nodes. This is a convenience function for testing purposes and the trees generated are not guaranteed to be uniformly random (for that see networkx.random_labeled_tree).

Parameters:
  • n (int) – The number of nodes.

  • max_degree (int, optional) – The maximum degree of the nodes. For example max_degree=3 means generate a binary tree.

  • seed (int, optional) – The random seed.

Returns:

edges

Return type:

list[(int, int)]

quimb.tensor.jax_register_pytree()[source]
quimb.tensor.pack(obj)[source]

Take a tensor or tensor network like object and return a skeleton needed to reconstruct it, and a pytree of raw parameters.

Parameters:

obj (Tensor, TensorNetwork, or similar) – Something that has copy, set_params, and get_params methods.

Returns:

  • params (pytree) – A pytree of raw parameter arrays.

  • skeleton (Tensor, TensorNetwork, or similar) – A copy of obj with all references to the original data removed.

quimb.tensor.unpack(params, skeleton)[source]

Take a skeleton of a tensor or tensor network like object and a pytree of raw parameters and return a new reconstructed object with those parameters inserted.

Parameters:
  • params (pytree) – A pytree of raw parameter arrays, with the same structure as the output of skeleton.get_params().

  • skeleton (Tensor, TensorNetwork, or similar) – Something that has copy, set_params, and get_params methods.

Returns:

obj – A copy of skeleton with parameters inserted.

Return type:

Tensor, TensorNetwork, or similar

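A brief round-trip sketch, assuming tn is some existing Tensor or TensorNetwork:

import quimb.tensor as qtn

# split into raw parameter arrays plus a data-less skeleton (e.g. to route
# the parameters through jax), then rebuild an equivalent object
params, skeleton = qtn.pack(tn)
tn_rebuilt = qtn.unpack(params, skeleton)
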
class quimb.tensor.TNOptimizer(tn, loss_fn, norm_fn=None, loss_constants=None, loss_kwargs=None, tags=None, shared_tags=None, constant_tags=None, loss_target=None, optimizer='L-BFGS-B', progbar=True, bounds=None, autodiff_backend='AUTO', executor=None, callback=None, **backend_opts)[source]

Globally optimize tensors within a tensor network with respect to any loss function via automatic differentiation. If parametrized tensors are used, optimize the parameters rather than the raw arrays.

Parameters:
  • tn (TensorNetwork) – The core tensor network structure within which to optimize tensors.

  • loss_fn (callable or sequence of callable) – The function that takes tn (as well as loss_constants and loss_kwargs) and returns a single real ‘loss’ to be minimized. For Hamiltonians which can be represented as a sum over terms, an iterable collection of terms (e.g. list) can be given instead. In that case each term is evaluated independently and the sum taken as loss_fn. This can reduce the total memory requirements or allow for parallelization (see executor).

  • norm_fn (callable, optional) – A function to call before loss_fn that prepares or ‘normalizes’ the raw tensor network in some way.

  • loss_constants (dict, optional) – Extra tensor networks, tensors, dicts/list/tuples of arrays, or arrays which will be supplied to loss_fn but also converted to the correct backend array type.

  • loss_kwargs (dict, optional) – Extra options to supply to loss_fn (unlike loss_constants these are assumed to be simple options that don’t need conversion).

  • tags (str, or sequence of str, optional) – If supplied, only optimize tensors with any of these tags.

  • shared_tags (str, or sequence of str, optional) – If supplied, each tag in shared_tags corresponds to a group of tensors to be optimized together.

  • constant_tags (str, or sequence of str, optional) – If supplied, skip optimizing tensors with any of these tags. This ‘opt-out’ mode is overridden if either tags or shared_tags is supplied.

  • loss_target (float, optional) – Stop optimizing once this loss value is reached.

  • optimizer (str, optional) – Which scipy.optimize.minimize optimizer to use (the 'method' kwarg of that function). In addition, quimb implements a few custom optimizers compatible with this interface that you can reference by name - {'adam', 'nadam', 'rmsprop', 'sgd'}.

  • executor (None or Executor, optional) – To be used with term-by-term Hamiltonians. If supplied, this executor is used to parallelize the evaluation. Otherwise each term is evaluated in sequence. It should implement the basic concurrent.futures (PEP 3148) interface.

  • progbar (bool, optional) – Whether to show live progress.

  • bounds (None or (float, float), optional) – Constrain the optimized tensor entries within this range (if the scipy optimizer supports it).

  • autodiff_backend ({'jax', 'autograd', 'tensorflow', 'torch'}, optional) – Which backend library to use to perform the automatic differentation (and computation).

  • callback (callable, optional) –

    A function to call after each optimization step. It should take the current TNOptimizer instance as its only argument. Information such as the current loss and number of evaluations can then be accessed:

    def callback(tnopt):
        print(tnopt.nevals, tnopt.loss)
    

  • backend_opts – Supplied to the backend function compiler and array handler. For example jit_fn=True or device='cpu'.

progbar
tags
shared_tags
constant_tags
_autodiff_backend
_multiloss
norm_fn
loss_constants
loss_kwargs
property bounds
property optimizer

The underlying optimizer that works with the vectorized functions.

callback
_set_tn(tn)[source]
_reset_tracking_info(loss_target=None)[source]
reset(tn=None, clear_info=True, loss_target=None)[source]

Reset this optimizer without losing the compiled loss and gradient functions.

Parameters:
  • tn (TensorNetwork, optional) – Set this tensor network as the current state of the optimizer, it must exactly match the original tensor network.

  • clear_info (bool, optional) – Clear the tracked losses and iterations.

_maybe_init_pbar(n)[source]
_maybe_update_pbar()[source]
_maybe_close_pbar()[source]
_check_loss_target()[source]
_maybe_call_callback()[source]
vectorized_value(x)[source]

The value of the loss function at vector x.

vectorized_value_and_grad(x)[source]

The value and gradient of the loss function at vector x.

vectorized_hessp(x, p)[source]

The action of the hessian at point x on vector p.

__repr__()[source]
property d
property nevals

The number of gradient evaluations.

get_tn_opt()[source]

Extract the optimized tensor network, this is a three part process:

  1. inject the current optimized vector into the target tensor network,

  2. run it through norm_fn,

  3. drop any tags used to identify variables.

Returns:

tn_opt

Return type:

TensorNetwork

optimize(n, tol=None, jac=True, hessp=False, optlib='scipy', **options)[source]

Run the optimizer for n function evaluations, using by default scipy.optimize.minimize() as the driver for the vectorized computation. Supplying the gradient and hessian vector product is controlled by the jac and hessp options respectively.

Parameters:
  • n (int) – Notionally the maximum number of iterations for the optimizer, note that depending on the optimizer being used, this may correspond to number of function evaluations rather than just iterations.

  • tol (None or float, optional) – Tolerance for convergence, note that various more specific tolerances can usually be supplied to options, depending on the optimizer being used.

  • jac (bool, optional) – Whether to supply the jacobian, i.e. gradient, of the loss function.

  • hessp (bool, optional) – Whether to supply the hessian vector product of the loss function.

  • optlib ({'scipy', 'nlopt'}, optional) – Which optimization library to use.

  • options – Supplied to scipy.optimize.minimize() or whichever optimizer is being used.

Returns:

tn_opt

Return type:

TensorNetwork

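A minimal end-to-end sketch, variationally minimizing the energy of a real random MPS against a Heisenberg MPO (the sizes and iteration count are illustrative):

import quimb.tensor as qtn

psi = qtn.MPS_rand_state(20, bond_dim=8)
ham = qtn.MPO_ham_heis(20)

def loss(psi, ham):
    # energy expectation <psi|H|psi> / <psi|psi>, real since the arrays are real
    return (psi.H @ ham.apply(psi)) / (psi.H @ psi)

tnopt = qtn.TNOptimizer(
    psi,
    loss_fn=loss,
    loss_constants={"ham": ham},
    autodiff_backend="autograd",
)
psi_opt = tnopt.optimize(200)
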
optimize_scipy(n, tol=None, jac=True, hessp=False, **options)[source]

Scipy based optimization, see optimize() for details.

optimize_basinhopping(n, nhop, temperature=1.0, jac=True, hessp=False, **options)[source]

Run the optimizer for using scipy.optimize.basinhopping() as the driver for the vectorized computation. This performs nhop local optimization each with n iterations.

Parameters:
  • n (int) – Number of iterations per local optimization.

  • nhop (int) – Number of local optimizations to hop between.

  • temperature (float, optional) – The ‘temperature’ used in the basin hopping accept/reject criterion, passed as T to scipy.optimize.basinhopping().

  • options – Supplied to the inner scipy.optimize.minimize() call.

Returns:

tn_opt

Return type:

TensorNetwork

optimize_nlopt(n, tol=None, jac=True, hessp=False, ftol_rel=None, ftol_abs=None, xtol_rel=None, xtol_abs=None)[source]

Run the optimizer for n function evaluations, using nlopt as the backend library to run the optimization. Whether the gradient is computed depends on which optimizer is selected, see valid options at https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/.

The following scipy optimizer options are automatically translated to the corresponding nlopt algorithms: {“l-bfgs-b”, “slsqp”, “tnc”, “cobyla”}.

Parameters:
  • n (int) – The maximum number of iterations for the optimizer.

  • tol (None or float, optional) – Tolerance for convergence, here this is taken to be the relative tolerance for the loss (ftol_rel below overrides this).

  • jac (bool, optional) – Whether to supply the jacobian, i.e. gradient, of the loss function.

  • hessp (bool, optional) – Whether to supply the hessian vector product of the loss function.

  • ftol_rel (float, optional) – Set relative tolerance on function value.

  • ftol_abs (float, optional) – Set absolute tolerance on function value.

  • xtol_rel (float, optional) – Set relative tolerance on optimization parameters.

  • xtol_abs (float, optional) – Set absolute tolerances on optimization parameters.

Returns:

tn_opt

Return type:

TensorNetwork

optimize_ipopt(n, tol=None, **options)[source]

Run the optimizer for n function evaluations, using ipopt as the backend library to run the optimization via the python package cyipopt.

Parameters:

n (int) – The maximum number of iterations for the optimizer.

Returns:

tn_opt

Return type:

TensorNetwork

optimize_nevergrad(n)[source]

Run the optimizer for n function evaluations, using nevergrad as the backend library to run the optimization. As the name suggests, the gradient is not required for this method.

Parameters:

n (int) – The maximum number of iterations for the optimizer.

Returns:

tn_opt

Return type:

TensorNetwork

plot(xscale='symlog', xscale_linthresh=20, zoom='auto', hlines=())[source]

Plot the loss function as a function of the number of iterations.

Parameters:
  • xscale (str, optional) – The scale of the x-axis. Default is "symlog", i.e. linear for the first part of the plot, and logarithmic for the rest, changing at xscale_linthresh.

  • xscale_linthresh (int, optional) – The threshold for the change from linear to logarithmic scale, if xscale is "symlog". Default is 20.

  • zoom (None or int, optional) – If not None, show an inset plot of the last zoom iterations.

  • hlines (dict, optional) – A dictionary of horizontal lines to plot. The keys are the labels of the lines, and the values are the y-values of the lines.

Returns:

  • fig (matplotlib.figure.Figure) – The figure object.

  • ax (matplotlib.axes.Axes) – The axes object.

class quimb.tensor.Dense1D(array, phys_dim=2, tags=None, site_ind_id='k{}', site_tag_id='I{}', **tn_opts)[source]

Bases: TensorNetwork1DVector

Mimics other 1D tensor network structures, but really just keeps the full state in a single tensor. This allows e.g. applying gates in the same way for quantum circuit simulation as lazily represented hilbert spaces.

Parameters:
  • array (array_like) – The full hilbert space vector - assumed to be made of equal hilbert spaces each of size phys_dim and will be reshaped as such.

  • phys_dim (int, optional) – The hilbert space size of each site, default: 2.

  • tags (sequence of str, optional) – Extra tags to add to the tensor network.

  • site_ind_id (str, optional) – String formatter describing how to label the site indices.

  • site_tag_id (str, optional) – String formatter describing how to label the site tags.

  • tn_opts – Supplied to TensorNetwork.

_EXTRA_PROPS = ('_site_ind_id', '_site_tag_id', '_L')
_L
_site_ind_id
_site_tag_id
classmethod rand(n, phys_dim=2, dtype=float, **dense1d_opts)[source]

Create a random dense vector ‘tensor network’.

class quimb.tensor.MatrixProductOperator(arrays, *, sites=None, L=None, shape='lrud', tags=None, upper_ind_id='k{}', lower_ind_id='b{}', site_tag_id='I{}', **tn_opts)[source]

Bases: TensorNetwork1DOperator, TensorNetwork1DFlat

Initialise a matrix product operator, with auto labelling and tagging.

Parameters:
  • arrays (sequence of arrays) – The tensor arrays to form into a MPO.

  • sites (sequence of int, optional) – Construct the MPO on these sites only. If not given, enumerate from zero. Should be monotonically increasing and match arrays.

  • L (int, optional) – The number of sites the MPO should be defined on. If not given, this is taken as the max sites value plus one (i.e. the number of arrays if sites is not given).

  • shape (str, optional) – String specifying layout of the tensors. E.g. ‘lrud’ (the default) indicates the shape corresponds left-bond, right-bond, ‘up’ physical index, ‘down’ physical index. End tensors have either ‘l’ or ‘r’ dropped from the string.

  • tags (str or sequence of str, optional) – Global tags to attach to all tensors.

  • upper_ind_id (str) – A string specifying how to label the upper physical site indices. Should contain a '{}' placeholder. It is used to generate the actual indices like: map(upper_ind_id.format, range(len(arrays))).

  • lower_ind_id (str) – A string specifying how to label the lower physical site indices. Should contain a '{}' placeholder. It is used to generate the actual indices like: map(lower_ind_id.format, range(len(arrays))).

  • site_tag_id (str) – A string specifying how to tag the tensors at each site. Should contain a '{}' placeholder. It is used to generate the actual tags like: map(site_tag_id.format, range(len(arrays))).

_EXTRA_PROPS = ('_site_tag_id', '_upper_ind_id', '_lower_ind_id', 'cyclic', '_L')
_L
_upper_ind_id
_lower_ind_id
_site_tag_id
cyclic
classmethod from_fill_fn(fill_fn, L, bond_dim, phys_dim=2, sites=None, cyclic=False, shape='lrud', tags=None, upper_ind_id='k{}', lower_ind_id='b{}', site_tag_id='I{}')[source]

Create an MPO by supplying a ‘filling’ function to generate the data for each site.

Parameters:
  • fill_fn (callable) – A function with signature fill_fn(shape : tuple[int]) -> array_like.

  • L (int) – The number of sites.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int or Sequence[int], optional) – The physical dimension(s) of each site, if a sequence it will be cycled over.

  • sites (None or sequence of int, optional) – Construct the MPO on these sites only. If not given, enumerate from zero.

  • cyclic (bool, optional) – Whether the MPO should be cyclic (periodic).

  • shape (str, optional) – String specifying layout of the tensors. E.g. ‘lrud’ (the default) indicates the shape corresponds left-bond, right-bond, ‘up’ physical index, ‘down’ physical index. End tensors have either ‘l’ or ‘r’ dropped from the string.

  • tags (str or sequence of str, optional) – Global tags to attach to all tensors.

  • upper_ind_id (str) – A string specifying how to label the upper physical site indices. Should contain a '{}' placeholder.

  • lower_ind_id (str) – A string specifying how to label the lower physical site indices. Should contain a '{}' placeholder.

  • site_tag_id (str, optional) – How to tag the physical sites. Should contain a '{}' placeholder.

Return type:

MatrixProductOperator

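A small sketch of building a random MPO this way:

import numpy as np
import quimb.tensor as qtn

mpo = qtn.MatrixProductOperator.from_fill_fn(
    lambda shape: np.random.normal(size=shape),  # data for each tensor
    L=10, bond_dim=4, phys_dim=2,
)
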
classmethod from_dense(A, dims=2, sites=None, L=None, tags=None, site_tag_id='I{}', upper_ind_id='k{}', lower_ind_id='b{}', **split_opts)[source]

Build an MPO from a raw dense matrix.

Parameters:
  • A (array) – The dense operator, it should be reshapeable to (*dims, *dims).

  • dims (int, sequence of int, optional) – The physical subdimensions of the operator. If any integer, assume all sites have the same dimension. If a sequence, the dimension of each site. Default is 2.

  • sites (sequence of int, optional) – The sites to place the operator on. If None, will place it on first len(dims) sites.

  • L (int, optional) – The total number of sites in the MPO, if the operator represents only a subset.

  • tags (str or sequence of str, optional) – Global tags to attach to all tensors.

  • site_tag_id (str, optional) – The string to use to label the site tags.

  • upper_ind_id (str, optional) – The string to use to label the upper physical indices.

  • lower_ind_id (str, optional) – The string to use to label the lower physical indices.

  • split_opts – Supplied to tensor_split().

Return type:

MatrixProductOperator

fill_empty_sites(mode='full', phys_dim=None, fill_array=None, inplace=False)[source]

Fill any empty sites of this MPO with identity tensors, adding size 1 bonds or draping existing bonds where necessary such that the resulting tensor network has nearest neighbor bonds only.

Parameters:
  • mode ({'full', 'minimal'}, optional) – Whether to fill in all sites, including at either end, or simply the minimal range covering the min to max current sites present.

  • phys_dim (int, optional) – The physical dimension of the identity tensors to add. If not specified, will use the upper physical dimension of the first present site.

  • fill_array (array, optional) – The array to use for the identity tensors. If not specified, will use the identity array of the same dtype as the first present site.

  • inplace (bool, optional) – Whether to perform the operation inplace.

Returns:

The modified MPO.

Return type:

MatrixProductOperator

fill_empty_sites_[source]
add_MPO(other, inplace=False, **kwargs)[source]
add_MPO_[source]
_apply_mps(other, compress=False, contract=True, **compress_opts)[source]
_apply_mpo(other, compress=False, contract=True, **compress_opts)[source]
apply(other, compress=False, **compress_opts)[source]

Act with this MPO on another MPO or MPS, such that the resulting object has the same tensor network structure/indices as other.

For an MPS:

       | | | | | | | | | | | | | | | | | |
 self: A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A
       | | | | | | | | | | | | | | | | | |
other: x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x

                       -->

       | | | | | | | | | | | | | | | | | |   <- other.site_ind_id
  out: y=y=y=y=y=y=y=y=y=y=y=y=y=y=y=y=y=y

For an MPO:

       | | | | | | | | | | | | | | | | | |
 self: A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A
       | | | | | | | | | | | | | | | | | |
other: B-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B
       | | | | | | | | | | | | | | | | | |

                       -->

       | | | | | | | | | | | | | | | | | |   <- other.upper_ind_id
  out: C=C=C=C=C=C=C=C=C=C=C=C=C=C=C=C=C=C
       | | | | | | | | | | | | | | | | | |   <- other.lower_ind_id

The resulting TN will have the same structure/indices as other, but probably with larger bonds (depending on compression).

Parameters:
  • other (MatrixProductOperator or MatrixProductState) – The object to act on.

  • compress (bool, optional) – Whether to compress the resulting object.

  • compress_opts – Supplied to the compression routine, if compress=True.

Return type:

MatrixProductOperator or MatrixProductState

dot[source]
permute_arrays(shape='lrud')[source]

Permute the indices of each tensor in this MPO to match shape. This doesn’t change how the overall object interacts with other tensor networks but may be useful for extracting the underlying arrays consistently. This is an inplace operation.

Parameters:

shape (str, optional) – A permutation of 'lrud' specifying the desired order of the left, right, upper and lower (down) indices respectively.

trace(left_inds=None, right_inds=None)[source]

Take the trace of this MPO.

partial_transpose(sysa, inplace=False)[source]

Perform the partial transpose on this MPO by swapping the bra and ket indices on sites in sysa.

Parameters:
  • sysa (sequence of int or int) – The sites to transpose indices on.

  • inplace (bool, optional) – Whether to perform the partial transposition inplace.

Return type:

MatrixProductOperator

rand_state(bond_dim, **mps_opts)[source]

Get a random vector matching this MPO.

identity(**mpo_opts)[source]

Get an identity matching this MPO.

show(max_width=None)[source]
class quimb.tensor.MatrixProductState(arrays, *, sites=None, L=None, shape='lrp', tags=None, site_ind_id='k{}', site_tag_id='I{}', **tn_opts)[source]

Bases: TensorNetwork1DVector, TensorNetwork1DFlat

Initialise a matrix product state, with auto labelling and tagging.

Parameters:
  • arrays (sequence of arrays) – The tensor arrays to form into a MPS.

  • sites (sequence of int, optional) – Construct the MPS on these sites only. If not given, enumerate from zero. Should be monotonically increasing and match arrays.

  • L (int, optional) – The number of sites the MPS should be defined on. If not given, this is taken as the max sites value plus one (i.e. the number of arrays if sites is not given).

  • shape (str, optional) – String specifying layout of the tensors. E.g. ‘lrp’ (the default) indicates the shape corresponds left-bond, right-bond, physical index. End tensors have either ‘l’ or ‘r’ dropped from the string.

  • tags (str or sequence of str, optional) – Global tags to attach to all tensors.

  • site_ind_id (str) – A string specifying how to label the physical site indices. Should contain a '{}' placeholder. It is used to generate the actual indices like: map(site_ind_id.format, range(len(arrays))).

  • site_tag_id (str) – A string specifying how to tag the tensors at each site. Should contain a '{}' placeholder. It is used to generate the actual tags like: map(site_tag_id.format, range(len(arrays))).

_EXTRA_PROPS = ('_site_tag_id', '_site_ind_id', 'cyclic', '_L')
_L
_site_ind_id
_site_tag_id
cyclic
classmethod from_fill_fn(fill_fn, L, bond_dim, phys_dim=2, sites=None, cyclic=False, shape='lrp', site_ind_id='k{}', site_tag_id='I{}', tags=None)[source]

Create an MPS by supplying a ‘filling’ function to generate the data for each site.

Parameters:
  • fill_fn (callable) – A function with signature fill_fn(shape : tuple[int]) -> array_like.

  • L (int) – The number of sites.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int or Sequence[int], optional) – The physical dimension(s) of each site, if a sequence it will be cycled over.

  • sites (None or sequence of int, optional) – Construct the MPS on these sites only. If not given, enumerate from zero.

  • cyclic (bool, optional) – Whether the MPS should be cyclic (periodic).

  • shape (str, optional) – What specific order to layout the indices in, should be a sequence of 'l', 'r', and 'p', corresponding to left, right, and physical indices respectively.

  • site_ind_id (str, optional) – How to label the physical site indices.

  • site_tag_id (str, optional) – How to tag the physical sites.

  • tags (str or sequence of str, optional) – Global tags to attach to all tensors.

Return type:

MatrixProductState

classmethod from_dense(psi, dims=2, tags=None, site_ind_id='k{}', site_tag_id='I{}', **split_opts)[source]

Create a MatrixProductState directly from a dense vector

Parameters:
  • psi (array_like) – The dense state to convert to MPS from.

  • dims (int or sequence of int) – Physical subsystem dimensions of each site. If a single int, all sites have this same dimension. Default: 2.

  • tags (str or sequence of str, optional) – Global tags to attach to all tensors.

  • site_ind_id (str, optional) – How to index the physical sites, see MatrixProductState.

  • site_tag_id (str, optional) – How to tag the physical sites, see MatrixProductState.

  • split_opts – Supplied to tensor_split() in order to partition the dense vector into tensors. absorb='left' is set by default, to ensure the compression is canonical / optimal.

Return type:

MatrixProductState

Examples

>>> dims = [2, 2, 2, 2, 2, 2]
>>> psi = rand_ket(prod(dims))
>>> mps = MatrixProductState.from_dense(psi, dims)
>>> mps.show()
 2 4 8 4 2
o-o-o-o-o-o
| | | | | |
add_MPS(other, inplace=False, **kwargs)[source]

Add another MatrixProductState to this one.

add_MPS_[source]
permute_arrays(shape='lrp')[source]

Permute the indices of each tensor in this MPS to match shape. This doesn’t change how the overall object interacts with other tensor networks but may be useful for extracting the underlying arrays consistently. This is an inplace operation.

Parameters:

shape (str, optional) – A permutation of 'lrp' specifying the desired order of the left, right, and physical indices respectively.

normalize(bra=None, eps=1e-15, insert=None)[source]

Normalize this MPS, optionally with co-vector bra. For periodic MPS this uses transfer matrix SVD approximation with precision eps in order to be efficient. Inplace.

Parameters:
  • bra (MatrixProductState, optional) – If given, normalize this MPS with the same factor.

  • eps (float, optional) – If cyclic, the precision with which to approximate the transfer matrix. Default: 1e-15.

  • insert (int, optional) – Insert the corrective normalization on this site, random if not given.

Returns:

old_norm – The old norm self.H @ self.

Return type:

float

gate_split(G, where, inplace=False, **compress_opts)[source]

Apply a two-site gate and then split resulting tensor to retrieve a MPS form:

-o-o-A-B-o-o-
 | | | | | |            -o-o-GGG-o-o-           -o-o-X~Y-o-o-
 | | GGG | |     ==>     | | | | | |     ==>     | | | | | |
 | | | | | |                 i j                     i j
     i j

As might be found in TEBD.

Parameters:
  • G (array) – The gate, with shape (d**2, d**2) for physical dimension d.

  • where ((int, int)) – Indices of the sites to apply the gate to.

  • compress_opts – Supplied to tensor_split().

See also

gate, gate_with_auto_swap

gate_split_[source]
swap_sites_with_compress(i, j, info=None, inplace=False, **compress_opts)[source]

Swap sites i and j by contracting, then splitting with the physical indices swapped. If the sites are not adjacent, this will happen multiple times.

Parameters:
  • i (int) – The first site to swap.

  • j (int) – The second site to swap.

  • cur_orthog (int, sequence of int, or 'calc') – If known, the current orthogonality center.

  • info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be "calc", a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.

  • inplace (bool, optional) – Perform the swaps inplace.

  • compress_opts – Supplied to tensor_split().

swap_sites_with_compress_[source]
swap_site_to(i, f, info=None, inplace=False, **compress_opts)[source]

Swap site i to site f, compressing the bond after each swap:

      i       f
0 1 2 3 4 5 6 7 8 9      0 1 2 4 5 6 7 3 8 9
o-o-o-x-o-o-o-o-o-o      >->->->->->->-x-<-<
| | | | | | | | | |  ->  | | | | | | | | | |
Parameters:
  • i (int) – The site to move.

  • f (int) – The new location for site i.

  • info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be "calc", a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.

  • inplace (bool, optional) – Perform the swaps inplace.

  • compress_opts – Supplied to tensor_split().

swap_site_to_[source]
gate_with_auto_swap(G, where, info=None, swap_back=True, inplace=False, **compress_opts)[source]

Perform a two site gate on this MPS by, if necessary, swapping and compressing the sites until they are adjacent, using gate_split, then unswapping the sites back to their original position.

Parameters:
  • G (array) – The gate, with shape (d**2, d**2) for physical dimension d.

  • where ((int, int)) – Indices of the sites to apply the gate to.

  • info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be "calc", a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.

  • swap_back (bool, optional) – Whether to swap the sites back to their original position after applying the gate. If not, for sites i < j, the site j will remain swapped to i + 1, and sites between i + 1 and j will be shifted one place up.

  • inplace (bool, optional) – Perform the swaps inplace.

  • compress_opts – Supplied to tensor_split().

See also

gate, gate_split

gate_with_auto_swap_[source]
gate_with_submpo(submpo, where=None, method='direct', transpose=False, info=None, inplace=False, inplace_mpo=False, **compress_opts)[source]

Apply an MPO, which only acts on a subset of sites, to this MPS, compressing the MPS with the MPO only on the minimal set of sites covering where, keeping the MPS form:

    │   │ │
    A───A─A
    │   │ │         ->    │ │ │ │ │ │ │ │
                          >─>─O━O━O━O─<─<
│ │ │ │ │ │ │ │
o─o─o─o─o─o─o─o
Parameters:
  • submpo (MatrixProductOperator) – The MPO to apply.

  • where (sequence of int, optional) – The range of sites the MPO acts on, will be inferred from the support of the MPO if not given.

  • method ({'direct', 'dm', 'zipup', 'zipup-first', 'fit'}, optional) – The compression method to use.

  • transpose (bool, optional) – Whether to transpose the MPO before applying it. By default the lower inds of the MPO are contracted with the MPS, if transposed the upper inds are contracted.

  • info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be "calc", a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.

  • inplace (bool, optional) – Whether to perform the application and compression inplace.

  • compress_opts – Supplied to tensor_network_1d_compress().

Return type:

MatrixProductState

gate_with_submpo_[source]
gate_with_mpo(mpo, method='direct', transpose=False, inplace=False, inplace_mpo=False, **compress_opts)[source]

Gate this MPS with an MPO and compress the result with one of various methods back to MPS form:

│ │ │ │ │ │ │ │
A─A─A─A─A─A─A─A
│ │ │ │ │ │ │ │     ->    │ │ │ │ │ │ │ │
                          O━O━O━O━O━O━O━O
│ │ │ │ │ │ │ │
o─o─o─o─o─o─o─o
Parameters:
  • mpo (MatrixProductOperator) – The MPO to apply.

  • max_bond (int, optional) – A maximum bond dimension to keep when compressing.

  • cutoff (float, optional) – A singular value cutoff to use when compressing.

  • method ({'direct', 'dm', 'zipup', 'zipup-first', 'fit', ...}, optional) – The compression method to use.

  • transpose (bool, optional) – Whether to transpose the MPO before applying it. By default the lower inds of the MPO are contracted with the MPS, if transposed the upper inds are contracted.

  • inplace (bool, optional) – Whether to perform the compression inplace.

  • inplace_mpo (bool, optional) – Whether to modify the MPO in place, a minor performance gain.

  • compress_opts – Other options supplied to tensor_network_1d_compress().

Return type:

MatrixProductState

gate_with_mpo_[source]
gate_nonlocal(G, where, dims=None, method='direct', info=None, inplace=False, **compress_opts)[source]

Apply a potentially non-local gate to this MPS by first decomposing it into an MPO, then compressing the MPS with MPO only on the minimal set of sites covering where.

Parameters:
  • G (array_like) – The gate to apply.

  • where (sequence of int) – The sites to apply the gate to.

  • max_bond (int, optional) – A maximum bond dimension to keep when compressing.

  • cutoff (float, optional) – A singular value cutoff to use when compressing.

  • dims (sequence of int, optional) – The factorized dimensions of the gate G, which should match the physical dimensions of the sites it acts on. Calculated if not supplied. If a single int, all sites are assumed to have this same dimension.

  • method ({'direct', 'dm', 'zipup', 'zipup-first', 'fit', ...}, optional) – The compression method to use.

  • info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be "calc", a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.

  • inplace (bool, optional) – Whether to perform the compression inplace.

  • compress_opts – Supplied to tensor_network_1d_compress().

Return type:

MatrixProductState

gate_nonlocal_[source]
flip(inplace=False)[source]

Reverse the order of the sites in the MPS, such that site i is now at site L - i - 1.

magnetization(i, direction='Z', info=None)[source]

Compute the magnetization at site i.

schmidt_values(i, info=None, method='svd')[source]

Find the schmidt values associated with the bipartition of this MPS between the sites on either side of i. In other words, i is the number of sites in the left hand partition:

....L....   i
o-o-o-o-o-S-o-o-o-o-o-o-o-o-o-o-o
| | | | |   | | | | | | | | | | |
       i-1  ..........R..........

The schmidt values, S, are the singular values associated with the (i - 1, i) bond, squared, provided the MPS is mixed canonized at one of those sites.

Parameters:
  • i (int) – The number of sites in the left partition.

  • info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.

Returns:

S – The schmidt values.

Return type:

1d-array

entropy(i, info=None, method='svd')[source]

The entropy of bipartition between the left block of i sites and the rest.

Parameters:
  • i (int) – The number of sites in the left partition.

  • info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.

Return type:

float

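For example, for the half-chain bipartition (a hedged sketch assuming mps is a MatrixProductState):

i = mps.L // 2
svals = mps.schmidt_values(i)  # singular values squared across the cut
S = mps.entropy(i)             # corresponding entanglement entropy
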
schmidt_gap(i, info=None, method='svd')[source]

The schmidt gap of bipartition between the left block of i sites and the rest.

Parameters:
  • i (int) – The number of sites in the left partition.

  • info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.

Return type:

float

partial_trace_to_mpo(keep, upper_ind_id='b{}', rescale_sites=True)[source]

Partially trace this matrix product state, producing a matrix product operator.

Parameters:
  • keep (sequence of int or slice) – Indices of the sites to keep.

  • upper_ind_id (str, optional) – The ind id of the (new) ‘upper’ inds, i.e. the ‘bra’ inds.

  • rescale_sites (bool, optional) – If True (the default), then the kept sites will be rescaled to (0, 1, 2, ...) etc. rather than keeping their original site numbers.

Returns:

rho – The density operator in MPO form.

Return type:

MatrixProductOperator
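
A minimal sketch of forming a reduced density operator as an MPO, assuming a random MPS as input:

>>> from quimb.tensor import MPS_rand_state
>>> psi = MPS_rand_state(12, bond_dim=6)
>>> # keep sites 3..6 and trace out the rest; with rescale_sites=True
>>> # (the default) the kept sites are relabelled 0..3
>>> rho = psi.partial_trace_to_mpo(keep=range(3, 7))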

partial_trace(*_, **__)[source]

Partially trace this tensor network state, keeping only the sites in keep, using compressed contraction.

Parameters:
  • keep (iterable of hashable) – The sites to keep.

  • max_bond (int) – The maximum bond dimension to use while compressed contracting.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, should specifically generate contractions paths designed for compressed contraction.

  • flatten ({False, True, 'all'}, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction; whilst this makes the TN generally more complex to contract, the accuracy is usually improved. If 'all', also flatten the tensors in keep.

  • reduce (bool, optional) – Whether to first ‘pull’ the physical indices off their respective tensors using QR reduction. Experimental.

  • normalized (bool, optional) – Whether to normalize the reduced density matrix at the end.

  • symmetrized ({'auto', True, False}, optional) – Whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computation or not:

    - False: perform the computation.
    - 'tn': return the tensor network without running the path
      optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree``.
    - True: run the path optimizer and return the ``PathInfo``.
    

  • contract_compressed_opts (dict, optional) – Additional keyword arguments to pass to contract_compressed().

Returns:

rho – The reduced density matrix of the sites in keep.

Return type:

array_like

ptr(*_, **__)[source]
partial_trace_to_dense_canonical(where, normalized=True, info=None, **contract_opts)[source]

Compute the dense local reduced density matrix by canonicalizing around the target sites and then contracting the local tensors. Note this moves the orthogonality around inplace, and records it in info.

Parameters:
  • where (int or tuple[int]) – The site or sites to compute the reduced density matrix for.

  • normalized (bool, optional) – Explicitly normalize the local reduced density matrix.

  • info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center. Its input value can be "calc", a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.

  • contract_opts – Passed to tensor_contract when computing the reduced local density matrix.

Return type:

array_like

local_expectation_canonical(G, where, normalized=True, info=None, **contract_opts)[source]

Compute a local expectation value (via forming the reduced density matrix). Note this moves the orthogonality around inplace, and records it in info.

Parameters:
  • G (array_like) – The local operator to compute the expectation of.

  • where (int or tuple[int]) – The site or sites to compute the expectation at.

  • normalized (bool, optional) – Explicitly normalize the local reduced density matrix.

  • info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center. Its input value can be "calc", a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.

  • contract_opts – Passed to tensor_contract when computing the reduced local density matrix.

Return type:

float

compute_local_expectation_canonical(terms, normalized=True, return_all=False, info=None, inplace=False, **contract_opts)[source]

Compute many local expectations at once, via forming the relevant reduced density matrices via canonicalization. This moves the orthogonality around inplace, and records it in info.

Parameters:
  • terms (dict[int or tuple[int], array_like]) – The local terms to compute values for.

  • normalized (bool, optional) – Explicitly normalize each local reduced density matrix.

  • return_all (bool, optional) – Whether to return each expectation in terms separately or sum them all together (the default).

  • info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be "calc", a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.

  • inplace (bool, optional) – Whether to perform the required canonicalizations inplace.

  • contract_opts – Supplied to contract() when contracting the local density matrices.

Returns:

The expectation value(s), either summed or for each term if return_all=True.

Return type:

float or dict[int or tuple[int], float]

compute_local_expectation_via_envs(terms, normalized=True, return_all=False, **contract_opts)[source]

Compute many local expectations at once, via forming the relevant local overlaps using left and right environments formed via contraction. This does not require any canonicalization and can be quicker if the canonical center is not already aligned.

Parameters:
  • terms (dict[int or tuple[int], array_like]) – The local terms to compute values for.

  • normalized (bool, optional) – Explicitly normalize each local reduced density matrix.

  • return_all (bool, optional) – Whether to return each expectation in terms separately or sum them all together (the default).

  • contract_opts – Supplied to contract() when contracting the local overlaps.

Returns:

The expectation value(s), either summed or for each term if return_all=True.

Return type:

float or dict[int or tuple[int], float]

See also

compute_local_expectation_canonical, compute_left_environments, compute_right_environments

compute_local_expectation(terms, normalized=True, return_all=False, method='canonical', info=None, inplace=False, **contract_opts)[source]

Compute many local expectations at once.

Parameters:
  • terms (dict[int or tuple[int], array_like]) – The local terms to compute values for.

  • normalized (bool, optional) – Explicitly normalize each local term.

  • return_all (bool, optional) – Whether to return each expectation in terms separately or sum them all together (the default).

  • method ({'canonical', 'envs'}, optional) –

    The method to use to compute the local expectations.

    • 'canonical': canonicalize around the sites of interest and contract the local reduced density matrices, moving the canonical center around as needed.

    • 'envs': form the local overlaps using left and right environments and contract these directly. This can be quicker if the canonical center is not already aligned.

  • info (dict, optional) – If supplied, and method==”canonical”, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center. Its input value can be "calc", a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.

  • inplace (bool, optional) – If method==”canonical”, whether to perform the required canonicalizations inplace or on a copy of the state.

  • contract_opts – Supplied to contract() when contracting the local overlaps or density matrices.

Returns:

The expectation value(s), either summed or for each term if return_all=True.

Return type:

float or dict[int or tuple[int], float]
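
For example, a sketch gathering single-site Z expectations on every site in one call (assuming pauli from quimb for the local operators):

>>> import quimb as qu
>>> from quimb.tensor import MPS_rand_state
>>> psi = MPS_rand_state(10, bond_dim=8)
>>> terms = {i: qu.pauli('Z') for i in range(10)}
>>> # dict mapping each site to <Z_i>, using the canonicalization based method
>>> zs = psi.compute_local_expectation(terms, method='canonical', return_all=True)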

bipartite_schmidt_state(sz_a, get='ket', info=None)[source]

Compute the reduced state for a bipartition of an OBC MPS, in terms of the minimal left/right schmidt basis:

     A            B
 .........     ...........
 >->->->->--s--<-<-<-<-<-<    ->   +-s-+
 | | | | |     | | | | | |         |   |
k0 k1...                          kA   kB

Parameters:
  • sz_a (int) – The number of sites in subsystem A, must be 0 < sz_a < N.

  • get ({'ket', 'rho', 'ket-dense', 'rho-dense'}, optional) –

    Get the:

    • 'ket': vector form as tensor.

    • 'rho': density operator form, i.e. vector outer product.

    • 'ket-dense': like 'ket' but return qarray.

    • 'rho-dense': like 'rho' but return qarray.

  • info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.

static _do_lateral_compress(mps, kb, section, leave_short, ul, ll, heps, hmethod, hmax_bond, verbosity, compressed, **compress_opts)[source]
static _do_vertical_decomp(mps, kb, section, sysa, sysb, compressed, ul, ur, ll, lr, vmethod, vmax_bond, veps, verbosity, **compress_opts)[source]
partial_trace_compress(sysa, sysb, eps=1e-08, method=('isvd', None), max_bond=(None, 1024), leave_short=True, renorm=True, lower_ind_id='b{}', verbosity=0, **compress_opts)[source]

Perform a compressed partial trace using singular value lateral then vertical decompositions of transfer matrix products:

        .....sysa......     ...sysb....
o-o-o-o-A-A-A-A-A-A-A-A-o-o-B-B-B-B-B-B-o-o-o-o-o-o-o-o-o
| | | | | | | | | | | | | | | | | | | | | | | | | | | | |

                          ==> form inner product

        ...............     ...........
o-o-o-o-A-A-A-A-A-A-A-A-o-o-B-B-B-B-B-B-o-o-o-o-o-o-o-o-o
| | | | | | | | | | | | | | | | | | | | | | | | | | | | |
o-o-o-o-A-A-A-A-A-A-A-A-o-o-B-B-B-B-B-B-o-o-o-o-o-o-o-o-o

                          ==> lateral SVD on each section

          .....sysa......     ...sysb....
          /\             /\   /\         /\
  ... ~~~E  A~~~~~~~~~~~A  E~E  B~~~~~~~B  E~~~ ...
          \/             \/   \/         \/

                          ==> vertical SVD and unfold on A & B

                  |                 |
          /-------A-------\   /-----B-----\
  ... ~~~E                 E~E             E~~~ ...
          \-------A-------/   \-----B-----/
                  |                 |

With various special cases including OBC or end spins included in subsystems.

Parameters:
  • sysa (sequence of int) – The sites, which should be contiguous, defining subsystem A.

  • sysb (sequence of int) – The sites, which should be contiguous, defining subsystem B.

  • eps (float or (float, float), optional) – Tolerance(s) to use when compressing the subsystem transfer matrices and vertically decomposing.

  • method (str or (str, str), optional) – Method(s) to use for laterally compressing the state then vertically compressing subsystems.

  • max_bond (int or (int, int), optional) – The maximum bond to keep for laterally compressing the state then vertically compressing subsystems.

  • leave_short (bool, optional) – If True (the default), don’t try to compress short sections.

  • renorm (bool, optional) – If True (the default), renormalize the state so that tr(rho)==1.

  • lower_ind_id (str, optional) – The index id to create for the new density matrix, the upper_ind_id is automatically taken as the current site_ind_id.

  • compress_opts (dict, optional) – If given, supplied to partial_trace_compress to govern how singular values are treated. See tensor_split.

  • verbosity ({0, 1}, optional) – How much information to print while performing the compressed partial trace.

Returns:

rho_ab – Density matrix tensor network with outer_inds = ('k0', 'k1', 'b0', 'b1') for example.

Return type:

TensorNetwork

logneg_subsys(sysa, sysb, compress_opts=None, approx_spectral_opts=None, verbosity=0, approx_thresh=2**12)[source]

Compute the logarithmic negativity between subsystem blocks, e.g.:

                   sysa         sysb
                 .........       .....
... -o-o-o-o-o-o-A-A-A-A-A-o-o-o-B-B-B-o-o-o-o-o-o-o- ...
     | | | | | | | | | | | | | | | | | | | | | | | |

Parameters:
  • sysa (sequence of int) – The sites, which should be contiguous, defining subsystem A.

  • sysb (sequence of int) – The sites, which should be contiguous, defining subsystem B.

  • eps (float, optional) – Tolerance to use when compressing the subsystem transfer matrices.

  • method (str or (str, str), optional) – Method(s) to use for laterally compressing the state then vertically compressing subsystems.

  • compress_opts (dict, optional) – If given, supplied to partial_trace_compress to govern how singular values are treated. See tensor_split.

  • approx_spectral_opts – Supplied to approx_spectral_function().

Returns:

ln – The logarithmic negativity.

Return type:

float

See also

MatrixProductState.partial_trace_compress, approx_spectral_function
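
A rough usage sketch on two contiguous blocks of a random MPS, leaving the compression and spectral approximation options at their defaults:

>>> from quimb.tensor import MPS_rand_state
>>> psi = MPS_rand_state(30, bond_dim=16)
>>> # logarithmic negativity between sites 5-9 and sites 15-19
>>> ln = psi.logneg_subsys(sysa=range(5, 10), sysb=range(15, 20))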

measure(site, remove=False, outcome=None, renorm=True, info=None, get=None, inplace=False)[source]

Measure this MPS at site, including projecting the state. Optionally remove the site afterwards, yielding an MPS with one less site. In either case the orthogonality center of the returned MPS is min(site, new_L - 1).

Parameters:
  • site (int) – The site to measure.

  • remove (bool, optional) –

    Whether to remove the site completely after projecting the measurement. If True, sites greater than site will be retagged and reindexed one down, and the MPS will have one less site. E.g.:

    0-1-2-3-4-5-6
           / / /  - measure and remove site 3
    0-1-2-4-5-6
                  - reindex sites (4, 5, 6) to (3, 4, 5)
    0-1-2-3-4-5
    

  • outcome (None or int, optional) – Specify the desired outcome of the measurement. If None, it will be randomly sampled according to the local density matrix.

  • renorm (bool, optional) – Whether to renormalize the state post measurement.

  • info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.

  • get ({None, 'outcome'}, optional) – If 'outcome', simply return the outcome, and don’t perform any projection.

  • inplace (bool, optional) – Whether to perform the measurement in place or not.

Returns:

  • outcome (int) – The measurement outcome, drawn from range(phys_dim).

  • psi (MatrixProductState) – The measured state, if get != 'outcome'.

measure_[source]
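
A minimal sketch of projectively measuring and removing a single site; per the Returns section above, the call gives the outcome together with the post-measurement state:

>>> from quimb.tensor import MPS_rand_state
>>> psi = MPS_rand_state(8, bond_dim=4)
>>> # measure site 3 and remove it, leaving a 7-site MPS
>>> outcome, psi_m = psi.measure(3, remove=True)
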
sample_configuration(seed=None, info=None)[source]

Sample a configuration from this MPS.

Parameters:
  • seed (None, int, or np.random.Generator, optional) – A random seed or generator to use.

  • info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.

sample(C, seed=None, info=None)[source]

Generate C samples from this MPS, along with their probabilities.

Parameters:
  • C (int) – The number of samples to generate.

  • seed (None, int, or np.random.Generator, optional) – A random seed or generator to use.

  • info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.

Yields:
  • config (sequence of int) – The sample configuration.

  • omega (float) – The probability of this configuration.
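
As a usage sketch, drawing a few configurations and their probabilities from a random (normalized) MPS:

>>> from quimb.tensor import MPS_rand_state
>>> psi = MPS_rand_state(10, bond_dim=8)
>>> # generate 4 samples, each yielded as (configuration, probability)
>>> for config, omega in psi.sample(4, seed=42):
...     print(config, omega)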

class quimb.tensor.SuperOperator1D(arrays, shape='lrkud', site_tag_id='I{}', outer_upper_ind_id='kn{}', inner_upper_ind_id='k{}', inner_lower_ind_id='b{}', outer_lower_ind_id='bn{}', tags=None, tags_upper=None, tags_lower=None, **tn_opts)[source]

Bases: TensorNetwork1D

A 1D tensor network super-operator class:

0   1   2       n-1
|   |   |        |     <-- outer_upper_ind_id
O===O===O==     =O
|\  |\  |\       |\     <-- inner_upper_ind_id
  )   )   ) ...    )   <-- K (size of local Kraus sum)
|/  |/  |/       |/     <-- inner_lower_ind_id
O===O===O==     =O
|   | : |        |     <-- outer_lower_ind_id
      :
     chi (size of entangling bond dim)

Parameters:

arrays (sequence of arrays) – The data arrays defining the superoperator; this should be a sequence of 2n arrays, such that the first two correspond to the upper and lower operators acting on site 0 etc. The arrays should be 5-dimensional unless OBC conditions are desired, in which case the first two and last two should be 4-dimensional. The dimensions of each array should match the shape option.

_EXTRA_PROPS = ('_site_tag_id', '_outer_upper_ind_id', '_inner_upper_ind_id', '_inner_lower_ind_id',...
_L
_outer_upper_ind_id
_inner_upper_ind_id
_inner_lower_ind_id
_outer_lower_ind_id
_site_tag_id
cyclic
classmethod rand(n, K, chi, phys_dim=2, herm=True, cyclic=False, dtype=complex, **superop_opts)[source]
property outer_upper_ind_id
property inner_upper_ind_id
property inner_lower_ind_id
property outer_lower_ind_id
class quimb.tensor.TensorNetwork1D(ts=(), *, virtual=False, check_collisions=True)[source]

Bases: quimb.tensor.tensor_arbgeom.TensorNetworkGen

Base class for tensor networks with a one-dimensional structure.

_NDIMS = 1
_EXTRA_PROPS = ('_site_tag_id', '_L')
_CONTRACT_STRUCTURED = True
_compatible_1d(other)[source]

Check whether self and other are compatible 1D tensor networks such that they can remain a 1D tensor network when combined.

combine(other, *, virtual=False, check_collisions=True)[source]

Combine this tensor network with another, returning a new tensor network. If the two are compatible, cast the resulting tensor network to a TensorNetwork1D instance.

Parameters:
  • other (TensorNetwork1D or TensorNetwork) – The other tensor network to combine with.

  • virtual (bool, optional) – Whether the new tensor network should copy all the incoming tensors (False, the default), or view them as virtual (True).

  • check_collisions (bool, optional) – Whether to check for index collisions between the two tensor networks before combining them. If True (the default), any inner indices that clash will be mangled.

Return type:

TensorNetwork1D or TensorNetwork

property L

The number of sites, i.e. length.

property nsites

The number of sites.

gen_site_coos()[source]

Generate the coordinates of all possible sites.

site_tag(i)[source]

The name of the tag specifying the tensor at site i.

slice2sites(tag_slice)[source]

Take a slice object, and work out its implied start, stop and step, taking into account cyclic boundary conditions.

Examples

Normal slicing:

>>> p = MPS_rand_state(10, bond_dim=7)
>>> p.slice2sites(slice(5))
(0, 1, 2, 3, 4)
>>> p.slice2sites(slice(4, 8))
(4, 5, 6, 7)

Slicing from end backwards:

>>> p.slice2sites(slice(..., -3, -1))
(9, 8)

Slicing round the end:

>>> p.slice2sites(slice(7, 12))
(7, 8, 9, 0, 1)
>>> p.slice2sites(slice(-3, 2))
(7, 8, 9, 0, 1)

If the start point is > end point (before modulo n), then step needs to be negative to return anything.

maybe_convert_coo(x)[source]

Check if x is an integer and convert to the corresponding site tag if so.

contract_structured(tag_slice, structure_bsz=5, inplace=False, **opts)[source]

Perform a structured contraction, translating tag_slice from a slice or ``...`` to a cumulative sequence of tags.

Parameters:
  • tag_slice (slice or ...) – The range of sites, or ``...`` for all.

  • inplace (bool, optional) – Whether to perform the contraction inplace.

Returns:

The result of the contraction, still a TensorNetwork if the contraction was only partial.

Return type:

TensorNetwork, Tensor or scalar

See also

contract, contract_tags, contract_cumulative

compute_left_environments(**contract_opts)[source]

Compute the left environments of this 1D tensor network.

Parameters:

contract_opts – Supplied to contract().

Returns:

Environments indexed by the site they are to the left of, so keys run from (1, … L - 1).

Return type:

dict[int, Tensor]

compute_right_environments(**contract_opts)[source]

Compute the right environments of this 1D tensor network.

Parameters:

contract_opts – Supplied to contract().

Returns:

Environments indexed by the site they are to the right of, so keys run from (0, … L - 2).

Return type:

dict[int, Tensor]

_repr_info()[source]

General info to show in various reprs. Subclasses can add more relevant info to this dict.

class quimb.tensor.TNLinearOperator1D(tn, left_inds, right_inds, start, stop, ldims=None, rdims=None, is_conj=False, is_trans=False)[source]

Bases: scipy.sparse.linalg.LinearOperator

A 1D tensor network linear operator like:

         start                 stop - 1
           .                     .
         :-O-O-O-O-O-O-O-O-O-O-O-O-:                 --+
         : | | | | | | | | | | | | :                   |
         :-H-H-H-H-H-H-H-H-H-H-H-H-:    acting on    --V
         : | | | | | | | | | | | | :                   |
         :-O-O-O-O-O-O-O-O-O-O-O-O-:                 --+
left_inds^                         ^right_inds

Like TNLinearOperator, but performs a structured contraction from one end to the other, which can handle very long chains possibly more efficiently by contracting in blocks from one end.

Parameters:
  • tn (TensorNetwork) – The tensor network to turn into a LinearOperator.

  • left_inds (sequence of str) – The left indices.

  • right_inds (sequence of str) – The right indices.

  • start (int) – Index of starting site.

  • stop (int) – Index of stopping site (does not include this site).

  • ldims (tuple of int, optional) – If known, the dimensions corresponding to left_inds.

  • rdims (tuple of int, optional) – If known, the dimensions corresponding to right_inds.

See also

TNLinearOperator

tn
tags
is_conj
is_trans
_conj_linop = None
_adjoint_linop = None
_transpose_linop = None
_matvec(vec)[source]

Default matrix-vector multiplication handler.

If self is a linear operator of shape (M, N), then this method will be called on a shape (N,) or (N, 1) ndarray, and should return a shape (M,) or (M, 1) ndarray.

This default implementation falls back on _matmat, so defining that will define matrix-vector multiplication as well.

_matmat(mat)[source]

Default matrix-matrix multiplication handler.

Falls back on the user-defined _matvec method, so defining that will define matrix multiplication (though in a very suboptimal way).

copy(conj=False, transpose=False)[source]
conj()[source]
_transpose()[source]

Default implementation of _transpose; defers to rmatvec + conj

_adjoint()[source]

Hermitian conjugate of this TNLO.

to_dense()[source]
toarray()[source]
property A
quimb.tensor.align_TN_1D[source]
quimb.tensor.expec_TN_1D(*tns, compress=None, eps=1e-15)[source]

Compute the expectation of several 1D TNs, using transfer matrix compression if any are periodic.

Parameters:
  • tns (sequence of TensorNetwork1D) – The MPS and MPO to find expectation of. Should start and end with an MPS, e.g. (MPS, MPO, ..., MPS).

  • compress ({None, False, True}, optional) – Whether to perform transfer matrix compression on cyclic systems. If set to None (the default), decide heuristically.

  • eps (float, optional) – The accuracy of the transfer matrix compression.

Returns:

x – The expectation value.

Return type:

float
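
A minimal sketch of computing an MPS-MPO-MPS expectation value, assuming the Heisenberg MPO builder MPO_ham_heis for the operator:

>>> from quimb.tensor import MPS_rand_state, MPO_ham_heis, expec_TN_1D
>>> psi = MPS_rand_state(16, bond_dim=8)
>>> H = MPO_ham_heis(16)
>>> # <psi|H|psi>, i.e. the energy of psi with respect to H
>>> e = expec_TN_1D(psi.H, H, psi)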

quimb.tensor.gate_TN_1D(tn, G, where, contract=False, tags=None, propagate_tags='sites', info=None, inplace=False, cur_orthog=None, **compress_opts)[source]

Act with the gate G on sites where, maintaining the outer indices of the 1D tensor network:

contract=False       contract=True
    . .                    . .             <- where
o-o-o-o-o-o-o        o-o-o-GGG-o-o-o
| | | | | | |        | | | / \ | | |
    GGG
    | |


contract='split-gate'        contract='swap-split-gate'
      . .                          . .                      <- where
  o-o-o-o-o-o-o                o-o-o-o-o-o-o
  | | | | | | |                | | | | | | |
      G~G                          G~G
      | |                          \ /
                                    X
                                   / \

contract='swap+split'
        . .            <- where
  o-o-o-G=G-o-o-o
  | | | | | | | |

Note that the sites in where do not have to be contiguous. By default, site tags will be propagated to the gate tensors, identifying a ‘light cone’.

Parameters:
  • tn (TensorNetwork1DVector) – The 1D vector-like tensor network, for example, an MPS.

  • G (array) – A square array to act with on sites where. It should have twice the number of dimensions as the number of sites. The second half of these will be contracted with the MPS, and the first half indexed with the correct site_ind_id. Sites are read left to right from the shape. A two-dimensional array is permissible if each dimension factorizes correctly.

  • where (int or sequence of int) – Where the gate should act.

  • contract ({False, 'split-gate', 'swap-split-gate', 'auto-split-gate', True, 'swap+split'}, optional) –

    Whether to contract the gate into the 1D tensor network. If,

    • False: leave the gate uncontracted, the default.

    • 'split-gate': like False, but split the gate if it is two-site.

    • 'swap-split-gate': like 'split-gate', but decompose the gate as if a swap had first been applied.

    • 'auto-split-gate': automatically select between the above three options, based on the rank of the gate.

    • True: contract the gate into the tensor network. If the gate acts on more than one site, this will produce an ever larger tensor.

    • 'swap+split': swap sites until they are adjacent, then contract the gate and split the resulting tensor, then swap the sites back to their original positions. In this way an MPS structure can be explicitly maintained at the cost of rising bond dimension.

  • tags (str or sequence of str, optional) – Tag the new gate tensor with these tags.

  • propagate_tags ({'sites', 'register', False, True}, optional) –

    Add any tags from the sites to the new gate tensor (only matters if contract=False else tags are merged anyway):

    • If 'sites', then only propagate tags matching e.g. ‘I{}’ and ignore all others. I.e. just propagate the lightcone.

    • If 'register', then only propagate tags matching the sites of where this gate was actually applied. I.e. ignore the lightcone, just keep track of which ‘registers’ the gate was applied to.

    • If False, propagate nothing.

    • If True, propagate all tags.

  • inplace (bool, optional) – Perform the gate in place.

  • compress_opts – Supplied to split() if contract='swap+split' or gate_with_auto_swap() if contract='swap+split'.

Return type:

TensorNetwork1DVector

Examples

>>> p = MPS_rand_state(3, 7)
>>> p.gate_(spin_operator('X'), where=1, tags=['GX'])
>>> p
<MatrixProductState(tensors=4, L=3, max_bond=7)>
>>> p.outer_inds()
('k0', 'k1', 'k2')

quimb.tensor.superop_TN_1D(tn_super, tn_op, upper_ind_id='k{}', lower_ind_id='b{}', so_outer_upper_ind_id=None, so_inner_upper_ind_id=None, so_inner_lower_ind_id=None, so_outer_lower_ind_id=None)[source]

Take a tensor network superoperator and act with it on a tensor network operator, maintaining the original upper and lower indices of the operator:

outer_upper_ind_id                           upper_ind_id
   | | | ... |                               | | | ... |
   +----------+                              +----------+
   | tn_super +---+                          | tn_super +---+
   +----------+   |     upper_ind_id         +----------+   |
   | | | ... |    |      | | | ... |         | | | ... |    |
inner_upper_ind_id|     +-----------+       +-----------+   |
                  |  +  |   tn_op   |   =   |   tn_op   |   |
inner_lower_ind_id|     +-----------+       +-----------+   |
   | | | ... |    |      | | | ... |         | | | ... |    |
   +----------+   |      lower_ind_id        +----------+   |
   | tn_super +---+                          | tn_super +---+
   +----------+                              +----------+
   | | | ... | <--                           | | | ... |
outer_lower_ind_id                           lower_ind_id

Parameters:
  • tn_super (TensorNetwork) – The superoperator in the form of a 1D-like tensor network.

  • tn_op (TensorNetwork) – The operator to be acted on in the form of a 1D-like tensor network.

  • upper_ind_id (str, optional) – Current id of the upper operator indices, e.g. usually 'k{}'.

  • lower_ind_id (str, optional) – Current id of the lower operator indices, e.g. usually 'b{}'.

  • so_outer_upper_ind_id (str, optional) – Current id of the superoperator’s upper outer indices, these will be reindexed to form the new effective operators upper indices.

  • so_inner_upper_ind_id (str, optional) – Current id of the superoperator’s upper inner indices, these will be joined with those described by upper_ind_id.

  • so_inner_lower_ind_id (str, optional) – Current id of the superoperator’s lower inner indices, these will be joined with those described by lower_ind_id.

  • so_outer_lower_ind_id (str, optional) – Current id of the superoperator’s lower outer indices, these will be reindexed to form the new effective operators lower indices.

Returns:

KAK – The tensor network of the superoperator acting on the operator.

Return type:

TensorNetwork

quimb.tensor.enforce_1d_like(tn, site_tags=None, fix_bonds=True, inplace=False)[source]

Check that tn is 1D-like with OBC, i.e.: 1) each tensor has exactly one of the given site_tags, raising a ValueError otherwise; 2) there are no hyper indices; and 3) there are only bonds within sites or between nearest neighbor sites. The last condition can optionally be fixed automatically by inserting a string of identity tensors.

Parameters:
  • tn (TensorNetwork) – The tensor network to check.

  • site_tags (sequence of str, optional) – The tags to use to group and order the tensors from tn. If not given, uses tn.site_tags.

  • fix_bonds (bool, optional) – Whether to fix the bond structure by inserting identity tensors.

  • inplace (bool, optional) – Whether to perform the fix inplace or not.

Raises:

ValueError – If the tensor network is not 1D-like.

quimb.tensor.tensor_network_1d_compress(tn, max_bond=None, cutoff=1e-10, method='dm', site_tags=None, canonize=True, permute_arrays=True, optimize='auto-hq', sweep_reverse=False, equalize_norms=False, compress_opts=None, inplace=False, **kwargs)[source]

Compress a 1D-like tensor network using the specified method.

Parameters:
  • tn (TensorNetwork) – The tensor network to compress. Every tensor should have exactly one of the site tags. Each site can have multiple tensors and output indices.

  • max_bond (int) – The maximum bond dimension to compress to.

  • cutoff (float, optional) – A dynamic threshold for discarding singular values when compressing.

  • method ({"direct", "dm", "zipup", "zipup-first", "fit", "projector", ...}) – The compression method to use.

  • site_tags (sequence of str, optional) – The tags to use to group and order the tensors from tn. If not given, uses tn.site_tags. The tensor network built will have one tensor per site, in the order given by site_tags.

  • canonize (bool, optional) – Whether to perform canonicalization, pseudo or otherwise depending on the method, before compressing. Ignored for method='dm' and method='fit'.

  • permute_arrays (bool or str, optional) – Whether to permute the array indices of the final tensor network into canonical order. If True will use the default order, otherwise if a string this specifies a custom order.

  • optimize (str, optional) – The contraction path optimizer to use.

  • sweep_reverse (bool, optional) – Whether to sweep in the reverse direction, resulting in a left canonical form instead of right canonical (for the fit method, this also depends on the last sweep direction).

  • equalize_norms (bool or float, optional) – Whether to equalize the norms of the tensors after compression. If an explicit value is given, then the norms will be set to that value, and the overall scaling factor will be accumulated into .exponent.

  • inplace (bool, optional) – Whether to perform the compression inplace.

  • kwargs – Supplied to the chosen compression method.

Return type:

TensorNetwork
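
A rough sketch of direct usage, compressing a random MPS to a smaller bond dimension with the density matrix ('dm') method:

>>> from quimb.tensor import MPS_rand_state, tensor_network_1d_compress
>>> psi = MPS_rand_state(50, bond_dim=32)
>>> # compress to bond dimension 8, discarding singular values below 1e-10
>>> phi = tensor_network_1d_compress(psi, max_bond=8, cutoff=1e-10, method='dm')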

quimb.tensor.NNI[source]
class quimb.tensor.TEBD(p0, H, dt=None, tol=None, t0=0.0, split_opts=None, progbar=True, imag=False)[source]

Class implementing Time Evolving Block Decimation (TEBD) [1].

[1] Guifré Vidal, Efficient Classical Simulation of Slightly Entangled Quantum Computations, PRL 91, 147902 (2003)

Parameters:
  • p0 (MatrixProductState) – Initial state.

  • H (LocalHam1D or array_like) – Dense hamiltonian representing the two body interaction. Should have shape (d * d, d * d), where d is the physical dimension of p0.

  • dt (float, optional) – Default time step, cannot be set as well as tol.

  • tol (float, optional) – Default target error for each evolution, cannot be set as well as dt, which will instead be calculated from the trotter order, length of time, and hamiltonian norm.

  • t0 (float, optional) – Initial time. Defaults to 0.0.

  • split_opts (dict, optional) – Compression options applied for splitting after gate application, see tensor_split().

  • imag (bool, optional) – Enable imaginary time evolution. Defaults to False.

See also

quimb.Evolution

_pt
L
H
cyclic
_ham_norm
_err = 0.0
tol
imag
progbar
split_opts
property pt

The MPS state of the system at the current time.

property err
choose_time_step(tol, T, order)[source]

Trotter error is ~ (T / dt) * dt^(order + 1). Invert to find desired time step, and scale by norm of interaction term.

_get_gate_from_ham(dt_frac, sites)[source]

Get the unitary (exponentiated) gate for fraction of timestep dt_frac and sites sites, cached.

sweep(direction, dt_frac, dt=None, queue=False)[source]

Perform a single sweep of gates and compression. This shifts the orthogonality centre along with the gates as they are applied and split.

Parameters:
  • direction ({'right', 'left'}) – Which direction to sweep. Right is even bonds, left is odd.

  • dt_frac (float) – What fraction of dt substep to take.

  • dt (float, optional) – Override the current dt with a custom value.

_step_order2(tau=1, **sweep_opts)[source]

Perform a single, second order step.

_step_order4(**sweep_opts)[source]

Perform a single, fourth order step.

step(order=2, dt=None, progbar=None, **sweep_opts)[source]

Perform a single step of time self.dt.

_compute_sweep_dt_tol(T, dt, tol, order)[source]
TARGET_TOL = 1e-13
update_to(T, dt=None, tol=None, order=4, progbar=None)[source]

Update the state to time T.

Parameters:
  • T (float) – The time to evolve to.

  • dt (float, optional) – Time step to use. Can’t be set as well as tol.

  • tol (float, optional) – Tolerance for whole evolution. Can’t be set as well as dt.

  • order (int, optional) – Trotter order to use.

  • progbar (bool, optional) – Manually turn the progress bar off.

_set_progbar_desc(progbar)[source]
at_times(ts, dt=None, tol=None, order=4, progbar=None)[source]

Generate the time evolved state at each time in ts.

Parameters:
  • ts (sequence of float) – The times to evolve to and yield the state at.

  • dt (float, optional) – Time step to use. Can’t be set as well as tol.

  • tol (float, optional) – Tolerance for whole evolution. Can’t be set as well as dt.

  • order (int, optional) – Trotter order to use.

  • progbar (bool, optional) – Manually turn the progress bar off.

Yields:

pt (MatrixProductState) – The state at each of the times in ts. This is a copy of internal state used, so inplace changes can be made to it.
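
A short usage sketch, assuming a Neel initial state from MPS_neel_state and the dense two-site Heisenberg term from quimb.ham_heis as the interaction:

>>> import quimb as qu
>>> from quimb.tensor import MPS_neel_state, TEBD
>>> psi0 = MPS_neel_state(20)
>>> tebd = TEBD(psi0, qu.ham_heis(2))
>>> # evolve to each of several times, targeting an overall error tolerance
>>> for psi_t in tebd.at_times([0.5, 1.0, 1.5], tol=1e-4):
...     pass  # e.g. compute observables on the copy psi_t here
>>> err = tebd.err  # accumulated trotter error estimate so far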

class quimb.tensor.LocalHam1D(L, H2, H1=None, cyclic=False)[source]

Bases: quimb.tensor.tensor_arbgeom_tebd.LocalHamGen

A simple interacting hamiltonian object used, for instance, in TEBD. Once instantiated, the LocalHam1D hamiltonian stores a single term per pair of sites, cached versions of which can be retrieved like H.get_gate_expm((i, i + 1), -1j * 0.5) etc.

Parameters:
  • L (int) – The size of the hamiltonian.

  • H2 (array_like or dict[tuple[int], array_like]) – The sum of interaction terms. If a dict is given, the keys should be nearest neighbours like (10, 11), apart from any default term which should have the key None, and the values should be the sum of interaction terms for that interaction.

  • H1 (array_like or dict[int, array_like], optional) – The sum of single site terms. If a dict is given, the keys should be integer sites, apart from any default term which should have the key None, and the values should be the sum of single site terms for that site.

  • cyclic (bool, optional) – Whether the hamiltonian has periodic boundary conditions or not.

terms

The terms in the hamiltonian, combined from the inputs such that there is a single term per pair.

Type:

dict[tuple[int], array]

Examples

A simple, translationally invariant, interaction-only LocalHam1D:

>>> XX = pauli('X') & pauli('X')
>>> YY = pauli('Y') & pauli('Y')
>>> ham = LocalHam1D(L=100, H2=XX + YY)

The same, but with a translationally invariant field as well:

>>> Z = pauli('Z')
>>> ham = LocalHam1D(L=100, H2=XX + YY, H1=Z)

Specifying a default interaction and field, with custom values set for some sites:

>>> H2 = {None: XX + YY, (49, 50): (XX + YY) / 2}
>>> H1 = {None: Z, 49: 2 * Z, 50: 2 * Z}
>>> ham = LocalHam1D(L=100, H2=H2, H1=H1)

Specifying the hamiltonian entirely through site specific interactions and fields:

>>> H2 = {(i, i + 1): XX + YY for i in range(99)}
>>> H1 = {i: Z for i in range(100)}
>>> ham = LocalHam1D(L=100, H2=H2, H1=H1)

See also

SpinHam1D

L
cyclic
mean_norm()[source]

Computes the average Frobenius norm of the local terms.

build_mpo_propagator_trotterized(x, site_tag_id='I{}', tags=None, upper_ind_id='k{}', lower_ind_id='b{}', shape='lrud', contract_sites=True, **split_opts)[source]

Build an MPO representation of expm(H * x), i.e. the imaginary or real time propagator of this local 1D hamiltonian, using a first order trotterized decomposition.

Parameters:
  • x (float) – The time to evolve for. Note this does not include the imaginary prefactor of the Schrodinger equation, so real x corresponds to imaginary time evolution, and vice versa.

  • site_tag_id (str) – A string specifying how to tag the tensors at each site. Should contain a '{}' placeholder. It is used to generate the actual tags like: map(site_tag_id.format, range(len(arrays))).

  • tags (str or sequence of str, optional) – Global tags to attach to all tensors.

  • upper_ind_id (str) – A string specifying how to label the upper physical site indices. Should contain a '{}' placeholder. It is used to generate the actual indices like: map(upper_ind_id.format, range(len(arrays))).

  • lower_ind_id (str) – A string specifying how to label the lower physical site indices. Should contain a '{}' placeholder. It is used to generate the actual indices like: map(lower_ind_id.format, range(len(arrays))).

  • shape (str, optional) – String specifying layout of the tensors. E.g. 'lrud' (the default) indicates the shape corresponds to left-bond, right-bond, 'up' physical index, 'down' physical index. End tensors have either 'l' or 'r' dropped from the string if not periodic.

  • contract_sites (bool, optional) – Whether to contract all the decomposed factors at each site to yield a single tensor per site, by default True.

  • split_opts – Supplied to tensor_split().
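
As a sketch, building a first order trotterized imaginary time propagator MPO for a Heisenberg chain (using quimb.ham_heis for the two-site term):

>>> import quimb as qu
>>> from quimb.tensor import LocalHam1D
>>> ham = LocalHam1D(L=20, H2=qu.ham_heis(2))
>>> # MPO approximating expm(-0.05 * H), i.e. a small imaginary time step
>>> U = ham.build_mpo_propagator_trotterized(-0.05)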

__repr__()[source]
class quimb.tensor.PEPO(arrays, *, shape='urdlbk', tags=None, upper_ind_id='k{},{}', lower_ind_id='b{},{}', site_tag_id='I{},{}', x_tag_id='X{}', y_tag_id='Y{}', **tn_opts)[source]

Bases: TensorNetwork2DOperator, TensorNetwork2DFlat

Projected Entangled Pair Operator object:

             ...
 │╱   │╱   │╱   │╱   │╱   │╱
 ●────●────●────●────●────●──
╱│   ╱│   ╱│   ╱│   ╱│   ╱│
 │╱   │╱   │╱   │╱   │╱   │╱
 ●────●────●────●────●────●──
╱│   ╱│   ╱│   ╱│   ╱│   ╱│
 │╱   │╱   │╱   │╱   │╱   │╱   ...
 ●────●────●────●────●────●──
╱│   ╱│   ╱│   ╱│   ╱│   ╱│
 │╱   │╱   │╱   │╱   │╱   │╱
 ●────●────●────●────●────●──
╱    ╱    ╱    ╱    ╱    ╱

Parameters:
  • arrays (sequence of sequence of array) – The core tensor data arrays.

  • shape (str, optional) – Which order the dimensions of the arrays are stored in, the default 'urdlbk' stands for (‘up’, ‘right’, ‘down’, ‘left’, ‘bra’, ‘ket’). Arrays on the edge of lattice are assumed to be missing the corresponding dimension.

  • tags (set[str], optional) – Extra global tags to add to the tensor network.

  • upper_ind_id (str, optional) – String specifier for naming convention of upper site indices.

  • lower_ind_id (str, optional) – String specifier for naming convention of lower site indices.

  • site_tag_id (str, optional) – String specifier for naming convention of site tags.

  • x_tag_id (str, optional) – String specifier for naming convention of row (‘x’) tags.

  • y_tag_id (str, optional) – String specifier for naming convention of column (‘y’) tags.

_EXTRA_PROPS = ('_site_tag_id', '_x_tag_id', '_y_tag_id', '_Lx', '_Ly', '_upper_ind_id', '_lower_ind_id')
_upper_ind_id
_lower_ind_id
_site_tag_id
_x_tag_id
_y_tag_id
_Lx
_Ly
classmethod from_fill_fn(fill_fn, Lx, Ly, bond_dim, phys_dim=2, cyclic=False, shape='urdlbk', **pepo_opts)[source]

Create a PEPO and fill the tensor entries with a supplied function matching signature fill_fn(shape) -> array.

Parameters:
  • fill_fn (callable) – A function that takes a shape tuple and returns a data array.

  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int, optional) – The physical indices dimension.

  • cyclic (bool or tuple[bool, bool], optional) – Whether the lattice is cyclic in the x and y directions.

  • shape (str, optional) – How to lay out the indices of the tensors, the default is (up, right, down, left, bra, ket) == 'urdlbk'.

  • pepo_opts – Supplied to PEPO.

classmethod rand(Lx, Ly, bond_dim, phys_dim=2, herm=False, dist='normal', loc=0.0, dtype='float64', seed=None, **pepo_opts)[source]

Create a random PEPO.

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int, optional) – The physical index dimension.

  • herm (bool, optional) – Whether to symmetrize the tensors across the physical bonds to make the overall operator hermitian.

  • dtype (dtype, optional) – The dtype to create the arrays with, default is real double.

  • seed (int, optional) – A random seed.

  • pepo_opts – Supplied to PEPO.

Returns:

X

Return type:

PEPO

rand_herm
classmethod zeros(Lx, Ly, bond_dim, phys_dim=2, dtype='float64', backend='numpy', **pepo_opts)[source]

Create a PEPO with all zero entries.

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int, optional) – The physical index dimension.

  • dtype (dtype, optional) – The dtype to create the arrays with, default is real double.

  • backend (str, optional) – Which backend to use, default is 'numpy'.

  • pepo_opts – Supplied to PEPO.

add_PEPO(other, inplace=False)[source]
add_PEPO_[source]
_apply_peps(other, compress=False, contract=True, **compress_opts)[source]
apply(other, compress=False, **compress_opts)[source]

Act with this PEPO on other, returning a new TN like other with the same outer indices.

Parameters:
  • other (PEPS) – The TN to act on.

  • compress (bool, optional) – Whether to compress the resulting TN.

  • compress_opts – Supplied to compress().

Return type:

TensorNetwork2DFlat
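
A minimal sketch of acting with a random PEPO on a random PEPS:

>>> from quimb.tensor import PEPO, PEPS
>>> A = PEPO.rand(4, 4, bond_dim=2, seed=42)
>>> psi = PEPS.rand(4, 4, bond_dim=3, seed=42)
>>> # new PEPS-like TN whose bond dimension is up to 2 * 3 = 6
>>> Apsi = A.apply(psi)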

show()[source]

Print a unicode schematic of this PEPO and its bond dimensions.

class quimb.tensor.PEPS(arrays, *, shape='urdlp', tags=None, site_ind_id='k{},{}', site_tag_id='I{},{}', x_tag_id='X{}', y_tag_id='Y{}', **tn_opts)[source]

Bases: TensorNetwork2DVector, TensorNetwork2DFlat

Projected Entangled Pair States object (2D):

             ...
 │    │    │    │    │    │
 ●────●────●────●────●────●──
╱│   ╱│   ╱│   ╱│   ╱│   ╱│
 │    │    │    │    │    │
 ●────●────●────●────●────●──
╱│   ╱│   ╱│   ╱│   ╱│   ╱│
 │    │    │    │    │    │   ...
 ●────●────●────●────●────●──
╱│   ╱│   ╱│   ╱│   ╱│   ╱│
 │    │    │    │    │    │
 ●────●────●────●────●────●──
╱    ╱    ╱    ╱    ╱    ╱

Parameters:
  • arrays (sequence of sequence of array_like) – The core tensor data arrays.

  • shape (str, optional) – Which order the dimensions of the arrays are stored in, the default 'urdlp' stands for (‘up’, ‘right’, ‘down’, ‘left’, ‘physical’). Arrays on the edge of lattice are assumed to be missing the corresponding dimension.

  • tags (set[str], optional) – Extra global tags to add to the tensor network.

  • site_ind_id (str, optional) – String specifier for naming convention of site indices.

  • site_tag_id (str, optional) – String specifier for naming convention of site tags.

  • x_tag_id (str, optional) – String specifier for naming convention of row (‘x’) tags.

  • y_tag_id (str, optional) – String specifier for naming convention of column (‘y’) tags.

_EXTRA_PROPS = ('_site_tag_id', '_x_tag_id', '_y_tag_id', '_Lx', '_Ly', '_site_ind_id')
_site_ind_id
_site_tag_id
_x_tag_id
_y_tag_id
_Lx
_Ly
classmethod from_fill_fn(fill_fn, Lx, Ly, bond_dim, phys_dim=2, cyclic=False, shape='urdlp', **peps_opts)[source]

Create a 2D PEPS from a filling function with signature fill_fn(shape).

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int, optional) – The physical index dimension.

  • cyclic (bool or tuple[bool, bool], optional) – Whether the lattice is cyclic in the x and y directions.

  • shape (str, optional) – How to lay out the indices of the tensors, the default is (up, right, down, left, phys) == 'urdlp'. This is the order of the shape supplied to the filling function.

  • peps_opts – Supplied to PEPS.

Returns:

psi

Return type:

PEPS

classmethod empty(Lx, Ly, bond_dim, phys_dim=2, like='numpy', **peps_opts)[source]

Create an empty 2D PEPS.

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int, optional) – The physical index dimension.

  • peps_opts – Supplied to PEPS.

Returns:

psi

Return type:

PEPS

classmethod ones(Lx, Ly, bond_dim, phys_dim=2, like='numpy', **peps_opts)[source]

Create a 2D PEPS whose tensors are filled with ones.

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int, optional) – The physical index dimension.

  • peps_opts – Supplied to PEPS.

Returns:

psi

Return type:

PEPS

classmethod zeros(Lx, Ly, bond_dim, phys_dim=2, like='numpy', **peps_opts)[source]

Create a 2D PEPS whose tensors are filled with zeros.

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int, optional) – The physical index dimension.

  • peps_opts – Supplied to PEPS.

Returns:

psi

Return type:

PEPS

classmethod rand(Lx, Ly, bond_dim, phys_dim=2, dist='normal', loc=0.0, dtype='float64', seed=None, **peps_opts)[source]

Create a random (un-normalized) PEPS.

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • bond_dim (int) – The bond dimension.

  • phys_dim (int, optional) – The physical index dimension.

  • dist ({'normal', 'uniform', 'rademacher', 'exp'}, optional) – Type of random number to generate, defaults to ‘normal’.

  • loc (float, optional) – An additive offset to add to the random numbers.

  • dtype (dtype, optional) – The dtype to create the arrays with, default is real double.

  • seed (int, optional) – A random seed.

  • peps_opts – Supplied to PEPS.

Returns:

psi

Return type:

PEPS
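
A quick sketch of constructing a random PEPS and estimating its norm, assuming the 2D boundary contraction method contract_boundary for the approximate contraction:

>>> from quimb.tensor import PEPS
>>> psi = PEPS.rand(5, 5, bond_dim=3, seed=0)
>>> psi.show()  # print the lattice and its bond dimensions
>>> # <psi|psi>, contracted approximately with boundary bond dimension 32
>>> norm_sq = (psi.H & psi).contract_boundary(max_bond=32)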

add_PEPS(other, inplace=False)[source]
add_PEPS_[source]
show()[source]

Print a unicode schematic of this PEPS and its bond dimensions.

class quimb.tensor.TensorNetwork2D(ts=(), *, virtual=False, check_collisions=True)[source]

Bases: quimb.tensor.tensor_arbgeom.TensorNetworkGen

Mixin class for tensor networks with a square lattice two-dimensional structure, indexed by [{row},{column}] so that:

             'Y{j}'
                v

i=Lx-1 ●──●──●──●──●──●──   ──●
       |  |  |  |  |  |       |
             ...
       |  |  |  |  |  | 'I{i},{j}' = 'I3,5' e.g.
i=3    ●──●──●──●──●──●──
       |  |  |  |  |  |       |
i=2    ●──●──●──●──●──●──   ──●    <== 'X{i}'
       |  |  |  |  |  |  ...  |
i=1    ●──●──●──●──●──●──   ──●
       |  |  |  |  |  |       |
i=0    ●──●──●──●──●──●──   ──●

     j=0, 1, 2, 3, 4, 5    j=Ly-1

This implies the following conventions:

  • the ‘up’ bond is coordinates (i, j), (i + 1, j)

  • the ‘down’ bond is coordinates (i, j), (i - 1, j)

  • the ‘right’ bond is coordinates (i, j), (i, j + 1)

  • the ‘left’ bond is coordinates (i, j), (i, j - 1)

_NDIMS = 2
_EXTRA_PROPS = ('_site_tag_id', '_x_tag_id', '_y_tag_id', '_Lx', '_Ly')
_compatible_2d(other)[source]

Check whether self and other are compatible 2D tensor networks such that they can remain a 2D tensor network when combined.

combine(other, *, virtual=False, check_collisions=True)[source]

Combine this tensor network with another, returning a new tensor network. If the two are compatible, cast the resulting tensor network to a TensorNetwork2D instance.

Parameters:
  • other (TensorNetwork2D or TensorNetwork) – The other tensor network to combine with.

  • virtual (bool, optional) – Whether the new tensor network should copy all the incoming tensors (False, the default), or view them as virtual (True).

  • check_collisions (bool, optional) – Whether to check for index collisions between the two tensor networks before combining them. If True (the default), any inner indices that clash will be mangled.

Return type:

TensorNetwork2D or TensorNetwork

property Lx

The number of rows.

property Ly

The number of columns.

property nsites

The total number of sites.

site_tag(i, j=None)[source]

The name of the tag specifying the tensor at site (i, j).

property x_tag_id

The string specifier for tagging each row of this 2D TN.

x_tag(i)[source]
property x_tags

A tuple of all of the Lx different row tags.

row_tag[source]
row_tags
property y_tag_id

The string specifier for tagging each column of this 2D TN.

y_tag(j)[source]
property y_tags

A tuple of all of the Ly different column tags.

col_tag[source]
col_tags
maybe_convert_coo(x)[source]

Check if x is a tuple of two ints and convert to the corresponding site tag if so.

_get_tids_from_tags(tags, which='all')[source]

This is the function that lets coordinates such as (i, j) be used for many ‘tag’ based functions.

gen_site_coos()[source]

Generate coordinates for all the sites in this 2D TN.

gen_bond_coos()[source]

Generate pairs of coordinates for all the bonds in this 2D TN.

gen_horizontal_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i, j + 1).

gen_horizontal_even_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i, j + 1) where j is even, which thus don’t overlap at all.

gen_horizontal_odd_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i, j + 1) where j is odd, which thus don’t overlap at all.

gen_vertical_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j).

gen_vertical_even_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j) where i is even, which thus don’t overlap at all.

gen_vertical_odd_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j) where i is odd, which thus don’t overlap at all.

gen_diagonal_left_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j - 1).

gen_diagonal_left_even_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j - 1) where j is even, which thus don’t overlap at all.

gen_diagonal_left_odd_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j - 1) where j is odd, which thus don’t overlap at all.

gen_diagonal_right_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j + 1).

gen_diagonal_right_even_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j + 1) where i is even, which thus don’t overlap at all.

gen_diagonal_right_odd_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j + 1) where i is odd, which thus don’t overlap at all.

gen_diagonal_bond_coos()[source]

Generate all next nearest neighbor diagonal coordinate pairs.

valid_coo(coo, xrange=None, yrange=None)[source]

Check whether coo is in-bounds.

Parameters:
  • coo ((int, int), optional) – The coordinates to check.

  • xrange ((int, int), optional) – The range of allowed values for the x and y coordinates.

  • yrange ((int, int), optional) – The range of allowed values for the x and y coordinates.

Return type:

bool

get_ranges_present()[source]

Return the range of site coordinates present in this TN.

Returns:

xrange, yrange – The minimum and maximum site coordinates present in each direction.

Return type:

tuple[tuple[int, int]]

is_cyclic_x(j=None, imin=None, imax=None)[source]

Check if the x dimension is cyclic (periodic), specifically whether a bond exists between (imin, j) and (imax, j), with default values of imin = 0 and imax = Lx - 1, and j at the center of the lattice. If imin and imax are adjacent then this is considered False, since there is no ‘extra’ connectivity.

is_cyclic_y(i=None, jmin=None, jmax=None)[source]

Check if the y dimension is cyclic (periodic), specifically whether a bond exists between (i, jmin) and (i, jmax), with default values of jmin = 0 and jmax = Ly - 1, and i at the center of the lattice. If jmin and jmax are adjacent then this is considered False, since there is no ‘extra’ connectivity.

__getitem__(key)[source]

Key based tensor selection, checking for integer based shortcut.

show()[source]

Print a unicode schematic of this 2D TN and its bond dimensions.

_repr_info()[source]

General info to show in various reprs. Subclasses can add more relevant info to this dict.

flatten(fuse_multibonds=True, inplace=False)[source]

Contract all tensors corresponding to each site into one.

flatten_[source]
gen_pairs(xrange=None, yrange=None, xreverse=False, yreverse=False, coordinate_order='xy', xstep=None, ystep=None, stepping_order='xy', step_only=None)[source]

Helper function for generating pairs of coordinates for all bonds within a certain range, optionally specifying an order.

Parameters:
  • xrange ((int, int), optional) – The range of allowed values for the x and y coordinates.

  • yrange ((int, int), optional) – The range of allowed values for the x and y coordinates.

  • xreverse (bool, optional) – Whether to reverse the order of the x and y sweeps.

  • yreverse (bool, optional) – Whether to reverse the order of the x and y sweeps.

  • coordinate_order (str, optional) – The order in which to sweep the x and y coordinates. Earlier dimensions will change slower. If the corresponding range has size 1 then that dimension doesn’t need to be specified.

  • xstep (int, optional) – When generating a bond, step in this direction to yield the neighboring coordinate. By default, these follow xreverse and yreverse respectively.

  • ystep (int, optional) – When generating a bond, step in this direction to yield the neighboring coordinate. By default, these follow xreverse and yreverse respectively.

  • stepping_order (str, optional) – The order in which to step the x and y coordinates to generate bonds. Does not need to include all dimensions.

  • step_only (int, optional) – Only perform the ith steps in stepping_order, used to interleave canonizing and compressing for example.

Yields:

coo_a, coo_b (((int, int), (int, int)))

canonize_plane(xrange, yrange, equalize_norms=False, canonize_opts=None, **gen_pair_opts)[source]

Canonize every pair of tensors within a subrange, optionally specifying an order to visit those pairs in.

canonize_row(i, sweep, yrange=None, **canonize_opts)[source]

Canonize all or part of a row.

If sweep == 'right' then:

 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─       ─●──●──●──●──●──●──●─
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─  ==>  ─●──>──>──>──>──o──●─ row=i
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─       ─●──●──●──●──●──●──●─
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
    .           .               .           .
    jstart      jstop           jstart      jstop

If sweep == 'left' then:

 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─       ─●──●──●──●──●──●──●─
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─  ==>  ─●──o──<──<──<──<──●─ row=i
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─       ─●──●──●──●──●──●──●─
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
    .           .               .           .
    jstop       jstart          jstop       jstart

Does not yield an orthogonal form in the same way as in 1D.

Parameters:
  • i (int) – Which row to canonize.

  • sweep ({'right', 'left'}) – Which direction to sweep in.

  • yrange (tuple[int, int] or None) – The range of columns (jstart, jstop) to canonize, defaulting to the whole row.

  • canonize_opts – Supplied to canonize_between.

canonize_column(j, sweep, xrange=None, **canonize_opts)[source]

Canonize all or part of a column.

If sweep='up' then:

 |  |  |         |  |  |
─●──●──●─       ─●──●──●─
 |  |  |         |  |  |
─●──●──●─       ─●──o──●─ istop
 |  |  |   ==>   |  |  |
─●──●──●─       ─●──^──●─
 |  |  |         |  |  |
─●──●──●─       ─●──^──●─ istart
 |  |  |         |  |  |
─●──●──●─       ─●──●──●─
 |  |  |         |  |  |
    .               .
    j               j

If sweep='down' then:

 |  |  |         |  |  |
─●──●──●─       ─●──●──●─
 |  |  |         |  |  |
─●──●──●─       ─●──v──●─ istart
 |  |  |   ==>   |  |  |
─●──●──●─       ─●──v──●─
 |  |  |         |  |  |
─●──●──●─       ─●──o──●─ istop
 |  |  |         |  |  |
─●──●──●─       ─●──●──●─
 |  |  |         |  |  |
    .               .
    j               j

Does not yield an orthogonal form in the same way as in 1D.

Parameters:
  • j (int) – Which column to canonize.

  • sweep ({'up', 'down'}) – Which direction to sweep in.

  • xrange (None or (int, int), optional) – The range of rows to canonize.

  • canonize_opts – Supplied to canonize_between.
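
For example, canonizing bonds along a whole row and then part of a column of a random PEPS (a minimal sketch; both methods modify the network in place):

    import quimb.tensor as qtn

    peps = qtn.PEPS.rand(5, 5, bond_dim=3, seed=7)
    peps.canonize_row(2, sweep='right')                 # all of row 2, left to right
    peps.canonize_column(1, sweep='up', xrange=(0, 2))  # rows 0-2 of column 1, upwards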

canonize_row_around(i, around=(0, 1))[source]
compress_plane(xrange, yrange, max_bond=None, cutoff=1e-10, equalize_norms=False, compress_opts=None, **gen_pair_opts)[source]

Compress every pair of tensors within a subrange, optionally specifying an order to visit those pairs in.

compress_row(i, sweep, yrange=None, max_bond=None, cutoff=1e-10, equalize_norms=False, compress_opts=None)[source]

Compress all or part of a row.

If sweep == 'right' then:

 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━       ━●━━●━━●━━●━━●━━●━━●━
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━  ━━>  ━●━━>──>──>──>──o━━●━ row=i
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━       ━●━━●━━●━━●━━●━━●━━●━
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
    .           .               .           .
    jstart      jstop           jstart      jstop

If sweep == 'left' then:

 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━       ━●━━●━━●━━●━━●━━●━━●━
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━  ━━>  ━●━━o──<──<──<──<━━●━ row=i
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━       ━●━━●━━●━━●━━●━━●━━●━
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
    .           .               .           .
    jstop       jstart          jstop       jstart

Does not yield an orthogonal form in the same way as in 1D.

Parameters:
  • i (int) – Which row to compress.

  • sweep ({'right', 'left'}) – Which direction to sweep in.

  • yrange (tuple[int, int] or None) – The range of columns to compress.

  • max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.

  • compress_opts (None or dict, optional) – Supplied to compress_between().

compress_column(j, sweep, xrange=None, max_bond=None, cutoff=1e-10, equalize_norms=False, compress_opts=None)[source]

Compress all or part of a column.

If sweep='up' then:

 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──●──●─
 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──o──●─  .
 ┃  ┃  ┃   ==>   ┃  |  ┃   .
─●──●──●─       ─●──^──●─  . xrange
 ┃  ┃  ┃         ┃  |  ┃   .
─●──●──●─       ─●──^──●─  .
 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──●──●─
 ┃  ┃  ┃         ┃  ┃  ┃
    .               .
    j               j

If sweep='down' then:

 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──●──●─
 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──v──●─ .
 ┃  ┃  ┃   ==>   ┃  |  ┃  .
─●──●──●─       ─●──v──●─ . xrange
 ┃  ┃  ┃         ┃  |  ┃  .
─●──●──●─       ─●──o──●─ .
 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──●──●─
 ┃  ┃  ┃         ┃  ┃  ┃
    .               .
    j               j

Does not yield an orthogonal form in the same way as in 1D.

Parameters:
  • j (int) – Which column to compress.

  • sweep ({'up', 'down'}) – Which direction to sweep in.

  • xrange (None or (int, int), optional) – The range of rows to compress.

  • max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.

  • compress_opts (None or dict, optional) – Supplied to compress_between().
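
For example, compressing a whole row and part of a column down to a maximum bond dimension, each preceded by the corresponding canonization sweep (a minimal sketch; both methods modify the network in place):

    import quimb.tensor as qtn

    peps = qtn.PEPS.rand(5, 5, bond_dim=4, seed=7)
    peps.canonize_row(2, sweep='right')
    peps.compress_row(2, sweep='right', max_bond=2)
    peps.canonize_column(1, sweep='down', xrange=(1, 3))
    peps.compress_column(1, sweep='down', xrange=(1, 3), max_bond=2)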

_contract_boundary_core_via_1d(xrange, yrange, from_which, max_bond, cutoff=1e-10, method='dm', layer_tags=None, **compress_opts)[source]
_contract_boundary_core(xrange, yrange, from_which, max_bond, cutoff=1e-10, canonize=True, layer_tags=None, compress_late=True, sweep_reverse=False, equalize_norms=False, compress_opts=None, canonize_opts=None)[source]
_contract_boundary_full_bond(xrange, yrange, from_which, max_bond, cutoff=0.0, method='eigh', renorm=False, optimize='auto-hq', opposite_envs=None, equalize_norms=False, contract_boundary_opts=None)[source]

Contract the boundary of this 2D TN using the ‘full bond’ environment information obtained from a boundary contraction in the opposite direction.

Parameters:
  • xrange ((int, int) or None, optional) – The range of rows to contract and compress.

  • yrange ((int, int)) – The range of columns to contract and compress.

  • from_which ({'xmin', 'ymin', 'xmax', 'ymax'}) – Which direction to contract the rectangular patch from.

  • max_bond (int) – The maximum boundary dimension, AKA ‘chi’. By default used for the opposite direction environment contraction as well.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction - only for the opposite direction environment contraction.

  • method ({'eigh', 'eig', 'svd', 'biorthog'}, optional) – Which similarity decomposition method to use to compress the full bond environment.

  • renorm (bool, optional) – Whether to renormalize the isometric projection or not.

  • optimize (str or PathOptimizer, optional) – Contraction optimizer to use for the exact contractions.

  • opposite_envs (dict, optional) – If supplied, opposite direction environments will be fetched from this dict if already present, or lazily computed and stored in it if missing.

  • contract_boundary_opts – Other options given to the opposite direction environment contraction.
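
This is a private helper; the full bond compression it implements is normally reached by passing mode='full-bond' to the public boundary contraction methods documented below. A minimal sketch, assuming the make_norm() helper:

    import quimb.tensor as qtn

    peps = qtn.PEPS.rand(4, 4, bond_dim=2, seed=0)
    norm = peps.make_norm()
    # contract the two bottom rows inwards using full bond environment
    # compression rather than the default 'mps' style compression
    norm.contract_boundary_from_(
        xrange=(0, 1), yrange=(0, norm.Ly - 1),
        from_which='xmin', max_bond=16, mode='full-bond',
    )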

_contract_boundary_projector(xrange, yrange, from_which, max_bond=None, cutoff=1e-10, lazy=False, equalize_norms=False, optimize='auto-hq', compress_opts=None)[source]

Contract the boundary of this 2D tensor network by explicitly computing and inserting local projector tensors, which can optionally be left uncontracted. Multilayer networks are naturally supported.

Parameters:
  • xrange (tuple) – The range of x indices to contract.

  • yrange (tuple) – The range of y indices to contract.

  • from_which ({'xmin', 'xmax', 'ymin', 'ymax'}) – From which boundary to contract.

  • max_bond (int, optional) – The maximum bond dimension to contract to. If None (default), compression is left to cutoff.

  • cutoff (float, optional) – The cutoff to use for boundary compression.

  • lazy (bool, optional) – Whether to leave the boundary tensors uncontracted. If False (the default), the boundary tensors are contracted and the resulting boundary has a single tensor per site.

  • equalize_norms (bool, optional) – Whether to actively absorb the norm of modified tensors into self.exponent.

  • optimize (str or PathOptimizer, optional) – The contract path optimization to use when forming the projector tensors.

  • compress_opts (dict, optional) – Other options to pass to svd_truncated().

contract_boundary_from(xrange, yrange, from_which, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, sweep_reverse=False, compress_opts=None, inplace=False, **contract_boundary_opts)[source]

Unified entrypoint for contracting any rectangular patch of tensors from any direction, with any boundary method.

contract_boundary_from_[source]
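
For example, absorbing the two bottom rows of a norm network into a boundary of maximum bond dimension 32 (a minimal sketch, assuming the make_norm() helper; contract_boundary_from_ is the in-place variant):

    import quimb.tensor as qtn

    peps = qtn.PEPS.rand(6, 6, bond_dim=3, seed=1)
    norm = peps.make_norm()
    norm.contract_boundary_from_(
        xrange=(0, 1), yrange=(0, norm.Ly - 1),
        from_which='xmin', max_bond=32,
    )
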
contract_boundary_from_xmin(xrange, yrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, sweep_reverse=False, compress_opts=None, inplace=False, **contract_boundary_opts)[source]

Contract a 2D tensor network inwards from the bottom, canonizing and compressing (left to right) along the way. If layer_tags is None this looks like:

a) contract

│  │  │  │  │
●──●──●──●──●       │  │  │  │  │
│  │  │  │  │  -->  ●══●══●══●══●
●──●──●──●──●

b) optionally canonicalize

│  │  │  │  │
●══●══<══<══<

c) compress in opposite direction

│  │  │  │  │  -->  │  │  │  │  │  -->  │  │  │  │  │
>──●══●══●══●  -->  >──>──●══●══●  -->  >──>──>──●══●
.  .           -->     .  .        -->        .  .

If layer_tags is specified, then each layer is contracted in and compressed separately, generally resulting in a lower memory scaling. For two layer tags this looks like:

a) first flatten the outer boundary only

│ ││ ││ ││ ││ │       │ ││ ││ ││ ││ │
●─○●─○●─○●─○●─○       ●─○●─○●─○●─○●─○
│ ││ ││ ││ ││ │  ==>   ╲│ ╲│ ╲│ ╲│ ╲│
●─○●─○●─○●─○●─○         ●══●══●══●══●

b) contract and compress a single layer only

│ ││ ││ ││ ││ │
│ ○──○──○──○──○
│╱ │╱ │╱ │╱ │╱
●══<══<══<══<

c) contract and compress the next layer

╲│ ╲│ ╲│ ╲│ ╲│
 >══>══>══>══●

Parameters:
  • xrange ((int, int)) – The range of rows to compress (inclusive).

  • yrange ((int, int) or None, optional) – The range of columns to compress (inclusive), sweeping along with canonization and compression. Defaults to all columns.

  • max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.

  • canonize (bool, optional) – Whether to sweep one way with canonization before compressing.

  • mode ({'mps', 'full-bond'}, optional) – How to perform the compression on the boundary.

  • layer_tags (None or sequence[str], optional) – If None, all tensors at each coordinate pair [(i, j), (i + 1, j)] will be first contracted. If specified, then the outer tensor at (i, j) will be contracted with the tensor specified by [(i + 1, j), layer_tag], for each layer_tag in layer_tags.

  • sweep_reverse (bool, optional) – Which way to perform the compression sweep, which has an effect on which tensors end up being canonized. Setting this to true sweeps the compression from largest to smallest coordinates.

  • compress_opts (None or dict, optional) – Supplied to compress_between().

  • inplace (bool, optional) – Whether to perform the contraction inplace or not.

contract_boundary_from_xmin_[source]
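
For example, contracting the two bottom rows of a two-layer norm network into the boundary, handling each tagged layer separately (a minimal sketch; 'KET' and 'BRA' are assumed to be the layer tags produced by the make_norm() helper):

    import quimb.tensor as qtn

    peps = qtn.PEPS.rand(6, 6, bond_dim=3, seed=42)
    norm = peps.make_norm()
    norm.contract_boundary_from_xmin_(
        xrange=(0, 1), max_bond=32, layer_tags=['KET', 'BRA'],
    )
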
contract_boundary_from_xmax(xrange, yrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, inplace=False, sweep_reverse=False, compress_opts=None, **contract_boundary_opts)[source]

Contract a 2D tensor network inwards from the top, canonizing and compressing (right to left) along the way. If layer_tags is None this looks like:

a) contract

●──●──●──●──●
|  |  |  |  |  -->  ●══●══●══●══●
●──●──●──●──●       |  |  |  |  |
|  |  |  |  |

b) optionally canonicalize

●══●══<══<══<
|  |  |  |  |

c) compress in opposite direction

>──●══●══●══●  -->  >──>──●══●══●  -->  >──>──>──●══●
|  |  |  |  |  -->  |  |  |  |  |  -->  |  |  |  |  |
.  .           -->     .  .        -->        .  .

If layer_tags is specified, then each layer is contracted in and compressed separately, generally resulting in a lower memory scaling. For two layer tags this looks like:

a) first flatten the outer boundary only

●─○●─○●─○●─○●─○         ●══●══●══●══●
│ ││ ││ ││ ││ │  ==>   ╱│ ╱│ ╱│ ╱│ ╱│
●─○●─○●─○●─○●─○       ●─○●─○●─○●─○●─○
│ ││ ││ ││ ││ │       │ ││ ││ ││ ││ │

b) contract and compress a single layer only

●══<══<══<══<
│╲ │╲ │╲ │╲ │╲
│ ○──○──○──○──○
│ ││ ││ ││ ││ │

c) contract and compress the next layer

 ●══●══●══●══●
╱│ ╱│ ╱│ ╱│ ╱│

Parameters:
  • xrange ((int, int)) – The range of rows to compress (inclusive).

  • yrange ((int, int) or None, optional) – The range of columns to compress (inclusive), sweeping along with canonization and compression. Defaults to all columns.

  • max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.

  • canonize (bool, optional) – Whether to sweep one way with canonization before compressing.

  • mode ({'mps', 'full-bond'}, optional) – How to perform the compression on the boundary.

  • layer_tags (None or str, optional) – If None, all tensors at each coordinate pair [(i, j), (i - 1, j)] will be first contracted. If specified, then the outer tensor at (i, j) will be contracted with the tensor specified by [(i - 1, j), layer_tag], for each layer_tag in layer_tags.

  • sweep_reverse (bool, optional) – Which way to perform the compression sweep, which has an effect on which tensors end up being canonized. Setting this to true sweeps the compression from largest to smallest coordinates.

  • compress_opts (None or dict, optional) – Supplied to compress_between().

  • inplace (bool, optional) – Whether to perform the contraction inplace or not.

contract_boundary_from_xmax_[source]
contract_boundary_from_ymin(yrange, xrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, sweep_reverse=False, compress_opts=None, inplace=False, **contract_boundary_opts)[source]

Contract a 2D tensor network inwards from the left, canonizing and compressing (bottom to top) along the way. If layer_tags is None this looks like:

a) contract

●──●──       ●──
│  │         ║
●──●──  ==>  ●──
│  │         ║
●──●──       ●──

b) optionally canonicalize

●──       v──
║         ║
●──  ==>  v──
║         ║
●──       ●──

c) compress in opposite direction

v──       ●──
║         │
v──  ==>  ^──
║         │
●──       ^──

If layer_tags is specified, then each layer is contracted in and compressed separately, generally resulting in a lower memory scaling. For two layer tags this looks like:

a) first flatten the outer boundary only

○──○──           ●──○──
│╲ │╲            │╲ │╲
●─○──○──         ╰─●──○──
 ╲│╲╲│╲     ==>    │╲╲│╲
  ●─○──○──         ╰─●──○──
   ╲│ ╲│             │ ╲│
    ●──●──           ╰──●──

b) contract and compress a single layer only

   ○──
 ╱╱ ╲
●─── ○──
 ╲ ╱╱ ╲
  ^─── ○──
   ╲ ╱╱
    ^─────

c) contract and compress the next layer

●──
│╲
╰─●──
  │╲
  ╰─●──
    │
    ╰──

Parameters:
  • yrange ((int, int)) – The range of columns to compress (inclusive).

  • xrange ((int, int) or None, optional) – The range of rows to compress (inclusive), sweeping along with canonization and compression. Defaults to all rows.

  • max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.

  • canonize (bool, optional) – Whether to sweep one way with canonization before compressing.

  • mode ({'mps', 'full-bond'}, optional) – How to perform the compression on the boundary.

  • layer_tags (None or str, optional) – If None, all tensors at each coordinate pair [(i, j), (i, j + 1)] will be first contracted. If specified, then the outer tensor at (i, j) will be contracted with the tensor specified by [(i, j + 1), layer_tag], for each layer_tag in layer_tags.

  • sweep_reverse (bool, optional) – Which way to perform the compression sweep, which has an effect on which tensors end up being canonized. Setting this to true sweeps the compression from largest to smallest coordinates.

  • compress_opts (None or dict, optional) – Supplied to compress_between().

  • inplace (bool, optional) – Whether to perform the contraction inplace or not.

contract_boundary_from_ymin_[source]
contract_boundary_from_ymax(yrange, xrange=None<