quimb.tensor¶
Tensor and tensor network functionality.
Submodules¶
- quimb.tensor.array_ops
- quimb.tensor.circuit
- quimb.tensor.circuit_gen
- quimb.tensor.contraction
- quimb.tensor.decomp
- quimb.tensor.drawing
- quimb.tensor.fitting
- quimb.tensor.geometry
- quimb.tensor.interface
- quimb.tensor.networking
- quimb.tensor.optimize
- quimb.tensor.tensor_1d
- quimb.tensor.tensor_1d_compress
- quimb.tensor.tensor_1d_tebd
- quimb.tensor.tensor_2d
- quimb.tensor.tensor_2d_compress
- quimb.tensor.tensor_2d_tebd
- quimb.tensor.tensor_3d
- quimb.tensor.tensor_3d_tebd
- quimb.tensor.tensor_approx_spectral
- quimb.tensor.tensor_arbgeom
- quimb.tensor.tensor_arbgeom_compress
- quimb.tensor.tensor_arbgeom_tebd
- quimb.tensor.tensor_builder
- quimb.tensor.tensor_core
- quimb.tensor.tensor_dmrg
- quimb.tensor.tensor_mera
Package Contents¶
- class quimb.tensor.Circuit(N=None, psi0=None, gate_opts=None, gate_contract='auto-split-gate', gate_propagate_tags='register', tags=None, psi0_dtype='complex128', psi0_tag='PSI0', tag_gate_numbers=True, gate_tag_id='GATE_{}', tag_gate_rounds=True, round_tag_id='ROUND_{}', tag_gate_labels=True, bra_site_ind_id='b{}', to_backend=None)[source]¶
Class for simulating quantum circuits using tensor networks. The class keeps a list of Gate objects in sync with a tensor network representing the current state of the circuit.
- Parameters:
N (int, optional) – The number of qubits.
psi0 (TensorNetwork1DVector, optional) – The initial state, assumed to be |00000....0> if not given. The state is always copied and the tag PSI0 added.
gate_opts (dict_like, optional) – Default keyword arguments to supply to each gate_TN_1D() call during the circuit.
gate_contract (str, optional) – Shortcut for setting the default ‘contract’ option in gate_opts.
gate_propagate_tags (str, optional) – Shortcut for setting the default ‘propagate_tags’ option in gate_opts.
tags (str or sequence of str, optional) – Tag(s) to add to the initial wavefunction tensors (whether these are propagated to the rest of the circuit’s tensors depends on gate_opts).
psi0_dtype (str, optional) – Ensure the initial state has this dtype.
psi0_tag (str, optional) – Ensure the initial state has this tag.
tag_gate_numbers (bool, optional) – Whether to tag each gate tensor with its number in the circuit, like "GATE_{g}". This is required for updating the circuit parameters.
gate_tag_id (str, optional) – The format string for tagging each gate tensor, by default e.g. "GATE_{g}".
tag_gate_rounds (bool, optional) – Whether to tag each gate tensor with its round in the circuit, like "ROUND_{r}".
round_tag_id (str, optional) – The format string for tagging each round of gates, by default e.g. "ROUND_{r}".
tag_gate_labels (bool, optional) – Whether to tag each gate tensor with its gate type label, e.g. {"X_1/2", "ISWAP", "CCX", ...}.
bra_site_ind_id (str, optional) – Use this to label ‘bra’ site indices when creating certain (mostly internal) intermediate tensor networks.
- psi¶
The current circuit wavefunction as a tensor network.
- Type:
- uni¶
The current circuit unitary operator as a tensor network.
- Type:
Examples
Create a 3-qubit GHZ state:
>>> qc = qtn.Circuit(3)
>>> gates = [
...     ('H', 0),
...     ('H', 1),
...     ('CNOT', 1, 2),
...     ('CNOT', 0, 2),
...     ('H', 0),
...     ('H', 1),
...     ('H', 2),
... ]
>>> qc.apply_gates(gates)
>>> qc.psi
<TensorNetwork1DVector(tensors=12, indices=14, L=3, max_bond=2)>

>>> qc.psi.to_dense().round(4)
qarray([[ 0.7071+0.j],
        [ 0.    +0.j],
        [ 0.    +0.j],
        [-0.    +0.j],
        [-0.    +0.j],
        [ 0.    +0.j],
        [ 0.    +0.j],
        [ 0.7071+0.j]])

>>> for b in qc.sample(10):
...     print(b)
000
000
111
000
111
111
000
111
000
000
See also
- tag_gate_numbers¶
- tag_gate_rounds¶
- tag_gate_labels¶
- to_backend¶
- gate_opts¶
- _gates = []¶
- _ket_site_ind_id¶
- _bra_site_ind_id¶
- _gate_tag_id¶
- _round_tag_id¶
- _sample_n_gates¶
- _storage¶
- _sampled_conditionals¶
- set_params(params)[source]¶
Set the parameters of the circuit.
- Parameters:
params (dict) – A dictionary mapping gate numbers to the new parameters.
- classmethod from_qsim_str(contents, **circuit_opts)[source]¶
Generate a
Circuit
instance from a ‘qsim’ string.
- classmethod from_qsim_file(fname, **circuit_opts)[source]¶
Generate a Circuit instance from a ‘qsim’ file. The qsim file format is described here: https://quantumai.google/qsim/input_format.
- classmethod from_qsim_url(url, **circuit_opts)[source]¶
Generate a
Circuit
instance from a ‘qsim’ url.
- classmethod from_openqasm2_str(contents, **circuit_opts)[source]¶
Generate a
Circuit
instance from an OpenQASM 2.0 string.
- classmethod from_openqasm2_file(fname, **circuit_opts)[source]¶
Generate a
Circuit
instance from an OpenQASM 2.0 file.
- classmethod from_openqasm2_url(url, **circuit_opts)[source]¶
Generate a
Circuit
instance from an OpenQASM 2.0 url.
- classmethod from_gates(gates, N=None, progbar=False, **kwargs)[source]¶
Generate a
Circuit
instance from a sequence of gates.
- property gates¶
- property num_gates¶
- _apply_gate(gate, tags=None, **gate_opts)[source]¶
Apply a Gate to this Circuit. This is the main method that all calls to apply a gate should go through.
- apply_gate(gate_id, *gate_args, params=None, qubits=None, controls=None, gate_round=None, parametrize=None, **gate_opts)[source]¶
Apply a single gate to this tensor network quantum circuit. If gate_round is supplied the tensor(s) added will be tagged with 'ROUND_{gate_round}'. Alternatively, putting an integer first like so:
circuit.apply_gate(10, 'H', 7)
is automatically translated to:
circuit.apply_gate('H', 7, gate_round=10)
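For illustration, a minimal sketch of the different calling conventions (the particular gates chosen here are arbitrary):

import quimb.tensor as qtn

circ = qtn.Circuit(N=3)
circ.apply_gate('H', 0)                      # named gate, no parameters
circ.apply_gate('RZ', 0.5, 1)                # parametrized gate: (*params, *qubits)
circ.apply_gate('CNOT', 0, 2, gate_round=1)  # gate tensor also tagged with 'ROUND_1'
circ.apply_gate(2, 'X', 1)                   # leading integer is interpreted as gate_round=2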
- Parameters:
gate_id (Gate, str, or array_like) –
Which gate to apply. This can be:
- A Gate instance, i.e. with parameters and qubits already specified.
- A string, e.g. 'H', 'U3', etc., in which case gate_args should be supplied with (*params, *qubits).
- A raw array, in which case gate_args should be supplied with (*qubits,).
gate_round (int, optional) – The gate round. If gate_id is integer-like, it will also be taken from here, whereupon gate_id, gate_args = gate_args[0], gate_args[1:].
gate_opts – Supplied to the gate function, options here will override the default gate_opts.
- apply_gate_raw(U, where, controls=None, gate_round=None, **gate_opts)[source]¶
Apply the raw array U as a gate on qubits in where. It will be assumed to be unitary for the sake of computing reverse lightcones.
- apply_gates(gates, progbar=False, **gate_opts)[source]¶
Apply a sequence of gates to this tensor network quantum circuit.
- Parameters:
gates (Sequence[Gate] or Sequence[Tuple]) – The sequence of gates to apply.
gate_opts – Supplied to
apply_gate()
.
- su4(theta1, phi1, lamda1, theta2, phi2, lamda2, theta3, phi3, lamda3, theta4, phi4, lamda4, t1, t2, t3, i, j, gate_round=None, parametrize=False, **kwargs)[source]¶
- property psi¶
- Tensor network representation of the wavefunction.
- get_uni(transposed=False)[source]¶
Tensor network representation of the unitary operator (i.e. with the initial state removed).
- property uni¶
- get_reverse_lightcone_tags(where)[source]¶
Get the tags of gates in this circuit corresponding to the ‘reverse’ lightcone propagating backwards from registers in
where
.
- get_psi_reverse_lightcone(where, keep_psi0=False)[source]¶
Get just the bit of the wavefunction in the reverse lightcone of sites in where - i.e. causally linked.
- Parameters:
where (int, or sequence of int) – The sites to propagate the lightcone back from, supplied to get_reverse_lightcone_tags().
keep_psi0 (bool, optional) – Keep the tensors corresponding to the initial wavefunction regardless of whether they are outside of the lightcone.
- Returns:
psi_lc
- Return type:
- get_psi_simplified(seq='ADCRS', atol=1e-12, equalize_norms=False)[source]¶
Get the full wavefunction post local tensor network simplification.
- Parameters:
seq (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
- Returns:
psi
- Return type:
- get_rdm_lightcone_simplified(where, seq='ADCRS', atol=1e-12, equalize_norms=False)[source]¶
Get a simplified TN of the norm of the wavefunction, with gates outside the reverse lightcone of where cancelled, and physical indices within where preserved so that they can be fixed (sliced) or used as output indices.
- Parameters:
where (int or sequence of int) – The region assumed to be the target density matrix essentially. Supplied to
get_reverse_lightcone_tags()
.seq (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
- Return type:
- amplitude(b, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=False, backend=None, dtype='complex128', rehearse=False)[source]¶
Get the amplitude coefficient of bitstring
b
.\[c_b = \langle b | \psi \rangle\]- Parameters:
b (str or sequence of int) – The bitstring to compute the transition amplitude for.
optimize (str, optional) – Contraction path optimizer to use for the amplitude, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).
simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
backend (str, optional) – Backend to perform the contraction with, e.g.
'numpy'
,'cupy'
or'jax'
. Passed tocotengra
.dtype (str, optional) – Data type to cast the TN to before contraction.
rehearse (bool or "tn", optional) – If
True
, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. Returns a dict with keys"tn"
and'tree'
with the tensor network that will be contracted and the corresponding contraction tree if so.
- amplitude_rehearse(b='random', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=False, optimize='auto-hq', dtype='complex128', rehearse=True)[source]¶
Perform just the tensor network simplifications and contraction tree finding associated with computing a single amplitude (caching the results) but don’t perform the actual contraction.
- Parameters:
b ('random', str or sequence of int) – The bitstring to rehearse computing the transition amplitude for, if
'random'
(the default) a random bitstring will be used.optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).
simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
backend (str, optional) – Backend to perform the marginal contraction with, e.g.
'numpy'
,'cupy'
or'jax'
. Passed tocotengra
.dtype (str, optional) – Data type to cast the TN to before contraction.
- Return type:
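As a small illustrative sketch of amplitude() and amplitude_rehearse() (the gates here are just an arbitrary Bell-state preparation):

import quimb.tensor as qtn

circ = qtn.Circuit(2)
circ.apply_gates([('H', 0), ('CNOT', 0, 1)])
c00 = circ.amplitude('00')        # <00|psi>, roughly 0.7071 for this circuit
info = circ.amplitude_rehearse()  # dict with 'tn' and 'tree'; no contraction is performed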
- partial_trace(keep, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=False, backend=None, dtype='complex128', rehearse=False)[source]¶
Perform the partial trace on the circuit wavefunction, retaining only qubits in
keep
, and making use of reverse lightcone cancellation:\[\rho_{\bar{q}} = Tr_{\bar{p}} |\psi_{\bar{q}} \rangle \langle \psi_{\bar{q}}|\]Where \(\bar{q}\) is the set of qubits to keep, \(\psi_{\bar{q}}\) is the circuit wavefunction only with gates in the causal cone of this set, and \(\bar{p}\) is the remaining qubits.
- Parameters:
keep (int or sequence of int) – The qubit(s) to keep as we trace out the rest.
optimize (str, optional) – Contraction path optimizer to use for the reduced density matrix, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).
simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
backend (str, optional) – Backend to perform the marginal contraction with, e.g.
'numpy'
,'cupy'
or'jax'
. Passed tocotengra
.dtype (str, optional) – Data type to cast the TN to before contraction.
rehearse (bool or "tn", optional) – If
True
, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. Returns a dict with keys"tn"
and'tree'
with the tensor network that will be contracted and the corresponding contraction tree if so.
- Return type:
array or dict
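For example, a minimal sketch (again using an arbitrary small Bell-state circuit):

import quimb.tensor as qtn

circ = qtn.Circuit(2)
circ.apply_gates([('H', 0), ('CNOT', 0, 1)])
rho0 = circ.partial_trace(0)   # reduced density matrix of qubit 0, as a raw array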
- local_expectation(G, where, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-12, simplify_equalize_norms=False, backend=None, dtype='complex128', rehearse=False)[source]¶
Compute a single expectation value of operator G, acting on sites where, making use of reverse lightcone cancellation.
\[\langle \psi_{\bar{q}} | G_{\bar{q}} | \psi_{\bar{q}} \rangle\]
where \(\bar{q}\) is the set of qubits \(G\) acts on and \(\psi_{\bar{q}}\) is the circuit wavefunction only with gates in the causal cone of this set. If you supply a tuple or list of gates then the expectations will be computed simultaneously.
- Parameters:
G (array or sequence[array]) – The raw operator(s) to find the expectation of.
where (int or sequence of int) – Which qubits the operator acts on.
optimize (str, optional) – Contraction path optimizer to use for the local expectation, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).
simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
backend (str, optional) – Backend to perform the marginal contraction with, e.g.
'numpy'
,'cupy'
or'jax'
. Passed tocotengra
.dtype (str, optional) – Data type to cast the TN to before contraction.
gate_opts (None or dict_like) – Options to use when applying
G
to the wavefunction.rehearse (bool or "tn", optional) – If
True
, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. Returns a dict with keys'tn'
and'tree'
with the tensor network that will be contracted and the corresponding contraction tree if so.
- Return type:
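A minimal sketch, using Pauli operators from the main quimb namespace (the circuit and operators are arbitrary illustrative choices):

import quimb as qu
import quimb.tensor as qtn

circ = qtn.Circuit(2)
circ.apply_gates([('H', 0), ('CNOT', 0, 1)])
ez = circ.local_expectation(qu.pauli('Z'), 0)                        # <Z_0>, ~0 here
ezz = circ.local_expectation(qu.pauli('Z') & qu.pauli('Z'), (0, 1))  # <Z_0 Z_1>, ~1 here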
- compute_marginal(where, fix=None, optimize='auto-hq', backend=None, dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=True, rehearse=False)[source]¶
Compute the probability tensor of qubits in
where
, given possibly fixed qubits infix
and tracing everything else having removed redundant unitary gates.- Parameters:
where (sequence of int) – The qubits to compute the marginal probability distribution of.
fix (None or dict[int, str], optional) – Measurement results on other qubits to fix.
optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).
backend (str, optional) – Backend to perform the marginal contraction with, e.g.
'numpy'
,'cupy'
or'jax'
. Passed tocotengra
.dtype (str, optional) – Data type to cast the TN to before contraction.
simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
rehearse (bool or "tn", optional) – Whether to perform the marginal contraction or just return the associated TN and contraction tree.
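A minimal sketch of computing a conditioned marginal (the qubits and fixed result are arbitrary choices):

import quimb.tensor as qtn

circ = qtn.Circuit(3)
circ.apply_gates([('H', 0), ('CNOT', 0, 1), ('CNOT', 1, 2)])
# probability tensor over qubits (0, 1), with qubit 2 fixed to the result '0'
p01 = circ.compute_marginal((0, 1), fix={2: '0'})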
- calc_qubit_ordering(qubits=None, method='greedy-lightcone')[source]¶
Get a order to measure
qubits
in, by greedily choosing whichever has the smallest reverse lightcone followed by whichever expands this lightcone least.
- _parse_qubits_order(qubits=None, order=None)[source]¶
Simply initializes the default of measuring all qubits, and the default order, or checks that
order
is a permutation ofqubits
.
- _group_order(order, group_size=1)[source]¶
Take the qubit ordering
order
and batch it in groups of sizegroup_size
, sorting the qubits (for caching reasons) within each group.
- sample(C, qubits=None, order=None, group_size=1, max_marginal_storage=2**20, seed=None, optimize='auto-hq', backend=None, dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=False)[source]¶
Sample the circuit given by
gates
,C
times, using lightcone cancelling and caching marginal distribution results. This is a generator. This proceeds as a chain of marginal computations.Assuming we have
group_size=1
, and some ordering of the qubits, \(\{q_0, q_1, q_2, q_3, \ldots\}\) we first compute:\[p(q_0) = \mathrm{diag} \mathrm{Tr}_{1, 2, 3,\ldots} | \psi_{0} \rangle \langle \psi_{0} |\]I.e. simply the probability distribution on a single qubit, conditioned on nothing. The subscript on \(\psi\) refers to the fact that we only need gates from the causal cone of qubit 0. From this we can sample an outcome, either 0 or 1, if we call this \(r_0\) we can then move on to the next marginal:
\[p(q_1 | r_0) = \mathrm{diag} \mathrm{Tr}_{2, 3,\ldots} \langle r_0 | \psi_{0, 1} \rangle \langle \psi_{0, 1} | r_0 \rangle\]I.e. the probability distribution of the next qubit, given our prior result. We can sample from this to get \(r_1\). Then we compute:
\[p(q_2 | r_0 r_1) = \mathrm{diag} \mathrm{Tr}_{3,\ldots} \langle r_0 r_1 | \psi_{0, 1, 2} \rangle \langle \psi_{0, 1, 2} | r_0 r_1 \rangle\]Eventually we will reach the ‘final marginal’, which we can compute as
\[|\langle r_0 r_1 r_2 r_3 \ldots | \psi \rangle|^2\]since there is nothing left to trace out.
- Parameters:
C (int) – The number of times to sample.
qubits (None or sequence of int, optional) – Which qubits to measure, defaults (
None
) to all qubits.order (None or sequence of int, optional) – Which order to measure the qubits in, defaults (
None
) to an order based on greedily expanding the smallest reverse lightcone. If specified it should be a permutation ofqubits
.group_size (int, optional) – How many qubits to group together into marginals, the larger this is the fewer marginals need to be computed, which can be faster at the cost of higher memory. The marginal themselves will each be of size
2**group_size
.max_marginal_storage (int, optional) – The total cumulative number of marginal probabilites to cache, once this is exceeded caching will be turned off.
seed (None or int, optional) – A random seed, passed to
numpy.random.seed
if given.optimize (str, optional) – Contraction path optimizer to use for the marginals, shouldn’t be a non-reusable path optimizer as called on many different TNs. Passed to
cotengra.array_contract_tree()
.backend (str, optional) – Backend to perform the marginal contraction with, e.g.
'numpy'
,'cupy'
or'jax'
. Passed tocotengra
.dtype (str, optional) – Data type to cast the TN to before contraction.
simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
- Yields:
bitstrings (sequence of str)
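For instance, a short sketch of drawing samples from a small GHZ circuit and tallying them:

from collections import Counter
import quimb.tensor as qtn

circ = qtn.Circuit(3)
circ.apply_gates([('H', 0), ('CNOT', 0, 1), ('CNOT', 1, 2)])
counts = Counter(circ.sample(100, seed=42))  # roughly half '000' and half '111'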
- sample_rehearse(qubits=None, order=None, group_size=1, result=None, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=False, rehearse=True, progbar=False)[source]¶
Perform the preparations and contraction tree findings for sample(), caching various intermediate objects, but don’t perform the main contractions.
- Parameters:
qubits (None or sequence of int, optional) – Which qubits to measure, defaults (
None
) to all qubits.order (None or sequence of int, optional) – Which order to measure the qubits in, defaults (
None
) to an order based on greedily expanding the smallest reverse lightcone.group_size (int, optional) – How many qubits to group together into marginals, the larger this is the fewer marginals need to be computed, which can be faster at the cost of higher memory. The marginal’s size itself is exponential in
group_size
.result (None or dict[int, str], optional) – Explicitly check the computational cost of this result, assumed to be all zeros if not given.
optimize (str, optional) – Contraction path optimizer to use for the marginals, shouldn’t be a non-reusable path optimizer as called on many different TNs. Passed to
cotengra.array_contract_tree()
.simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
progbar (bool, optional) – Whether to show the progress of finding each contraction tree.
- Returns:
One contraction tree object per grouped marginal computation. The keys of the dict are the qubits the marginal is computed for, the values are a dict containing a representative simplified tensor network (key: ‘tn’) and the main contraction tree (key: ‘tree’).
- Return type:
- sample_chaotic(C, marginal_qubits, max_marginal_storage=2**20, seed=None, optimize='auto-hq', backend=None, dtype='complex64', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=False)[source]¶
Sample from this circuit, assuming it to be chaotic, which is to say: only compute and sample correctly from the final marginal, assuming that the distribution on the other qubits is uniform. Given marginal_qubits=5 for instance, for each sample a random bit-string \(r_0 r_1 r_2 \ldots r_{N - 6}\) for the remaining \(N - 5\) qubits will be chosen, then the final marginal will be computed as
\[p(q_{N-5}q_{N-4}q_{N-3}q_{N-2}q_{N-1} | r_0 r_1 r_2 \ldots r_{N-6}) = |\langle r_0 r_1 r_2 \ldots r_{N - 6} | \psi \rangle|^2\]
and then sampled from. Note the expression on the right hand side has 5 open indices here and so is a tensor, however if marginal_qubits is not too big then the cost of contracting this is very similar to a single amplitude.
Note
This method assumes the circuit is chaotic; if it's not, then the samples produced will not be an accurate representation of the probability distribution.
- Parameters:
C (int) – The number of times to sample.
marginal_qubits (int or sequence of int) – The number of qubits to treat as marginal, or the actual qubits. If an int is given then the qubits treated as marginal will be
circuit.calc_qubit_ordering()[:marginal_qubits]
.seed (None or int, optional) – A random seed, passed to
numpy.random.seed
if given.optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).
backend (str, optional) – Backend to perform the marginal contraction with, e.g.
'numpy'
,'cupy'
or'jax'
. Passed tocotengra
.dtype (str, optional) – Data type to cast the TN to before contraction.
simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
- Yields:
str
- sample_chaotic_rehearse(marginal_qubits, result=None, optimize='auto-hq', simplify_sequence='ADCRS', simplify_atol=1e-06, simplify_equalize_norms=False, dtype='complex64', rehearse=True)[source]¶
Rehearse chaotic sampling (perform just the TN simplifications and contraction tree finding).
- Parameters:
marginal_qubits (int or sequence of int) – The number of qubits to treat as marginal, or the actual qubits. If an int is given then the qubits treated as marginal will be
circuit.calc_qubit_ordering()[:marginal_qubits]
.result (None or dict[int, str], optional) – Explicitly check the computational cost of this result, assumed to be all zeros if not given.
optimize (str, optional) – Contraction path optimizer to use for the marginal, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).
simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
dtype (str, optional) – Data type to cast the TN to before contraction.
- Returns:
The contraction path information for the main computation, the key is the qubits that formed the final marginal. The value is itself a dict with keys
'tn'
- a representative tensor network - and'tree'
- the contraction tree.- Return type:
- to_dense(reverse=False, optimize='auto-hq', simplify_sequence='R', simplify_atol=1e-12, simplify_equalize_norms=False, backend=None, dtype=None, rehearse=False)[source]¶
Generate the dense representation of the final wavefunction.
- Parameters:
reverse (bool, optional) – Whether to reverse the order of the subsystems, to match the convention of qiskit for example.
optimize (str, optional) – Contraction path optimizer to use for the contraction, can be a non-reusable path optimizer as only called once (though path won’t be cached for later use in that case).
dtype (str, optional) – If given, convert the tensors to this dtype prior to contraction.
simplify_sequence (str, optional) – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.simplify_equalize_norms (bool, optional) – Actively renormalize tensor norms during simplification.
backend (str, optional) – Backend to perform the contraction with, e.g.
'numpy'
,'cupy'
or'jax'
. Passed tocotengra
.dtype – Data type to cast the TN to before contraction.
rehearse (bool, optional) – If
True
, generate and cache the simplified tensor network and contraction tree but don’t actually perform the contraction. Returns a dict with keys'tn'
and'tree'
with the tensor network that will be contracted and the corresponding contraction tree if so.
- Returns:
psi – The densely represented wavefunction with dtype data.
- Return type:
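A minimal sketch of converting a small circuit to a dense statevector:

import quimb.tensor as qtn

circ = qtn.Circuit(2)
circ.apply_gates([('H', 0), ('CNOT', 0, 1)])
psi = circ.to_dense()            # dense ket as a (2**N, 1) column vector
probs = abs(psi.ravel()) ** 2    # probabilities of each computational basis state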
- simulate_counts(C, seed=None, reverse=False, **to_dense_opts)[source]¶
Simulate measuring all qubits in the computational basis many times. Unlike
sample()
, this generates all the samples simultaneously using the full wavefunction constructed fromto_dense()
, then callingsimulate_counts()
.Warning
Because this constructs the full wavefunction it always requires exponential memory in the number of qubits, regardless of circuit depth and structure.
- Parameters:
C (int) – The number of ‘experimental runs’, i.e. total counts.
seed (int, optional) – A seed for reproducibility.
reverse (bool, optional) – Whether to reverse the order of the subsystems, to match the convention of qiskit for example.
to_dense_opts – Supplied to to_dense().
- Returns:
results – The number of recorded counts for each
- Return type:
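For example (the counts shown are illustrative, not exact values):

import quimb.tensor as qtn

circ = qtn.Circuit(3)
circ.apply_gates([('H', 0), ('CNOT', 0, 1), ('CNOT', 1, 2)])
results = circ.simulate_counts(1024, seed=7)
# -> something like {'000': 514, '111': 510} for this GHZ circuit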
- xeb(samples_or_counts, cache=None, cache_maxsize=2**20, progbar=False, **amplitude_opts)[source]¶
Compute the linear cross entropy benchmark (XEB) for samples or counts, amplitude per amplitude.
- Parameters:
samples_or_counts (Iterable[str] or Dict[str, int]) – Either the raw bitstring samples or a dict mapping bitstrings to the number of counts observed.
cache (dict, optional) – A dictionary to store the probabilities in, if not supplied
quimb.utils.LRU(cache_maxsize)
will be used.cache_maxsize – The maximum size of the cache to be used.
optional – The maximum size of the cache to be used.
progbar – Whether to show progress as the bitstrings are iterated over.
optional – Whether to show progress as the bitstrings are iterated over.
amplitude_opts – Supplied to
amplitude()
.
- xeb_ex(optimize='auto-hq', simplify_sequence='R', simplify_atol=1e-12, simplify_equalize_norms=False, dtype=None, backend=None, autojit=False, progbar=False, **contract_opts)[source]¶
Compute the exactly expected XEB for this circuit. The main feature here is that if you supply a cotengra optimizer that searches for sliced indices then the XEB will be computed without constructing the full wavefunction.
- Parameters:
optimize (str or PathOptimizer, optional) – Contraction path optimizer.
simplify_sequence (str, optional) – Simplifications to apply to tensor network prior to contraction.
simplify_sequence – Which local tensor network simplifications to perform and in which order, see
full_simplify()
.simplify_atol (float, optional) – The tolerance with which to compare to zero when applying
full_simplify()
.dtype (str, optional) – Data type to cast the TN to before contraction.
backend (str, optional) – Convert tensors to, and then use contractions from, this library.
autojit (bool, optional) – Apply autoray.autojit to the contraction and map-reduce.
progbar (bool, optional) – Show progress in terms of number of wavefunction chunks processed.
- update_params_from(tn)[source]¶
Assuming
tn
is a tensor network with tensors taggedGATE_{i}
corresponding to this circuit (e.g. fromcirc.psi
orcirc.uni
) but with updated parameters, update the current circuit parameters and tensors with those values.This is an inplace modification of the
Circuit
.- Parameters:
tn (TensorNetwork) – The tensor network to find the updated parameters from.
- class quimb.tensor.CircuitDense(N=None, psi0=None, gate_opts=None, gate_contract=True, tags=None)[source]¶
Bases:
Circuit
Quantum circuit simulation keeping the state in full dense form.
- gate_opts¶
- property psi¶
- Tensor network representation of the wavefunction.
- property uni¶
- class quimb.tensor.CircuitMPS(N=None, *, psi0=None, max_bond=None, cutoff=1e-10, gate_opts=None, gate_contract='auto-mps', **circuit_opts)[source]¶
Bases:
Circuit
Quantum circuit simulation keeping the state always in an MPS form. If you think the circuit will not build up much entanglement, or you just want to keep a rigorous handle on how much entanglement is present, this can be useful.
- Parameters:
N (int, optional) – The number of qubits in the circuit.
psi0 (TensorNetwork1DVector, optional) – The initial state, assumed to be |00000....0> if not given. The state is always copied and the tag PSI0 added.
max_bond (int, optional) – The maximum bond dimension to truncate to when applying gates, if any. This is simply a shortcut for setting gate_opts['max_bond'].
cutoff (float, optional) – The singular value cutoff to use when truncating the state. This is simply a shortcut for setting gate_opts['cutoff'].
gate_opts (dict, optional) – Default options to pass to each gate, for example, “max_bond” and “cutoff” etc.
gate_contract (str, optional) –
The default method for applying gates. Relevant MPS options are:
'auto-mps'
: automatically choose a method that maintains the MPS form (default). This uses'swap+split'
for 2-qubit gates and'nonlocal'
for 3+ qubit gates.'swap+split'
: swap nonlocal qubits to be next to each other, before applying the gate, then swapping them back'nonlocal'
: turn the gate into a potentially nonlocal (sub) MPO and apply it directly. Seetensor_network_1d_compress()
.
circuit_opts – Supplied to
Circuit
.
- psi¶
The current state of the circuit, always in MPS form.
- Type:
Examples
Create a circuit object that always uses the “nonlocal” method for contracting in gates, and the “dm” compression method within that, using a large cutoff and maximum bond dimension:
circ = qtn.CircuitMPS(
    N=56,
    gate_opts=dict(
        contract="nonlocal",
        method="dm",
        max_bond=1024,
        cutoff=1e-3,
    )
)
- gate_opts¶
- apply_gates(gates, progbar=False, **gate_opts)[source]¶
Apply a sequence of gates to this tensor network quantum circuit.
- Parameters:
gates (Sequence[Gate] or Sequence[Tuple]) – The sequence of gates to apply.
gate_opts – Supplied to
apply_gate()
.
- property psi¶
- Tensor network representation of the wavefunction.
- property uni¶
- get_psi_reverse_lightcone(where, keep_psi0=False)[source]¶
Override
get_psi_reverse_lightcone
as for an MPS the lightcone is not meaningful.
- fidelity_estimate()[source]¶
Estimate the fidelity of the current state based on its norm, which tracks how much the state has been truncated:
\[\tilde{F} = \left| \langle \psi | \psi \rangle \right|^2 \approx \left|\langle \psi_\mathrm{ideal} | \psi \rangle\right|^2\]See also
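A brief sketch (here the GHZ preparation fits exactly within bond dimension 2, so little to no truncation occurs):

import quimb.tensor as qtn

circ = qtn.CircuitMPS(4, max_bond=2)
circ.apply_gates([('H', 0), ('CNOT', 0, 1), ('CNOT', 1, 2), ('CNOT', 2, 3)])
f = circ.fidelity_estimate()   # close to 1.0 when truncation is negligible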
- class quimb.tensor.CircuitPermMPS(N=None, psi0=None, gate_opts=None, gate_contract='swap+split', **circuit_opts)[source]¶
Bases:
CircuitMPS
Quantum circuit simulation keeping the state always in an MPS form, but lazily tracking the qubit ordering rather than ‘swapping back’ qubits after applying non-local gates. This can be useful for circuits with no expectation of locality. The qubit ordering is always tracked in the attribute
qubits
. Thepsi
attribute returns the TN with the sites reindexed and retagged according to the current qubit ordering, meaning it is no longer an MPS. Use circ.get_psi_unordered() to get the unpermuted MPS and use circ.qubits to get the current qubit ordering if you prefer.- gate_opts¶
- qubits¶
- _apply_gate(gate, tags=None, **gate_opts)[source]¶
Apply a Gate to this Circuit. This is the main method that all calls to apply a gate should go through.
- get_psi_unordered()[source]¶
Return the MPS representing the state but without reordering the sites.
- property psi¶
- Tensor network representation of the wavefunction.
- class quimb.tensor.Gate(label, params, qubits=None, controls=None, round=None, parametrize=False)[source]¶
A simple class for storing the details of a quantum circuit gate.
- Parameters:
label (str) – The name or ‘identifier’ of the gate.
params (Iterable[float]) – The parameters of the gate.
qubits (Iterable[int], optional) – Which qubits the gate acts on.
controls (Iterable[int], optional) – Which qubits are the controls.
round (int, optional) – If given, which round or layer the gate is part of.
parametrize (bool, optional) – Whether the gate will correspond to a parametrized tensor.
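A small sketch of constructing a gate object directly and applying it (the specific gate, angle, and qubits are arbitrary):

import quimb.tensor as qtn

g = qtn.Gate('RZZ', params=(0.25,), qubits=(0, 1), round=0)
circ = qtn.Circuit(2)
circ.apply_gate(g)   # a Gate instance can be passed straight to apply_gate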
- __slots__ = ('_label', '_params', '_qubits', '_controls', '_round', '_parametrize', '_tag', '_special',...¶
- _label¶
- _params¶
- _round¶
- _parametrize¶
- _tag¶
- _special¶
- _constant¶
- _array = None¶
- property label¶
- property params¶
- property qubits¶
- property total_qubit_count¶
- property controls¶
- property round¶
- property special¶
- property parametrize¶
- property tag¶
- build_array()[source]¶
Build the array representation of the gate. For controlled gates this excludes the control qubits.
- property array¶
- quimb.tensor.circ_ansatz_1D_brickwork(n, depth, cyclic=False, gate2='cz', seed=None, **circuit_opts)[source]¶
A 1D circuit ansatz with odd and even layers of entangling gates interleaved with U3 single qubit unitaries:
| | | | | | u u u u u o++o o++o | u u u | o++o o++o u | u u u | u o++o o++o | u u u | o++o o++o u | u u u u u o++o o++o | u u u | o++o o++o u u u u u | | | | | |
- Parameters:
n (int) – The number of qubits.
depth (int) – The number of entangling gates per pair.
cyclic (bool, optional) – Whether to add entangling gates between qubits 0 and n - 1.
gate2 ({'cx', 'cy', 'cz', 'iswap', ..., str}, optional) – The gate to use for the entangling pairs.
seed (int, optional) – Random seed for parameters.
opts – Supplied to
gates_to_param_circuit()
.
- Return type:
See also
- quimb.tensor.circ_ansatz_1D_rand(n, depth, seed=None, cyclic=False, gate2='cz', avoid_doubling=True, **circuit_opts)[source]¶
A 1D circuit ansatz with randomly place entangling gates interleaved with U3 single qubit unitaries.
- Parameters:
n (int) – The number of qubits.
depth (int) – The number of entangling gates per pair.
seed (int, optional) – Random seed.
cyclic (bool, optional) – Whether to add entangling gates between qubits 0 and n - 1.
gate2 ({'cx', 'cy', 'cz', 'iswap', ..., str}, optional) – The gate to use for the entangling pairs.
avoid_doubling (bool, optional) – Whether to avoid placing an entangling gate directly above the same entangling gate (there will still be single qubit gates interleaved).
opts – Supplied to
gates_to_param_circuit()
.
- Return type:
- quimb.tensor.circ_ansatz_1D_zigzag(n, depth, gate2='cz', seed=None, **circuit_opts)[source]¶
A 1D circuit ansatz with forward and backward layers of entangling gates interleaved with U3 single qubit unitaries:
| | | | u u | | o++o u | | | | u | o++o | | | u | | | o++o u u u u | | o++o | | u | | o++o | | u | u o++o u | u u | | | | | |
- Parameters:
n (int) – The number of qubits.
depth (int) – The number of entangling gates per pair.
gate2 ({'cx', 'cy', 'cz', 'iswap', ..., str}, optional) – The gate to use for the entangling pairs.
seed (int, optional) – Random seed for parameters.
opts – Supplied to
gates_to_param_circuit()
.
- Return type:
See also
- quimb.tensor.circ_qaoa(terms, depth, gammas, betas, **circuit_opts)[source]¶
Generate the QAOA circuit for weighted graph described by
terms
.\[|{\bar{\gamma}, \bar{\beta}}\rangle = U_B (\beta _p) U_C (\gamma _p) \cdots U_B (\beta _1) U_C (\gamma _1) |{+}\rangle\]with
\[U_C (\gamma) = e^{-i \gamma \mathcal{C}} = \prod \limits_{i, j \in E(G)} e^{-i \gamma w_{i j} Z_i Z_j}\]and
\[U_B (\beta) = \prod \limits_{i \in G} e^{-i \beta X_i}\]- Parameters:
terms (dict[tuple[int], float]) – The mapping of integer pair keys (i, j) to the edge weight values, wij. The integers should be a contiguous range enumerated from zero, with the total number of qubits being inferred from this.
depth (int) – The number of layers of gates to apply, p above.
gammas (iterable of float) – The interaction angles for each layer.
betas (iterable of float) – The rotation angles for each layer.
circuit_opts – Supplied to Circuit. Note gate_opts={'contract': False} is set by default (it can be overridden) since the RZZ gate, even though it has a rank-2 decomposition, is also diagonal.
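A minimal sketch for a 3-node weighted triangle graph (the weights and angles here are arbitrary illustrative values):

import quimb.tensor as qtn

terms = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}
circ = qtn.circ_qaoa(terms, depth=2, gammas=[0.1, 0.4], betas=[0.7, 0.3])
psi = circ.psi   # the QAOA state as a tensor network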
- quimb.tensor.array_contract(arrays, inputs, output=None, optimize=None, backend=None, **kwargs)[source]¶
- quimb.tensor.contract_backend(backend, set_globally=False)[source]¶
A context manager to temporarily set the default backend used for tensor contractions, via ‘cotengra’. By default, this only sets the contract backend for the current thread.
- Parameters:
set_globally (bool, optional) – Whether to set the backend just for this thread, or for all threads. If you are entering the context and then using multithreading, you might want True.
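For example, a sketch of temporarily switching the contraction backend (the backend name is just illustrative):

import quimb.tensor as qtn

ta = qtn.rand_tensor((2, 3), inds=('a', 'b'))
tb = qtn.rand_tensor((3, 4), inds=('b', 'c'))
with qtn.contract_backend('numpy'):
    tc = ta @ tb   # the contraction over the shared index 'b' uses the chosen backend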
- quimb.tensor.contract_strategy(strategy, set_globally=False)[source]¶
A context manager to temporarily set the default contraction strategy supplied as
optimize
tocotengra
. By default, this only sets the contract strategy for the current thread.- Parameters:
set_globally (bool, optional) – Whether to set the strategy just for this thread, or for all threads. If you are entering the context and then using multithreading, you might want True.
- quimb.tensor.get_contract_backend()[source]¶
Get the default backend used for tensor contractions, via ‘cotengra’.
- quimb.tensor.get_contract_strategy()[source]¶
Get the default contraction strategy - the option supplied as
optimize
tocotengra
.
- quimb.tensor.get_tensor_linop_backend()[source]¶
Get the default backend used for tensor network linear operators, via ‘cotengra’. This is different from the default contraction backend as the contractions are likely repeatedly called many times.
See also
set_tensor_linop_backend
,set_contract_backend
,get_contract_backend
,TNLinearOperator
- quimb.tensor.inds_to_eq(inputs, output=None)[source]¶
Turn input and output indices of any sort into a single ‘equation’ string where each index is a single ‘symbol’ (unicode character).
- Parameters:
inputs (sequence of sequence of hashable) – The input indices per tensor.
output (sequence of hashable) – The output indices.
- Returns:
eq – The string to feed to einsum/contract.
- Return type:
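A small sketch (the output symbols are assigned automatically, in order of appearance):

import quimb.tensor as qtn

eq = qtn.inds_to_eq([('i', 'j'), ('j', 'k')], output=('i', 'k'))
# eq is now something like 'ab,bc->ac', ready to feed to einsum/contract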
- quimb.tensor.set_contract_backend(backend)[source]¶
Set the default backend used for tensor contractions, via ‘cotengra’.
- quimb.tensor.set_contract_strategy(strategy)[source]¶
Set the default contraction strategy - the option supplied as
optimize
tocotengra
.
- quimb.tensor.set_tensor_linop_backend(backend)[source]¶
Set the default backend used for tensor network linear operators, via ‘cotengra’. This is different from the default contraction backend as the contractions are likely repeatedly called many times.
See also
get_tensor_linop_backend
,set_contract_backend
,get_contract_backend
,TNLinearOperator
- quimb.tensor.tensor_linop_backend(backend, set_globally=False)[source]¶
A context manager to temporarily set the default backend used for tensor network linear operators, via ‘cotengra’. By default, this only sets the linear operator backend for the current thread.
- Parameters:
set_globally (bool, optimize) – Whether to set the backend just for this thread, or for all threads. If you are entering the context, then using multithreading, you might want
True
.
- quimb.tensor.edges_1d_chain(L, cyclic=False)[source]¶
Return the graph edges of a finite 1D chain lattice.
- quimb.tensor.edges_2d_hexagonal(Lx, Ly, cyclic=False, cells=None)[source]¶
Return the graph edges of a finite 2D hexagonal lattice. There are two sites per cell, and note the cells do not form a square tiling. The nodes (sites) are labelled like
(i, j, s)
fors
in'AB'
.- Parameters:
Lx (int) – The number of cells along the x-direction.
Ly (int) – The number of cells along the y-direction.
cyclic (bool, optional) – Whether to use periodic boundary conditions.
cells (list, optional) – A list of cells to use. If not given the cells used are
itertools.product(range(Lx), range(Ly))
.
- Returns:
edges
- Return type:
- quimb.tensor.edges_2d_kagome(Lx, Ly, cyclic=False, cells=None)[source]¶
Return the graph edges of a finite 2D kagome lattice. There are three sites per cell, and note the cells do not form a square tiling. The nodes (sites) are labelled like
(i, j, s)
fors
in'ABC'
.- Parameters:
Lx (int) – The number of cells along the x-direction.
Ly (int) – The number of cells along the y-direction.
cyclic (bool, optional) – Whether to use periodic boundary conditions.
cells (list, optional) – A list of cells to use. If not given the cells used are
itertools.product(range(Lx), range(Ly))
.
- Returns:
edges
- Return type:
- quimb.tensor.edges_2d_square(Lx, Ly, cyclic=False, cells=None)[source]¶
Return the graph edges of a finite 2D square lattice. The nodes (sites) are labelled like
(i, j)
.- Parameters:
Lx (int) – The number of cells along the x-direction.
Ly (int) – The number of cells along the y-direction.
cyclic (bool, optional) – Whether to use periodic boundary conditions.
cells (list, optional) – A list of cells to use. If not given the cells used are
itertools.product(range(Lx), range(Ly))
.
- Returns:
edges
- Return type:
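For instance, a sketch combining this with one of the graph-based network builders:

import quimb.tensor as qtn

edges = qtn.edges_2d_square(2, 2)          # the 4 edges of an open 2x2 square lattice
tn = qtn.TN_from_edges_rand(edges, D=3)    # random scalar TN with bond dimension 3 on this graph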
- quimb.tensor.edges_2d_triangular(Lx, Ly, cyclic=False, cells=None)[source]¶
Return the graph edges of a finite 2D triangular lattice. There is a single site per cell, and note the cells do not form a square tiling. The nodes (sites) are labelled like
(i, j)
.- Parameters:
Lx (int) – The number of cells along the x-direction.
Ly (int) – The number of cells along the y-direction.
cyclic (bool, optional) – Whether to use periodic boundary conditions.
cells (list, optional) – A list of cells to use. If not given the cells used are
itertools.product(range(Lx), range(Ly))
.
- Returns:
edges
- Return type:
- quimb.tensor.edges_2d_triangular_rectangular(Lx, Ly, cyclic=False, cells=None)[source]¶
Return the graph edges of a finite 2D triangular lattice tiled in a rectangular geometry. There are two sites per rectangular cell. The nodes (sites) are labelled like
(i, j, s)
fors
in'AB'
.- Parameters:
Lx (int) – The number of cells along the x-direction.
Ly (int) – The number of cells along the y-direction.
cyclic (bool, optional) – Whether to use periodic boundary conditions.
cells (list, optional) – A list of cells to use. If not given the cells used are
itertools.product(range(Lx), range(Ly))
.
- Returns:
edges
- Return type:
- quimb.tensor.edges_3d_cubic(Lx, Ly, Lz, cyclic=False, cells=None)[source]¶
Return the graph edges of a finite 3D cubic lattice. The nodes (sites) are labelled like
(i, j, k)
.- Parameters:
Lx (int) – The number of cells along the x-direction.
Ly (int) – The number of cells along the y-direction.
Lz (int) – The number of cells along the z-direction.
cyclic (bool, optional) – Whether to use periodic boundary conditions.
cells (list, optional) – A list of cells to use. If not given the cells used are
itertools.product(range(Lx), range(Ly), range(Lz))
.
- Returns:
edges
- Return type:
- quimb.tensor.edges_3d_diamond(Lx, Ly, Lz, cyclic=False, cells=None)[source]¶
Return the graph edges of a finite 3D diamond lattice. There are two sites per cell, and note the cells do not form a cubic tiling. The nodes (sites) are labelled like
(i, j, k, s)
fors
in'AB'
.- Parameters:
Lx (int) – The number of cells along the x-direction.
Ly (int) – The number of cells along the y-direction.
Lz (int) – The number of cells along the z-direction.
cyclic (bool, optional) – Whether to use periodic boundary conditions.
cells (list, optional) – A list of cells to use. If not given the cells used are
itertools.product(range(Lx), range(Ly), range(Lz))
.
- Returns:
edges
- Return type:
- quimb.tensor.edges_3d_diamond_cubic(Lx, Ly, Lz, cyclic=False, cells=None)[source]¶
Return the graph edges of a finite 3D diamond lattice tiled in a cubic geometry. There are eight sites per cubic cell. The nodes (sites) are labelled like
(i, j, k, s)
fors
in'ABCDEFGH'
.- Parameters:
Lx (int) – The number of cells along the x-direction.
Ly (int) – The number of cells along the y-direction.
Lz (int) – The number of cells along the z-direction.
cyclic (bool, optional) – Whether to use periodic boundary conditions.
cells (list, optional) – A list of cells to use. If not given the cells used are
itertools.product(range(Lx), range(Ly), range(Lz))
.
- Returns:
edges
- Return type:
- quimb.tensor.edges_3d_pyrochlore(Lx, Ly, Lz, cyclic=False, cells=None)[source]¶
Return the graph edges of a finite 3D pyrochlore lattice. There are four sites per cell, and note the cells do not form a cubic tiling. The nodes (sites) are labelled like
(i, j, k, s)
fors
in'ABCD'
.- Parameters:
Lx (int) – The number of cells along the x-direction.
Ly (int) – The number of cells along the y-direction.
Lz (int) – The number of cells along the z-direction.
cyclic (bool, optional) – Whether to use periodic boundary conditions.
cells (list, optional) – A list of cells to use. If not given the cells used are
itertools.product(range(Lx), range(Ly), range(Lz))
.
- Returns:
edges
- Return type:
- quimb.tensor.edges_tree_rand(n, max_degree=None, seed=None)[source]¶
Return a random tree with
n
nodes. This is a convenience function for testing purposes and the trees generated are not guaranteed to be uniformly random (for that see networkx.random_tree
).
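As a rough illustration of how these edge lists are used (a sketch; TN_from_edges_rand and the edge ordering are assumptions here, not guaranteed by this reference):
>>> import quimb.tensor as qtn
>>> edges = qtn.edges_2d_square(3, 3)
>>> len(edges)  # a 3x3 open square lattice has 12 nearest-neighbour bonds
12
>>> tn = qtn.TN_from_edges_rand(edges, D=4, phys_dim=2)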
- quimb.tensor.pack(obj)[source]¶
Take a tensor or tensor network like object and return a skeleton needed to reconstruct it, and a pytree of raw parameters.
- Parameters:
obj (Tensor, TensorNetwork, or similar) – Something that has
copy
,set_params
, andget_params
methods.- Returns:
params (pytree) – A pytree of raw parameter arrays.
skeleton (Tensor, TensorNetwork, or similar) – A copy of
obj
with all references to the original data removed.
- quimb.tensor.unpack(params, skeleton)[source]¶
Take a skeleton of a tensor or tensor network like object and a pytree of raw parameters and return a new reconstructed object with those parameters inserted.
- Parameters:
params (pytree) – A pytree of raw parameter arrays, with the same structure as the output of
skeleton.get_params()
.skeleton (Tensor, TensorNetwork, or similar) – Something that has
copy
,set_params
, andget_params
methods.
- Returns:
obj – A copy of
skeleton
with parameters inserted.- Return type:
Tensor, TensorNetwork, or similar
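A minimal round-trip sketch, using a random MPS purely for illustration:
>>> import quimb.tensor as qtn
>>> mps = qtn.MPS_rand_state(4, bond_dim=3)
>>> params, skeleton = qtn.pack(mps)
>>> mps2 = qtn.unpack(params, skeleton)  # reconstructed copy with the same data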
- class quimb.tensor.TNOptimizer(tn, loss_fn, norm_fn=None, loss_constants=None, loss_kwargs=None, tags=None, shared_tags=None, constant_tags=None, loss_target=None, optimizer='L-BFGS-B', progbar=True, bounds=None, autodiff_backend='AUTO', executor=None, callback=None, **backend_opts)[source]¶
Globally optimize tensors within a tensor network with respect to any loss function via automatic differentiation. If parametrized tensors are used, optimize the parameters rather than the raw arrays.
- Parameters:
tn (TensorNetwork) – The core tensor network structure within which to optimize tensors.
loss_fn (callable or sequence of callable) – The function that takes
tn
(as well asloss_constants
andloss_kwargs
) and returns a single real ‘loss’ to be minimized. For Hamiltonians which can be represented as a sum over terms, an iterable collection of terms (e.g. list) can be given instead. In that case each term is evaluated independently and the sum taken as loss_fn. This can reduce the total memory requirements or allow for parallelization (seeexecutor
).norm_fn (callable, optional) – A function to call before
loss_fn
that prepares or ‘normalizes’ the raw tensor network in some way.loss_constants (dict, optional) – Extra tensor networks, tensors, dicts/list/tuples of arrays, or arrays which will be supplied to
loss_fn
but also converted to the correct backend array type.loss_kwargs (dict, optional) – Extra options to supply to
loss_fn
(unlikeloss_constants
these are assumed to be simple options that don’t need conversion).tags (str, or sequence of str, optional) – If supplied, only optimize tensors with any of these tags.
shared_tags (str, or sequence of str, optional) – If supplied, each tag in
shared_tags
corresponds to a group of tensors to be optimized together.constant_tags (str, or sequence of str, optional) – If supplied, skip optimizing tensors with any of these tags. This ‘opt-out’ mode is overridden if either
tags
orshared_tags
is supplied.loss_target (float, optional) – Stop optimizing once this loss value is reached.
optimizer (str, optional) – Which
scipy.optimize.minimize
optimizer to use (the'method'
kwarg of that function). In addition,quimb
implements a few custom optimizers compatible with this interface that you can reference by name -{'adam', 'nadam', 'rmsprop', 'sgd'}
.executor (None or Executor, optional) – To be used with term-by-term Hamiltonians. If supplied, this executor is used to parallelize the evaluation. Otherwise each term is evaluated in sequence. It should implement the basic concurrent.futures (PEP 3148) interface.
progbar (bool, optional) – Whether to show live progress.
bounds (None or (float, float), optional) – Constrain the optimized tensor entries within this range (if the scipy optimizer supports it).
autodiff_backend ({'jax', 'autograd', 'tensorflow', 'torch'}, optional) – Which backend library to use to perform the automatic differentiation (and computation).
callback (callable, optional) –
A function to call after each optimization step. It should take the current
TNOptimizer
instance as its only argument. Information such as the current loss and number of evaluations can then be accessed:def callback(tnopt): print(tnopt.nevals, tnopt.loss)
backend_opts – Supplied to the backend function compiler and array handler. For example
jit_fn=True
ordevice='cpu'
.
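A rough usage sketch, maximizing overlap with a fixed target MPS (the target, loss and normalization functions below are illustrative choices, not part of the class, and assume the autograd backend is installed):
>>> import quimb.tensor as qtn
>>> target = qtn.MPS_rand_state(6, bond_dim=4)
>>> psi0 = qtn.MPS_rand_state(6, bond_dim=2)
>>> def neg_overlap(psi, target):
...     # loss: negative overlap, so minimizing maximizes the overlap
...     return -abs(psi.H @ target)
>>> tnopt = qtn.TNOptimizer(
...     psi0,
...     loss_fn=neg_overlap,
...     norm_fn=lambda psi: psi / (psi.H @ psi) ** 0.5,
...     loss_constants={'target': target},
...     autodiff_backend='autograd',
...     progbar=False,
... )
>>> psi_opt = tnopt.optimize(50)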
- progbar¶
- tags¶
- constant_tags¶
- _autodiff_backend¶
- _multiloss¶
- norm_fn¶
- loss_constants¶
- loss_kwargs¶
- kws¶
- property bounds¶
- property optimizer¶
- The underlying optimizer that works with the vectorized functions.
- callback¶
- reset(tn=None, clear_info=True, loss_target=None)[source]¶
Reset this optimizer without losing the compiled loss and gradient functions.
- Parameters:
tn (TensorNetwork, optional) – Set this tensor network as the current state of the optimizer, it must exactly match the original tensor network.
clear_info (bool, optional) – Clear the tracked losses and iterations.
- property d¶
- property nevals¶
- The number of gradient evaluations.
- get_tn_opt()[source]¶
Extract the optimized tensor network; this is a three part process:
inject the current optimized vector into the target tensor network,
run it through
norm_fn
, then drop any tags used to identify variables.
- Returns:
tn_opt
- Return type:
- optimize(n, tol=None, jac=True, hessp=False, optlib='scipy', **options)[source]¶
Run the optimizer for
n
function evaluations, using by defaultscipy.optimize.minimize()
as the driver for the vectorized computation. Supplying the gradient and hessian vector product is controlled by thejac
andhessp
options respectively.- Parameters:
n (int) – Notionally the maximum number of iterations for the optimizer, note that depending on the optimizer being used, this may correspond to number of function evaluations rather than just iterations.
tol (None or float, optional) – Tolerance for convergence, note that various more specific tolerances can usually be supplied to
options
, depending on the optimizer being used.jac (bool, optional) – Whether to supply the jacobian, i.e. gradient, of the loss function.
hessp (bool, optional) – Whether to supply the hessian vector product of the loss function.
optlib ({'scipy', 'nlopt'}, optional) – Which optimization library to use.
options – Supplied to
scipy.optimize.minimize()
or whichever optimizer is being used.
- Returns:
tn_opt
- Return type:
- optimize_scipy(n, tol=None, jac=True, hessp=False, **options)[source]¶
Scipy based optimization, see
optimize()
for details.
- optimize_basinhopping(n, nhop, temperature=1.0, jac=True, hessp=False, **options)[source]¶
Run the optimizer using
scipy.optimize.basinhopping()
as the driver for the vectorized computation. This performs nhop
local optimizations, each with n
iterations.- Parameters:
n (int) – Number of iterations per local optimization.
nhop (int) – Number of local optimizations to hop between.
temperature (float, optional) – H
options – Supplied to the inner
scipy.optimize.minimize()
call.
- Returns:
tn_opt
- Return type:
- optimize_nlopt(n, tol=None, jac=True, hessp=False, ftol_rel=None, ftol_abs=None, xtol_rel=None, xtol_abs=None)[source]¶
Run the optimizer for
n
function evaluations, usingnlopt
as the backend library to run the optimization. Whether the gradient is computed depends on whichoptimizer
is selected, see valid options at https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/. The following scipy
optimizer
options are automatically translated to the correspondingnlopt
algorithms: {“l-bfgs-b”, “slsqp”, “tnc”, “cobyla”}.- Parameters:
n (int) – The maximum number of iterations for the optimizer.
tol (None or float, optional) – Tolerance for convergence, here this is taken to be the relative tolerance for the loss (
ftol_rel
below overrides this).jac (bool, optional) – Whether to supply the jacobian, i.e. gradient, of the loss function.
hessp (bool, optional) – Whether to supply the hessian vector product of the loss function.
ftol_rel (float, optional) – Set relative tolerance on function value.
ftol_abs (float, optional) – Set absolute tolerance on function value.
xtol_rel (float, optional) – Set relative tolerance on optimization parameters.
xtol_abs (float, optional) – Set absolute tolerances on optimization parameters.
- Returns:
tn_opt
- Return type:
- optimize_ipopt(n, tol=None, **options)[source]¶
Run the optimizer for
n
function evaluations, usingipopt
as the backend library to run the optimization via the python packagecyipopt
.- Parameters:
n (int) – The maximum number of iterations for the optimizer.
- Returns:
tn_opt
- Return type:
- optimize_nevergrad(n)[source]¶
Run the optimizer for
n
function evaluations, usingnevergrad
as the backend library to run the optimization. As the name suggests, the gradient is not required for this method.- Parameters:
n (int) – The maximum number of iterations for the optimizer.
- Returns:
tn_opt
- Return type:
- plot(xscale='symlog', xscale_linthresh=20, zoom='auto', hlines=())[source]¶
Plot the loss function as a function of the number of iterations.
- Parameters:
xscale (str, optional) – The scale of the x-axis. Default is
"symlog"
, i.e. linear for the first part of the plot, and logarithmic for the rest, changing atxscale_linthresh
.xscale_linthresh (int, optional) – The threshold for the change from linear to logarithmic scale, if
xscale
is"symlog"
. Default is20
.zoom (None or int, optional) – If not
None
, show an inset plot of the lastzoom
iterations.hlines (dict, optional) – A dictionary of horizontal lines to plot. The keys are the labels of the lines, and the values are the y-values of the lines.
- Returns:
fig (matplotlib.figure.Figure) – The figure object.
ax (matplotlib.axes.Axes) – The axes object.
- class quimb.tensor.Dense1D(array, phys_dim=2, tags=None, site_ind_id='k{}', site_tag_id='I{}', **tn_opts)[source]¶
Bases:
TensorNetwork1DVector
Mimics other 1D tensor network structures, but really just keeps the full state in a single tensor. This allows e.g. applying gates in the same way for quantum circuit simulation as for lazily represented hilbert spaces.
- Parameters:
array (array_like) – The full hilbert space vector - assumed to be made of equal hilbert spaces each of size
phys_dim
and will be reshaped as such.phys_dim (int, optional) – The hilbert space size of each site, default: 2.
tags (sequence of str, optional) – Extra tags to add to the tensor network.
site_ind_id (str, optional) – String formatter describing how to label the site indices.
site_tag_id (str, optional) – String formatter describing how to label the site tags.
tn_opts – Supplied to
TensorNetwork
.
- _EXTRA_PROPS = ('_site_ind_id', '_site_tag_id', '_L')¶
- _L¶
- dims¶
- data¶
- _site_ind_id¶
- site_inds¶
Return a tuple of all site indices.
- _site_tag_id¶
- site_tags¶
All of the site tags.
- T¶
- class quimb.tensor.MatrixProductOperator(arrays, *, sites=None, L=None, shape='lrud', tags=None, upper_ind_id='k{}', lower_ind_id='b{}', site_tag_id='I{}', **tn_opts)[source]¶
Bases:
TensorNetwork1DOperator
,TensorNetwork1DFlat
Initialise a matrix product operator, with auto labelling and tagging.
- Parameters:
arrays (sequence of arrays) – The tensor arrays to form into a MPO.
sites (sequence of int, optional) – Construct the MPO on these sites only. If not given, enumerate from zero. Should be monotonically increasing and match
arrays
.L (int, optional) – The number of sites the MPO should be defined on. If not given, this is taken as the max
sites
value plus one (i.e. the number of arrays if sites
is not given).shape (str, optional) – String specifying layout of the tensors. E.g. ‘lrud’ (the default) indicates the shape corresponds to left-bond, right-bond, ‘up’ physical index, ‘down’ physical index. End tensors have either ‘l’ or ‘r’ dropped from the string.
tags (str or sequence of str, optional) – Global tags to attach to all tensors.
upper_ind_id (str) – A string specifying how to label the upper physical site indices. Should contain a
'{}'
placeholder. It is used to generate the actual indices like:map(upper_ind_id.format, range(len(arrays)))
.lower_ind_id (str) – A string specifying how to label the lower physical site indices. Should contain a
'{}'
placeholder. It is used to generate the actual indices like:map(lower_ind_id.format, range(len(arrays)))
.site_tag_id (str) – A string specifying how to tag the tensors at each site. Should contain a
'{}'
placeholder. It is used to generate the actual tags like:map(site_tag_id.format, range(len(arrays)))
.
- _EXTRA_PROPS = ('_site_tag_id', '_upper_ind_id', '_lower_ind_id', 'cyclic', '_L')¶
- arrays¶
Get the tuple of raw arrays containing all the tensor network data.
- _L¶
- _upper_ind_id¶
- _lower_ind_id¶
- _site_tag_id¶
- cyclic¶
- lrud_order¶
- tensors = []¶
Get the tuple of tensors in this tensor network.
- tags¶
- bonds¶
- classmethod from_fill_fn(fill_fn, L, bond_dim, phys_dim=2, sites=None, cyclic=False, shape='lrud', tags=None, upper_ind_id='k{}', lower_ind_id='b{}', site_tag_id='I{}')[source]¶
Create an MPO by supplying a ‘filling’ function to generate the data for each site.
- Parameters:
fill_fn (callable) – A function with signature
fill_fn(shape : tuple[int]) -> array_like
.L (int) – The number of sites.
bond_dim (int) – The bond dimension.
phys_dim (int or Sequence[int], optional) – The physical dimension(s) of each site, if a sequence it will be cycled over.
sites (None or sequence of int, optional) – Construct the MPO on these sites only. If not given, enumerate from zero.
cyclic (bool, optional) – Whether the MPO should be cyclic (periodic).
shape (str, optional) – String specifying layout of the tensors. E.g. ‘lrud’ (the default) indicates the shape corresponds to left-bond, right-bond, ‘up’ physical index, ‘down’ physical index. End tensors have either ‘l’ or ‘r’ dropped from the string.
tags (str or sequence of str, optional) – Global tags to attach to all tensors.
upper_ind_id (str) – A string specifying how to label the upper physical site indices. Should contain a
'{}'
placeholder.lower_ind_id (str) – A string specifying how to label the lower physical site indices. Should contain a
'{}'
placeholder.site_tag_id (str, optional) – How to tag the physical sites. Should contain a
'{}'
placeholder.
- Return type:
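For instance, a random MPO might be built like the following sketch (numpy's normal generator is just an arbitrary choice of fill function):
>>> import numpy as np
>>> import quimb.tensor as qtn
>>> fill = lambda shape: np.random.normal(size=shape)
>>> mpo = qtn.MatrixProductOperator.from_fill_fn(fill, L=6, bond_dim=5, phys_dim=2)
>>> mpo.max_bond()
5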
- classmethod from_dense(A, dims=2, sites=None, L=None, tags=None, site_tag_id='I{}', upper_ind_id='k{}', lower_ind_id='b{}', **split_opts)[source]¶
Build an MPO from a raw dense matrix.
- Parameters:
A (array) – The dense operator, it should be reshapeable to
(*dims, *dims)
.dims (int, sequence of int, optional) – The physical subdimensions of the operator. If any integer, assume all sites have the same dimension. If a sequence, the dimension of each site. Default is 2.
sites (sequence of int, optional) – The sites to place the operator on. If None, will place it on first len(dims) sites.
L (int, optional) – The total number of sites in the MPO, if the operator represents only a subset.
tags (str or sequence of str, optional) – Global tags to attach to all tensors.
site_tag_id (str, optional) – The string to use to label the site tags.
upper_ind_id (str, optional) – The string to use to label the upper physical indices.
lower_ind_id (str, optional) – The string to use to label the lower physical indices.
split_opts – Supplied to
tensor_split()
.
- Return type:
- fill_empty_sites(mode='full', phys_dim=None, fill_array=None, inplace=False)[source]¶
Fill any empty sites of this MPO with identity tensors, adding size 1 bonds or draping existing bonds where necessary such that the resulting tensor has nearest neighbor bonds only.
- Parameters:
mode ({'full', 'minimal'}, optional) – Whether to fill in all sites, including at either end, or simply the minimal range covering the min to max current sites present.
phys_dim (int, optional) – The physical dimension of the identity tensors to add. If not specified, will use the upper physical dimension of the first present site.
fill_array (array, optional) – The array to use for the identity tensors. If not specified, will use the identity array of the same dtype as the first present site.
inplace (bool, optional) – Whether to perform the operation inplace.
- Returns:
The modified MPO.
- Return type:
- apply(other, compress=False, **compress_opts)[source]¶
Act with this MPO on another MPO or MPS, such that the resulting object has the same tensor network structure/indices as
other
.For an MPS:
| | | | | | | | | | | | | | | | | | self: A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A | | | | | | | | | | | | | | | | | | other: x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x --> | | | | | | | | | | | | | | | | | | <- other.site_ind_id out: y=y=y=y=y=y=y=y=y=y=y=y=y=y=y=y=y=y
For an MPO:
| | | | | | | | | | | | | | | | | | self: A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A | | | | | | | | | | | | | | | | | | other: B-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B-B | | | | | | | | | | | | | | | | | | --> | | | | | | | | | | | | | | | | | | <- other.upper_ind_id out: C=C=C=C=C=C=C=C=C=C=C=C=C=C=C=C=C=C | | | | | | | | | | | | | | | | | | <- other.lower_ind_id
The resulting TN will have the same structure/indices as
other
, but probably with larger bonds (depending on compression).- Parameters:
other (MatrixProductOperator or MatrixProductState) – The object to act on.
compress (bool, optional) – Whether to compress the resulting object.
compress_opts – Supplied to
TensorNetwork1DFlat.compress()
.
- Return type:
- permute_arrays(shape='lrud')[source]¶
Permute the indices of each tensor in this MPO to match
shape
. This doesn’t change how the overall object interacts with other tensor networks but may be useful for extracting the underlying arrays consistently. This is an inplace operation.- Parameters:
shape (str, optional) – A permutation of
'lrud'
specifying the desired order of the left, right, upper and lower (down) indices respectively.
- class quimb.tensor.MatrixProductState(arrays, *, sites=None, L=None, shape='lrp', tags=None, site_ind_id='k{}', site_tag_id='I{}', **tn_opts)[source]¶
Bases:
TensorNetwork1DVector
,TensorNetwork1DFlat
Initialise a matrix product state, with auto labelling and tagging.
- Parameters:
arrays (sequence of arrays) – The tensor arrays to form into a MPS.
sites (sequence of int, optional) – Construct the MPS on these sites only. If not given, enumerate from zero. Should be monotonically increasing and match
arrays
.L (int, optional) – The number of sites the MPS should be defined on. If not given, this is taken as the max
sites
value plus one (i.e. the number of arrays if sites
is not given).shape (str, optional) – String specifying layout of the tensors. E.g. ‘lrp’ (the default) indicates the shape corresponds to left-bond, right-bond, physical index. End tensors have either ‘l’ or ‘r’ dropped from the string.
tags (str or sequence of str, optional) – Global tags to attach to all tensors.
site_ind_id (str) – A string specifying how to label the physical site indices. Should contain a
'{}'
placeholder. It is used to generate the actual indices like:map(site_ind_id.format, range(len(arrays)))
.site_tag_id (str) – A string specifying how to tag the tensors at each site. Should contain a
'{}'
placeholder. It is used to generate the actual tags like:map(site_tag_id.format, range(len(arrays)))
.
- _EXTRA_PROPS = ('_site_tag_id', '_site_ind_id', 'cyclic', '_L')¶
- arrays¶
Get the tuple of raw arrays containing all the tensor network data.
- _L¶
- _site_ind_id¶
- _site_tag_id¶
- cyclic¶
- lrp_ord¶
- tensors = []¶
Get the tuple of tensors in this tensor network.
- tags¶
- bonds¶
- classmethod from_fill_fn(fill_fn, L, bond_dim, phys_dim=2, sites=None, cyclic=False, shape='lrp', site_ind_id='k{}', site_tag_id='I{}', tags=None)[source]¶
Create an MPS by supplying a ‘filling’ function to generate the data for each site.
- Parameters:
fill_fn (callable) – A function with signature
fill_fn(shape : tuple[int]) -> array_like
.L (int) – The number of sites.
bond_dim (int) – The bond dimension.
phys_dim (int or Sequence[int], optional) – The physical dimension(s) of each site, if a sequence it will be cycled over.
sites (None or sequence of int, optional) – Construct the MPS on these sites only. If not given, enumerate from zero.
cyclic (bool, optional) – Whether the MPS should be cyclic (periodic).
shape (str, optional) – What specific order to layout the indices in, should be a sequence of
'l'
,'r'
, and'p'
, corresponding to left, right, and physical indices respectively.site_ind_id (str, optional) – How to label the physical site indices.
site_tag_id (str, optional) – How to tag the physical sites.
tags (str or sequence of str, optional) – Global tags to attach to all tensors.
- Return type:
- classmethod from_dense(psi, dims=2, tags=None, site_ind_id='k{}', site_tag_id='I{}', **split_opts)[source]¶
Create a
MatrixProductState
directly from a dense vector- Parameters:
psi (array_like) – The dense state to convert to MPS from.
dims (int or sequence of int) – Physical subsystem dimensions of each site. If a single int, all sites have this same dimension, by default, 2.
tags (str or sequence of str, optional) – Global tags to attach to all tensors.
site_ind_id (str, optional) – How to index the physical sites, see
MatrixProductState
.site_tag_id (str, optional) – How to tag the physical sites, see
MatrixProductState
.split_opts – Supplied to
tensor_split()
in order to partition the dense vector into tensors.absorb='left'
is set by default, to ensure the compression is canonical / optimal.
- Return type:
Examples
>>> dims = [2, 2, 2, 2, 2, 2] >>> psi = rand_ket(prod(dims)) >>> mps = MatrixProductState.from_dense(psi, dims) >>> mps.show() 2 4 8 4 2 o-o-o-o-o-o | | | | | |
- permute_arrays(shape='lrp')[source]¶
Permute the indices of each tensor in this MPS to match
shape
. This doesn’t change how the overall object interacts with other tensor networks but may be useful for extracting the underlying arrays consistently. This is an inplace operation.- Parameters:
shape (str, optional) – A permutation of
'lrp'
specifying the desired order of the left, right, and physical indices respectively.
- normalize(bra=None, eps=1e-15, insert=None)[source]¶
Normalize this MPS, optionally with co-vector
bra
. For periodic MPS this uses transfer matrix SVD approximation with precisioneps
in order to be efficient. Inplace.- Parameters:
bra (MatrixProductState, optional) – If given, normalize this MPS with the same factor.
eps (float, optional) – If cyclic, the precision with which to approximate the transfer matrix. Default: 1e-14.
insert (int, optional) – Insert the corrective normalization on this site, random if not given.
- Returns:
old_norm – The old norm
self.H @ self
.- Return type:
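For example (a sketch using a random MPS):
>>> import quimb.tensor as qtn
>>> mps = qtn.MPS_rand_state(8, bond_dim=4)
>>> old_norm = mps.normalize()
>>> round(float(mps.H @ mps), 10)
1.0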
- gate_split(G, where, inplace=False, **compress_opts)[source]¶
Apply a two-site gate and then split the resulting tensor to retrieve an MPS form:
-o-o-A-B-o-o- | | | | | | -o-o-GGG-o-o- -o-o-X~Y-o-o- | | GGG | | ==> | | | | | | ==> | | | | | | | | | | | | i j i j i j
As might be found in TEBD.
- Parameters:
G (array) – The gate, with shape
(d**2, d**2)
for physical dimensiond
.where ((int, int)) – Indices of the sites to apply the gate to.
compress_opts – Supplied to
tensor_split()
.
See also
gate
,gate_with_auto_swap
- swap_sites_with_compress(i, j, info=None, inplace=False, **compress_opts)[source]¶
Swap sites
i
andj
by contracting, then splitting with the physical indices swapped. If the sites are not adjacent, this will happen multiple times.- Parameters:
i (int) – The first site to swap.
j (int) – The second site to swap.
cur_orthog (int, sequence of int, or 'calc') – If known, the current orthogonality center.
info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be
"calc"
, a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.inplace (bool, optional) – Perform the swaps inplace.
compress_opts – Supplied to
tensor_split()
.
- swap_site_to(i, f, info=None, inplace=False, **compress_opts)[source]¶
Swap site
i
to sitef
, compressing the bond after each swap:i f 0 1 2 3 4 5 6 7 8 9 0 1 2 4 5 6 7 3 8 9 o-o-o-x-o-o-o-o-o-o >->->->->->->-x-<-< | | | | | | | | | | -> | | | | | | | | | |
- Parameters:
i (int) – The site to move.
f (int) – The new location for site
i
.info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be
"calc"
, a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.inplace (bool, optional) – Perform the swaps inplace.
compress_opts – Supplied to
tensor_split()
.
- gate_with_auto_swap(G, where, info=None, swap_back=True, inplace=False, **compress_opts)[source]¶
Perform a two site gate on this MPS by, if necessary, swapping and compressing the sites until they are adjacent, using
gate_split
, then unswapping the sites back to their original position.- Parameters:
G (array) – The gate, with shape
(d**2, d**2)
for physical dimensiond
.where ((int, int)) – Indices of the sites to apply the gate to.
info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be
"calc"
, a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.swap_back (bool, optional) – Whether to swap the sites back to their original position after applying the gate. If not, for sites
i < j
, the sitej
will remain swapped toi + 1
, and sites betweeni + 1
andj
will be shifted one place up.inplace (bool, optional) – Perform the swaps inplace.
compress_opts – Supplied to
tensor_split()
.
See also
gate
,gate_split
- gate_with_submpo(submpo, where=None, method='direct', transpose=False, info=None, inplace=False, inplace_mpo=False, **compress_opts)[source]¶
Apply an MPO, which only acts on a subset of sites, to this MPS, compressing the MPS with the MPO only on the minimal set of sites covering where, keeping the MPS form:
│ │ │ A───A─A │ │ │ -> │ │ │ │ │ │ │ │ >─>─O━O━O━O─<─< │ │ │ │ │ │ │ │ o─o─o─o─o─o─o─o
- Parameters:
submpo (MatrixProductOperator) – The MPO to apply.
where (sequence of int, optional) – The range of sites the MPO acts on, will be inferred from the support of the MPO if not given.
method ({'direct", 'dm', 'zipup', 'zipup-first', 'fit'}, optional) – The compression method to use.
transpose (bool, optional) – Whether to transpose the MPO before applying it. By default the lower inds of the MPO are contracted with the MPS, if transposed the upper inds are contracted.
info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be
"calc"
, a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.inplace (bool, optional) – Whether to perform the application and compression inplace.
compress_opts – Supplied to
tensor_network_1d_compress()
.
- Return type:
- gate_with_mpo(mpo, method='direct', transpose=False, inplace=False, inplace_mpo=False, **compress_opts)[source]¶
Gate this MPS with an MPO and compress the result with one of various methods back to MPS form:
│ │ │ │ │ │ │ │ A─A─A─A─A─A─A─A │ │ │ │ │ │ │ │ -> │ │ │ │ │ │ │ │ O━O━O━O━O━O━O━O │ │ │ │ │ │ │ │ o─o─o─o─o─o─o─o
- Parameters:
mpo (MatrixProductOperator) – The MPO to apply.
max_bond (int, optional) – A maximum bond dimension to keep when compressing.
cutoff (float, optional) – A singular value cutoff to use when compressing.
method ({'direct", 'dm', 'zipup', 'zipup-first', 'fit', ...}, optional) – The compression method to use.
transpose (bool, optional) – Whether to transpose the MPO before applying it. By default the lower inds of the MPO are contracted with the MPS, if transposed the upper inds are contracted.
inplace (bool, optional) – Whether to perform the compression inplace.
inplace_mpo (bool, optional) – Whether to modify the MPO in place, for a minor performance gain.
compress_opts – Other options supplied to
tensor_network_1d_compress()
.
- Return type:
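A brief sketch, using the Heisenberg MPO builder MPO_ham_heis (an illustrative choice) as the operator:
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(10, bond_dim=8)
>>> H = qtn.MPO_ham_heis(10)
>>> Hpsi = psi.gate_with_mpo(H, max_bond=16, method='zipup')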
- gate_nonlocal(G, where, dims=None, method='direct', info=None, inplace=False, **compress_opts)[source]¶
Apply a potentially non-local gate to this MPS by first decomposing it into an MPO, then compressing the MPS with MPO only on the minimal set of sites covering where.
- Parameters:
G (array_like) – The gate to apply.
where (sequence of int) – The sites to apply the gate to.
max_bond (int, optional) – A maximum bond dimension to keep when compressing.
cutoff (float, optional) – A singular value cutoff to use when compressing.
dims (sequence of int, optional) – The factorized dimensions of the gate
G
, which should match the physical dimensions of the sites it acts on. Calculated if not supplied. If a single int, all sites are assumed to have this same dimension.method ({'direct', 'dm', 'zipup', 'zipup-first', 'fit', ...}, optional) – The compression method to use.
info (dict, optional) – If supplied, will be used to infer and store various extra information. Currently, the key “cur_orthog” is used to store the current orthogonality center. Its input value can be
"calc"
, a single site, or a pair of sites representing the min/max range, inclusive. It will be updated to the actual range after.inplace (bool, optional) – Whether to perform the compression inplace.
compress_opts – Supplied to
tensor_network_1d_compress()
.
- Return type:
- flip(inplace=False)[source]¶
Reverse the order of the sites in the MPS, such that site
i
is now at siteL - i - 1
.
- schmidt_values(i, info=None, method='svd')[source]¶
Find the schmidt values associated with the bipartition of this MPS between sites on either side of
i
. In other words,i
is the number of sites in the left hand partition:....L.... i o-o-o-o-o-S-o-o-o-o-o-o-o-o-o-o-o | | | | | | | | | | | | | | | | i-1 ..........R..........
The schmidt values,
S
, are the singular values associated with the(i - 1, i)
bond, squared, provided the MPS is mixed canonized at one of those sites.
- entropy(i, info=None, method='svd')[source]¶
The entropy of bipartition between the left block of
i
sites and the rest.
- schmidt_gap(i, info=None, method='svd')[source]¶
The schmidt gap of bipartition between the left block of
i
sites and the rest.
- partial_trace(keep, upper_ind_id='b{}', rescale_sites=True)[source]¶
Partially trace this matrix product state, producing a matrix product operator.
- Parameters:
keep (sequence of int or slice) – Indices of the sites to keep.
upper_ind_id (str, optional) – The ind id of the (new) ‘upper’ inds, i.e. the ‘bra’ inds.
rescale_sites (bool, optional) – If
True
(the default), then the kept sites will be rescaled to(0, 1, 2, ...)
etc. rather than keeping their original site numbers.
- Returns:
rho – The density operator in MPO form.
- Return type:
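For instance (a sketch with a random MPS, keeping sites 2, 3 and 4):
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(8, bond_dim=4)
>>> rho = psi.partial_trace(keep=[2, 3, 4])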
- ptr(keep, upper_ind_id='b{}', rescale_sites=True)[source]¶
Alias of
partial_trace()
.
- bipartite_schmidt_state(sz_a, get='ket', info=None)[source]¶
Compute the reduced state for a bipartition of an OBC MPS, in terms of the minimal left/right schmidt basis:
A B ......... ........... >->->->->--s--<-<-<-<-<-< -> +-s-+ | | | | | | | | | | | | | k0 k1... kA kB
- Parameters:
sz_a (int) – The number of sites in subsystem A, must be
0 < sz_a < N
.get ({'ket', 'rho', 'ket-dense', 'rho-dense'}, optional) –
Get the:
'ket': vector form as tensor.
'rho': density operator form, i.e. vector outer product.
'ket-dense': like 'ket' but return
qarray
.'rho-dense': like 'rho' but return
qarray
.
info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.
- static _do_lateral_compress(mps, kb, section, leave_short, ul, ll, heps, hmethod, hmax_bond, verbosity, compressed, **compress_opts)[source]¶
- static _do_vertical_decomp(mps, kb, section, sysa, sysb, compressed, ul, ur, ll, lr, vmethod, vmax_bond, veps, verbosity, **compress_opts)[source]¶
- partial_trace_compress(sysa, sysb, eps=1e-08, method=('isvd', None), max_bond=(None, 1024), leave_short=True, renorm=True, lower_ind_id='b{}', verbosity=0, **compress_opts)[source]¶
Perform a compressed partial trace using singular value lateral then vertical decompositions of transfer matrix products:
.....sysa...... ...sysb.... o-o-o-o-A-A-A-A-A-A-A-A-o-o-B-B-B-B-B-B-o-o-o-o-o-o-o-o-o | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ==> form inner product ............... ........... o-o-o-o-A-A-A-A-A-A-A-A-o-o-B-B-B-B-B-B-o-o-o-o-o-o-o-o-o | | | | | | | | | | | | | | | | | | | | | | | | | | | | | o-o-o-o-A-A-A-A-A-A-A-A-o-o-B-B-B-B-B-B-o-o-o-o-o-o-o-o-o ==> lateral SVD on each section .....sysa...... ...sysb.... /\ /\ /\ /\ ... ~~~E A~~~~~~~~~~~A E~E B~~~~~~~B E~~~ ... \/ \/ \/ \/ ==> vertical SVD and unfold on A & B | | /-------A-------\ /-----B-----\ ... ~~~E E~E E~~~ ... \-------A-------/ \-----B-----/ | |
With various special cases including OBC or end spins included in subsystems.
- Parameters:
sysa (sequence of int) – The sites, which should be contiguous, defining subsystem A.
sysb (sequence of int) – The sites, which should be contiguous, defining subsystem B.
eps (float or (float, float), optional) – Tolerance(s) to use when compressing the subsystem transfer matrices and vertically decomposing.
method (str or (str, str), optional) – Method(s) to use for laterally compressing the state then vertically compressing subsystems.
max_bond (int or (int, int), optional) – The maximum bond to keep for laterally compressing the state then vertically compressing subsystems.
leave_short (bool, optional) – If True (the default), don’t try to compress short sections.
renorm (bool, optional) – If True (the default), renormalize the state so that
tr(rho)==1
.lower_ind_id (str, optional) – The index id to create for the new density matrix, the upper_ind_id is automatically taken as the current site_ind_id.
compress_opts (dict, optional) – If given, supplied to
partial_trace_compress
to govern how singular values are treated. Seetensor_split
.verbosity ({0, 1}, optional) – How much information to print while performing the compressed partial trace.
- Returns:
rho_ab – Density matrix tensor network with
outer_inds = ('k0', 'k1', 'b0', 'b1')
for example.- Return type:
- logneg_subsys(sysa, sysb, compress_opts=None, approx_spectral_opts=None, verbosity=0, approx_thresh=2**12)[source]¶
Compute the logarithmic negativity between subsystem blocks, e.g.:
sysa sysb ......... ..... ... -o-o-o-o-o-o-A-A-A-A-A-o-o-o-B-B-B-o-o-o-o-o-o-o- ... | | | | | | | | | | | | | | | | | | | | | | | |
- Parameters:
sysa (sequence of int) – The sites, which should be contiguous, defining subsystem A.
sysb (sequence of int) – The sites, which should be contiguous, defining subsystem B.
eps (float, optional) – Tolerance to use when compressing the subsystem transfer matrices.
method (str or (str, str), optional) – Method(s) to use for laterally compressing the state then vertically compressing subsystems.
compress_opts (dict, optional) – If given, supplied to
partial_trace_compress
to govern how singular values are treated. Seetensor_split
.approx_spectral_opts – Supplied to
approx_spectral_function()
.
- Returns:
ln – The logarithmic negativity.
- Return type:
See also
MatrixProductState.partial_trace_compress
,approx_spectral_function
- measure(site, remove=False, outcome=None, renorm=True, info=None, get=None, inplace=False)[source]¶
Measure this MPS at
site
, including projecting the state. Optionally remove the site afterwards, yielding an MPS with one less site. In either case the orthogonality center of the returned MPS ismin(site, new_L - 1)
.- Parameters:
site (int) – The site to measure.
remove (bool, optional) –
Whether to remove the site completely after projecting the measurement. If
True
, sites greater thansite
will be retagged and reindexed one down, and the MPS will have one less site. E.g.:0-1-2-3-4-5-6 / / / - measure and remove site 3 0-1-2-4-5-6 - reindex sites (4, 5, 6) to (3, 4, 5) 0-1-2-3-4-5
outcome (None or int, optional) – Specify the desired outcome of the measurement. If
None
, it will be randomly sampled according to the local density matrix.renorm (bool, optional) – Whether to renormalize the state post measurement.
info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.
get ({None, 'outcome'}, optional) – If
'outcome'
, simply return the outcome, and don’t perform any projection.inplace (bool, optional) – Whether to perform the measurement in place or not.
- Returns:
outcome (int) – The measurement outcome, drawn from
range(phys_dim)
.psi (MatrixProductState) – The measured state, if
get != 'outcome'
.
- sample(C, seed=None, info=None)[source]¶
Generate
C
samples from this MPS, along with their probabilities.
C (int) – The number of samples to generate.
seed (None, int, or np.random.Generator, optional) – A random seed or generator to use.
info (dict, optional) – If given, will be used to infer and store various extra information. Currently the key “cur_orthog” is used to store the current orthogonality center.
- Yields:
config (sequence of int) – The sample configuration.
omega (float) – The probability of this configuration.
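A short sketch (the actual configurations depend on the random seed):
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(6, bond_dim=4)
>>> samples = list(psi.sample(3, seed=42))
>>> len(samples)
3
>>> config, omega = samples[0]  # a configuration and its probability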
- class quimb.tensor.SuperOperator1D(arrays, shape='lrkud', site_tag_id='I{}', outer_upper_ind_id='kn{}', inner_upper_ind_id='k{}', inner_lower_ind_id='b{}', outer_lower_ind_id='bn{}', tags=None, tags_upper=None, tags_lower=None, **tn_opts)[source]¶
Bases:
TensorNetwork1D
A 1D tensor network super-operator class:
0 1 2 n-1 | | | | <-- outer_upper_ind_id O===O===O== =O |\ |\ |\ |\ <-- inner_upper_ind_id ) ) ) ... ) <-- K (size of local Kraus sum) |/ |/ |/ |/ <-- inner_lower_ind_id O===O===O== =O | | : | | <-- outer_lower_ind_id : chi (size of entangling bond dim)
- Parameters:
arrays (sequence of arrays) – The data arrays defining the superoperator; this should be a sequence of 2n arrays, such that the first two correspond to the upper and lower operators acting on site 0 etc. The arrays should be 5 dimensional unless OBC conditions are desired, in which case the first two and last two should be 4-dimensional. The dimensions of the arrays should match the
shape
option.
- _EXTRA_PROPS = ('_site_tag_id', '_outer_upper_ind_id', '_inner_upper_ind_id', '_inner_lower_ind_id',...¶
- arrays¶
Get the tuple of raw arrays containing all the tensor network data.
- _L¶
- _outer_upper_ind_id¶
- _inner_upper_ind_id¶
- _inner_lower_ind_id¶
- _outer_lower_ind_id¶
- sites_present¶
- outer_upper_inds¶
- inner_upper_inds¶
- inner_lower_inds¶
- outer_lower_inds¶
- _site_tag_id¶
- tags¶
- tags_upper¶
- tags_lower¶
- cyclic¶
- classmethod rand(n, K, chi, phys_dim=2, herm=True, cyclic=False, dtype=complex, **superop_opts)[source]¶
- property outer_upper_ind_id¶
- property inner_upper_ind_id¶
- property inner_lower_ind_id¶
- property outer_lower_ind_id¶
- class quimb.tensor.TensorNetwork1D(ts=(), *, virtual=False, check_collisions=True)[source]¶
Bases:
quimb.tensor.tensor_arbgeom.TensorNetworkGen
Base class for tensor networks with a one-dimensional structure.
- _NDIMS = 1¶
- _EXTRA_PROPS = ('_site_tag_id', '_L')¶
- _CONTRACT_STRUCTURED = True¶
- _compatible_1d(other)[source]¶
Check whether
self
andother
are compatible 1D tensor networks such that they can remain a 1D tensor network when combined.
- combine(other, *, virtual=False, check_collisions=True)[source]¶
Combine this tensor network with another, returning a new tensor network. If the two are compatible, cast the resulting tensor network to a
TensorNetwork1D
instance.- Parameters:
other (TensorNetwork1D or TensorNetwork) – The other tensor network to combine with.
virtual (bool, optional) – Whether the new tensor network should copy all the incoming tensors (
False
, the default), or view them as virtual (True
).check_collisions (bool, optional) – Whether to check for index collisions between the two tensor networks before combining them. If
True
(the default), any inner indices that clash will be mangled.
- Return type:
- property L¶
- The number of sites, i.e. length.
- property nsites¶
- The number of sites.
- slice2sites(tag_slice)[source]¶
Take a slice object, and work out its implied start, stop and step, taking into account cyclic boundary conditions.
Examples
Normal slicing:
>>> p = MPS_rand_state(10, bond_dim=7) >>> p.slice2sites(slice(5)) (0, 1, 2, 3, 4)
>>> p.slice2sites(slice(4, 8)) (4, 5, 6, 7)
Slicing from end backwards:
>>> p.slice2sites(slice(..., -3, -1)) (9, 8)
Slicing round the end:
>>> p.slice2sites(slice(7, 12)) (7, 8, 9, 0, 1)
>>> p.slice2sites(slice(-3, 2)) (7, 8, 9, 0, 1)
If the start point is > end point (before modulo n), then step needs to be negative to return anything.
- maybe_convert_coo(x)[source]¶
Check if
x
is an integer and convert to the corresponding site tag if so.
- contract_structured(tag_slice, structure_bsz=5, inplace=False, **opts)[source]¶
Perform a structured contraction, translating
tag_slice
from aslice
or … to a cumulative sequence of tags.- Parameters:
- Returns:
The result of the contraction, still a
TensorNetwork
if the contraction was only partial.- Return type:
TensorNetwork, Tensor or scalar
See also
contract
,contract_tags
,contract_cumulative
- class quimb.tensor.TNLinearOperator1D(tn, left_inds, right_inds, start, stop, ldims=None, rdims=None, is_conj=False, is_trans=False)[source]¶
Bases:
scipy.sparse.linalg.LinearOperator
A 1D tensor network linear operator like:
start stop - 1 . . :-O-O-O-O-O-O-O-O-O-O-O-O-: --+ : | | | | | | | | | | | | : | :-H-H-H-H-H-H-H-H-H-H-H-H-: acting on --V : | | | | | | | | | | | | : | :-O-O-O-O-O-O-O-O-O-O-O-O-: --+ left_inds^ ^right_inds
Like
TNLinearOperator
, but performs a structured contraction from one end to the other, which can handle very long chains possibly more efficiently by contracting in blocks from one end.
tn (TensorNetwork) – The tensor network to turn into a
LinearOperator
left_inds (sequence of str) – The left indices.
right_inds (sequence of str) – The right indices.
start (int) – Index of starting site.
stop (int) – Index of stopping site (does not include this site).
ldims (tuple of int, optional) – If known, the dimensions corresponding to
left_inds
.rdims (tuple of int, optional) – If known, the dimensions corresponding to
right_inds
.
See also
TNLinearOperator
- tn¶
- tags¶
- is_conj¶
- is_trans¶
- _conj_linop = None¶
- _adjoint_linop = None¶
- _transpose_linop = None¶
- _matvec(vec)[source]¶
Default matrix-vector multiplication handler.
If self is a linear operator of shape (M, N), then this method will be called on a shape (N,) or (N, 1) ndarray, and should return a shape (M,) or (M, 1) ndarray.
This default implementation falls back on _matmat, so defining that will define matrix-vector multiplication as well.
- _matmat(mat)[source]¶
Default matrix-matrix multiplication handler.
Falls back on the user-defined _matvec method, so defining that will define matrix multiplication (though in a very suboptimal way).
- property A¶
- quimb.tensor.expec_TN_1D(*tns, compress=None, eps=1e-15)[source]¶
Compute the expectation of several 1D TNs, using transfer matrix compression if any are periodic.
- Parameters:
tns (sequence of TensorNetwork1D) – The MPS and MPO to find expectation of. Should start and end with an MPS e.g.
(MPS, MPO, ..., MPS)
.compress ({None, False, True}, optional) – Whether to perform transfer matrix compression on cyclic systems. If set to
None
(the default), decide heuristically.eps (float, optional) – The accuracy of the transfer matrix compression.
- Returns:
x – The expectation value.
- Return type:
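For example, a Heisenberg energy expectation might be computed as in the following sketch (MPO_ham_heis is an illustrative choice, and passing the conjugated state explicitly as the bra is an assumption):
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(12, bond_dim=6)
>>> H = qtn.MPO_ham_heis(12)
>>> energy = qtn.expec_TN_1D(psi.H, H, psi)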
- quimb.tensor.gate_TN_1D(tn, G, where, contract=False, tags=None, propagate_tags='sites', info=None, inplace=False, cur_orthog=None, **compress_opts)[source]¶
Act with the gate
G
on siteswhere
, maintaining the outer indices of the 1D tensor network:contract=False contract=True . . . . <- where o-o-o-o-o-o-o o-o-o-GGG-o-o-o | | | | | | | | | | / \ | | | GGG | | contract='split-gate' contract='swap-split-gate' . . . . <- where o-o-o-o-o-o-o o-o-o-o-o-o-o | | | | | | | | | | | | | | G~G G~G | | \ / X / \ contract='swap+split' . . <- where o-o-o-G=G-o-o-o | | | | | | | |
Note that the sites in
where
do not have to be contiguous. By default, site tags will be propagated to the gate tensors, identifying a ‘light cone’.- Parameters:
tn (TensorNetwork1DVector) – The 1D vector-like tensor network, for example, an MPS.
G (array) – A square array to act with on sites
where
. It should have twice the number of dimensions as the number of sites. The second half of these will be contracted with the MPS, and the first half indexed with the correctsite_ind_id
. Sites are read left to right from the shape. A two-dimensional array is permissible if each dimension factorizes correctly.contract ({False, 'split-gate', 'swap-split-gate',) –
‘auto-split-gate’, True, ‘swap+split’}, optional Whether to contract the gate into the 1D tensor network. If,
False: leave the gate uncontracted, the default
’split-gate’: like False, but split the gate if it is two-site.
’swap-split-gate’: like ‘split-gate’, but decompose the gate as if a swap had first been applied
’auto-split-gate’: automatically select between the above three options, based on the rank of the gate.
True: contract the gate into the tensor network, if the gate acts on more than one site, this will produce an ever larger tensor.
’swap+split’: Swap sites until they are adjacent, then contract the gate and split the resulting tensor, then swap the sites back to their original position. In this way an MPS structure can be explicitly maintained at the cost of rising bond-dimension.
tags (str or sequence of str, optional) – Tag the new gate tensor with these tags.
propagate_tags ({'sites', 'register', False, True}, optional) –
Add any tags from the sites to the new gate tensor (only matters if
contract=False
else tags are merged anyway):If
'sites'
, then only propagate tags matching e.g. ‘I{}’ and ignore all others. I.e. just propagate the lightcone.If
'register'
, then only propagate tags matching the sites of where this gate was actually applied. I.e. ignore the lightcone, just keep track of which ‘registers’ the gate was applied to.If
False
, propagate nothing.If
True
, propagate all tags.
inplace (bool, optional) – Perform the gate in place.
compress_opts – Supplied to
split()
ifcontract='swap+split'
orgate_with_auto_swap()
ifcontract='swap+split'
.
- Return type:
See also
Examples
>>> p = MPS_rand_state(3, 7) >>> p.gate_(spin_operator('X'), where=1, tags=['GX']) >>> p <MatrixProductState(tensors=4, L=3, max_bond=7)>
>>> p.outer_inds() ('k0', 'k1', 'k2')
- quimb.tensor.superop_TN_1D(tn_super, tn_op, upper_ind_id='k{}', lower_ind_id='b{}', so_outer_upper_ind_id=None, so_inner_upper_ind_id=None, so_inner_lower_ind_id=None, so_outer_lower_ind_id=None)[source]¶
Take a tensor network superoperator and act with it on a tensor network operator, maintaining the original upper and lower indices of the operator:
outer_upper_ind_id upper_ind_id | | | ... | | | | ... | +----------+ +----------+ | tn_super +---+ | tn_super +---+ +----------+ | upper_ind_id +----------+ | | | | ... | | | | | ... | | | | ... | | inner_upper_ind_id| +-----------+ +-----------+ | | + | tn_op | = | tn_op | | inner_lower_ind_id| +-----------+ +-----------+ | | | | ... | | | | | ... | | | | ... | | +----------+ | lower_ind_id +----------+ | | tn_super +---+ | tn_super +---+ +----------+ +----------+ | | | ... | <-- | | | ... | outer_lower_ind_id lower_ind_id
- Parameters:
tn_super (TensorNetwork) – The superoperator in the form of a 1D-like tensor network.
tn_op (TensorNetwork) – The operator to be acted on in the form of a 1D-like tensor network.
upper_ind_id (str, optional) – Current id of the upper operator indices, e.g. usually
'k{}'
.lower_ind_id (str, optional) – Current id of the lower operator indices, e.g. usually
'b{}'
.so_outer_upper_ind_id (str, optional) – Current id of the superoperator’s upper outer indices, these will be reindexed to form the new effective operators upper indices.
so_inner_upper_ind_id (str, optional) – Current id of the superoperator’s upper inner indices, these will be joined with those described by
upper_ind_id
.so_inner_lower_ind_id (str, optional) – Current id of the superoperator’s lower inner indices, these will be joined with those described by
lower_ind_id
.so_outer_lower_ind_id (str, optional) – Current id of the superoperator’s lower outer indices, these will be reindexed to form the new effective operators lower indices.
- Returns:
KAK – The tensor network of the superoperator acting on the operator.
- Return type:
- quimb.tensor.enforce_1d_like(tn, site_tags=None, fix_bonds=True, inplace=False)[source]¶
Check that
tn
is 1D-like with OBC, i.e. 1) that each tensor has exactly one of the givensite_tags
. If not, raise a ValueError. 2) That there are no hyper indices. And 3) that there are only bonds within sites or between nearest neighbor sites. This issue can be optionally automatically fixed by inserting a string of identity tensors.- Parameters:
tn (TensorNetwork) – The tensor network to check.
site_tags (sequence of str, optional) – The tags to use to group and order the tensors from
tn
. If not given, usestn.site_tags
.fix_bonds (bool, optional) – Whether to fix the bond structure by inserting identity tensors.
inplace (bool, optional) – Whether to perform the fix inplace or not.
- Raises:
ValueError – If the tensor network is not 1D-like.
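For instance, a standard MPS passes the check unchanged (sketch):
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(6, bond_dim=4)
>>> checked = qtn.enforce_1d_like(psi)  # would raise ValueError if psi were not 1D-like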
- quimb.tensor.tensor_network_1d_compress(tn, max_bond=None, cutoff=1e-10, method='dm', site_tags=None, canonize=True, permute_arrays=True, optimize='auto-hq', sweep_reverse=False, equalize_norms=False, compress_opts=None, inplace=False, **kwargs)[source]¶
Compress a 1D-like tensor network using the specified method.
- Parameters:
tn (TensorNetwork) – The tensor network to compress. Every tensor should have exactly one of the site tags. Each site can have multiple tensors and output indices.
max_bond (int) – The maximum bond dimension to compress to.
cutoff (float, optional) – A dynamic threshold for discarding singular values when compressing.
method ({"direct", "dm", "zipup", "zipup-first", "fit", "projector", ...}) – The compression method to use.
site_tags (sequence of str, optional) – The tags to use to group and order the tensors from
tn
. If not given, usestn.site_tags
. The tensor network built will have one tensor per site, in the order given bysite_tags
.canonize (bool, optional) – Whether to perform canonicalization, pseudo or otherwise depending on the method, before compressing. Ignored for
method='dm'
andmethod='fit'
.permute_arrays (bool or str, optional) – Whether to permute the array indices of the final tensor network into canonical order. If
True
will use the default order, otherwise if a string this specifies a custom order.optimize (str, optional) – The contraction path optimizer to use.
sweep_reverse (bool, optional) – Whether to sweep in the reverse direction, resulting in a left canonical form instead of right canonical (for the fit method, this also depends on the last sweep direction).
equalize_norms (bool or float, optional) – Whether to equalize the norms of the tensors after compression. If an explicit value is given, then the norms will be set to that value, and the overall scaling factor will be accumulated into .exponent.
inplace (bool, optional) – Whether to perform the compression inplace.
kwargs – Supplied to the chosen compression method.
- Return type:
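A minimal sketch, compressing a random MPS down to a smaller bond dimension with the density matrix method:
>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(20, bond_dim=50)
>>> psi_c = qtn.tensor_network_1d_compress(psi, max_bond=16, method='dm')
>>> psi_c.max_bond()
16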
- class quimb.tensor.TEBD(p0, H, dt=None, tol=None, t0=0.0, split_opts=None, progbar=True, imag=False)[source]¶
Class implementing Time Evolving Block Decimation (TEBD) [1].
[1] Guifré Vidal, Efficient Classical Simulation of Slightly Entangled Quantum Computations, PRL 91, 147902 (2003)
- Parameters:
p0 (MatrixProductState) – Initial state.
H (LocalHam1D or array_like) – Dense hamiltonian representing the two body interaction. Should have shape
(d * d, d * d)
, whered
is the physical dimension ofp0
.dt (float, optional) – Default time step, cannot be set as well as
tol
.tol (float, optional) – Default target error for each evolution, cannot be set as well as
dt
, which will instead be calculated from the trotter order, length of time, and hamiltonian norm.t0 (float, optional) – Initial time. Defaults to 0.0.
split_opts (dict, optional) – Compression options applied for splitting after gate application, see
tensor_split()
.imag (bool, optional) – Enable imaginary time evolution. Defaults to false.
See also
- _pt¶
- L¶
- H¶
- cyclic¶
- _ham_norm¶
- _err = 0.0¶
- tol¶
- imag¶
- progbar¶
- split_opts¶
- property pt¶
- The MPS state of the system at the current time.
- property err¶
- choose_time_step(tol, T, order)[source]¶
Trotter error is
~ (T / dt) * dt^(order + 1)
. Invert to find desired time step, and scale by norm of interaction term.
- _get_gate_from_ham(dt_frac, sites)[source]¶
Get the unitary (exponentiated) gate for fraction of timestep
dt_frac
and sitessites
, cached.
- sweep(direction, dt_frac, dt=None, queue=False)[source]¶
Perform a single sweep of gates and compression. This shifts the orthogonality centre along with the gates as they are applied and split.
- TARGET_TOL = 1e-13¶
- update_to(T, dt=None, tol=None, order=4, progbar=None)[source]¶
Update the state to time
T
.- Parameters:
- at_times(ts, dt=None, tol=None, order=4, progbar=None)[source]¶
Generate the time evolved state at each time in
ts
.- Parameters:
ts (sequence of float) – The times to evolve to and yield the state at.
dt (float, optional) – Time step to use. Can’t be set as well as
tol
.tol (float, optional) – Tolerance for whole evolution. Can’t be set as well as
dt
.order (int, optional) – Trotter order to use.
progbar (bool, optional) – Manually turn the progress bar off.
- Yields:
pt (MatrixProductState) – The state at each of the times in
ts
. This is a copy of internal state used, so inplace changes can be made to it.
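A minimal usage sketch, assuming a Heisenberg LocalHam1D (see below) as the interaction and tracking the half-chain entropy:
>>> import quimb as qu
>>> import quimb.tensor as qtn
>>> L = 10
>>> psi0 = qtn.MPS_computational_state('0101010101')
>>> H = qtn.LocalHam1D(L, H2=qu.ham_heis(2))  # two-site Heisenberg term
>>> tebd = qtn.TEBD(psi0, H, progbar=False)
>>> ents = [psi_t.entropy(L // 2) for psi_t in tebd.at_times([0.5, 1.0], tol=1e-3)]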
- class quimb.tensor.LocalHam1D(L, H2, H1=None, cyclic=False)[source]¶
Bases:
quimb.tensor.tensor_arbgeom_tebd.LocalHamGen
A simple interacting hamiltonian object used, for instance, in TEBD. Once instantiated, the
LocalHam1D
hamiltonian stores a single term per pair of sites, cached versions of which can be retrieved likeH.get_gate_expm((i, i + 1), -1j * 0.5)
etc.- Parameters:
L (int) – The size of the hamiltonian.
H2 (array_like or dict[tuple[int], array_like]) – The sum of interaction terms. If a dict is given, the keys should be nearest neighbours like
(10, 11)
, apart from any default term which should have the keyNone
, and the values should be the sum of interaction terms for that interaction.H1 (array_like or dict[int, array_like], optional) – The sum of single site terms. If a dict is given, the keys should be integer sites, apart from any default term which should have the key
None
, and the values should be the sum of single site terms for that site.cyclic (bool, optional) – Whether the hamiltonian has periodic boundary conditions or not.
- terms¶
The terms in the hamiltonian, combined from the inputs such that there is a single term per pair.
Examples
A simple, translationally invariant, interaction-only LocalHam1D:
>>> XX = pauli('X') & pauli('X')
>>> YY = pauli('Y') & pauli('Y')
>>> ham = LocalHam1D(L=100, H2=XX + YY)
The same, but with a translationally invariant field as well:
>>> Z = pauli('Z')
>>> ham = LocalHam1D(L=100, H2=XX + YY, H1=Z)
Specifying a default interaction and field, with custom values set for some sites:
>>> H2 = {None: XX + YY, (49, 50): (XX + YY) / 2}
>>> H1 = {None: Z, 49: 2 * Z, 50: 2 * Z}
>>> ham = LocalHam1D(L=100, H2=H2, H1=H1)
Specifying the hamiltonian entirely through site specific interactions and fields:
>>> H2 = {(i, i + 1): XX + YY for i in range(99)}
>>> H1 = {i: Z for i in range(100)}
>>> ham = LocalHam1D(L=100, H2=H2, H1=H1)
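As a further sketch, individual terms and cached exponentiated gates can then be retrieved (the shapes shown assume the default physical dimension of 2):
import quimb as qu
import quimb.tensor as qtn

XX = qu.pauli('X') & qu.pauli('X')
YY = qu.pauli('Y') & qu.pauli('Y')
ham = qtn.LocalHam1D(L=10, H2=XX + YY)

h01 = ham.terms[(0, 1)]                     # the combined two-site term
U01 = ham.get_gate_expm((0, 1), -1j * 0.1)  # cached gate ~ expm(-1j * 0.1 * h01)
print(h01.shape, U01.shape)                 # (4, 4) (4, 4)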
See also
- L¶
- cyclic¶
- default_H2¶
- build_mpo_propagator_trotterized(x, site_tag_id='I{}', tags=None, upper_ind_id='k{}', lower_ind_id='b{}', shape='lrud', contract_sites=True, **split_opts)[source]¶
Build an MPO representation of expm(H * x), i.e. the imaginary or real time propagator of this local 1D hamiltonian, using a first order trotterized decomposition.
- Parameters:
x (float) – The time to evolve for. Note this does not include the imaginary prefactor of the Schrodinger equation, so real x corresponds to imaginary time evolution, and vice versa.
site_tag_id (str) – A string specifying how to tag the tensors at each site. Should contain a '{}' placeholder. It is used to generate the actual tags like: map(site_tag_id.format, range(len(arrays))).
tags (str or sequence of str, optional) – Global tags to attach to all tensors.
upper_ind_id (str) – A string specifying how to label the upper physical site indices. Should contain a '{}' placeholder. It is used to generate the actual indices like: map(upper_ind_id.format, range(len(arrays))).
lower_ind_id (str) – A string specifying how to label the lower physical site indices. Should contain a '{}' placeholder. It is used to generate the actual indices like: map(lower_ind_id.format, range(len(arrays))).
shape (str, optional) – String specifying the layout of the tensors. E.g. 'lrud' (the default) indicates the shape corresponds to left-bond, right-bond, 'up' physical index, 'down' physical index. End tensors have either 'l' or 'r' dropped from the string if not periodic.
contract_sites (bool, optional) – Whether to contract all the decomposed factors at each site to yield a single tensor per site, by default True.
split_opts – Supplied to tensor_split().
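For example, a minimal sketch of building an imaginary time propagator, assuming the ham_1d_heis builder (the chain length, 'time' of -0.2 and the cutoff are arbitrary):
import quimb.tensor as qtn

ham = qtn.ham_1d_heis(16)

# a real, negative x gives the imaginary time propagator expm(-0.2 * H) as an MPO
U = ham.build_mpo_propagator_trotterized(-0.2, cutoff=1e-10)
print(U.max_bond())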
- class quimb.tensor.PEPO(arrays, *, shape='urdlbk', tags=None, upper_ind_id='k{},{}', lower_ind_id='b{},{}', site_tag_id='I{},{}', x_tag_id='X{}', y_tag_id='Y{}', **tn_opts)[source]¶
Bases: TensorNetwork2DOperator, TensorNetwork2DFlat
Projected Entangled Pair Operator object:
... │╱ │╱ │╱ │╱ │╱ │╱ ●────●────●────●────●────●── ╱│ ╱│ ╱│ ╱│ ╱│ ╱│ │╱ │╱ │╱ │╱ │╱ │╱ ●────●────●────●────●────●── ╱│ ╱│ ╱│ ╱│ ╱│ ╱│ │╱ │╱ │╱ │╱ │╱ │╱ ... ●────●────●────●────●────●── ╱│ ╱│ ╱│ ╱│ ╱│ ╱│ │╱ │╱ │╱ │╱ │╱ │╱ ●────●────●────●────●────●── ╱ ╱ ╱ ╱ ╱ ╱
- Parameters:
arrays (sequence of sequence of array) – The core tensor data arrays.
shape (str, optional) – Which order the dimensions of the arrays are stored in, the default 'urdlbk' stands for ('up', 'right', 'down', 'left', 'bra', 'ket'). Arrays on the edge of the lattice are assumed to be missing the corresponding dimension.
tags (set[str], optional) – Extra global tags to add to the tensor network.
upper_ind_id (str, optional) – String specifier for naming convention of upper site indices.
lower_ind_id (str, optional) – String specifier for naming convention of lower site indices.
site_tag_id (str, optional) – String specifier for naming convention of site tags.
x_tag_id (str, optional) – String specifier for naming convention of row (‘x’) tags.
y_tag_id (str, optional) – String specifier for naming convention of column (‘y’) tags.
- _EXTRA_PROPS = ('_site_tag_id', '_x_tag_id', '_y_tag_id', '_Lx', '_Ly', '_upper_ind_id', '_lower_ind_id')¶
- tags¶
- _upper_ind_id¶
- _lower_ind_id¶
- _site_tag_id¶
- _x_tag_id¶
- _y_tag_id¶
- arrays¶
Get the tuple of raw arrays containing all the tensor network data.
- _Lx¶
- _Ly¶
- ix¶
- tensors = []¶
Get the tuple of tensors in this tensor network.
- cyclicx¶
- cyclicy¶
- classmethod from_fill_fn(fill_fn, Lx, Ly, bond_dim, phys_dim=2, cyclic=False, shape='urdlbk', **pepo_opts)[source]¶
Create a PEPO and fill the tensor entries with a supplied function matching signature fill_fn(shape) -> array.
- Parameters:
fill_fn (callable) – A function that takes a shape tuple and returns a data array.
Lx (int) – The number of rows.
Ly (int) – The number of columns.
bond_dim (int) – The bond dimension.
phys_dim (int, optional) – The physical indices dimension.
cyclic (bool or tuple[bool, bool], optional) – Whether the lattice is cyclic in the x and y directions.
shape (str, optional) – How to layout the indices of the tensors, the default is (up, right, down, left, bra, ket) == 'urdlbk'.
pepo_opts – Supplied to PEPO.
- classmethod rand(Lx, Ly, bond_dim, phys_dim=2, herm=False, dist='normal', loc=0.0, dtype='float64', seed=None, **pepo_opts)[source]¶
Create a random PEPO.
- Parameters:
Lx (int) – The number of rows.
Ly (int) – The number of columns.
bond_dim (int) – The bond dimension.
phys_dim (int, optional) – The physical index dimension.
herm (bool, optional) – Whether to symmetrize the tensors across the physical bonds to make the overall operator hermitian.
dtype (dtype, optional) – The dtype to create the arrays with, default is real double.
seed (int, optional) – A random seed.
pepo_opts – Supplied to PEPO.
- Returns:
X
- Return type:
- rand_herm¶
- classmethod zeros(Lx, Ly, bond_dim, phys_dim=2, dtype='float64', backend='numpy', **pepo_opts)[source]¶
Create a PEPO with all zero entries.
- Parameters:
Lx (int) – The number of rows.
Ly (int) – The number of columns.
bond_dim (int) – The bond dimension.
phys_dim (int, optional) – The physical index dimension.
dtype (dtype, optional) – The dtype to create the arrays with, default is real double.
backend (str, optional) – Which backend to use, default is 'numpy'.
pepo_opts – Supplied to PEPO.
- apply(other, compress=False, **compress_opts)[source]¶
Act with this PEPO on other, returning a new TN like other with the same outer indices.
- Parameters:
other (PEPS) – The TN to act on.
compress (bool, optional) – Whether to compress the resulting TN.
compress_opts – Supplied to compress().
- Return type:
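For example, a minimal sketch of acting with a random hermitian PEPO on a random PEPS, where the max_bond keyword is assumed to be forwarded to the subsequent compress() call:
import quimb.tensor as qtn

peps = qtn.PEPS.rand(4, 4, bond_dim=2, seed=42)
pepo = qtn.PEPO.rand(4, 4, bond_dim=2, herm=True, seed=42)

# apply, then compress the doubled bonds back down
psi = pepo.apply(peps, compress=True, max_bond=8)
print(psi.max_bond())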
- class quimb.tensor.PEPS(arrays, *, shape='urdlp', tags=None, site_ind_id='k{},{}', site_tag_id='I{},{}', x_tag_id='X{}', y_tag_id='Y{}', **tn_opts)[source]¶
Bases: TensorNetwork2DVector, TensorNetwork2DFlat
Projected Entangled Pair States object (2D):
... │ │ │ │ │ │ ●────●────●────●────●────●── ╱│ ╱│ ╱│ ╱│ ╱│ ╱│ │ │ │ │ │ │ ●────●────●────●────●────●── ╱│ ╱│ ╱│ ╱│ ╱│ ╱│ │ │ │ │ │ │ ... ●────●────●────●────●────●── ╱│ ╱│ ╱│ ╱│ ╱│ ╱│ │ │ │ │ │ │ ●────●────●────●────●────●── ╱ ╱ ╱ ╱ ╱ ╱
- Parameters:
arrays (sequence of sequence of array_like) – The core tensor data arrays.
shape (str, optional) – Which order the dimensions of the arrays are stored in, the default 'urdlp' stands for ('up', 'right', 'down', 'left', 'physical'). Arrays on the edge of the lattice are assumed to be missing the corresponding dimension.
tags (set[str], optional) – Extra global tags to add to the tensor network.
site_ind_id (str, optional) – String specifier for naming convention of site indices.
site_tag_id (str, optional) – String specifier for naming convention of site tags.
x_tag_id (str, optional) – String specifier for naming convention of row (‘x’) tags.
y_tag_id (str, optional) – String specifier for naming convention of column (‘y’) tags.
- _EXTRA_PROPS = ('_site_tag_id', '_x_tag_id', '_y_tag_id', '_Lx', '_Ly', '_site_ind_id')¶
- tags¶
- _site_ind_id¶
- _site_tag_id¶
- _x_tag_id¶
- _y_tag_id¶
- arrays¶
Get the tuple of raw arrays containing all the tensor network data.
- _Lx¶
- _Ly¶
- cyclicx¶
- cyclicy¶
- ix¶
- tensors = []¶
Get the tuple of tensors in this tensor network.
- classmethod from_fill_fn(fill_fn, Lx, Ly, bond_dim, phys_dim=2, cyclic=False, shape='urdlp', **peps_opts)[source]¶
Create a 2D PEPS from a filling function with signature fill_fn(shape).
- Parameters:
Lx (int) – The number of rows.
Ly (int) – The number of columns.
bond_dim (int) – The bond dimension.
phys_dim (int, optional) – The physical index dimension.
cyclic (bool or tuple[bool, bool], optional) – Whether the lattice is cyclic in the x and y directions.
shape (str, optional) – How to layout the indices of the tensors, the default is (up, right, down, left, phys) == 'urdlp'. This is the order of the shape supplied to the filling function.
peps_opts – Supplied to PEPS.
- Returns:
psi
- Return type:
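For example, a minimal sketch of filling a small PEPS with gaussian entries via an explicit fill function (sizes and scale are arbitrary):
import numpy as np
import quimb.tensor as qtn

rng = np.random.default_rng(seed=0)
peps = qtn.PEPS.from_fill_fn(
    lambda shape: rng.normal(scale=0.5, size=shape),
    Lx=4, Ly=4, bond_dim=3, phys_dim=2,
)
print(peps.max_bond(), peps.num_tensors)  # 3 16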
- classmethod empty(Lx, Ly, bond_dim, phys_dim=2, like='numpy', **peps_opts)[source]¶
Create an empty 2D PEPS.
- Parameters:
- Returns:
psi
- Return type:
See also
- classmethod ones(Lx, Ly, bond_dim, phys_dim=2, like='numpy', **peps_opts)[source]¶
Create a 2D PEPS whose tensors are filled with ones.
- Parameters:
- Returns:
psi
- Return type:
See also
- classmethod rand(Lx, Ly, bond_dim, phys_dim=2, dist='normal', loc=0.0, dtype='float64', seed=None, **peps_opts)[source]¶
Create a random (un-normalized) PEPS.
- Parameters:
Lx (int) – The number of rows.
Ly (int) – The number of columns.
bond_dim (int) – The bond dimension.
phys_dim (int, optional) – The physical index dimension.
dist ({'normal', 'uniform', 'rademacher', 'exp'}, optional) – Type of random number to generate, defaults to ‘normal’.
loc (float, optional) – An additive offset to add to the random numbers.
dtype (dtype, optional) – The dtype to create the arrays with, default is real double.
seed (int, optional) – A random seed.
peps_opts – Supplied to PEPS.
- Returns:
psi
- Return type:
See also
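For example, a minimal sketch of creating an unnormalized random complex PEPS (the sizes and seed are arbitrary):
import quimb.tensor as qtn

peps = qtn.PEPS.rand(4, 5, bond_dim=3, seed=11, dtype='complex128')
print(peps.Lx, peps.Ly, peps.max_bond())  # 4 5 3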
- class quimb.tensor.TensorNetwork2D(ts=(), *, virtual=False, check_collisions=True)[source]¶
Bases: quimb.tensor.tensor_arbgeom.TensorNetworkGen
Mixin class for tensor networks with a square lattice two-dimensional structure, indexed by [{row},{column}]
so that:'Y{j}' v i=Lx-1 ●──●──●──●──●──●── ──● | | | | | | | ... | | | | | | 'I{i},{j}' = 'I3,5' e.g. i=3 ●──●──●──●──●──●── | | | | | | | i=2 ●──●──●──●──●──●── ──● <== 'X{i}' | | | | | | ... | i=1 ●──●──●──●──●──●── ──● | | | | | | | i=0 ●──●──●──●──●──●── ──● j=0, 1, 2, 3, 4, 5 j=Ly-1
This implies the following conventions:
the ‘up’ bond is coordinates (i, j), (i + 1, j)
the ‘down’ bond is coordinates (i, j), (i - 1, j)
the ‘right’ bond is coordinates (i, j), (i, j + 1)
the ‘left’ bond is coordinates (i, j), (i, j - 1)
- _NDIMS = 2¶
- _EXTRA_PROPS = ('_site_tag_id', '_x_tag_id', '_y_tag_id', '_Lx', '_Ly')¶
- _compatible_2d(other)[source]¶
Check whether self and other are compatible 2D tensor networks such that they can remain a 2D tensor network when combined.
- combine(other, *, virtual=False, check_collisions=True)[source]¶
Combine this tensor network with another, returning a new tensor network. If the two are compatible, cast the resulting tensor network to a TensorNetwork2D instance.
- Parameters:
other (TensorNetwork2D or TensorNetwork) – The other tensor network to combine with.
virtual (bool, optional) – Whether the new tensor network should copy all the incoming tensors (False, the default), or view them as virtual (True).
check_collisions (bool, optional) – Whether to check for index collisions between the two tensor networks before combining them. If True (the default), any inner indices that clash will be mangled.
- Return type:
- property Lx¶
- The number of rows.
- property Ly¶
- The number of columns.
- property nsites¶
- The total number of sites.
- property x_tag_id¶
- The string specifier for tagging each row of this 2D TN.
- property x_tags¶
- A tuple of all of the Lx different row tags.
- row_tags¶
- property y_tag_id¶
- The string specifier for tagging each column of this 2D TN.
- property y_tags¶
- A tuple of all of the Ly different column tags.
- col_tags¶
- maybe_convert_coo(x)[source]¶
Check if x is a tuple of two ints and convert to the corresponding site tag if so.
- _get_tids_from_tags(tags, which='all')[source]¶
This is the function that lets coordinates such as (i, j) be used for many ‘tag’ based functions.
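For example, a short sketch of coordinate based addressing, where indexing with a tuple is converted internally to the corresponding site tag:
import quimb.tensor as qtn

peps = qtn.PEPS.rand(3, 3, bond_dim=2, seed=1)
t = peps[2, 1]          # equivalent to peps['I2,1'] under the default site_tag_id
print(sorted(t.tags))   # includes 'I2,1', 'X2' and 'Y1'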
- gen_horizontal_even_bond_coos()[source]¶
Generate all coordinate pairs like (i, j), (i, j + 1) where j is even, which thus don’t overlap at all.
- gen_horizontal_odd_bond_coos()[source]¶
Generate all coordinate pairs like (i, j), (i, j + 1) where j is odd, which thus don’t overlap at all.
- gen_vertical_even_bond_coos()[source]¶
Generate all coordinate pairs like (i, j), (i + 1, j) where i is even, which thus don’t overlap at all.
- gen_vertical_odd_bond_coos()[source]¶
Generate all coordinate pairs like (i, j), (i + 1, j) where i is odd, which thus don’t overlap at all.
- gen_diagonal_left_even_bond_coos()[source]¶
Generate all coordinate pairs like (i, j), (i + 1, j - 1) where j is even, which thus don’t overlap at all.
- gen_diagonal_left_odd_bond_coos()[source]¶
Generate all coordinate pairs like (i, j), (i + 1, j - 1) where j is odd, which thus don’t overlap at all.
- gen_diagonal_right_even_bond_coos()[source]¶
Generate all coordinate pairs like (i, j), (i + 1, j + 1) where i is even, which thus don’t overlap at all.
- gen_diagonal_right_odd_bond_coos()[source]¶
Generate all coordinate pairs like (i, j), (i + 1, j + 1) where i is odd, which thus don’t overlap at all.
- is_cyclic_x(j=None, imin=None, imax=None)[source]¶
Check if the x dimension is cyclic (periodic), specifically whether a bond exists between (imin, j) and (imax, j), with default values of imin = 0 and imax = Lx - 1, and j at the center of the lattice. If imin and imax are adjacent then this is considered False, since there is no ‘extra’ connectivity.
- is_cyclic_y(i=None, jmin=None, jmax=None)[source]¶
Check if the y dimension is cyclic (periodic), specifically whether a bond exists between (i, jmin) and (i, jmax), with default values of jmin = 0 and jmax = Ly - 1, and i at the center of the lattice. If jmin and jmax are adjacent then this is considered False, since there is no ‘extra’ connectivity.
- _repr_info()[source]¶
General info to show in various reprs. Subclasses can add more relevant info to this dict.
- flatten(fuse_multibonds=True, inplace=False)[source]¶
Contract all tensors corresponding to each site into one.
- gen_pairs(xrange=None, yrange=None, xreverse=False, yreverse=False, coordinate_order='xy', xstep=None, ystep=None, stepping_order='xy', step_only=None)[source]¶
Helper function for generating pairs of coordinates for all bonds within a certain range, optionally specifying an order.
- Parameters:
xrange ((int, int), optional) – The range of allowed values for the x and y coordinates.
yrange ((int, int), optional) – The range of allowed values for the x and y coordinates.
xreverse (bool, optional) – Whether to reverse the order of the x and y sweeps.
yreverse (bool, optional) – Whether to reverse the order of the x and y sweeps.
coordinate_order (str, optional) – The order in which to sweep the x and y coordinates. Earlier dimensions will change slower. If the corresponding range has size 1 then that dimension doesn’t need to be specified.
xstep (int, optional) – When generating a bond, step in this direction to yield the neighboring coordinate. By default, these follow xreverse and yreverse respectively.
ystep (int, optional) – When generating a bond, step in this direction to yield the neighboring coordinate. By default, these follow xreverse and yreverse respectively.
stepping_order (str, optional) – The order in which to step the x and y coordinates to generate bonds. Does not need to include all dimensions.
step_only (int, optional) – Only perform the ith steps in stepping_order, used to interleave canonizing and compressing for example.
- Yields:
coo_a, coo_b (((int, int), (int, int)))
- canonize_plane(xrange, yrange, equalize_norms=False, canonize_opts=None, **gen_pair_opts)[source]¶
Canonize every pair of tensors within a subrange, optionally specifying an order to visit those pairs in.
- canonize_row(i, sweep, yrange=None, **canonize_opts)[source]¶
Canonize all or part of a row.
If
sweep == 'right'
then:| | | | | | | | | | | | | | ─●──●──●──●──●──●──●─ ─●──●──●──●──●──●──●─ | | | | | | | | | | | | | | ─●──●──●──●──●──●──●─ ==> ─●──>──>──>──>──o──●─ row=i | | | | | | | | | | | | | | ─●──●──●──●──●──●──●─ ─●──●──●──●──●──●──●─ | | | | | | | | | | | | | | . . . . jstart jstop jstart jstop
If
sweep == 'left'
then:| | | | | | | | | | | | | | ─●──●──●──●──●──●──●─ ─●──●──●──●──●──●──●─ | | | | | | | | | | | | | | ─●──●──●──●──●──●──●─ ==> ─●──o──<──<──<──<──●─ row=i | | | | | | | | | | | | | | ─●──●──●──●──●──●──●─ ─●──●──●──●──●──●──●─ | | | | | | | | | | | | | | . . . . jstop jstart jstop jstart
Does not yield an orthogonal form in the same way as in 1D.
- canonize_column(j, sweep, xrange=None, **canonize_opts)[source]¶
Canonize all or part of a column.
If
sweep='up'
then:| | | | | | ─●──●──●─ ─●──●──●─ | | | | | | ─●──●──●─ ─●──o──●─ istop | | | ==> | | | ─●──●──●─ ─●──^──●─ | | | | | | ─●──●──●─ ─●──^──●─ istart | | | | | | ─●──●──●─ ─●──●──●─ | | | | | | . . j j
If
sweep='down'
then:| | | | | | ─●──●──●─ ─●──●──●─ | | | | | | ─●──●──●─ ─●──v──●─ istart | | | ==> | | | ─●──●──●─ ─●──v──●─ | | | | | | ─●──●──●─ ─●──o──●─ istop | | | | | | ─●──●──●─ ─●──●──●─ | | | | | | . . j j
Does not yield an orthogonal form in the same way as in 1D.
- compress_plane(xrange, yrange, max_bond=None, cutoff=1e-10, equalize_norms=False, compress_opts=None, **gen_pair_opts)[source]¶
Compress every pair of tensors within a subrange, optionally specifying an order to visit those pairs in.
- compress_row(i, sweep, yrange=None, max_bond=None, cutoff=1e-10, equalize_norms=False, compress_opts=None)[source]¶
Compress all or part of a row.
If
sweep == 'right'
then:| | | | | | | | | | | | | | ━●━━●━━●━━●━━●━━●━━●━ ━●━━●━━●━━●━━●━━●━━●━ | | | | | | | | | | | | | | ━●━━●━━●━━●━━●━━●━━●━ ━━> ━●━━>──>──>──>──o━━●━ row=i | | | | | | | | | | | | | | ━●━━●━━●━━●━━●━━●━━●━ ━●━━●━━●━━●━━●━━●━━●━ | | | | | | | | | | | | | | . . . . jstart jstop jstart jstop
If
sweep == 'left'
then:| | | | | | | | | | | | | | ━●━━●━━●━━●━━●━━●━━●━ ━●━━●━━●━━●━━●━━●━━●━ | | | | | | | | | | | | | | ━●━━●━━●━━●━━●━━●━━●━ ━━> ━●━━o──<──<──<──<━━●━ row=i | | | | | | | | | | | | | | ━●━━●━━●━━●━━●━━●━━●━ ━●━━●━━●━━●━━●━━●━━●━ | | | | | | | | | | | | | | . . . . jstop jstart jstop jstart
Does not yield an orthogonal form in the same way as in 1D.
- Parameters:
i (int) – Which row to compress.
sweep ({'right', 'left'}) – Which direction to sweep in.
yrange (tuple[int, int] or None) – The range of columns to compress.
max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.
cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.
compress_opts (None or dict, optional) – Supplied to compress_between().
- compress_column(j, sweep, xrange=None, max_bond=None, cutoff=1e-10, equalize_norms=False, compress_opts=None)[source]¶
Compress all or part of a column.
If
sweep='up'
then:┃ ┃ ┃ ┃ ┃ ┃ ─●──●──●─ ─●──●──●─ ┃ ┃ ┃ ┃ ┃ ┃ ─●──●──●─ ─●──o──●─ . ┃ ┃ ┃ ==> ┃ | ┃ . ─●──●──●─ ─●──^──●─ . xrange ┃ ┃ ┃ ┃ | ┃ . ─●──●──●─ ─●──^──●─ . ┃ ┃ ┃ ┃ ┃ ┃ ─●──●──●─ ─●──●──●─ ┃ ┃ ┃ ┃ ┃ ┃ . . j j
If
sweep='down'
then:┃ ┃ ┃ ┃ ┃ ┃ ─●──●──●─ ─●──●──●─ ┃ ┃ ┃ ┃ ┃ ┃ ─●──●──●─ ─●──v──●─ . ┃ ┃ ┃ ==> ┃ | ┃ . ─●──●──●─ ─●──v──●─ . xrange ┃ ┃ ┃ ┃ | ┃ . ─●──●──●─ ─●──o──●─ . ┃ ┃ ┃ ┃ ┃ ┃ ─●──●──●─ ─●──●──●─ ┃ ┃ ┃ ┃ ┃ ┃ . . j j
Does not yield an orthogonal form in the same way as in 1D.
- Parameters:
j (int) – Which column to compress.
sweep ({'up', 'down'}) – Which direction to sweep in.
xrange (None or (int, int), optional) – The range of rows to compress.
max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.
cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.
compress_opts (None or dict, optional) – Supplied to compress_between().
- _contract_boundary_core_via_1d(xrange, yrange, from_which, max_bond, cutoff=1e-10, method='dm', layer_tags=None, **compress_opts)[source]¶
- _contract_boundary_core(xrange, yrange, from_which, max_bond, cutoff=1e-10, canonize=True, layer_tags=None, compress_late=True, sweep_reverse=False, equalize_norms=False, compress_opts=None, canonize_opts=None)[source]¶
- _contract_boundary_full_bond(xrange, yrange, from_which, max_bond, cutoff=0.0, method='eigh', renorm=False, optimize='auto-hq', opposite_envs=None, equalize_norms=False, contract_boundary_opts=None)[source]¶
Contract the boundary of this 2D TN using the ‘full bond’ environment information obtained from a boundary contraction in the opposite direction.
- Parameters:
xrange ((int, int) or None, optional) – The range of rows to contract and compress.
yrange ((int, int)) – The range of columns to contract and compress.
from_which ({'xmin', 'ymin', 'xmax', 'ymax'}) – Which direction to contract the rectangular patch from.
max_bond (int) – The maximum boundary dimension, AKA ‘chi’. By default used for the opposite direction environment contraction as well.
cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction - only for the opposite direction environment contraction.
method ({'eigh', 'eig', 'svd', 'biorthog'}, optional) – Which similarity decomposition method to use to compress the full bond environment.
renorm (bool, optional) – Whether to renormalize the isometric projection or not.
optimize (str or PathOptimizer, optional) – Contraction optimizer to use for the exact contractions.
opposite_envs (dict, optional) – If supplied, the opposite environments will be fetched or lazily computed into this dict depending on whether they are missing.
contract_boundary_opts – Other options given to the opposite direction environment contraction.
- _contract_boundary_projector(xrange, yrange, from_which, max_bond=None, cutoff=1e-10, lazy=False, equalize_norms=False, optimize='auto-hq', compress_opts=None)[source]¶
Contract the boundary of this 2D tensor network by explicitly computing and inserting local projector tensors, which can optionally be left uncontracted. Multilayer networks are naturally supported.
- Parameters:
xrange (tuple) – The range of x indices to contract.
yrange (tuple) – The range of y indices to contract.
from_which ({'xmin', 'xmax', 'ymin', 'ymax'}) – From which boundary to contract.
max_bond (int, optional) – The maximum bond dimension to contract to. If None (default), compression is left to cutoff.
cutoff (float, optional) – The cutoff to use for boundary compression.
lazy (bool, optional) – Whether to leave the boundary tensors uncontracted. If False (the default), the boundary tensors are contracted and the resulting boundary has a single tensor per site.
equalize_norms (bool, optional) – Whether to actively absorb the norm of modified tensors into self.exponent.
optimize (str or PathOptimizer, optional) – The contract path optimization to use when forming the projector tensors.
compress_opts (dict, optional) – Other options to pass to svd_truncated().
- contract_boundary_from(xrange, yrange, from_which, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, sweep_reverse=False, compress_opts=None, inplace=False, **contract_boundary_opts)[source]¶
Unified entrypoint for contracting any rectangular patch of tensors from any direction, with any boundary method.
- contract_boundary_from_xmin(xrange, yrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, sweep_reverse=False, compress_opts=None, inplace=False, **contract_boundary_opts)[source]¶
Contract a 2D tensor network inwards from the bottom, canonizing and compressing (left to right) along the way. If
layer_tags is None
this looks like:a) contract │ │ │ │ │ ●──●──●──●──● │ │ │ │ │ │ │ │ │ │ --> ●══●══●══●══● ●──●──●──●──● b) optionally canonicalize │ │ │ │ │ ●══●══<══<══< c) compress in opposite direction │ │ │ │ │ --> │ │ │ │ │ --> │ │ │ │ │ >──●══●══●══● --> >──>──●══●══● --> >──>──>──●══● . . --> . . --> . .
If
layer_tags
is specified, each then each layer is contracted in and compressed separately, resulting generally in a lower memory scaling. For two layer tags this looks like:a) first flatten the outer boundary only │ ││ ││ ││ ││ │ │ ││ ││ ││ ││ │ ●─○●─○●─○●─○●─○ ●─○●─○●─○●─○●─○ │ ││ ││ ││ ││ │ ==> ╲│ ╲│ ╲│ ╲│ ╲│ ●─○●─○●─○●─○●─○ ●══●══●══●══● b) contract and compress a single layer only │ ││ ││ ││ ││ │ │ ○──○──○──○──○ │╱ │╱ │╱ │╱ │╱ ●══<══<══<══< c) contract and compress the next layer ╲│ ╲│ ╲│ ╲│ ╲│ >══>══>══>══●
- Parameters:
xrange ((int, int)) – The range of rows to compress (inclusive).
yrange ((int, int) or None, optional) – The range of columns to compress (inclusive), sweeping along with canonization and compression. Defaults to all columns.
max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.
cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.
canonize (bool, optional) – Whether to sweep one way with canonization before compressing.
mode ({'mps', 'full-bond'}, optional) – How to perform the compression on the boundary.
layer_tags (None or sequence[str], optional) – If None, all tensors at each coordinate pair [(i, j), (i + 1, j)] will be first contracted. If specified, then the outer tensor at (i, j) will be contracted with the tensor specified by [(i + 1, j), layer_tag], for each layer_tag in layer_tags.
sweep_reverse (bool, optional) – Which way to perform the compression sweep, which has an effect on which tensors end up being canonized. Setting this to true sweeps the compression from largest to smallest coordinates.
compress_opts (None or dict, optional) – Supplied to compress_between().
inplace (bool, optional) – Whether to perform the contraction inplace or not.
- contract_boundary_from_xmax(xrange, yrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, inplace=False, sweep_reverse=False, compress_opts=None, **contract_boundary_opts)[source]¶
Contract a 2D tensor network inwards from the top, canonizing and compressing (right to left) along the way. If
layer_tags is None
this looks like:a) contract ●──●──●──●──● | | | | | --> ●══●══●══●══● ●──●──●──●──● | | | | | | | | | | b) optionally canonicalize ●══●══<══<══< | | | | | c) compress in opposite direction >──●══●══●══● --> >──>──●══●══● --> >──>──>──●══● | | | | | --> | | | | | --> | | | | | . . --> . . --> . .
If
layer_tags
is specified, each then each layer is contracted in and compressed separately, resulting generally in a lower memory scaling. For two layer tags this looks like:a) first flatten the outer boundary only ●─○●─○●─○●─○●─○ ●══●══●══●══● │ ││ ││ ││ ││ │ ==> ╱│ ╱│ ╱│ ╱│ ╱│ ●─○●─○●─○●─○●─○ ●─○●─○●─○●─○●─○ │ ││ ││ ││ ││ │ │ ││ ││ ││ ││ │ b) contract and compress a single layer only ●══<══<══<══< │╲ │╲ │╲ │╲ │╲ │ ○──○──○──○──○ │ ││ ││ ││ ││ │ c) contract and compress the next layer ●══●══●══●══● ╱│ ╱│ ╱│ ╱│ ╱│
- Parameters:
xrange ((int, int)) – The range of rows to compress (inclusive).
yrange ((int, int) or None, optional) – The range of columns to compress (inclusive), sweeping along with canonization and compression. Defaults to all columns.
max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.
cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.
canonize (bool, optional) – Whether to sweep one way with canonization before compressing.
mode ({'mps', 'full-bond'}, optional) – How to perform the compression on the boundary.
layer_tags (None or str, optional) – If None, all tensors at each coordinate pair [(i, j), (i - 1, j)] will be first contracted. If specified, then the outer tensor at (i, j) will be contracted with the tensor specified by [(i - 1, j), layer_tag], for each layer_tag in layer_tags.
sweep_reverse (bool, optional) – Which way to perform the compression sweep, which has an effect on which tensors end up being canonized. Setting this to true sweeps the compression from largest to smallest coordinates.
compress_opts (None or dict, optional) – Supplied to compress_between().
inplace (bool, optional) – Whether to perform the contraction inplace or not.
- contract_boundary_from_ymin(yrange, xrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, sweep_reverse=False, compress_opts=None, inplace=False, **contract_boundary_opts)[source]¶
Contract a 2D tensor network inwards from the left, canonizing and compressing (bottom to top) along the way. If
layer_tags is None
this looks like:a) contract ●──●── ●── │ │ ║ ●──●── ==> ●── │ │ ║ ●──●── ●── b) optionally canonicalize ●── v── ║ ║ ●── ==> v── ║ ║ ●── ●── c) compress in opposite direction v── ●── ║ │ v── ==> ^── ║ │ ●── ^──
If
layer_tags
is specified, each then each layer is contracted in and compressed separately, resulting generally in a lower memory scaling. For two layer tags this looks like:a) first flatten the outer boundary only ○──○── ●──○── │╲ │╲ │╲ │╲ ●─○──○── ╰─●──○── ╲│╲╲│╲ ==> │╲╲│╲ ●─○──○── ╰─●──○── ╲│ ╲│ │ ╲│ ●──●── ╰──●── b) contract and compress a single layer only ○── ╱╱ ╲ ●─── ○── ╲ ╱╱ ╲ ^─── ○── ╲ ╱╱ ^───── c) contract and compress the next layer ●── │╲ ╰─●── │╲ ╰─●── │ ╰──
- Parameters:
yrange ((int, int)) – The range of columns to compress (inclusive).
xrange ((int, int) or None, optional) – The range of rows to compress (inclusive), sweeping along with canonization and compression. Defaults to all rows.
max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.
cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.
canonize (bool, optional) – Whether to sweep one way with canonization before compressing.
mode ({'mps', 'full-bond'}, optional) – How to perform the compression on the boundary.
layer_tags (None or str, optional) – If None, all tensors at each coordinate pair [(i, j), (i, j + 1)] will be first contracted. If specified, then the outer tensor at (i, j) will be contracted with the tensor specified by [(i, j + 1), layer_tag], for each layer_tag in layer_tags.
sweep_reverse (bool, optional) – Which way to perform the compression sweep, which has an effect on which tensors end up being canonized. Setting this to true sweeps the compression from largest to smallest coordinates.
compress_opts (None or dict, optional) – Supplied to compress_between().
inplace (bool, optional) – Whether to perform the contraction inplace or not.
- contract_boundary_from_ymax(yrange, xrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, sweep_reverse=False, compress_opts=None, inplace=False, **contract_boundary_opts)[source]¶
Contract a 2D tensor network inwards from the right, canonizing and compressing (top to bottom) along the way. If
layer_tags is None
this looks like:a) contract ──●──● ──● │ │ ║ ──●──● ==> ──● │ │ ║ ──●──● ──● b) optionally canonicalize ──● ──v ║ ║ ──● ==> ──v ║ ║ ──● ──● c) compress in opposite direction ──v ──● ║ │ ──v ==> ──^ ║ │ ──● ──^
If
layer_tags
is specified, each then each layer is contracted in and compressed separately, resulting generally in a lower memory scaling. For two layer tags this looks like:a) first flatten the outer boundary only ──○──○ ──○──● ╱│ ╱│ ╱│ ╱│ ──○──○─● ──○──●─╯ ╱│╱╱│╱ ==> ╱│╱╱│ ──○──○─● ──○──●─╯ │╱ │╱ │╱ │ ──●──● ──●──╯ b) contract and compress a single layer only ──○ ╱ ╲╲ ──○────v ╱ ╲╲ ╱ ──○────v ╲╲ ╱ ─────● c) contract and compress the next layer ╲ ────v ╲ ╱ ────v ╲ ╱ ────●
- Parameters:
yrange ((int, int)) – The range of columns to compress (inclusive).
xrange ((int, int) or None, optional) – The range of rows to compress (inclusive), sweeping along with canonization and compression. Defaults to all rows.
max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.
cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.
canonize (bool, optional) – Whether to sweep one way with canonization before compressing.
mode ({'mps', 'full-bond'}, optional) – How to perform the compression on the boundary.
layer_tags (None or str, optional) – If None, all tensors at each coordinate pair [(i, j), (i, j - 1)] will be first contracted. If specified, then the outer tensor at (i, j) will be contracted with the tensor specified by [(i, j - 1), layer_tag], for each layer_tag in layer_tags.
sweep_reverse (bool, optional) – Which way to perform the compression sweep, which has an effect on which tensors end up being canonized. Setting this to true sweeps the compression from largest to smallest coordinates.
compress_opts (None or dict, optional) – Supplied to compress_between().
inplace (bool, optional) – Whether to perform the contraction inplace or not.
- _contract_interleaved_boundary_sequence(*, contract_boundary_opts=None, sequence=None, xmin=None, xmax=None, ymin=None, ymax=None, max_separation=1, max_unfinished=1, around=None, equalize_norms=False, canonize=False, canonize_opts=None, final_contract=True, final_contract_opts=None, progbar=False, inplace=False)[source]¶
Unified handler for performing interleaved contractions in a sequence of inwards boundary directions.
- contract_boundary(max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, compress_opts=None, sequence=None, xmin=None, xmax=None, ymin=None, ymax=None, max_separation=1, max_unfinished=1, around=None, equalize_norms=False, final_contract=True, final_contract_opts=None, progbar=None, inplace=False, **contract_boundary_opts)[source]¶
Contract the boundary of this 2D tensor network inwards:
●──●──●──● ●──●──●──● ●──●──● │ │ │ │ │ │ │ │ ║ │ │ ●──●──●──● ●──●──●──● ^──●──● >══>══● >──v │ │ij│ │ ==> │ │ij│ │ ==> ║ij│ │ ==> │ij│ │ ==> │ij║ ●──●──●──● ●══<══<══< ^──<──< ^──<──< ^──< │ │ │ │ ●──●──●──●
Optionally contract from any or all of the boundaries, in multiple layers, and stopping around a region. The default is to contract the boundary from the two shortest opposing sides.
- Parameters:
around (None or sequence of (int, int), optional) – If given, don’t contract the square of sites bounding these coordinates.
max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.
cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.
canonize (bool, optional) – Whether to sweep one way with canonization before compressing.
mode ({'mps', 'full-bond'}, optional) – How to perform the compression on the boundary.
layer_tags (None or sequence of str, optional) – If given, perform a multilayer contraction, contracting the inner sites in each layer into the boundary individually.
compress_opts (None or dict, optional) – Other low level options to pass to compress_between().
sequence (sequence of {'xmin', 'xmax', 'ymin', 'ymax'}, optional) – Which directions to cycle through when performing the inwards contractions, i.e. from that direction. If around is specified you will likely need all of these! Default is to contract from the two shortest opposing sides.
xmin (int, optional) – The initial bottom boundary row, defaults to 0.
xmax (int, optional) – The initial top boundary row, defaults to Lx - 1.
ymin (int, optional) – The initial left boundary column, defaults to 0.
ymax (int, optional) – The initial right boundary column, defaults to Ly - 1.
max_separation (int, optional) – If around is None, when any two sides become this far apart simply contract the remaining tensor network.
max_unfinished (int, optional) – If around is None, when this many sides are still not within max_separation simply contract the remaining tensor network.
around – If given, don’t contract the square of sites bounding these coordinates.
equalize_norms (bool or float, optional) – Whether to equalize the norms of the boundary tensors after each contraction, gathering the overall scaling coefficient, log10, in tn.exponent.
final_contract (bool, optional) – Whether to exactly contract the remaining tensor network after the boundary contraction.
final_contract_opts (None or dict, optional) – Options to pass to contract(), optimize defaults to 'auto-hq'.
progbar (bool, optional) – Whether to show a progress bar.
contract_boundary_opts – Supplied to contract_boundary_from(), including compression and canonization options.
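For example, a minimal sketch of estimating the norm squared <psi|psi> of a random PEPS by boundary contraction, assuming the default final_contract=True so that a scalar is returned:
import quimb.tensor as qtn

peps = qtn.PEPS.rand(6, 6, bond_dim=2, seed=7)

# form the two-layer norm network and contract it inwards with chi = 32
norm_tn = peps.H & peps
norm_sq = norm_tn.contract_boundary(max_bond=32)
print(norm_sq)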
- contract_mps_sweep(max_bond=None, *, cutoff=1e-10, canonize=True, direction=None, **contract_boundary_opts)[source]¶
Contract this 2D tensor network by sweeping an MPS across from one side to the other.
- Parameters:
max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff and is not recommended in 2D.
cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.
canonize (bool, optional) – Whether to sweep one way with canonization before compressing.
direction ({'xmin', 'xmax', 'ymin', 'ymax'}, optional) – Which direction to sweep from. If None (default) then the shortest boundary is chosen.
contract_boundary_opts – Supplied to contract_boundary_from(), including compression and canonization options.
- compute_environments(from_which, xrange=None, yrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, dense=False, compress_opts=None, envs=None, **contract_boundary_opts)[source]¶
Compute the 1D boundary tensor networks describing the environments of rows and columns.
- Parameters:
from_which ({'xmin', 'xmax', 'ymin', 'ymax'}) – Which boundary to compute the environments from.
xrange (tuple[int], optional) – The range of rows to compute the environments for.
yrange (tuple[int], optional) – The range of columns to compute the environments for.
max_bond (int, optional) – The maximum bond dimension of the environments.
cutoff (float, optional) – The cutoff for the singular values of the environments.
canonize (bool, optional) – Whether to canonicalize along each MPS environment before compressing.
mode ({'mps', 'projector', 'full-bond'}, optional) – Which contraction method to use for the environments.
layer_tags (str or iterable[str], optional) – If this 2D TN is multi-layered (e.g. a bra and a ket), and mode == 'mps', contract and compress each specified layer separately, for a cheaper contraction.
dense (bool, optional) – Whether to use dense tensors for the environments.
compress_opts (dict, optional) – Other options to pass to tensor_compress_bond().
envs (dict, optional) – An existing dictionary to store the environments in.
contract_boundary_opts – Other options to pass to contract_boundary_from().
- Returns:
envs – A dictionary of the environments, with keys of the form (from_which, row_or_col_index).
- Return type:
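For example, a minimal sketch of computing the boundary environments of a norm network approaching from below; the key convention (from_which, row_index) is assumed per the description above:
import quimb.tensor as qtn

peps = qtn.PEPS.rand(5, 5, bond_dim=2, seed=3)
norm_tn = peps.H & peps

envs = norm_tn.compute_environments('xmin', max_bond=16)
env3 = envs['xmin', 3]   # 1D boundary TN approximating everything below row 3
print(env3.max_bond())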
- compute_xmin_environments[source]¶
Compute the self.Lx 1D boundary tensor networks describing the lower environments of each row in this 2D tensor network. See compute_x_environments() for full details.
- compute_xmax_environments[source]¶
Compute the self.Lx 1D boundary tensor networks describing the upper environments of each row in this 2D tensor network. See compute_x_environments() for full details.
- compute_ymin_environments[source]¶
Compute the self.Ly 1D boundary tensor networks describing the left environments of each column in this 2D tensor network. See compute_y_environments() for full details.
- compute_ymax_environments[source]¶
Compute the self.Ly 1D boundary tensor networks describing the right environments of each column in this 2D tensor network. See compute_y_environments() for full details.
- compute_x_environments(max_bond=None, *, cutoff=1e-10, canonize=True, dense=False, mode='mps', layer_tags=