quimb.tensor.tensor_2d

Classes and algorithms related to 2D tensor networks.

Attributes

Classes

TensorNetworkGen

A tensor network which notionally has a single tensor per 'site',

TensorNetworkGenOperator

A tensor network which notionally has a single tensor and two outer

TensorNetworkGenVector

A tensor network which notionally has a single tensor and outer index

Tensor

A labelled, tagged n-dimensional array. The index labels are used

TensorNetwork

A collection of (as yet uncontracted) Tensors.

oset

An ordered set which stores elements as the keys of dict (ordered as of

Rotator2D

Object for rotating coordinates and various contraction functions so

TensorNetwork2D

Mixin class for tensor networks with a square lattice two-dimensional

TensorNetwork2DVector

Mixin class for a 2D square lattice vector TN, i.e. one with a single

TensorNetwork2DOperator

Mixin class for a 2D square lattice TN operator, i.e. one with both

TensorNetwork2DFlat

Mixin class for a 2D square lattice tensor network with a single tensor

PEPS

Projected Entangled Pair States object (2D):

PEPO

Projected Entangled Pair Operator object:

Functions

swap([dim, dtype])

The SWAP operator acting on subsystems of dimension dim.

randn([shape, dtype, scale, loc, num_threads, seed, dist])

Fast multithreaded generation of random normally distributed data.

seed_rand(seed)

Seed the random number generators, by instantiating a new set of bit

check_opt(name, value, valid)

Check whether value takes one of valid options, and raise an

deprecated(fn, old_name, new_name)

Mark a function as deprecated, and indicate the new name.

ensure_dict(x)

Make sure x is a dict, creating an empty one if x is None.

pairwise(iterable)

Iterate over each pair of neighbours in iterable.

print_multi_line(*lines[, max_width])

Print multiple lines, with a maximum width.

maybe_factor_gate_into_tensor(G, phys_dim, nsites, where)

tensor_network_ag_sum(tna, tnb[, site_tags, negate, ...])

Add two tensor networks with arbitrary, but matching, geometries. They

tensor_network_apply_op_vec(A, x[, which_A, contract, ...])

Apply a general tensor network representing an operator (has

bonds(t1, t2)

Get any indices connecting the Tensor(s) or TensorNetwork(s) t1

bonds_size(t1, t2)

Get the size of the bonds linking tensors or tensor networks t1 and

oset_union(xs)

Non-variadic ordered set union taking any sequence of iterables.

rand_uuid([base])

Return a guaranteed unique, shortish identifier, optionally appended

tags_to_oset(tags)

Parse a tags argument into an ordered set.

tensor_contract(*tensors[, output_inds, optimize, ...])

Contract a collection of tensors into a scalar or tensor, automatically

manhattan_distance(coo_a, coo_b)

nearest_neighbors(coo)
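These two lattice helpers have straightforward semantics; a minimal sketch (the neighbour ordering here is an assumption, not necessarily quimb's):

```python
def manhattan_distance(coo_a, coo_b):
    # sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(coo_a, coo_b))

def nearest_neighbors(coo):
    # the four adjacent sites on a square lattice (ordering is an assumption)
    i, j = coo
    return ((i - 1, j), (i, j - 1), (i, j + 1), (i + 1, j))
```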

gen_2d_bonds(Lx, Ly[, steppers, coo_filter, cyclic])

Convenience function for tiling pairs of bond coordinates on a 2D
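For an open-boundary lattice with the default steppers, the generated bonds can be sketched as follows (a simplified reimplementation, ignoring the steppers, coo_filter and cyclic options):

```python
from itertools import product

def gen_2d_bonds_sketch(Lx, Ly):
    # yield each nearest-neighbour bond once, open boundaries
    for i, j in product(range(Lx), range(Ly)):
        if i + 1 < Lx:
            yield (i, j), (i + 1, j)  # bond to the site below
        if j + 1 < Ly:
            yield (i, j), (i, j + 1)  # bond to the site to the right
```

An Lx-by-Ly open lattice has Lx * (Ly - 1) horizontal and (Lx - 1) * Ly vertical bonds.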

gen_2d_plaquette(coo0, steps)

Generate a plaquette at site coo0 by stepping first in steps and

gen_2d_plaquettes(Lx, Ly, tiling)

Generate a tiling of plaquettes in a square 2D lattice.

gen_2d_strings(Lx, Ly)

Generate all length-wise strings in a square 2D lattice.

parse_boundary_sequence(sequence)

Ensure sequence is a tuple of boundary sequence strings from

is_lone_coo(where)

Check if where has been specified as a single coordinate pair.

gate_string_split_(TG, where, string, original_ts, ...)

gate_string_reduce_split_(TG, where, string, ...)

show_2d(tn_2d[, show_lower, show_upper])

Base function for printing a unicode schematic of flat 2D TNs.

calc_plaquette_sizes(coo_groups[, autogroup])

Find a sequence of plaquette blocksizes that will cover all the terms

plaquette_to_sites(p)

Turn a plaquette ((i0, j0), (di, dj)) into the sites it contains.
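The plaquette convention is concrete enough to sketch directly, assuming row-major ordering of the contained sites:

```python
from itertools import product

def plaquette_to_sites_sketch(p):
    # ((i0, j0), (di, dj)) -> all (i, j) in the di x dj block
    (i0, j0), (di, dj) = p
    return tuple(product(range(i0, i0 + di), range(j0, j0 + dj)))
```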

calc_plaquette_map(plaquettes)

Generate a dictionary of all the coordinate pairs in plaquettes

gen_long_range_path(ij_a, ij_b[, sequence])

Generate a string of coordinates, in order, from ij_a to ij_b.

gen_long_range_swap_path(ij_a, ij_b[, sequence])

Generate the coordinates of a series of swaps that would bring ij_a

swap_path_to_long_range_path(swap_path, ij_a)

Generates the ordered long-range path - a sequence of coordinates - from

get_swap(dp, dtype, backend)

Module Contents

quimb.tensor.tensor_2d.swap(dim=2, dtype=complex, **kwargs)[source]

The SWAP operator acting on subsystems of dimension dim.
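As a dense array this operator maps |i⟩|j⟩ → |j⟩|i⟩; a numpy sketch of the construction (quimb's version additionally handles sparse and other output options):

```python
import numpy as np

def swap_dense(dim=2, dtype=complex):
    # SWAP[(i, j), (k, l)] = delta_{il} delta_{jk}: permute the two
    # subsystem axes of the identity, then flatten back to a matrix
    I = np.eye(dim * dim, dtype=dtype).reshape(dim, dim, dim, dim)
    return I.transpose(0, 1, 3, 2).reshape(dim * dim, dim * dim)
```

So for any vectors a and b, swap_dense(d) @ np.kron(a, b) equals np.kron(b, a).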

quimb.tensor.tensor_2d.randn(shape=(), dtype=float, scale=1.0, loc=0.0, num_threads=None, seed=None, dist='normal')[source]

Fast multithreaded generation of random normally distributed data.

Parameters:
  • shape (tuple[int]) – The shape of the output random array.

  • dtype ({'complex128', 'float64', 'complex64', 'float32'}, optional) – The data-type of the output array.

  • scale (float, optional) – A multiplicative scale for the random numbers.

  • loc (float, optional) – An additive location for the random numbers.

  • num_threads (int, optional) – How many threads to use. If None, decide automatically.

  • dist ({'normal', 'uniform', 'rademacher', 'exp'}, optional) – Type of random number to generate.
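A single-threaded, conceptually equivalent sketch using numpy (the uniform range and the handling of complex dtypes here are assumptions, not quimb's exact behaviour):

```python
import numpy as np

def randn_sketch(shape=(), dtype=float, scale=1.0, loc=0.0, seed=None, dist="normal"):
    rng = np.random.default_rng(seed)

    def draw():
        if dist == "normal":
            return rng.standard_normal(shape)
        if dist == "uniform":
            return rng.uniform(-1, 1, shape)  # assumed range
        if dist == "rademacher":
            return rng.choice([-1.0, 1.0], shape)
        if dist == "exp":
            return rng.standard_exponential(shape)
        raise ValueError(f"unknown dist: {dist}")

    x = draw()
    if np.issubdtype(np.dtype(dtype), np.complexfloating):
        # independent real and imaginary parts (assumption)
        x = x + 1j * draw()
    return np.asarray(scale * x + loc, dtype=dtype)
```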

quimb.tensor.tensor_2d.seed_rand(seed)[source]

Seed the random number generators, by instantiating a new set of bit generators with a ‘seed sequence’.

quimb.tensor.tensor_2d.check_opt(name, value, valid)[source]

Check whether value takes one of valid options, and raise an informative error if not.

quimb.tensor.tensor_2d.deprecated(fn, old_name, new_name)[source]

Mark a function as deprecated, and indicate the new name.

quimb.tensor.tensor_2d.ensure_dict(x)[source]

Make sure x is a dict, creating an empty one if x is None.

quimb.tensor.tensor_2d.pairwise(iterable)[source]

Iterate over each pair of neighbours in iterable.
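This follows the classic itertools recipe; a sketch:

```python
from itertools import tee

def pairwise_sketch(iterable):
    # yield (x0, x1), (x1, x2), ... over neighbouring elements
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)
```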

quimb.tensor.tensor_2d.print_multi_line(*lines, max_width=None)[source]

Print multiple lines, with a maximum width.

quimb.tensor.tensor_2d.maybe_factor_gate_into_tensor(G, phys_dim, nsites, where)[source]
class quimb.tensor.tensor_2d.TensorNetworkGen(ts=(), *, virtual=False, check_collisions=True)[source]

Bases: quimb.tensor.tensor_core.TensorNetwork

A tensor network which notionally has a single tensor per ‘site’, though these could be labelled arbitrarily and could also be linked in an arbitrary geometry by bonds.

_NDIMS = 1
_EXTRA_PROPS = ('_sites', '_site_tag_id')
_compatible_arbgeom(other)[source]

Check whether self and other represent the same set of sites and are tagged equivalently.

combine(other, *, virtual=False, check_collisions=True)[source]

Combine this tensor network with another, returning a new tensor network. If the two are compatible, cast the resulting tensor network to a TensorNetworkGen instance.

Parameters:
  • other (TensorNetworkGen or TensorNetwork) – The other tensor network to combine with.

  • virtual (bool, optional) – Whether the new tensor network should copy all the incoming tensors (False, the default), or view them as virtual (True).

  • check_collisions (bool, optional) – Whether to check for index collisions between the two tensor networks before combining them. If True (the default), any inner indices that clash will be mangled.

Return type:

TensorNetworkGen or TensorNetwork

property nsites
The total number of sites.
gen_site_coos()[source]

Generate the coordinates of all sites, same as self.sites.

property sites
Tuple of the possible sites in this tensor network.
_get_site_set()[source]

The set of all sites.

gen_sites_present()[source]

Generate the sites which are currently present (e.g. if a local view of a larger tensor network), based on whether their tags are present.

Examples

>>> tn = qtn.TN3D_rand(4, 4, 4, 2)
>>> tn_sub = tn.select_local('I1,2,3', max_distance=1)
>>> list(tn_sub.gen_sites_present())
[(0, 2, 3), (1, 1, 3), (1, 2, 2), (1, 2, 3), (1, 3, 3), (2, 2, 3)]
property site_tag_id
The string specifier for tagging each site of this tensor network.
site_tag(site)[source]

The name of the tag specifying the tensor at site.

retag_sites(new_id, where=None, inplace=False)[source]

Modify the site tags for all or some tensors in this tensor network (without changing the site_tag_id).

Parameters:
  • new_id (str) – A string with a format placeholder to accept a site, e.g. “S{}”.

  • where (None or sequence) – Which sites to update the index labels on. If None (default) all sites.

  • inplace (bool) – Whether to retag in place.

property site_tags
All of the site tags.
property site_tags_present
All of the site tags still present in this tensor network.
retag_all(new_id, inplace=False)[source]

Retag all sites and change the site_tag_id.

retag_all_[source]
_get_site_tag_set()[source]

The oset of all site tags.

filter_valid_site_tags(tags)[source]

Get the valid site tags from tags.

maybe_convert_coo(x)[source]

Check if x is a valid site and convert to the corresponding site tag if so, else return x.

gen_tags_from_coos(coos)[source]

Generate the site tags corresponding to the given coordinates.

_get_tids_from_tags(tags, which='all')[source]

This is the function that lets coordinates such as site be used for many ‘tag’ based functions.

reset_cached_properties()[source]

Reset any cached properties; one should call this when changing the actual geometry of a TN inplace, for example.

align(*args, inplace=False, **kwargs)[source]
align_[source]
__add__(other)[source]
__sub__(other)[source]
__iadd__(other)[source]
__isub__(other)[source]
class quimb.tensor.tensor_2d.TensorNetworkGenOperator(ts=(), *, virtual=False, check_collisions=True)[source]

Bases: TensorNetworkGen

A tensor network which notionally has a single tensor and two outer indices per ‘site’, though these could be labelled arbitrarily and could also be linked in an arbitrary geometry by bonds. By convention, if converted to a dense matrix, the ‘upper’ indices would be on the left and the ‘lower’ indices on the right.

_EXTRA_PROPS = ('_sites', '_site_tag_id', '_upper_ind_id', '_lower_ind_id')
property upper_ind_id
The string specifier for the upper physical indices.
upper_ind(site)[source]

Get the upper physical index name of site.

reindex_upper_sites(new_id, where=None, inplace=False)[source]

Modify the upper site indices for all or some tensors in this operator tensor network (without changing the upper_ind_id).

Parameters:
  • new_id (str) – A string with a format placeholder to accept a site, e.g. “up{}”.

  • where (None or sequence) – Which sites to update the index labels on. If None (default) all sites.

  • inplace (bool) – Whether to reindex in place.

reindex_upper_sites_[source]
property upper_inds
Return a tuple of all upper indices.
property upper_inds_present
Return a tuple of all upper indices still present in the tensor
network.
property lower_ind_id
The string specifier for the lower physical indices.
lower_ind(site)[source]

Get the lower physical index name of site.

reindex_lower_sites(new_id, where=None, inplace=False)[source]

Modify the lower site indices for all or some tensors in this operator tensor network (without changing the lower_ind_id).

Parameters:
  • new_id (str) – A string with a format placeholder to accept a site, e.g. “b{}”.

  • where (None or sequence) – Which sites to update the index labels on. If None (default) all sites.

  • inplace (bool) – Whether to reindex in place.

reindex_lower_sites_[source]
property lower_inds
Return a tuple of all lower indices.
property lower_inds_present
Return a tuple of all lower indices still present in the tensor
network.
to_dense(*inds_seq, to_qarray=False, **contract_opts)[source]

Contract this tensor network ‘operator’ into a dense array.

Parameters:
  • inds_seq (sequence of sequences of str) – How to group the site indices into the dense array. By default, use a single group ordered like sites, but only containing those sites which are still present.

  • to_qarray (bool) – Whether to turn the dense array into a qarray, if the backend would otherwise be 'numpy'.

  • contract_opts – Options to pass to contract().

Return type:

array

to_qarray[source]
phys_dim(site=None, which='upper')[source]

Get the physical dimension of site.

gate_upper_with_op_lazy(A, transpose=False, inplace=False)[source]

Act lazily with the operator tensor network A, which should have matching structure, on this operator tensor network (B), like A @ B. The returned tensor network will have the same structure as this one, but with the operator gated in lazily, i.e. uncontracted.

\[B \rightarrow A B\]

or (if transpose=True):

\[B \rightarrow A^T B\]
Parameters:
  • A (TensorNetworkGenOperator) – The operator tensor network to gate with, or apply to this tensor network.

  • transpose (bool, optional) – Whether to contract the lower or upper indices of A with the upper indices of B. If False (the default), the lower indices of A will be contracted with the upper indices of B; if True, the upper indices of A will be contracted with the upper indices of B, which is like applying the transpose first.

  • inplace (bool, optional) – Whether to perform the gate operation inplace on this tensor network.

Return type:

TensorNetworkGenOperator

gate_upper_with_op_lazy_[source]
gate_lower_with_op_lazy(A, transpose=False, inplace=False)[source]

Act lazily ‘from the right’ with the operator tensor network A, which should have matching structure, on this operator tensor network (B), like B @ A. The returned tensor network will have the same structure as this one, but with the operator gated in lazily, i.e. uncontracted.

\[B \rightarrow B A\]

or (if transpose=True):

\[B \rightarrow B A^T\]
Parameters:
  • A (TensorNetworkGenOperator) – The operator tensor network to gate with, or apply to this tensor network.

  • transpose (bool, optional) – Whether to contract the upper or lower indices of A with the lower indices of this TN. If False (the default), the upper indices of A will be contracted with the lower indices of B; if True, the lower indices of A will be contracted with the lower indices of B, which is like applying the transpose first.

  • inplace (bool, optional) – Whether to perform the gate operation inplace on this tensor network.

Return type:

TensorNetworkGenOperator

gate_lower_with_op_lazy_[source]
gate_sandwich_with_op_lazy(A, inplace=False)[source]

Act lazily with the operator tensor network A, which should have matching structure, on this operator tensor network (B), like \(B \rightarrow A B A^\dagger\). The returned tensor network will have the same structure as this one, but with the operator gated in lazily, i.e. uncontracted.

Parameters:
  • A (TensorNetworkGenOperator) – The operator tensor network to gate with, or apply to this tensor network.

  • inplace (bool, optional) – Whether to perform the gate operation inplace on this tensor

Return type:

TensorNetworkGenOperator

gate_sandwich_with_op_lazy_[source]
class quimb.tensor.tensor_2d.TensorNetworkGenVector(ts=(), *, virtual=False, check_collisions=True)[source]

Bases: TensorNetworkGen

A tensor network which notionally has a single tensor and outer index per ‘site’, though these could be labelled arbitrarily and could also be linked in an arbitrary geometry by bonds.

_EXTRA_PROPS = ('_sites', '_site_tag_id', '_site_ind_id')
property site_ind_id
The string specifier for the physical indices.
site_ind(site)[source]
property site_inds
Return a tuple of all site indices.
property site_inds_present
All of the site inds still present in this tensor network.
reset_cached_properties()[source]

Reset any cached properties; one should call this when changing the actual geometry of a TN inplace, for example.

reindex_sites(new_id, where=None, inplace=False)[source]

Modify the site indices for all or some tensors in this vector tensor network (without changing the site_ind_id).

Parameters:
  • new_id (str) – A string with a format placeholder to accept a site, e.g. “ket{}”.

  • where (None or sequence) – Which sites to update the index labels on. If None (default) all sites.

  • inplace (bool) – Whether to reindex in place.

reindex_sites_[source]
reindex_all(new_id, inplace=False)[source]

Reindex all physical sites and change the site_ind_id.

reindex_all_[source]
gen_inds_from_coos(coos)[source]

Generate the site inds corresponding to the given coordinates.

phys_dim(site=None)[source]

Get the physical dimension of site, defaulting to the first site if not specified.

to_dense(*inds_seq, to_qarray=False, to_ket=None, **contract_opts)[source]

Contract this tensor network ‘vector’ into a dense array. By default, turn into a ‘ket’ qarray, i.e. column vector of shape (d, 1).

Parameters:
  • inds_seq (sequence of sequences of str) – How to group the site indices into the dense array. By default, use a single group ordered like sites, but only containing those sites which are still present.

  • to_qarray (bool) – Whether to turn the dense array into a qarray, if the backend would otherwise be 'numpy'.

  • to_ket (None or str) – Whether to reshape the dense array into a ket (shape (d, 1) array). If None (default), do this only if the inds_seq is not supplied.

  • contract_opts – Options to pass to contract().

Return type:

array

to_qarray[source]
gate_with_op_lazy(A, transpose=False, inplace=False, **kwargs)[source]

Act lazily with the operator tensor network A, which should have matching structure, on this vector/state tensor network, like A @ x. The returned tensor network will have the same structure as this one, but with the operator gated in lazily, i.e. uncontracted.

\[| x \rangle \rightarrow A | x \rangle\]

or (if transpose=True):

\[| x \rangle \rightarrow A^T | x \rangle\]
Parameters:
  • A (TensorNetworkGenOperator) – The operator tensor network to gate with, or apply to this tensor network.

  • transpose (bool, optional) – Whether to contract the lower or upper indices of A with the site indices of x. If False (the default), the lower indices of A will be contracted with the site indices of x; if True, the upper indices of A will be contracted with the site indices of x, which is like applying A.T @ x.

  • inplace (bool, optional) – Whether to perform the gate operation inplace on this tensor network.

Return type:

TensorNetworkGenVector

gate_with_op_lazy_[source]
gate(G, where, contract=False, tags=None, propagate_tags=False, info=None, inplace=False, **compress_opts)[source]

Apply a gate to this vector tensor network at sites where. This is essentially a wrapper around gate_inds(), except that where can be specified as a list of sites, and tags can optionally be propagated intelligently to the new gate tensor.

\[| \psi \rangle \rightarrow G_\mathrm{where} | \psi \rangle\]
Parameters:
  • G (array_like) – The gate array to apply, should match or be factorable into the shape (*phys_dims, *phys_dims).

  • where (node or sequence[node]) – The sites to apply the gate to.

  • contract ({False, True, 'split', 'reduce-split', 'split-gate', 'swap-split-gate', 'auto-split-gate'}, optional) – How to apply the gate, see gate_inds().

  • tags (str or sequence of str, optional) – Tags to add to the new gate tensor.

  • propagate_tags ({False, True, 'register', 'sites'}, optional) –

    Whether to propagate tags to the new gate tensor:

    - False: no tags are propagated
    - True: all tags are propagated
    - 'register': only site tags corresponding to ``where`` are
      added.
    - 'sites': all site tags on the current sites are propagated,
      resulting in a lightcone-like tagging.
    

  • info (None or dict, optional) – Used to store extra optional information such as the singular values if not absorbed.

  • inplace (bool, optional) – Whether to perform the gate operation inplace on the tensor network or not.

  • compress_opts – Supplied to tensor_split() for any contract methods that involve splitting. Ignored otherwise.

Return type:

TensorNetworkGenVector

gate_[source]
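The shape convention for G can be illustrated densely: a two-site gate on physical dimension 2 is a (2, 2, 2, 2)-shaped array, or a (4, 4) matrix factorable into that shape. A numpy sketch of applying such a gate to two sites of a small dense state (a hypothetical standalone example, not the quimb API):

```python
import numpy as np

# a 3-site state with physical dimension 2
psi = np.random.default_rng(0).standard_normal((2, 2, 2))

# SWAP as a two-site gate, shaped (*phys_dims, *phys_dims) = (2, 2, 2, 2)
G = np.eye(4).reshape(2, 2, 2, 2).transpose(0, 1, 3, 2)

# contract the gate's lower indices with sites 0 and 1 of psi
psi_out = np.einsum("abij,ijc->abc", G, psi)
```

Since the gate is SWAP, psi_out is psi with its first two site indices exchanged.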
gate_simple_(G, where, gauges, renorm=True, **gate_opts)[source]

Apply a gate to this vector tensor network at sites where, using simple update style gauging of the tensors first, as supplied in gauges. The new singular values for the bond are reinserted into gauges.

Parameters:
  • G (array_like) – The gate to be applied.

  • where (node or sequence[node]) – The sites to apply the gate to.

  • gauges (dict[str, array_like]) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be used.

  • renorm (bool, optional) – Whether to renormalise the singular values after the gate is applied, before reinserting them into gauges.

gate_fit_local_(G, where, max_distance=0, fillin=0, gauges=None, **fit_opts)[source]
local_expectation_cluster(G, where, normalized=True, max_distance=0, fillin=False, gauges=None, optimize='auto', max_bond=None, rehearse=False, **contract_opts)[source]

Approximately compute a single local expectation value of the gate G at sites where, either treating the environment beyond max_distance as the identity, or using simple update style bond gauges as supplied in gauges.

This selects a local neighbourhood of tensors up to distance max_distance away from where, then traces over dangling bonds after potentially inserting the bond gauges, to form an approximate version of the reduced density matrix.

\[\langle \psi | G | \psi \rangle \approx \frac{ \mathrm{Tr} [ G \tilde{\rho}_\mathrm{where} ] }{ \mathrm{Tr} [ \tilde{\rho}_\mathrm{where} ] }\]

assuming normalized==True.

Parameters:
  • G (array_like) – The gate to compute the expectation of.

  • where (node or sequence[node]) – The sites to compute the expectation at.

  • normalized (bool, optional) – Whether to locally normalize the result, i.e. divide by the expectation value of the identity.

  • max_distance (int, optional) – The maximum graph distance to include tensors neighboring where when computing the expectation. The default 0 means only the tensors at sites where are used.

  • fillin (bool or int, optional) – When selecting the local tensors, whether and how many times to ‘fill-in’ corner tensors attached multiple times to the local region. On a lattice this fills in the corners. See select_local().

  • gauges (dict[str, array_like], optional) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be used.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, when exactly contracting the local tensors.

  • max_bond (None or int, optional) – If specified, use compressed contraction.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computations or not:

    - False: perform the computation.
    - 'tn': return the tensor networks of each local expectation,
      without running the path optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree`` for each local expectation.
    - True: run the path optimizer and return the ``PathInfo`` for
      each local expectation.
    

Returns:

expectation

Return type:

float

local_expectation_simple[source]
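In the max_distance=0 case the approximation reduces to tracing out everything but where; a dense numpy sketch of the Tr[G ρ̃] / Tr[ρ̃] estimate on a hypothetical tiny state:

```python
import numpy as np

rng = np.random.default_rng(1)
# site of interest (dim 2) fused with its environment (dim 3)
psi = rng.standard_normal((2, 3))

# approximate reduced density matrix: trace out the environment
rho = psi @ psi.conj().T

# expectation of Pauli-Z at the site, locally normalized
Z = np.diag([1.0, -1.0])
expec = np.trace(Z @ rho) / np.trace(rho)
```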
compute_local_expectation_cluster(terms, *, max_distance=0, fillin=False, normalized=True, gauges=None, optimize='auto', max_bond=None, return_all=False, rehearse=False, executor=None, progbar=False, **contract_opts)[source]

Compute all local expectations of the given terms, either treating the environment beyond max_distance as the identity, or using simple update style bond gauges as supplied in gauges.

This selects a local neighbourhood of tensors up to distance max_distance away from each term’s sites, then traces over dangling bonds after potentially inserting the bond gauges, to form an approximate version of the reduced density matrix.

\[\sum_\mathrm{i} \langle \psi | G_\mathrm{i} | \psi \rangle \approx \sum_\mathrm{i} \frac{ \mathrm{Tr} [ G_\mathrm{i} \tilde{\rho}_\mathrm{i} ] }{ \mathrm{Tr} [ \tilde{\rho}_\mathrm{i} ] }\]

assuming normalized==True.

Parameters:
  • terms (dict[node or (node, node), array_like]) – The terms to compute the expectation of, with keys being the sites and values being the local operators.

  • max_distance (int, optional) – The maximum graph distance to include tensors neighboring each term’s sites when computing the expectation. The default 0 means only the tensors at sites of each term are used.

  • fillin (bool or int, optional) – When selecting the local tensors, whether and how many times to ‘fill-in’ corner tensors attached multiple times to the local region. On a lattice this fills in the corners. See select_local().

  • normalized (bool, optional) – Whether to locally normalize the result, i.e. divide by the expectation value of the identity. This implies that a different normalization factor is used for each term.

  • gauges (dict[str, array_like], optional) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be used.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, when exactly contracting the local tensors.

  • max_bond (None or int, optional) – If specified, use compressed contraction.

  • return_all (bool, optional) – Whether to return all results, or just the summed expectation.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computations or not:

    - False: perform the computation.
    - 'tn': return the tensor networks of each local expectation,
      without running the path optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree`` for each local expectation.
    - True: run the path optimizer and return the ``PathInfo`` for
      each local expectation.
    

  • executor (Executor, optional) – If supplied compute the terms in parallel using this executor.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Supplied to contract().

Returns:

expecs – If return_all==False, return the summed expectation value of the given terms. Otherwise, return a dictionary mapping each term’s location to the expectation value.

Return type:

float or dict[node or (node, node), float]

compute_local_expectation_simple[source]
local_expectation_exact(G, where, optimize='auto-hq', normalized=True, rehearse=False, **contract_opts)[source]

Compute the local expectation of operator G at site(s) where by exactly contracting the full overlap tensor network.

compute_local_expectation_exact(terms, optimize='auto-hq', *, normalized=True, return_all=False, rehearse=False, executor=None, progbar=False, **contract_opts)[source]

Compute the local expectations of many operators, by exactly contracting the full overlap tensor network.

Parameters:
  • terms (dict[node or (node, node), array_like]) – The terms to compute the expectation of, with keys being the sites and values being the local operators.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, when exactly contracting the full tensor network.

  • normalized (bool, optional) – Whether to normalize the result.

  • return_all (bool, optional) – Whether to return all results, or just the summed expectation.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computations or not:

    - False: perform the computation.
    - 'tn': return the tensor networks of each local expectation,
      without running the path optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree`` for each local expectation.
    - True: run the path optimizer and return the ``PathInfo`` for
      each local expectation.
    

  • executor (Executor, optional) – If supplied compute the terms in parallel using this executor.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Supplied to contract().

Returns:

expecs – If return_all==False, return the summed expectation value of the given terms. Otherwise, return a dictionary mapping each term’s location to the expectation value.

Return type:

float or dict[node or (node, node), float]

partial_trace(keep, max_bond, optimize, flatten=True, reduce=False, normalized=True, symmetrized='auto', rehearse=False, method='contract_compressed', **contract_compressed_opts)[source]

Partially trace this tensor network state, keeping only the sites in keep, using compressed contraction.

Parameters:
  • keep (iterable of hashable) – The sites to keep.

  • max_bond (int) – The maximum bond dimensions to use while compressed contracting.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, should specifically generate contractions paths designed for compressed contraction.

  • flatten ({False, True, 'all'}, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction; whilst this makes the TN generally more complex to contract, the accuracy is usually improved. If 'all', also flatten the tensors in keep.

  • reduce (bool, optional) – Whether to first ‘pull’ the physical indices off their respective tensors using QR reduction. Experimental.

  • normalized (bool, optional) – Whether to normalize the reduced density matrix at the end.

  • symmetrized ({'auto', True, False}, optional) – Whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computation or not:

    - False: perform the computation.
    - 'tn': return the tensor network without running the path
      optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree``.
    - True: run the path optimizer and return the ``PathInfo``.
    

  • contract_compressed_opts (dict, optional) – Additional keyword arguments to pass to contract_compressed().

Returns:

rho – The reduced density matrix of sites in keep.

Return type:

array_like

local_expectation(G, where, max_bond, optimize, flatten=True, normalized=True, symmetrized='auto', reduce=False, rehearse=False, **contract_compressed_opts)[source]

Compute the local expectation of operator G at site(s) where by approximately contracting the full overlap tensor network.

Parameters:
  • G (array_like) – The local operator to compute the expectation of.

  • where (node or sequence of nodes) – The sites to compute the expectation for.

  • max_bond (int) – The maximum bond dimensions to use while compressed contracting.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, should specifically generate contractions paths designed for compressed contraction.

  • method ({'rho', 'rho-reduced'}, optional) – The method to use to compute the expectation value.

  • flatten (bool, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction; whilst this makes the TN generally more complex to contract, the accuracy is usually much improved.

  • normalized (bool, optional) – If computing via partial_trace, whether to normalize the reduced density matrix at the end.

  • symmetrized ({'auto', True, False}, optional) – If computing via partial_trace, whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computation or not:

    - False: perform the computation.
    - 'tn': return the tensor network without running the path
      optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree``.
    - True: run the path optimizer and return the ``PathInfo``.
    

  • contract_compressed_opts (dict, optional) – Additional keyword arguments to pass to contract_compressed().

Returns:

expec

Return type:

float

compute_local_expectation(terms, max_bond, optimize, *, flatten=True, normalized=True, symmetrized='auto', reduce=False, return_all=False, rehearse=False, executor=None, progbar=False, **contract_compressed_opts)[source]

Compute the local expectations of many local operators, by approximately contracting the full overlap tensor network.

Parameters:
  • terms (dict[node or (node, node), array_like]) – The terms to compute the expectation of, with keys being the sites and values being the local operators.

  • max_bond (int) – The maximum bond dimension to use during contraction.

  • optimize (str or PathOptimizer) – The compressed contraction path optimizer to use.

  • method ({'rho', 'rho-reduced'}, optional) –

    The method to use to compute the expectation value.

    • ’rho’: compute the expectation value via the reduced density matrix.

    • ’rho-reduced’: compute the expectation value via the reduced density matrix, having reduced the physical indices onto the bonds first.

  • flatten (bool, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction. Whilst this makes the TN generally more complex to contract, the accuracy can often be much improved.

  • normalized (bool, optional) – Whether to locally normalize the result.

  • symmetrized ({'auto', True, False}, optional) – Whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.

  • return_all (bool, optional) – Whether to return all results, or just the summed expectation. If rehearse is not False, this is ignored and a dict is always returned.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computations or not:

    - False: perform the computation.
    - 'tn': return the tensor networks of each local expectation,
      without running the path optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree`` for each local expectation.
    - True: run the path optimizer and return the ``PathInfo`` for
      each local expectation.
    

  • executor (Executor, optional) – If supplied compute the terms in parallel using this executor.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_compressed_opts – Supplied to contract_compressed().

Returns:

expecs – If return_all==False, return the summed expectation value of the given terms. Otherwise, return a dictionary mapping each term’s location to the expectation value.

Return type:

float or dict[node or (node, node), float]

compute_local_expectation_rehearse[source]
compute_local_expectation_tn[source]
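The terms argument maps sites to operators, and the results are summed unless return_all=True. A minimal dense sketch of that loop, assuming single-site terms on a 3-site state (illustrative only, not the quimb API):

```python
import numpy as np

# Dense analogue of compute_local_expectation: sum <psi|G|psi> over a
# dict of single-site terms, mirroring the terms={site: operator} input.
rng = np.random.default_rng(1)

psi = rng.normal(size=(2, 2, 2))
psi /= np.linalg.norm(psi)

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
terms = {0: Z, 2: X}

expecs = {}
for site, G in terms.items():
    # contract G into the given site's axis, restoring the axis order
    G_psi = np.moveaxis(np.tensordot(G, psi, axes=[[1], [site]]), 0, site)
    expecs[site] = np.tensordot(psi, G_psi, axes=3)

total = sum(expecs.values())   # analogue of return_all=False
```

The real method builds one compressed overlap contraction per term (optionally in parallel via executor), rather than acting on a dense array.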
quimb.tensor.tensor_2d.tensor_network_ag_sum(tna, tnb, site_tags=None, negate=False, compress=False, inplace=False, **compress_opts)[source]

Add two tensor networks with arbitrary, but matching, geometries. They should have the same site tags, with a single tensor per site and sites connected by a single index only (but the name of this index can differ in the two TNs).

Parameters:
  • tna (TensorNetworkGen) – The first tensor network to add.

  • tnb (TensorNetworkGen) – The second tensor network to add.

  • site_tags (None or sequence of str, optional) – Which tags to consider as ‘sites’, by default uses tna.site_tags.

  • negate (bool, optional) – Whether to negate the second tensor network before adding.

  • compress (bool, optional) – Whether to compress the resulting tensor network, by calling the compress method with the given options.

  • inplace (bool, optional) – Whether to modify the first tensor network inplace.

Returns:

The resulting tensor network.

Return type:

TensorNetworkGen
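The addition works by placing each pair of site tensors in a direct sum along their bond indices, so bond dimensions add while physical indices keep their size. A numpy sketch for two networks of two sites sharing a single bond (variable names are illustrative, not quimb API):

```python
import numpy as np

# Sketch of tensor network addition: each site tensor is stacked
# block-wise along its bond axes, so a bond of size 2 in each network
# becomes a single bond of size 4 in the sum.
rng = np.random.default_rng(2)

# site tensors (phys, bond) and (bond, phys) for networks a and b
a1, a2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
b1, b2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))

s1 = np.concatenate([a1, b1], axis=1)   # stack along the bond axis
s2 = np.concatenate([a2, b2], axis=0)

# contracting the summed bond reproduces the sum of the two networks
summed = s1 @ s2
```

This is why each site must be a single tensor connected to its neighbours by a single index: the direct sum is taken bond by bond.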

quimb.tensor.tensor_2d.tensor_network_apply_op_vec(A, x, which_A='lower', contract=False, fuse_multibonds=True, compress=False, inplace=False, inplace_A=False, **compress_opts)[source]

Apply a general tensor network representing an operator (has upper_ind_id and lower_ind_id) to a tensor network representing a vector (has site_ind_id), by contracting each pair of tensors at each site then compressing the resulting tensor network. How the compression takes place is determined by the type of tensor network passed in. The returned tensor network has the same site indices as x, and it is the lower_ind_id of A that is contracted.

This is like performing A.to_dense() @ x.to_dense(), or the transpose thereof, depending on the value of which_A.

Parameters:
  • A (TensorNetworkGenOperator) – The tensor network representing the operator.

  • x (TensorNetworkGenVector) – The tensor network representing the vector.

  • which_A ({"lower", "upper"}, optional) – Whether to contract the lower or upper indices of A with the site indices of x.

  • contract (bool) – Whether to contract the tensors at each site after applying the operator, yielding a single tensor at each site.

  • fuse_multibonds (bool) – If contract=True, whether to fuse any multibonds after contracting the tensors at each site.

  • compress (bool) – Whether to compress the resulting tensor network.

  • inplace (bool) – Whether to modify x, the input vector tensor network inplace.

  • inplace_A (bool) – Whether to modify A, the operator tensor network inplace.

  • compress_opts – Options to pass to tn.compress, where tn is the resulting tensor network, if compress=True.

Returns:

The same type as x.

Return type:

TensorNetworkGenVector
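With all indices flattened to dense arrays, the which_A option corresponds to choosing between the two matrix-vector products mentioned above (a sketch, not quimb code):

```python
import numpy as np

# Dense analogue of tensor_network_apply_op_vec: which_A='lower' acts
# like A @ x, while which_A='upper' acts like the transpose, A.T @ x.
rng = np.random.default_rng(3)
A = rng.normal(size=(8, 8))   # operator with upper/lower indices flattened
x = rng.normal(size=8)        # vector with site indices flattened

y_lower = A @ x     # contract the lower indices of A with x's site indices
y_upper = A.T @ x   # contract the upper indices instead
```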

class quimb.tensor.tensor_2d.Tensor(data=1.0, inds=(), tags=None, left_inds=None)[source]

A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations. The tags are used to identify the tensor within networks, and are combined when tensors are contracted together.

Parameters:
  • data (numpy.ndarray) – The n-dimensional data.

  • inds (sequence of str) – The index labels for each dimension. Must match the number of dimensions of data.

  • tags (sequence of str, optional) – Tags with which to identify and group this tensor. These will be converted into a oset.

  • left_inds (sequence of str, optional) – Which, if any, indices to group as ‘left’ indices of an effective matrix. This can be useful, for example, when automatically applying unitary constraints to impose a certain flow on a tensor network but at the atomistic (Tensor) level.

Examples

Basic construction:

>>> from quimb import randn
>>> from quimb.tensor import Tensor
>>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'})
>>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'})

Indices are automatically aligned, and tags combined, when contracting:

>>> X @ Y
Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'})
__slots__ = ('_data', '_inds', '_tags', '_left_inds', '_owners')
_set_data(data)[source]
_set_inds(inds)[source]
_set_tags(tags)[source]
_set_left_inds(left_inds)[source]
get_params()[source]

A simple function that returns the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

set_params(params)[source]

A simple function that sets the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

copy(deep=False, virtual=False)[source]

Copy this tensor.

Note

By default (deep=False), the underlying array will not be copied.

Parameters:
  • deep (bool, optional) – Whether to copy the underlying data as well.

  • virtual (bool, optional) – To conveniently mimic the behaviour of taking a virtual copy of tensor network, this simply returns self.

__copy__[source]
property data
property inds
property tags
property left_inds
check()[source]

Do some basic diagnostics on this tensor, raising errors if something is wrong.

property owners
add_owner(tn, tid)[source]

Add tn as owner of this Tensor - its tag and ind maps will be updated whenever this tensor is retagged or reindexed.

remove_owner(tn)[source]

Remove TensorNetwork tn as an owner of this Tensor.

check_owners()[source]

Check if this tensor is ‘owned’ by any alive TensorNetworks. Also trim any weakrefs to dead TensorNetworks.

_apply_function(fn)[source]
modify(**kwargs)[source]

Overwrite the data of this tensor in place.

Parameters:
  • data (array, optional) – New data.

  • apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.

  • inds (sequence of str, optional) – New tuple of indices.

  • tags (sequence of str, optional) – New tags.

  • left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.

apply_to_arrays(fn)[source]

Apply the function fn to the underlying data array(s). This is meant for changing how the raw arrays are backed (e.g. converting between dtypes or libraries) but not their ‘numerical meaning’.

isel(selectors, inplace=False)[source]

Select specific values for some dimensions/indices of this tensor, thereby removing them. Analogous to X[:, :, 3, :, :] with arrays. The indices to select from can be specified either by integer, in which case the corresponding index is removed, or by a slice.

Parameters:
  • selectors (dict[str, int], dict[str, slice]) – Mapping of index(es) to which value to take.

  • inplace (bool, optional) – Whether to select inplace or not.

Return type:

Tensor

Examples

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.isel({'b': -1})
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())
isel_[source]
add_tag(tag)[source]

Add a tag or multiple tags to this tensor. Unlike self.tags.add this also updates any TensorNetwork objects viewing this Tensor.

expand_ind(ind, size, mode=None, rand_strength=None, rand_dist='normal')[source]

Inplace increase the size of the dimension of ind, the new array entries will be filled with zeros by default.

Parameters:
  • name (str) – Name of the index to expand.

  • size (int, optional) – Size of the expanded index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then choose between zeros and random depending on whether rand_strength is non-zero.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.
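A numpy sketch of the default mode='zeros' behaviour, enlarging one named dimension and zero-filling the new entries (illustrative only):

```python
import numpy as np

# Analogue of expand_ind with mode='zeros': grow index 'b' (axis 1)
# from size 3 to size 5, filling the new entries with zeros.
rng = np.random.default_rng(4)
T = rng.normal(size=(2, 3))

expanded = np.zeros((2, 5))
expanded[:, :3] = T           # old entries preserved, new entries zero
```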

new_ind(name, size=1, axis=0, mode=None, rand_strength=None, rand_dist='normal')[source]

Inplace add a new index - a named dimension. If size is specified to be greater than one then the new array entries will be filled with zeros.

Parameters:
  • name (str) – Name of the new index.

  • size (int, optional) – Size of the new index.

  • axis (int, optional) – Position of the new index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then choose between zeros and random depending on whether rand_strength is non-zero.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.

new_bond[source]
new_ind_with_identity(name, left_inds, right_inds, axis=0)[source]

Inplace add a new index, where the newly stacked array entries form the identity from left_inds to right_inds. Selecting 0 or 1 for the new index is thus like ‘turning off’ this tensor if viewed as an operator.

Parameters:
  • name (str) – Name of the new index.

  • left_inds (tuple[str]) – Names of the indices forming the left hand side of the operator.

  • right_inds (tuple[str]) – Names of the indices forming the right hand side of the operator. The dimensions of these must match those of left_inds.

  • axis (int, optional) – Position of the new index.

new_ind_pair_with_identity(new_left_ind, new_right_ind, d, inplace=False)[source]

Expand this tensor with two new indices of size d, by taking an (outer) tensor product with the identity operator. The two new indices are added as axes at the start of the tensor.

Parameters:
  • new_left_ind (str) – Name of the new left index.

  • new_right_ind (str) – Name of the new right index.

  • d (int) – Size of the new indices.

  • inplace (bool, optional) – Whether to perform the expansion inplace.

Return type:

Tensor

new_ind_pair_with_identity_[source]
conj(inplace=False)[source]

Conjugate this tensor’s data (does nothing to indices).

conj_[source]
property H
Conjugate this tensor’s data (does nothing to indices).
property shape
The size of each dimension.
property ndim
The number of dimensions.
property size
The total number of array elements.
property dtype
The data type of the array elements.
property backend
The backend inferred from the data.
iscomplex()[source]
astype(dtype, inplace=False)[source]

Change the type of this tensor to dtype.

astype_[source]
max_dim()[source]

Return the maximum size of any dimension, or 1 if scalar.

ind_size(ind)[source]

Return the size of dimension corresponding to ind.

inds_size(inds)[source]

Return the total size of dimensions corresponding to inds.

shared_bond_size(other)[source]

Get the total size of the shared index(es) with other.

inner_inds()[source]

Get all indices that appear on two or more tensors.

transpose(*output_inds, inplace=False)[source]

Transpose this tensor - permuting the order of both the data and the indices. This operation is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Note that to compute the traditional ‘transpose’ of an operator within a contraction, for example, you would just use reindexing, not this.

Parameters:
  • output_inds (sequence of str) – The desired output sequence of indices.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor

transpose_[source]
transpose_like(other, inplace=False)[source]

Transpose this tensor to match the indices of other, allowing for one index to be different. E.g. if self.inds = ('a', 'b', 'c', 'x') and other.inds = ('b', 'a', 'd', 'c') then ‘x’ will be aligned with ‘d’ and the output inds will be ('b', 'a', 'x', 'c')

Parameters:
  • other (Tensor) – The tensor to match.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor

See also

transpose

transpose_like_[source]
moveindex(ind, axis, inplace=False)[source]

Move the index ind to position axis. Like transpose, this permutes the order of both the data and the indices and is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Parameters:
  • ind (str) – The index to move.

  • axis (int) – The new position to move ind to. Can be negative.

  • inplace (bool, optional) – Whether to perform the move inplace or not.

Return type:

Tensor

moveindex_[source]
item()[source]

Return the scalar value of this tensor, if it has a single element.

trace(left_inds, right_inds, preserve_tensor=False, inplace=False)[source]

Trace index or indices left_inds with right_inds, removing them.

Parameters:
  • left_inds (str or sequence of str) – The left indices to trace, order matching right_inds.

  • right_inds (str or sequence of str) – The right indices to trace, order matching left_inds.

  • preserve_tensor (bool, optional) – If True, a tensor will be returned even if no indices remain.

  • inplace (bool, optional) – Perform the trace inplace.

Returns:

z

Return type:

Tensor or scalar
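In einsum terms, tracing pairs up the named axes and sums their diagonal. A sketch for a tensor with inds ('a', 'b', 'c'), tracing 'a' with 'c' (names illustrative):

```python
import numpy as np

# Analogue of trace(left_inds='a', right_inds='c'): pair up those two
# axes and sum their diagonal, leaving only index 'b'.
rng = np.random.default_rng(5)
T = rng.normal(size=(3, 4, 3))

z = np.einsum('iji->j', T)    # remaining index 'b' survives
```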

sum_reduce(ind, inplace=False)[source]

Sum over index ind, removing it from this tensor.

Parameters:
  • ind (str) – The index to sum over.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

sum_reduce_[source]
vector_reduce(ind, v, inplace=False)[source]

Contract the vector v with the index ind of this tensor, removing it.

Parameters:
  • ind (str) – The index to contract.

  • v (array_like) – The vector to contract with.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

vector_reduce_[source]
collapse_repeated(inplace=False)[source]

Take the diagonals of any repeated indices, such that each index only appears once.

collapse_repeated_[source]
contract(*others, output_inds=None, **opts)[source]
direct_product(other, sum_inds=(), inplace=False)[source]
direct_product_[source]
split(*args, **kwargs)[source]
compute_reduced_factor(side, left_inds, right_inds, **split_opts)[source]
distance(other, **contract_opts)[source]
distance_normalized[source]
gate(G, ind, preserve_inds=True, inplace=False)[source]

Gate this tensor - contract a matrix into one of its indices without changing its indices. Unlike contract, G is a raw array and the tensor remains with the same set of indices.

Parameters:
  • G (2D array_like) – The matrix to gate the tensor index with.

  • ind (str) – Which index to apply the gate to.

Return type:

Tensor

Examples

Create a random tensor of 4 qubits:

>>> t = qtn.rand_tensor(
...    shape=[2, 2, 2, 2],
...    inds=['k0', 'k1', 'k2', 'k3'],
... )

Create another tensor with an X gate applied to qubit 2:

>>> Gt = t.gate(qu.pauli('X'), 'k2')

The contraction of these two tensors is now the expectation of that operator:

>>> t.H @ Gt
-4.108910576149794
gate_[source]
singular_values(left_inds, method='svd')[source]

Return the singular values associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensor’s indices that defines ‘left’.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Returns:

The singular values.

Return type:

1d-array
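Computing singular values amounts to grouping left_inds into the rows of a matrix and taking its SVD spectrum. A numpy sketch with left_inds = ('a', 'b') on a tensor with inds ('a', 'b', 'c') (illustrative, not the quimb call):

```python
import numpy as np

# Analogue of singular_values(left_inds=('a', 'b')): reshape the
# left group into matrix rows, then take the SVD spectrum.
rng = np.random.default_rng(6)
T = rng.normal(size=(2, 3, 4))

mat = T.reshape(2 * 3, 4)
s = np.linalg.svd(mat, compute_uv=False)
```

The squared singular values sum to the squared Frobenius norm of the tensor, which connects this method to norm() and entropy() below.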

entropy(left_inds, method='svd')[source]

Return the entropy associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensor’s indices that defines ‘left’.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Return type:

float

retag(retag_map, inplace=False)[source]

Rename the tags of this tensor, optionally, in-place.

Parameters:
  • retag_map (dict-like) – Mapping of pairs {old_tag: new_tag, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed tags will be returned.

retag_[source]
reindex(index_map, inplace=False)[source]

Rename the indices of this tensor, optionally in-place.

Parameters:
  • index_map (dict-like) – Mapping of pairs {old_ind: new_ind, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed inds will be returned.

reindex_[source]
fuse(fuse_map, inplace=False)[source]

Combine groups of indices into single indices.

Parameters:

fuse_map (dict_like or sequence of tuples.) – Mapping like: {new_ind: sequence of existing inds, ...} or an ordered mapping like [(new_ind_1, old_inds_1), ...] in which case the output tensor’s fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused.

Returns:

The transposed, reshaped and re-labeled tensor.

Return type:

Tensor

fuse_[source]
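Fusing is a transpose followed by a reshape. A numpy sketch of fusing ('c', 'a') of a tensor with inds ('a', 'b', 'c') into a new index 'd', which lands at axis 0 (the minimum axis of 'a' and 'c'); names are illustrative:

```python
import numpy as np

# Analogue of fuse({'d': ('c', 'a')}) on inds ('a', 'b', 'c'):
# move the fused axes together in the given order, then merge them.
rng = np.random.default_rng(7)
T = rng.normal(size=(2, 3, 4))   # inds ('a', 'b', 'c')

fused = np.transpose(T, (2, 0, 1)).reshape(4 * 2, 3)   # inds ('d', 'b')
```

unfuse with the matching shape_map is the inverse operation: reshape the merged axis back out, then transpose the axes home.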
unfuse(unfuse_map, shape_map, inplace=False)[source]

Reshape single indices into groups of multiple indices.

Parameters:
  • unfuse_map (dict_like or sequence of tuples.) – Mapping like: {existing_ind: sequence of new inds, ...} or an ordered mapping like [(old_ind_1, new_inds_1), ...] in which case the output tensor’s new inds will be ordered. In both cases the new indices are created at the old index’s position in the tensor’s shape.

  • shape_map (dict_like or sequence of tuples) – Mapping like: {old_ind: new_ind_sizes, ...} or an ordered mapping like [(old_ind_1, new_ind_sizes_1), ...].

Returns:

The transposed, reshaped and re-labeled tensor

Return type:

Tensor

unfuse_[source]
to_dense(*inds_seq, to_qarray=False)[source]

Convert this Tensor into a dense array, with a single dimension for each group of indices in inds_seq. E.g. to convert several sites into a density matrix: T.to_dense(('k0', 'k1'), ('b0', 'b1')).

to_qarray[source]
squeeze(include=None, exclude=None, inplace=False)[source]

Drop any singlet dimensions from this tensor.

Parameters:
  • include (sequence of str, optional) – Only squeeze dimensions with indices in this list.

  • exclude (sequence of str, optional) – Squeeze all dimensions except those with indices in this list.

  • inplace (bool, optional) – Whether to perform the squeeze inplace or not.

Return type:

Tensor

squeeze_[source]
largest_element()[source]

Return the largest element, in terms of absolute magnitude, of this tensor.

idxmin(f=None)[source]

Get the index configuration of the minimum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the minimum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the minimum element.

Return type:

dict[str, int]

idxmax(f=None)[source]

Get the index configuration of the maximum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the maximum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the maximum element.

Return type:

dict[str, int]

norm()[source]

Frobenius norm of this tensor:

\[\|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)}\]

where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.

normalize(inplace=False)[source]
normalize_[source]
symmetrize(ind1, ind2, inplace=False)[source]

Hermitian symmetrize this tensor for indices ind1 and ind2. I.e. T = (T + T.conj().T) / 2, where the transpose is taken only over the specified indices.

symmetrize_[source]
isometrize(left_inds=None, method='qr', inplace=False)[source]

Make this tensor unitary (or isometric) with respect to left_inds. The underlying method is set by method.

Parameters:
  • left_inds (sequence of str) – The indices to group together and treat as the left hand side of a matrix.

  • method (str, optional) –

    The method used to generate the isometry. The options are:

    • ”qr”: use the Q factor of the QR decomposition of x with the constraint that the diagonal of R is positive.

    • ”svd”: uses U @ VH of the SVD decomposition of x. This is useful for finding the ‘closest’ isometric matrix to x, such as when it has been expanded with noise etc. But is less stable for differentiation / optimization.

    • ”exp”: use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.

    • ”cayley”: use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with e.g. HIPS/autograd), but more expensive for non-square x.

    • ”householder”: use the Householder reflection method directly. This requires that the backend implements “linalg.householder_product”.

    • ”torch_householder”: use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is "torch". This is generally the best parametrizing method for “torch” if available.

    • ”mgs”: use a python implementation of the modified Gram-Schmidt method directly. This is slow if not compiled, but a useful reference.

    Not all backends support all methods or differentiating through all methods.

  • inplace (bool, optional) – Whether to perform the unitization inplace.

Return type:

Tensor

isometrize_[source]
unitize[source]
unitize_
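A numpy sketch of the default method='qr': group left_inds into matrix rows, QR-factorize, and fix the sign convention so the diagonal of R is positive (illustrative only):

```python
import numpy as np

# Analogue of isometrize(left_inds=('a', 'b'), method='qr') on a
# tensor with inds ('a', 'b', 'c').
rng = np.random.default_rng(8)
T = rng.normal(size=(2, 3, 4))

mat = T.reshape(2 * 3, 4)
Q, R = np.linalg.qr(mat)
Q = Q * np.sign(np.diag(R))      # enforce a positive diagonal of R

iso = Q.reshape(2, 3, 4)         # isometric from ('a', 'b') onto 'c'
```

The sign fix makes the decomposition unique, which matters when differentiating through the isometrization during optimization.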
randomize(dtype=None, inplace=False, **randn_opts)[source]

Randomize the entries of this tensor.

Parameters:
  • dtype ({None, str}, optional) – The data type of the random entries. If left as the default None, then the data type of the current array will be used.

  • inplace (bool, optional) – Whether to perform the randomization inplace, by default False.

  • randn_opts – Supplied to randn().

Return type:

Tensor

randomize_[source]
flip(ind, inplace=False)[source]

Reverse the axis on this tensor corresponding to ind. Like performing e.g. X[:, :, ::-1, :].

flip_[source]
multiply_index_diagonal(ind, x, inplace=False)[source]

Multiply this tensor by 1D array x as if it were a diagonal tensor being contracted into index ind.

multiply_index_diagonal_[source]
almost_equals(other, **kwargs)[source]

Check if this tensor is almost the same as another.

drop_tags(tags=None)[source]

Drop certain tags, defaulting to all, from this tensor.

bonds(other)[source]

Return a tuple of the shared indices between this tensor and other.

filter_bonds(other)[source]

Sort this tensor’s indices into a list of those that it shares and doesn’t share with another tensor.

Parameters:

other (Tensor) – The other tensor.

Returns:

shared, unshared – The shared and unshared indices.

Return type:

(tuple[str], tuple[str])

__imul__(other)[source]
__itruediv__(other)[source]
__and__(other)[source]

Combine with another Tensor or TensorNetwork into a new TensorNetwork.

__or__(other)[source]

Combine virtually (no copies made) with another Tensor or TensorNetwork into a new TensorNetwork.

__matmul__(other)[source]

Explicitly contract with another tensor. Avoids some slight overhead of calling the full tensor_contract().

negate(inplace=False)[source]

Negate this tensor.

negate_[source]
__neg__()[source]

Negate this tensor.

as_network(virtual=True)[source]

Return a TensorNetwork with only this tensor.

draw(*args, **kwargs)[source]

Plot a graph of this tensor and its indices.

graph[source]
visualize[source]
__getstate__()[source]

Helper for pickle.

__setstate__(state)[source]
_repr_info()[source]

General info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_extra()[source]

General detailed info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_str(normal=True, extra=False)[source]

Render the general info as a string.

_repr_html_()[source]

Render this Tensor as HTML, for Jupyter notebooks.

__str__()[source]

Return str(self).

__repr__()[source]

Return repr(self).

class quimb.tensor.tensor_2d.TensorNetwork(ts=(), *, virtual=False, check_collisions=True)[source]

Bases: object

A collection of (as yet uncontracted) Tensors.

Parameters:
  • ts (sequence of Tensor or TensorNetwork) – The objects to combine. The new network will copy these (but not the underlying data) by default. For a view set virtual=True.

  • virtual (bool, optional) – Whether the TensorNetwork should be a view onto the tensors it is given, or a copy of them. E.g. if a virtual TN is constructed, any changes to a Tensor’s indices or tags will propagate to all TNs viewing that Tensor.

  • check_collisions (bool, optional) – If True, the default, then TensorNetwork instances with double indices which match another TensorNetwork instance’s double indices will have those indices’ names mangled. Can be explicitly turned off when it is known that no collisions will take place – i.e. when not adding any new tensors.

tensor_map

Mapping of unique ids to tensors, like {tensor_id: tensor, ...}. I.e. this is where the tensors are ‘stored’ by the network.

Type:

dict

tag_map

Mapping of tags to a set of tensor ids which have those tags. I.e. {tag: {tensor_id_1, tensor_id_2, ...}}. Thus to select those tensors could do: map(tensor_map.__getitem__, tag_map[tag]).

Type:

dict

ind_map

Like tag_map but for indices. So ind_map[ind] returns the tensor ids of those tensors with ind.

Type:

dict

exponent

A scalar prefactor for the tensor network, stored in base 10 like 10**exponent. This is mostly for conditioning purposes and will be 0.0 unless you use equalize_norms(value) or tn.strip_exponent(tid_or_tensor).

Type:

float

_EXTRA_PROPS = ()
_CONTRACT_STRUCTURED = False
combine(other, *, virtual=False, check_collisions=True)[source]

Combine this tensor network with another, returning a new tensor network. This can be overridden by subclasses to check for a compatible structured type.

Parameters:
  • other (TensorNetwork) – The other tensor network to combine with.

  • virtual (bool, optional) – Whether the new tensor network should copy all the incoming tensors (False, the default), or view them as virtual (True).

  • check_collisions (bool, optional) – Whether to check for index collisions between the two tensor networks before combining them. If True (the default), any inner indices that clash will be mangled.

Return type:

TensorNetwork

__and__(other)[source]

Combine this tensor network with more tensors, without contracting. Copies the tensors.

__or__(other)[source]

Combine this tensor network with more tensors, without contracting. Views the constituent tensors.

_update_properties(cls, like=None, current=None, **kwargs)[source]
classmethod new(like=None, **kwargs)[source]

Create a new tensor network, without any tensors, of type cls, with all the requisite properties specified by kwargs or inherited from like.

classmethod from_TN(tn, like=None, inplace=False, **kwargs)[source]

Construct a specific tensor network subclass (i.e. one with some promise about structure/geometry and tags/inds such as an MPS) from a generic tensor network which should have that structure already.

Parameters:
  • cls (class) – The TensorNetwork subclass to convert tn to.

  • tn (TensorNetwork) – The TensorNetwork to convert.

  • like (TensorNetwork, optional) – If specified, try to retrieve the necessary attribute values from this tensor network.

  • inplace (bool, optional) – Whether to perform the conversion inplace or not.

  • kwargs – Extra properties of the TN subclass that should be specified.

view_as(cls, inplace=False, **kwargs)[source]

View this tensor network as subclass cls.

view_as_[source]
view_like(like, inplace=False, **kwargs)[source]

View this tensor network as the same subclass cls as like inheriting its extra properties as well.

view_like_[source]
copy(virtual=False, deep=False)[source]

Copy this TensorNetwork. If deep=False, (the default), then everything but the actual numeric data will be copied.

__copy__[source]
get_params()[source]

Get a pytree of the ‘parameters’, i.e. all underlying data arrays.

set_params(params)[source]

Take a pytree of the ‘parameters’, i.e. all underlying data arrays, as returned by get_params and set them.

Link tid to each of tags.

Unlink tid from each of tags.

Link tid to each of inds.

Unlink tid from each of inds.

_reset_inner_outer(inds)[source]
_next_tid()[source]
add_tensor(tensor, tid=None, virtual=False)[source]

Add a single tensor to this network - mangle its tid if necessary.

add_tensor_network(tn, virtual=False, check_collisions=True)[source]
add(t, virtual=False, check_collisions=True)[source]

Add Tensor, TensorNetwork or sequence thereof to self.

make_tids_consecutive(tid0=0)[source]

Reset the tids - node identifiers - to be consecutive integers.

__iand__(tensor)[source]

Inplace, but non-virtual, addition of a Tensor or TensorNetwork to this network. It should not have any conflicting indices.

__ior__(tensor)[source]

Inplace, virtual, addition of a Tensor or TensorNetwork to this network. It should not have any conflicting indices.

_modify_tensor_tags(old, new, tid)[source]
_modify_tensor_inds(old, new, tid)[source]
property num_tensors
The total number of tensors in the tensor network.
property num_indices
The total number of indices in the tensor network.
pop_tensor(tid)[source]

Remove tensor with tid from this network, and return it.

remove_all_tensors()[source]

Remove all tensors from this network.

_pop_tensor[source]
delete(tags, which='all')[source]

Delete any tensors which match all or any of tags.

Parameters:
  • tags (str or sequence of str) – The tags to match.

  • which ({'all', 'any'}, optional) – Whether to match all or any of the tags.

check()[source]

Check some basic diagnostics of the tensor network.

add_tag(tag, where=None, which='all')[source]

Add tag to every tensor in this network, or if where is specified, the tensors matching those tags – i.e. adds the tag to all tensors in self.select_tensors(where, which=which).

drop_tags(tags=None)[source]

Remove a tag or tags from this tensor network, defaulting to all. This is an inplace operation.

Parameters:

tags (str or sequence of str or None, optional) – The tag or tags to drop. If None, drop all tags.

retag(tag_map, inplace=False)[source]

Rename tags for all tensors in this network, optionally in-place.

Parameters:
  • tag_map (dict-like) – Mapping of pairs {old_tag: new_tag, ...}.

  • inplace (bool, optional) – Perform operation inplace or return copy (default).

retag_[source]
reindex(index_map, inplace=False)[source]

Rename indices for all tensors in this network, optionally in-place.

Parameters:

index_map (dict-like) – Mapping of pairs {old_ind: new_ind, ...}.

reindex_[source]
mangle_inner_(append=None, which=None)[source]

Generate new index names for internal bonds, meaning that when this tensor network is combined with another, there should be no collisions.

Parameters:
  • append (None or str, optional) – Whether and what to append to the indices to perform the mangling. If None a whole new random UUID will be generated.

  • which (sequence of str, optional) – Which indices to rename, if None (the default), all inner indices.

conj(mangle_inner=False, inplace=False)[source]

Conjugate all the tensors in this network (leaves all indices).

conj_[source]
property H
Conjugate all the tensors in this network (leaves all indices).
item()[source]

Return the scalar value of this tensor network, if it is a scalar.

largest_element()[source]

Return the ‘largest element’, in terms of absolute magnitude, of this tensor network. This is defined as the product of the largest elements of each tensor in the network, which would be the largest single term occurring if the TN was summed explicitly.

norm(**contract_opts)[source]

Frobenius norm of this tensor network. Computed by exactly contracting the TN with its conjugate:

\[\|T\|_F = \sqrt{\mathrm{Tr} \left(T^{\dagger} T\right)}\]

where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.

make_norm(mangle_append='*', layer_tags=('KET', 'BRA'), return_all=False)[source]

Make the norm tensor network of this tensor network tn.H & tn.

Parameters:
  • mangle_append ({str, False or None}, optional) – How to mangle the inner indices of the bra.

  • layer_tags ((str, str), optional) – The tags to identify the top and bottom.

  • return_all (bool, optional) – Return the norm, the ket and the bra.

multiply(x, inplace=False, spread_over=8)[source]

Scalar multiplication of this tensor network with x.

Parameters:
  • x (scalar) – The number to multiply this tensor network by.

  • inplace (bool, optional) – Whether to perform the multiplication inplace.

  • spread_over (int, optional) – How many tensors to try and spread the multiplication over, in order that the effect of multiplying by a very large or small scalar is not concentrated.

multiply_[source]
multiply_each(x, inplace=False)[source]

Scalar multiplication of each tensor in this tensor network with x. If trying to spread a multiplicative factor fac uniformly over all tensors in the network and the number of tensors is large, then calling multiply(fac) can be inaccurate due to precision loss. If one has a routine that can precisely compute the x to be applied to each tensor, then this function avoids the potential inaccuracies in multiply().

Parameters:
  • x (scalar) – The number that multiplies each tensor in the network.

  • inplace (bool, optional) – Whether to perform the multiplication inplace.

multiply_each_[source]
negate(inplace=False)[source]

Negate this tensor network.

negate_[source]
__mul__(other)[source]

Scalar multiplication.

__rmul__(other)[source]

Right side scalar multiplication.

__imul__(other)[source]

Inplace scalar multiplication.

__truediv__(other)[source]

Scalar division.

__itruediv__(other)[source]

Inplace scalar division.

__neg__()[source]

Negate this tensor network.

__iter__()[source]
property tensors
Get the tuple of tensors in this tensor network.
property arrays
Get the tuple of raw arrays containing all the tensor network data.
get_symbol_map()[source]

Get the mapping of the current indices to einsum style single unicode characters. The symbols are generated in the order they appear on the tensors.

get_equation(output_inds=None)[source]

Get the ‘equation’ describing this tensor network, in einsum style with a single unicode letter per index. The symbols are generated in the order they appear on the tensors.

Parameters:

output_inds (None or sequence of str, optional) – Manually specify which are the output indices.

Returns:

eq

Return type:

str

Examples

>>> tn = qtn.TN_rand_reg(10, 3, 2)
>>> tn.get_equation()
'abc,dec,fgb,hia,jke,lfk,mnj,ing,omd,ohl->'
get_inputs_output_size_dict(output_inds=None)[source]

Get a tuple of inputs, output and size_dict suitable for e.g. passing to path optimizers. The symbols are generated in the order they appear on the tensors.

Parameters:

output_inds (None or sequence of str, optional) – Manually specify which are the output indices.

Returns:

  • inputs (tuple[str])

  • output (str)

  • size_dict (dict[str, ix])

geometry_hash(output_inds=None, strict_index_order=False)[source]

A hash of this tensor network’s shapes & geometry. A useful check for determinism. Moreover, if this matches for two tensor networks then they can be contracted using the same tree for the same cost. Order of tensors matters for this - two isomorphic tensor networks with shuffled tensor order will not have the same hash value. Permuting the indices of individual tensors or the output does not matter unless you set strict_index_order=True.

Parameters:
  • output_inds (None or sequence of str, optional) – Manually specify which indices are output indices and their order, otherwise assumed to be all indices that appear once.

  • strict_index_order (bool, optional) – If False, then the permutation of the indices of each tensor and the output does not matter.

Return type:

str

Examples

If we transpose some indices, then only the strict hash changes:

>>> tn = qtn.TN_rand_reg(100, 3, 2, seed=0)
>>> tn.geometry_hash()
'18c702b2d026dccb1a69d640b79d22f3e706b6ad'
>>> tn.geometry_hash(strict_index_order=True)
'c109fdb43c5c788c0aef7b8df7bb83853cf67ca1'
>>> t = tn['I0']
>>> t.transpose_(t.inds[2], t.inds[1], t.inds[0])
>>> tn.geometry_hash()
'18c702b2d026dccb1a69d640b79d22f3e706b6ad'
>>> tn.geometry_hash(strict_index_order=True)
'52c32c1d4f349373f02d512f536b1651dfe25893'
tensors_sorted()[source]

Return a tuple of tensors sorted by their respective tags, such that the tensors of two networks with the same tag structure can be iterated over pairwise.

apply_to_arrays(fn)[source]

Modify every tensor’s array inplace by applying fn to it. This is meant for changing how the raw arrays are backed (e.g. converting between dtypes or libraries) but not their ‘numerical meaning’.

_get_tids_from(xmap, xs, which)[source]
_get_tids_from_tags(tags, which='all')[source]

Return the set of tensor ids that match tags.

Parameters:
  • tags (seq or str, str, None, ..., int, slice) – Tag specifier(s).

  • which ({'all', 'any', '!all', '!any'}) –

    How to select based on the tags, if:

    • ’all’: get ids of tensors matching all tags

    • ’any’: get ids of tensors matching any tags

    • ’!all’: get ids of tensors not matching all tags

    • ’!any’: get ids of tensors not matching any tags

Return type:

set[str]

_get_tids_from_inds(inds, which='all')[source]

Like _get_tids_from_tags but specify inds instead.

_tids_get(*tids)[source]

Convenience function that generates unique tensors from tids.

_inds_get(*inds)[source]

Convenience function that generates unique tensors from inds.

_tags_get(*tags)[source]

Convenience function that generates unique tensors from tags.

select_tensors(tags, which='all')[source]

Return the sequence of tensors that match tags. If which='all', each tensor must contain every tag. If which='any', each tensor can contain any of the tags.

Parameters:
  • tags (str or sequence of str) – The tag or tag sequence.

  • which ({'all', 'any'}) – Whether to require matching all or any of the tags.

Returns:

tagged_tensors – The tagged tensors.

Return type:

tuple of Tensor

_select_tids(tids, virtual=True)[source]

Get a copy or a virtual copy (doesn’t copy the tensors) of this TensorNetwork, only with the tensors corresponding to tids.

_select_without_tids(tids, virtual=True)[source]

Get a copy or a virtual copy (doesn’t copy the tensors) of this TensorNetwork, without the tensors corresponding to tids.

select(tags, which='all', virtual=True)[source]

Get a TensorNetwork comprising tensors that match all or any of tags, inheriting the network properties/structure from self. By default this returns a view of the tensors, not a copy.

Parameters:
  • tags (str or sequence of str) – The tag or tag sequence.

  • which ({'all', 'any'}) – Whether to require matching all or any of the tags.

  • virtual (bool, optional) – Whether the returned tensor network views the same tensors (the default) or takes copies (virtual=False) from self.

Returns:

tagged_tn – A tensor network containing the tagged tensors.

Return type:

TensorNetwork

select_any[source]
select_all[source]
select_neighbors(tags, which='any')[source]

Select any neighbouring tensors to those specified by tags.

Parameters:
  • tags (sequence of str, int) – Tags specifying tensors.

  • which ({'any', 'all'}, optional) – How to select tensors based on tags.

Returns:

The neighbouring tensors.

Return type:

tuple[Tensor]

_select_local_tids(tids, max_distance=1, fillin=False, reduce_outer=None, inwards=False, virtual=True, include=None, exclude=None)[source]
select_local(tags, which='all', max_distance=1, fillin=False, reduce_outer=None, virtual=True, include=None, exclude=None)[source]

Select a local region of tensors, based on graph distance max_distance to any tagged tensors.

Parameters:
  • tags (str or sequence of str) – The tag or tag sequence defining the initial region.

  • which ({'all', 'any', '!all', '!any'}, optional) – Whether to require matching all or any of the tags.

  • max_distance (int, optional) – The maximum distance to the initial tagged region.

  • fillin (bool or int, optional) –

    Once the local region has been selected based on graph distance, whether and how many times to ‘fill-in’ corners by adding tensors connected multiple times. For example, if R is an initially tagged tensor and x are locally selected tensors:

      fillin=0       fillin=1       fillin=2
    
     | | | | |      | | | | |      | | | | |
    -o-o-x-o-o-    -o-x-x-x-o-    -x-x-x-x-x-
     | | | | |      | | | | |      | | | | |
    -o-x-x-x-o-    -x-x-x-x-x-    -x-x-x-x-x-
     | | | | |      | | | | |      | | | | |
    -x-x-R-x-x-    -x-x-R-x-x-    -x-x-R-x-x-
    

  • reduce_outer ({'sum', 'svd', 'svd-sum', 'reflect'}, optional) – Whether and how to reduce any outer indices of the selected region.

  • virtual (bool, optional) – Whether the returned tensor network should be a view of the tensors or a copy (virtual=False).

  • include (sequence of int, optional) – Only include tensors with these tids.

  • exclude (sequence of int, optional) – Exclude tensors with these tids.

Return type:

TensorNetwork

__getitem__(tags)[source]

Get the tensor(s) associated with tags.

Parameters:

tags (str or sequence of str) – The tags used to select the tensor(s).

Return type:

Tensor or sequence of Tensors

__setitem__(tags, tensor)[source]

Set the single tensor uniquely associated with tags.

__delitem__(tags)[source]

Delete any tensors which have all of tags.

partition_tensors(tags, inplace=False, which='any')[source]

Split this TN into a list of tensors containing any or all of tags and a TensorNetwork of the rest.

Parameters:
  • tags (sequence of str) – The list of tags to filter the tensors by. Use ... (Ellipsis) to filter all.

  • inplace (bool, optional) – If true, remove tagged tensors from self, else create a new network with the tensors removed.

  • which ({'all', 'any'}) – Whether to require matching all or any of the tags.

Returns:

(u_tn, t_ts) – The untagged tensor network, and the sequence of tagged Tensors.

Return type:

(TensorNetwork, tuple of Tensors)

partition(tags, which='any', inplace=False)[source]

Split this TN into two, based on which tensors have any or all of tags. Unlike partition_tensors, both results are TNs which inherit the structure of the initial TN.

Parameters:
  • tags (sequence of str) – The tags to split the network with.

  • which ({'any', 'all'}) – Whether to split based on matching any or all of the tags.

  • inplace (bool) – If True, actually remove the tagged tensors from self.

Returns:

untagged_tn, tagged_tn – The untagged and tagged tensor networks.

Return type:

(TensorNetwork, TensorNetwork)

_split_tensor_tid(tid, left_inds, **split_opts)[source]
split_tensor(tags, left_inds, **split_opts)[source]

Split the single tensor uniquely identified by tags, adding the resulting tensors from the decomposition back into the network. Inplace operation.

replace_with_identity(where, which='any', inplace=False)[source]

Replace all tensors marked by where with an identity. E.g. if X denote where tensors:

---1  X--X--2---         ---1---2---
   |  |  |  |      ==>          |
   X--X--X  |                   |
Parameters:
  • where (tag or seq of tags) – Tags specifying the tensors to replace.

  • which ({'any', 'all'}) – Whether to replace tensors matching any or all the tags where.

  • inplace (bool) – Perform operation in place.

Returns:

The TN, with section replaced with identity.

Return type:

TensorNetwork

See also

replace_with_svd

replace_with_svd(where, left_inds, eps, *, which='any', right_inds=None, method='isvd', max_bond=None, absorb='both', cutoff_mode='rel', renorm=None, ltags=None, rtags=None, keep_tags=True, start=None, stop=None, inplace=False)[source]

Replace all tensors marked by where with an iteratively constructed SVD. E.g. if X denote where tensors:

                        :__       ___:
---X  X--X  X---        :  \     /   :
   |  |  |  |      ==>  :   U~s~VH---:
---X--X--X--X---        :__/     \   :
      |     +---        :         \__:
      X              left_inds       :
                                 right_inds
Parameters:
  • where (tag or seq of tags) – Tags specifying the tensors to replace.

  • left_inds (ind or sequence of inds) – The indices defining the left hand side of the SVD.

  • eps (float) – The tolerance to perform the SVD with, affects the number of singular values kept. See quimb.linalg.rand_linalg.estimate_rank().

  • which ({'any', 'all', '!any', '!all'}, optional) – Whether to replace tensors matching any or all the tags where, prefix with ‘!’ to invert the selection.

  • right_inds (ind or sequence of inds, optional) – The indices defining the right hand side of the SVD, these can be automatically worked out, but for hermitian decompositions the order is important and thus can be given here explicitly.

  • method (str, optional) – How to perform the decomposition, if not an iterative method the subnetwork dense tensor will be formed first, see tensor_split() for options.

  • max_bond (int, optional) – The maximum bond to keep, defaults to no maximum (-1).

  • ltags (sequence of str, optional) – Tags to add to the left tensor.

  • rtags (sequence of str, optional) – Tags to add to the right tensor.

  • keep_tags (bool, optional) – Whether to propagate tags found in the subnetwork to both new tensors or drop them, defaults to True.

  • start (int, optional) – If given, assume can use TNLinearOperator1D.

  • stop (int, optional) – If given, assume can use TNLinearOperator1D.

  • inplace (bool, optional) – Perform operation in place.

Return type:

TensorNetwork

replace_with_svd_[source]
replace_section_with_svd(start, stop, eps, **replace_with_svd_opts)[source]

Take a 1D tensor network, and replace a section with a SVD. See replace_with_svd().

Parameters:
  • start (int) – Section start index.

  • stop (int) – Section stop index, not included itself.

  • eps (float) – Precision of SVD.

  • replace_with_svd_opts – Supplied to replace_with_svd().

Return type:

TensorNetwork

convert_to_zero()[source]

Inplace conversion of this network to an all zero tensor network.

_contract_between_tids(tid1, tid2, equalize_norms=False, gauges=None, output_inds=None, **contract_opts)[source]
contract_between(tags1, tags2, **contract_opts)[source]

Contract the two tensors specified by tags1 and tags2 respectively. This is an inplace operation. No-op if tags1 and tags2 select the same tensor.

Parameters:
  • tags1 – Tags uniquely identifying the first tensor.

  • tags2 (str or sequence of str) – Tags uniquely identifying the second tensor.

  • contract_opts – Supplied to tensor_contract().

contract_ind(ind, output_inds=None, **contract_opts)[source]

Contract tensors connected by ind.

gate_inds[source]
gate_inds_[source]
gate_inds_with_tn(inds, gate, gate_inds_inner, gate_inds_outer, inplace=False)[source]

Gate some indices of this tensor network with another tensor network. That is, rewire and then combine them such that the new tensor network has the same outer indices as before, but now includes gate:

gate_inds_outer
 :
 :         gate_inds_inner
 :         :
 :         :   inds               inds
 :  ┌────┐ :   : ┌────┬───        : ┌───────┬───
 ───┤    ├──  a──┤    │          a──┤       │
    │    │       │    ├───          │       ├───
 ───┤gate├──  b──┤self│     -->  b──┤  new  │
    │    │       │    ├───          │       ├───
 ───┤    ├──  c──┤    │          c──┤       │
    └────┘       └────┴───          └───────┴───

Where there can be arbitrary structure of tensors within both self and gate.

The case where some of target inds are not present is handled as so (here ‘c’ is missing so ‘x’ and ‘y’ are kept):

gate_inds_outer
 :
 :         gate_inds_inner
 :         :
 :         :   inds               inds
 :  ┌────┐ :   : ┌────┬───        : ┌───────┬───
 ───┤    ├──  a──┤    │          a──┤       │
    │    │       │    ├───          │       ├───
 ───┤gate├──  b──┤self│     -->  b──┤  new  │
    │    │       │    ├───          │       ├───
x───┤    ├──y    └────┘          x──┤    ┌──┘
    └────┘                          └────┴───y

Which enables convenient construction of various tensor networks, for example propagators, from scratch.

Parameters:
  • inds (str or sequence of str) – The current indices to gate. If an index is not present on the target tensor network, it is ignored and instead the resulting tensor network will have both the corresponding inner and outer index of the gate tensor network.

  • gate (Tensor or TensorNetwork) – The tensor network to gate with.

  • gate_inds_inner (sequence of str) – The indices of gate to join to the old inds, must be the same length as inds.

  • gate_inds_outer (sequence of str) – The indices of gate to make the new outer inds, must be the same length as inds.

Returns:

tn_gated

Return type:

TensorNetwork

gate_inds_with_tn_[source]
_compute_tree_gauges(tree, outputs)[source]

Given a tree of connected tensors, absorb the gauges from outside inwards, finally outputting the gauges associated with the outputs.

Parameters:
  • tree (sequence of (tid_outer, tid_inner, distance)) – The tree of connected tensors, see get_tree_span().

  • outputs (sequence of (tid, ind)) – Each output is specified by a tensor id and an index, such that having absorbed all gauges in the tree, the effective reduced factor of the tensor with respect to the index is returned.

Returns:

Gouts – The effective reduced factors of the tensor index pairs specified in outputs, each a matrix.

Return type:

sequence of array

_compress_between_virtual_tree_tids(tidl, tidr, max_bond, cutoff, r, absorb='both', include=None, exclude=None, span_opts=None, **compress_opts)[source]
_compute_bond_env(tid1, tid2, select_local_distance=None, select_local_opts=None, max_bond=None, cutoff=None, method='contract_around', contract_around_opts=None, contract_compressed_opts=None, optimize='auto-hq', include=None, exclude=None)[source]

Compute the local tensor environment of the bond(s), if cut, between two tensors.

_compress_between_full_bond_tids(tid1, tid2, max_bond, cutoff=0.0, absorb='both', renorm=False, method='eigh', select_local_distance=None, select_local_opts=None, env_max_bond='max_bond', env_cutoff='cutoff', env_method='contract_around', contract_around_opts=None, contract_compressed_opts=None, env_optimize='auto-hq', include=None, exclude=None)[source]
_compress_between_local_fit(tid1, tid2, max_bond, cutoff=0.0, absorb='both', method='als', select_local_distance=1, select_local_opts=None, include=None, exclude=None, **fit_opts)[source]
_compress_between_tids(tid1, tid2, max_bond=None, cutoff=1e-10, absorb='both', canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, mode='basic', equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback=None, **compress_opts)[source]
compress_between(tags1, tags2, max_bond=None, cutoff=1e-10, absorb='both', canonize_distance=0, canonize_opts=None, equalize_norms=False, **compress_opts)[source]

Compress the bond between the two single tensors in this network specified by tags1 and tags2 using tensor_compress_bond():

  |    |    |    |           |    |    |    |
==●====●====●====●==       ==●====●====●====●==
 /|   /|   /|   /|          /|   /|   /|   /|
  |    |    |    |           |    |    |    |
==●====1====2====●==  ==>  ==●====L----R====●==
 /|   /|   /|   /|          /|   /|   /|   /|
  |    |    |    |           |    |    |    |
==●====●====●====●==       ==●====●====●====●==
 /|   /|   /|   /|          /|   /|   /|   /|

This is an inplace operation. The compression is unlikely to be optimal with respect to the Frobenius norm, unless the TN is already canonicalized at the two tensors. The absorb kwarg can be specified to yield an isometry on either the left or right resulting tensors.

Parameters:
  • tags1 – Tags uniquely identifying the first (‘left’) tensor.

  • tags2 (str or sequence of str) – Tags uniquely identifying the second (‘right’) tensor.

  • max_bond (int or None, optional) – The maximum bond dimension.

  • cutoff (float, optional) – The singular value cutoff to use.

  • canonize_distance (int, optional) – How far to locally canonize around the target tensors first.

  • canonize_opts (None or dict, optional) – Other options for the local canonization.

  • equalize_norms (bool or float, optional) – If set, rescale the norms of all tensors modified to this value, stripping the rescaling factor into the exponent attribute.

  • compress_opts – Supplied to tensor_compress_bond().

See also

canonize_between

compress_all(max_bond=None, cutoff=1e-10, canonize=True, tree_gauge_distance=None, canonize_distance=None, canonize_after_distance=None, mode='auto', inplace=False, **compress_opts)[source]

Compress all bonds one by one in this network.

Parameters:
  • max_bond (int or None, optional) – The maximum bond dimension to compress to.

  • cutoff (float, optional) – The singular value cutoff to use.

  • tree_gauge_distance (int, optional) – How far to include local tree gauge information when compressing. If the local geometry is a tree, then each compression will be locally optimal up to this distance.

  • canonize_distance (int, optional) – How far to locally canonize around the target tensors first, this is set automatically by tree_gauge_distance if not specified.

  • canonize_after_distance (int, optional) – How far to locally canonize around the target tensors after, this is set automatically by tree_gauge_distance, depending on mode if not specified.

  • mode ({'auto', 'basic', 'virtual-tree'}, optional) – The mode to use for compressing the bonds. If ‘auto’, will use ‘basic’ if tree_gauge_distance == 0 else ‘virtual-tree’.

  • inplace (bool, optional) – Whether to perform the compression inplace.

  • compress_opts – Supplied to compress_between().

Return type:

TensorNetwork

See also

compress_between, canonize_all

compress_all_[source]
compress_all_tree(inplace=False, **compress_opts)[source]

Canonically compress this tensor network, assuming it to be a tree. This generates a tree spanning out from the most central tensor, then compresses all bonds inwards in a depth-first manner, using an infinite canonize_distance to shift the orthogonality center.

compress_all_tree_[source]
compress_all_1d(max_bond=None, cutoff=1e-10, canonize=True, inplace=False, **compress_opts)[source]

Compress a tensor network that you know has a 1D topology, this proceeds by generating a spanning ‘tree’ from around the least central tensor, then optionally canonicalizing all bonds outwards and compressing inwards.

Parameters:
  • max_bond (int, optional) – The maximum bond dimension to compress to.

  • cutoff (float, optional) – The singular value cutoff to use.

  • canonize (bool, optional) – Whether to canonize all bonds outwards first.

  • inplace (bool, optional) – Whether to perform the compression inplace.

  • compress_opts – Supplied to tensor_compress_bond().

Return type:

TensorNetwork

compress_all_1d_[source]
compress_all_simple(max_bond=None, cutoff=1e-10, gauges=None, max_iterations=5, tol=0.0, smudge=1e-12, power=1.0, inplace=False, **gauge_simple_opts)[source]
compress_all_simple_[source]
_canonize_between_tids(tid1, tid2, absorb='right', gauges=None, gauge_smudge=1e-06, equalize_norms=False, **canonize_opts)[source]
canonize_between(tags1, tags2, absorb='right', **canonize_opts)[source]

‘Canonize’ the bond between the two single tensors in this network specified by tags1 and tags2 using tensor_canonize_bond:

  |    |    |    |           |    |    |    |
--●----●----●----●--       --●----●----●----●--
 /|   /|   /|   /|          /|   /|   /|   /|
  |    |    |    |           |    |    |    |
--●----1----2----●--  ==>  --●---->~~~~R----●--
 /|   /|   /|   /|          /|   /|   /|   /|
  |    |    |    |           |    |    |    |
--●----●----●----●--       --●----●----●----●--
 /|   /|   /|   /|          /|   /|   /|   /|

This is an inplace operation. This can only be used to put a TN into truly canonical form if the geometry is a tree, such as an MPS.

Parameters:
  • tags1 – Tags uniquely identifying the first (‘left’) tensor, which will become an isometry.

  • tags2 (str or sequence of str) – Tags uniquely identifying the second (‘right’) tensor.

  • absorb ({'left', 'both', 'right'}, optional) – Which side of the bond to absorb the non-isometric operator.

  • canonize_opts – Supplied to tensor_canonize_bond().

See also

compress_between

reduce_inds_onto_bond(inda, indb, tags=None, drop_tags=False, combine=True, ndim_cutoff=3)[source]

Use QR factorization to ‘pull’ the indices inda and indb off of their respective tensors and onto the bond between them. This is an inplace operation.

_get_neighbor_tids(tids, exclude_inds=())[source]

Get the tids of tensors connected to the tensor(s) at tids.

Parameters:
  • tids (int or sequence of int) – The tensor identifier(s) to get the neighbors of.

  • exclude_inds (sequence of str, optional) – Exclude these indices from being considered as connections.

Return type:

oset[int]

_get_neighbor_inds(inds)[source]

Get the indices connected to the index(es) at inds.

Parameters:

inds (str or sequence of str) – The index(es) to get the neighbors of.

Return type:

oset[str]

_get_subgraph_tids(tids)[source]

Get the tids of tensors connected, by any distance, to the tensor or region of tensors tids.

_ind_to_subgraph_tids(ind)[source]

Get the tids of tensors connected, by any distance, to the index ind.

istree()[source]

Check if this tensor network has a tree structure, (treating multibonds as a single edge).

Examples

>>> MPS_rand_state(10, 7).istree()
True
>>> MPS_rand_state(10, 7, cyclic=True).istree()
False
isconnected()[source]

Check whether this tensor network is connected, i.e. whether there is a path between any two tensors, (including size 1 indices).

subgraphs(virtual=False)[source]

Split this tensor network into disconnected subgraphs.

Parameters:

virtual (bool, optional) – Whether the tensor networks should view the original tensors or not - by default take copies.

Return type:

list[TensorNetwork]

get_tree_span(tids, min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', sorter=None, weight_bonds=True, inwards=True)[source]

Generate a tree on the tensor network graph, fanning out from the tensors identified by tids, up to a maximum of max_distance away. The tree can be visualized with draw_tree_span().

Parameters:
  • tids (sequence of str) – The nodes that define the region to span out of.

  • min_distance (int, optional) – Don’t add edges to the tree until this far from the region. For example, 1 will not include the last merges from neighboring tensors in the region defined by tids.

  • max_distance (None or int, optional) – Terminate branches once they reach this far away. If None there is no limit.

  • include (sequence of str, optional) – If specified, only tids specified here can be part of the tree.

  • exclude (sequence of str, optional) – If specified, tids specified here cannot be part of the tree.

  • ndim_sort ({'min', 'max', 'none'}, optional) – When expanding the tree, how to choose what nodes to expand to next, once connectivity to the current surface has been taken into account.

  • distance_sort ({'min', 'max', 'none'}, optional) – When expanding the tree, how to choose what nodes to expand to next, once connectivity to the current surface has been taken into account.

  • weight_bonds (bool, optional) – Whether to weight the ‘connection’ of a candidate tensor to expand out to using bond size as well as number of bonds.

Returns:

The ordered list of merges, each given as tuple (tid1, tid2, d) indicating merge tid1 -> tid2 at distance d.

Return type:

list[(str, str, int)]

See also

draw_tree_span

_draw_tree_span_tids(tids, span=None, min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', sorter=None, weight_bonds=True, color='order', colormap='Spectral', **draw_opts)[source]
draw_tree_span(tags, which='all', min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', weight_bonds=True, color='order', colormap='Spectral', **draw_opts)[source]

Visualize a generated tree span out of the tensors tagged by tags.

Parameters:
  • tags (str or sequence of str) – Tags specifying a region of tensors to span out of.

  • which ({'all', 'any', '!all', '!any'}, optional) – How to select tensors based on the tags.

  • min_distance (int, optional) – See get_tree_span().

  • max_distance (None or int, optional) – See get_tree_span().

  • include (sequence of str, optional) – See get_tree_span().

  • exclude (sequence of str, optional) – See get_tree_span().

  • distance_sort ({'min', 'max'}, optional) – See get_tree_span().

  • color ({'order', 'distance'}, optional) – Whether to color nodes based on the order of the contraction or the graph distance from the specified region.

  • colormap (str) – The name of a matplotlib colormap to use.

See also

get_tree_span

graph_tree_span[source]
_canonize_around_tids(tids, min_distance=0, max_distance=None, include=None, exclude=None, span_opts=None, absorb='right', gauge_links=False, link_absorb='both', inwards=True, gauges=None, gauge_smudge=1e-06, **canonize_opts)[source]
canonize_around(tags, which='all', min_distance=0, max_distance=None, include=None, exclude=None, span_opts=None, absorb='right', gauge_links=False, link_absorb='both', equalize_norms=False, inplace=False, **canonize_opts)[source]

Expand a locally canonical region around tags:

          --●---●--
        |   |   |   |
      --●---v---v---●--
    |   |   |   |   |   |
  --●--->---v---v---<---●--
|   |   |   |   |   |   |   |
●--->--->---O---O---<---<---●
|   |   |   |   |   |   |   |
  --●--->---^---^---^---●--
    |   |   |   |   |   |
      --●---^---^---●--
        |   |   |   |
          --●---●--

                 <=====>
                 max_distance = 2 e.g.

Shown on a grid here but applicable to arbitrary geometry. This is a way of gauging a tensor network that results in a canonical form if the geometry is described by a tree (e.g. an MPS or TTN). The canonizations proceed inwards via QR decompositions.

The sequence is generated by round-robin expanding the boundary of the originally specified tensors; it will only be unique for trees.

Parameters:
  • tags (str, or sequence or str) – Tags defining which set of tensors to locally canonize around.

  • which ({'all', 'any', '!all', '!any'}, optional) – How to select the tensors based on tags.

  • min_distance (int, optional) – How close, in terms of graph distance, to canonize tensors away. See get_tree_span().

  • max_distance (None or int, optional) – How far, in terms of graph distance, to canonize tensors away. See get_tree_span().

  • include (sequence of str, optional) – How to build the spanning tree to canonize along. See get_tree_span().

  • exclude (sequence of str, optional) – How to build the spanning tree to canonize along. See get_tree_span().

  • distance_sort ({'min', 'max'}, optional) – How to build the spanning tree to canonize along. See get_tree_span().

  • absorb ({'right', 'left', 'both'}, optional) – As we canonize inwards from tensor A to tensor B which to absorb the singular values into.

  • gauge_links (bool, optional) – Whether to gauge the links between branches of the spanning tree generated (in a Simple Update like fashion).

  • link_absorb ({'both', 'right', 'left'}, optional) – If performing the link gauging, how to absorb the singular values.

  • equalize_norms (bool or float, optional) – Scale the norms of tensors acted on to this value, accumulating the log10 scaled factors in self.exponent.

  • inplace (bool, optional) – Whether to perform the canonization inplace.

Return type:

TensorNetwork

See also

get_tree_span

canonize_around_[source]
gauge_all_canonize(max_iterations=5, absorb='both', gauges=None, gauge_smudge=1e-06, equalize_norms=False, inplace=False, **canonize_opts)[source]

Iteratively gauge all the bonds in this tensor network with a basic ‘canonization’ strategy.

gauge_all_canonize_[source]
gauge_all_simple(max_iterations=5, tol=0.0, smudge=1e-12, power=1.0, gauges=None, equalize_norms=False, progbar=False, inplace=False)[source]

Iteratively gauge all the bonds in this tensor network with a ‘simple update’ like strategy.

gauge_all_simple_[source]
gauge_all_random(max_iterations=1, unitary=True, seed=None, inplace=False)[source]

Gauge all the bonds in this network randomly. This is largely for testing purposes.

gauge_all_random_[source]
gauge_all(method='canonize', **gauge_opts)[source]

Gauge all bonds in this network using one of several strategies.

Parameters:
  • method (str, optional) – The method to use for gauging. One of “canonize”, “simple”, or “random”. Default is “canonize”.

  • gauge_opts (dict, optional) – Additional keyword arguments to pass to the chosen method.

gauge_all_[source]
_gauge_local_tids(tids, max_distance=1, max_iterations='max_distance', method='canonize', inwards=False, include=None, exclude=None, **gauge_local_opts)[source]

Iteratively gauge all bonds in the local tensor network defined by tids according to one of several strategies.

gauge_local(tags, which='all', max_distance=1, max_iterations='max_distance', method='canonize', inplace=False, **gauge_local_opts)[source]

Iteratively gauge all bonds in the tagged sub tensor network according to one of several strategies.

gauge_local_[source]
gauge_simple_insert(gauges, remove=False, smudge=0.0, power=1.0)[source]

Insert the simple update style bond gauges found in gauges if they are present in this tensor network. The gauges inserted are also returned so that they can be removed later.

Parameters:
  • gauges (dict[str, array_like]) – The store of bond gauges, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be gauged.

  • remove (bool, optional) – Whether to remove the gauges from the store after inserting them.

  • smudge (float, optional) – A small value to add to the gauge vectors to avoid singularities.

Returns:

  • outer (list[(Tensor, str, array_like)]) – The sequence of gauges applied to outer indices, each a tuple of the tensor, the index and the gauge vector.

  • inner (list[((Tensor, Tensor), str, array_like)]) – The sequence of gauges applied to inner indices, each a tuple of the two inner tensors, the inner bond and the gauge vector applied.

static gauge_simple_remove(outer=None, inner=None)[source]

Remove the simple update style bond gauges inserted by gauge_simple_insert.

gauge_simple_temp(gauges, smudge=1e-12, ungauge_outer=True, ungauge_inner=True)[source]

Context manager that temporarily inserts simple update style bond gauges into this tensor network, before optionally ungauging them.

Parameters:
  • self (TensorNetwork) – The tensor network to temporarily gauge.

  • gauges (dict[str, array_like]) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be gauged.

  • ungauge_outer (bool, optional) – Whether to ungauge the outer bonds.

  • ungauge_inner (bool, optional) – Whether to ungauge the inner bonds.

Yields:
  • outer (list[(Tensor, int, array_like)]) – The tensors, indices and gauges that were performed on outer indices.

  • inner (list[((Tensor, Tensor), int, array_like)]) – The tensors, indices and gauges that were performed on inner bonds.

Examples

>>> tn = TN_rand_reg(10, 4, 3)
>>> tn ^ all
-51371.66630218866
>>> gauges = {}
>>> tn.gauge_all_simple_(gauges=gauges)
>>> len(gauges)
20
>>> tn ^ all
28702551.673767876
>>> with tn.gauge_simple_temp(gauges):
...     # temporarily insert gauges
...     print(tn ^ all)
-51371.66630218887
>>> tn ^ all
28702551.67376789
_contract_compressed_tid_sequence(seq, max_bond=None, cutoff=1e-10, output_inds=None, tree_gauge_distance=1, canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, gauge_boundary_only=True, compress_opts=None, compress_late=True, compress_mode='auto', compress_min_size=None, compress_span=False, compress_matrices=True, compress_exclude=None, equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback_pre_contract=None, callback_post_contract=None, callback_pre_compress=None, callback_post_compress=None, callback=None, preserve_tensor=False, progbar=False, inplace=False)[source]
_contract_around_tids(tids, seq=None, min_distance=0, max_distance=None, span_opts=None, max_bond=None, cutoff=1e-10, canonize_opts=None, **kwargs)[source]

Contract around tids, by following a greedily generated spanning tree, and compressing whenever two tensors in the outer ‘boundary’ share more than one index.

compute_centralities()[source]
most_central_tid()[source]
least_central_tid()[source]
contract_around_center(**opts)[source]
contract_around_corner(**opts)[source]
contract_around(tags, which='all', min_distance=0, max_distance=None, span_opts=None, max_bond=None, cutoff=1e-10, tree_gauge_distance=1, canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, gauge_boundary_only=True, compress_late=True, compress_min_size=None, compress_opts=None, compress_span=False, compress_matrices=True, equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback_pre_contract=None, callback_post_contract=None, callback_pre_compress=None, callback_post_compress=None, callback=None, inplace=False, **kwargs)[source]

Perform a compressed contraction inwards towards the tensors identified by tags.

contract_around_[source]
contract_compressed(optimize, output_inds=None, max_bond=None, cutoff=1e-10, tree_gauge_distance=1, canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, gauge_boundary_only=True, compress_late=True, compress_min_size=None, compress_opts=None, compress_span=True, compress_matrices=True, compress_exclude=None, equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback_pre_contract=None, callback_post_contract=None, callback_pre_compress=None, callback_post_compress=None, callback=None, progbar=False, **kwargs)[source]
contract_compressed_[source]
new_bond(tags1, tags2, **opts)[source]

Inplace addition of a dummy (size 1) bond between the single tensors specified by tags1 and tags2.

Parameters:
  • tags1 (sequence of str) – Tags identifying the first tensor.

  • tags2 (sequence of str) – Tags identifying the second tensor.

  • opts – Supplied to new_bond().

See also

new_bond

_cut_between_tids(tid1, tid2, left_ind, right_ind)[source]
cut_between(left_tags, right_tags, left_ind, right_ind)[source]

Cut the bond between the tensors specified by left_tags and right_tags, giving them the new inds left_ind and right_ind respectively.

cut_bond(bond, new_left_ind=None, new_right_ind=None)[source]

Cut the bond index specified by bond between the tensors it connects. Use cut_between for control over which tensor gets which new index new_left_ind or new_right_ind. The index must connect exactly two tensors.

Parameters:
  • bond (str) – The index to cut.

  • new_left_ind (str, optional) – The new index to give to the left tensor (lowest tid value).

  • new_right_ind (str, optional) – The new index to give to the right tensor (highest tid value).

drape_bond_between(tagsa, tagsb, tags_target, left_ind=None, right_ind=None, inplace=False)[source]

Take the bond(s) connecting the tensors tagged at tagsa and tagsb, and ‘drape’ it through the tensor tagged at tags_target, effectively adding an identity tensor between the two and contracting it with the third:

 ┌─┐    ┌─┐      ┌─┐     ┌─┐
─┤A├─Id─┤B├─    ─┤A├─┐ ┌─┤B├─
 └─┘    └─┘      └─┘ │ │ └─┘
             left_ind│ │right_ind
     ┌─┐     -->     ├─┤
    ─┤C├─           ─┤D├─
     └┬┘             └┬┘     where D = C ⊗ Id
      │               │

This increases the size of the target tensor by d**2, and disconnects the tensors at tagsa and tagsb.

Parameters:
  • tagsa (str or sequence of str) – The tag(s) identifying the first tensor.

  • tagsb (str or sequence of str) – The tag(s) identifying the second tensor.

  • tags_target (str or sequence of str) – The tag(s) identifying the target tensor.

  • left_ind (str, optional) – The new index to give to the left tensor.

  • right_ind (str, optional) – The new index to give to the right tensor.

  • inplace (bool, optional) – Whether to perform the draping inplace.

Return type:

TensorNetwork

drape_bond_between_[source]
isel(selectors, inplace=False)[source]

Select specific values for some dimensions/indices of this tensor network, thereby removing them.

Parameters:
  • selectors (dict[str, int]) – Mapping of index(es) to which value to take.

  • inplace (bool, optional) – Whether to select inplace or not.

Return type:

TensorNetwork

See also

Tensor.isel

isel_[source]
sum_reduce(ind, inplace=False)[source]

Sum over the index ind of this tensor network, removing it. This is like contracting a vector of ones in, or marginalizing a classical probability distribution.

Parameters:
  • ind (str) – The index to sum over.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

sum_reduce_[source]
vector_reduce(ind, v, inplace=False)[source]

Contract the vector v with the index ind of this tensor network, removing it.

Parameters:
  • ind (str) – The index to contract.

  • v (array_like) – The vector to contract with.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

TensorNetwork

vector_reduce_[source]
cut_iter(*inds)[source]

Cut and iterate over one or more indices in this tensor network. Each network yielded will have that index removed, and the sum of all networks will equal the original network. This works by iterating over the product of all combinations of each bond supplied to isel. As such, the number of networks produced is exponential in the number of bonds cut.

Parameters:

inds (sequence of str) – The bonds to cut.

Yields:

TensorNetwork

Examples

Here we’ll cut the two extra bonds of a cyclic MPS and sum the contraction of the resulting 49 OBC MPS norms:

>>> psi = MPS_rand_state(10, bond_dim=7, cyclic=True)
>>> norm = psi.H & psi
>>> bnds = bonds(norm[0], norm[-1])
>>> sum(tn ^ all for tn in norm.cut_iter(*bnds))
1.0
insert_operator(A, where1, where2, tags=None, inplace=False)[source]

Insert an operator on the bond between the specified tensors, e.g.:

  |   |              |   |
--1---2--    ->    --1-A-2--
  |                  |
Parameters:
  • A (array) – The operator to insert.

  • where1 (str, sequence of str, or int) – The tags defining the ‘left’ tensor.

  • where2 (str, sequence of str, or int) – The tags defining the ‘right’ tensor.

  • tags (str or sequence of str) – Tags to add to the new operator’s tensor.

  • inplace (bool, optional) – Whether to perform the insertion inplace.

insert_operator_[source]
_insert_gauge_tids(U, tid1, tid2, Uinv=None, tol=1e-10, bond=None)[source]
insert_gauge(U, where1, where2, Uinv=None, tol=1e-10)[source]

Insert the gauge transformation U^-1 @ U into the bond between the tensors, T1 and T2, defined by where1 and where2. The resulting tensors at those locations will be T1 @ U^-1 and U @ T2.

Parameters:
  • U (array) – The gauge to insert.

  • where1 (str, sequence of str, or int) – Tags defining the location of the ‘left’ tensor.

  • where2 (str, sequence of str, or int) – Tags defining the location of the ‘right’ tensor.

  • Uinv (array) – The inverse gauge, U @ Uinv == Uinv @ U == eye, to insert. If not given will be calculated using numpy.linalg.inv().

contract_tags(tags, which='any', output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, inplace=False, **contract_opts)[source]

Contract the tensors that match any or all of tags.

Parameters:
  • tags (sequence of str) – The list of tags to filter the tensors by. Use all or ... (Ellipsis) to contract all tensors.

  • which ({'all', 'any'}) – Whether to require matching all or any of the tags.

  • output_inds (sequence of str, optional) – The indices to specify as outputs of the contraction. If not given, and the tensor network has no hyper-indices, these are computed automatically as every index appearing once.

  • optimize ({None, str, path_like, PathOptimizer}, optional) –

    The contraction path optimization strategy to use.

    • None: use the default strategy,

    • str: use the preset strategy with the given name,

    • path_like: use this exact path,

    • cotengra.HyperOptimizer: find the contraction using this optimizer, supports slicing,

    • cotengra.ContractionTree: use this exact tree, supports slicing,

    • opt_einsum.PathOptimizer: find the path using this optimizer.

    Contraction with cotengra might be a bit more efficient but the main reason would be to handle sliced contraction automatically.

  • get (str, optional) –

    What to return. If:

    • None (the default) - return the resulting scalar or Tensor.

    • 'expression' - return a callable expression that performs the contraction and operates on the raw arrays.

    • 'tree' - return the cotengra.ContractionTree describing the contraction.

    • 'path' - return the raw ‘path’ as a list of tuples.

    • 'symbol-map' - return the dict mapping indices to ‘symbols’ (single unicode letters) used internally by cotengra

    • 'path-info' - return the opt_einsum.PathInfo path object with detailed information such as flop cost. The symbol-map is also added to the quimb_symbol_map attribute.

  • backend ({'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional) – Which backend to use to perform the contraction. Supplied to cotengra.

  • preserve_tensor (bool, optional) – Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not.

  • inplace (bool, optional) – Whether to perform the contraction inplace.

  • contract_opts – Passed to tensor_contract().

Returns:

The result of the contraction, still a TensorNetwork if the contraction was only partial.

Return type:

TensorNetwork, Tensor or scalar

contract_tags_[source]
contract(tags=..., output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, max_bond=None, inplace=False, **opts)[source]

Contract some, or all, of the tensors in this network. This method dispatches to contract_tags, contract_structured, or contract_compressed based on the various arguments.

Parameters:
  • tags (sequence of str, all, or Ellipsis, optional) – Any tensors with any of these tags with be contracted. Use all or ... (Ellipsis) to contract all tensors. ... will try and use a ‘structured’ contract method if possible.

  • output_inds (sequence of str, optional) – The indices to specify as outputs of the contraction. If not given, and the tensor network has no hyper-indices, these are computed automatically as every index appearing once.

  • optimize ({None, str, path_like, PathOptimizer}, optional) –

    The contraction path optimization strategy to use.

    • None: use the default strategy,

    • str: use the preset strategy with the given name,

    • path_like: use this exact path,

    • cotengra.HyperOptimizer: find the contraction using this optimizer, supports slicing,

    • cotengra.ContractionTree: use this exact tree, supports slicing,

    • opt_einsum.PathOptimizer: find the path using this optimizer.

    Contraction with cotengra might be a bit more efficient but the main reason would be to handle sliced contraction automatically.

  • get (str, optional) –

    What to return. If:

    • None (the default) - return the resulting scalar or Tensor.

    • 'expression' - return a callable expression that performs the contraction and operates on the raw arrays.

    • 'tree' - return the cotengra.ContractionTree describing the contraction.

    • 'path' - return the raw ‘path’ as a list of tuples.

    • 'symbol-map' - return the dict mapping indices to ‘symbols’ (single unicode letters) used internally by cotengra

    • 'path-info' - return the opt_einsum.PathInfo path object with detailed information such as flop cost. The symbol-map is also added to the quimb_symbol_map attribute.

  • backend ({'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional) – Which backend to use to perform the contraction. Supplied to cotengra.

  • preserve_tensor (bool, optional) – Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not.

  • inplace (bool, optional) – Whether to perform the contraction inplace. This is only valid if not all tensors are contracted (which doesn’t produce a TN).

  • opts – Passed to tensor_contract(), contract_compressed() .

Returns:

The result of the contraction, still a TensorNetwork if the contraction was only partial.

Return type:

TensorNetwork, Tensor or scalar

contract_[source]
contract_cumulative(tags_seq, output_inds=None, preserve_tensor=False, equalize_norms=False, inplace=False, **opts)[source]

Cumulative contraction of the tensor network. Contract the first set of tags, then that set with the next set, then both of those with the next and so forth. Could also be described as a manually ordered contraction of all tags in tags_seq.

Parameters:
  • tags_seq (sequence of sequence of str) – The list of tag-groups to cumulatively contract.

  • output_inds (sequence of str, optional) – The indices to specify as outputs of the contraction. If not given, and the tensor network has no hyper-indices, these are computed automatically as every index appearing once.

  • preserve_tensor (bool, optional) – Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not.

  • inplace (bool, optional) – Whether to perform the contraction inplace.

  • opts – Passed to tensor_contract().

Returns:

The result of the contraction, still a TensorNetwork if the contraction was only partial.

Return type:

TensorNetwork, Tensor or scalar

contraction_path(optimize=None, **contract_opts)[source]

Compute the contraction path, a sequence of (int, int), for the contraction of this entire tensor network using path optimizer optimize.

contraction_info(optimize=None, **contract_opts)[source]

Compute the opt_einsum.PathInfo object describing the contraction of this entire tensor network using path optimizer optimize.

contraction_tree(optimize=None, output_inds=None, **kwargs)[source]

Return the cotengra.ContractionTree corresponding to contracting this entire tensor network with path finder optimize.

contraction_width(optimize=None, **contract_opts)[source]

Compute the ‘contraction width’ of this tensor network. This is defined as log2 of the maximum tensor size produced during the contraction sequence. If every index in the network has dimension 2 this corresponds to the maximum rank tensor produced.

contraction_cost(optimize=None, **contract_opts)[source]

Compute the ‘contraction cost’ of this tensor network. This is defined as log10 of the total number of scalar operations during the contraction sequence.

__rshift__(tags_seq)[source]

Overload of ‘>>’ for TensorNetwork.contract_cumulative.

__irshift__(tags_seq)[source]

Overload of ‘>>=’ for inplace TensorNetwork.contract_cumulative.

__xor__(tags)[source]

Overload of ‘^’ for TensorNetwork.contract.

__ixor__(tags)[source]

Overload of ‘^=’ for inplace TensorNetwork.contract.

__matmul__(other)[source]

Overload “@” to mean full contraction with another network.

as_network(virtual=True)[source]

Matching method (for ensuring object is a tensor network) to as_network(), which simply returns self if virtual=True.

aslinearoperator(left_inds, right_inds, ldims=None, rdims=None, backend=None, optimize=None)[source]

View this TensorNetwork as a TNLinearOperator.

split(left_inds, right_inds=None, **split_opts)[source]

Decompose this tensor network across a bipartition of outer indices.

This method matches Tensor.split by converting to a TNLinearOperator first. Note unless an iterative method is passed to method, the full dense tensor will be contracted.

trace(left_inds, right_inds, **contract_opts)[source]

Trace over left_inds joined with right_inds

to_dense(*inds_seq, to_qarray=False, **contract_opts)[source]

Convert this network into a dense array, with a single dimension for each group of inds in inds_seq. E.g. to convert several sites into a density matrix: TN.to_dense(('k0', 'k1'), ('b0', 'b1')).

to_qarray[source]
compute_reduced_factor(side, left_inds, right_inds, optimize='auto-hq', **contract_opts)[source]

Compute either the left or right ‘reduced factor’ of this tensor network. I.e., view as an operator, X, mapping left_inds to right_inds and compute L or R such that X = U_R @ R or X = L @ U_L, with U_R and U_L unitary operators that are not computed. Only dag(X) @ X or X @ dag(X) is contracted, which is generally cheaper than contracting X itself.

Parameters:
  • self (TensorNetwork) – The tensor network to compute the reduced factor of.

  • side ({'left', 'right'}) – Whether to compute the left or right reduced factor. If ‘right’ then dag(X) @ X is contracted, otherwise X @ dag(X).

  • left_inds (sequence of str) – The indices forming the left side of the operator.

  • right_inds (sequence of str) – The indices forming the right side of the operator.

  • contract_opts (dict, optional) – Options to pass to to_dense().

Return type:

array_like

insert_compressor_between_regions(ltags, rtags, max_bond=None, cutoff=1e-10, select_which='any', insert_into=None, new_tags=None, new_ltags=None, new_rtags=None, bond_ind=None, optimize='auto-hq', inplace=False, **compress_opts)[source]

Compute and insert a pair of ‘oblique’ projection tensors (see for example https://arxiv.org/abs/1905.02351) that effectively compresses between two regions of the tensor network. Useful for various approximate contraction methods such as HOTRG and CTMRG.

Parameters:
  • ltags (sequence of str) – The tags of the tensors in the left region.

  • rtags (sequence of str) – The tags of the tensors in the right region.

  • max_bond (int or None, optional) – The maximum bond dimension to use for the compression (i.e. shared by the two projection tensors). If None then the maximum is controlled by cutoff.

  • cutoff (float, optional) – The cutoff to use for the compression.

  • select_which ({'any', 'all', 'none'}, optional) – How to select the regions based on the tags, see select().

  • insert_into (TensorNetwork, optional) – If given, insert the new tensors into this tensor network, assumed to have the same relevant indices as self.

  • new_tags (str or sequence of str, optional) – The tag(s) to add to both the new tensors.

  • new_ltags (str or sequence of str, optional) – The tag(s) to add to the new left projection tensor.

  • new_rtags (str or sequence of str, optional) – The tag(s) to add to the new right projection tensor.

  • optimize (str or PathOptimizer, optional) – How to optimize the contraction of the projection tensors.

  • inplace (bool, optional) – Whether to perform the insertion in-place. If insert_into is supplied then this doesn’t matter, and that tensor network will be modified and returned.

Return type:

TensorNetwork

insert_compressor_between_regions_[source]
distance(*args, **kwargs)[source]
distance_normalized[source]
fit(tn_target, method='als', tol=1e-09, inplace=False, progbar=False, **fitting_opts)[source]

Optimize the entries of this tensor network with respect to a least squares fit of tn_target which should have the same outer indices. Depending on method this calls tensor_network_fit_als() or tensor_network_fit_autodiff(). The quantity minimized is:

\[D(A, B) = | A - B |_{\mathrm{fro}} = \mathrm{Tr} [(A - B)^{\dagger}(A - B)]^{1/2} = ( \langle A | A \rangle - 2 \mathrm{Re} \langle A | B \rangle + \langle B | B \rangle ) ^{1/2}\]
Parameters:
  • tn_target (TensorNetwork) – The target tensor network to try and fit the current one to.

  • method ({'als', 'autodiff'}, optional) – Whether to use alternating least squares (ALS) or automatic differentiation to perform the optimization. Generally ALS is better for simple geometries, autodiff better for complex ones.

  • tol (float, optional) – The target norm distance.

  • inplace (bool, optional) – Update the current tensor network in place.

  • progbar (bool, optional) – Show a live progress bar of the fitting process.

  • fitting_opts – Supplied to either tensor_network_fit_als() or tensor_network_fit_autodiff().

Returns:

tn_opt – The optimized tensor network.

Return type:

TensorNetwork

See also

tensor_network_fit_als, tensor_network_fit_autodiff, tensor_network_distance

fit_[source]
property tags
all_inds()[source]

Return a tuple of all indices in this network.

ind_size(ind)[source]

Find the size of ind.

inds_size(inds)[source]

Return the total size of dimensions corresponding to inds.

ind_sizes()[source]

Get dict of each index mapped to its size.

inner_inds()[source]

Tuple of interior indices, assumed to be any indices that appear twice or more (this only holds generally for non-hyper tensor networks).

outer_inds()[source]

Tuple of exterior indices, assumed to be any lone indices (this only holds generally for non-hyper tensor networks).

outer_dims_inds()[source]

Get the ‘outer’ pairs of dimension and indices, i.e. as if this tensor network was fully contracted.

outer_size()[source]

Get the total size of the ‘outer’ indices, i.e. as if this tensor network was fully contracted.

get_multibonds(include=None, exclude=None)[source]

Get a dict of ‘multibonds’ in this tensor network, i.e. groups of two or more indices that appear on exactly the same tensors and thus could be fused, for example.

Parameters:
  • include (sequence of str, optional) – Only consider these indices, by default all indices.

  • exclude (sequence of str, optional) – Ignore these indices, by default the outer indices of this TN.

Returns:

A dict mapping the tuple of indices that could be fused to the tuple of tensor ids they appear on.

Return type:

dict[tuple[str], tuple[int]]

get_hyperinds(output_inds=None)[source]

Get a tuple of all ‘hyperinds’, defined as those indices which don’t appear exactly twice on either the tensors or in the ‘outer’ (i.e. output) indices.

Note the default set of ‘outer’ indices is calculated as only those indices that appear once on the tensors, so these likely need to be manually specified, otherwise, for example, an index that appears on two tensors and the output will incorrectly be identified as non-hyper.

Parameters:

output_inds (None, str or sequence of str, optional) – The outer or output index or indices. If not specified then taken as every index that appears only once on the tensors (and thus non-hyper).

Returns:

The tensor network hyperinds.

Return type:

tuple[str]

compute_contracted_inds(*tids, output_inds=None)[source]

Get the indices describing the tensor contraction of tensors corresponding to tids.

squeeze(fuse=False, include=None, exclude=None, inplace=False)[source]

Drop singlet bonds and dimensions from this tensor network. If fuse=True also fuse all multibonds between tensors.

Parameters:
  • fuse (bool, optional) – Whether to fuse multibonds between tensors as well as squeezing.

  • include (sequence of str, optional) – Only squeeze these indices, by default all indices.

  • exclude (sequence of str, optional) – Ignore these indices, by default the outer indices of this TN.

  • inplace (bool, optional) – Whether to perform the squeeze and optional fuse inplace.

Return type:

TensorNetwork

squeeze_[source]
isometrize(method='qr', allow_no_left_inds=False, inplace=False)[source]

Project every tensor in this network into an isometric form, assuming they have left_inds marked.

Parameters:
  • method (str, optional) –

    The method used to generate the isometry. The options are:

    • ”qr”: use the Q factor of the QR decomposition of x with the constraint that the diagonal of R is positive.

    • ”svd”: uses U @ VH from the SVD decomposition of x. This is useful for finding the ‘closest’ isometric matrix to x, such as when it has been expanded with noise etc., but it is less stable for differentiation / optimization.

    • ”exp”: use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.

    • ”cayley”: use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with HIPS/autograd, for example), but more expensive for non-square x.

    • ”householder”: use the Householder reflection method directly. This requires that the backend implements “linalg.householder_product”.

    • ”torch_householder”: use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is "torch". This is generally the best parametrizing method for “torch” if available.

    • ”mgs”: use a python implementation of the modified Gram Schmidt method directly. This is slow if not compiled but a useful reference.

    Not all backends support all methods or differentiating through all methods.

  • allow_no_left_inds (bool, optional) – If True then allow tensors with no left_inds to be left alone, rather than raising an error.

  • inplace (bool, optional) – If True then perform the operation in-place.

Return type:

TensorNetwork

isometrize_[source]
unitize[source]
unitize_
randomize(dtype=None, seed=None, inplace=False, **randn_opts)[source]

Randomize every tensor in this TN - see quimb.tensor.tensor_core.Tensor.randomize().

Parameters:
  • dtype ({None, str}, optional) – The data type of the random entries. If left as the default None, then the data type of the current array will be used.

  • seed (None or int, optional) – Seed for the random number generator.

  • inplace (bool, optional) – Whether to perform the randomization inplace, by default False.

  • randn_opts – Supplied to randn().

Return type:

TensorNetwork

randomize_[source]
strip_exponent(tid_or_tensor, value=None)[source]

Scale the elements of the tensor corresponding to tid so that the norm of its array is some value, which defaults to 1. The log of the scaling factor, base 10, is then accumulated in the exponent attribute.

Parameters:
  • tid_or_tensor (int or Tensor) – The tensor identifier or the actual tensor.

  • value (None or float, optional) – The value to scale the norm of the tensor to.

distribute_exponent()[source]

Distribute the exponent p of this tensor network (i.e. corresponding to tn * 10**p) equally among all tensors.

equalize_norms(value=None, inplace=False)[source]

Make the Frobenius norm of every tensor in this TN equal without changing the overall value if value=None, or set the norm of every tensor to value by scalar multiplication only.

Parameters:
  • value (None or float, optional) – Set the norm of each tensor to this value specifically. If supplied the change in overall scaling will be accumulated in tn.exponent in the form of a base 10 power.

  • inplace (bool, optional) – Whether to perform the norm equalization inplace or not.

Return type:

TensorNetwork

equalize_norms_[source]
balance_bonds(inplace=False)[source]

Apply tensor_balance_bond() to all bonds in this tensor network.

Parameters:

inplace (bool, optional) – Whether to perform the bond balancing inplace or not.

Return type:

TensorNetwork

balance_bonds_[source]
fuse_multibonds(gauges=None, include=None, exclude=None, inplace=False)[source]

Fuse any multi-bonds (more than one index shared by the same pair of tensors) into a single bond.

Parameters:
  • gauges (None or dict[str, array_like], optional) – If supplied, also fuse the gauges contained in this dict.

  • include (sequence of str, optional) – Only consider these indices, by default all indices.

  • exclude (sequence of str, optional) – Ignore these indices, by default the outer indices of this TN.

fuse_multibonds_[source]
expand_bond_dimension(new_bond_dim, mode=None, rand_strength=None, rand_dist='normal', inds_to_expand=None, inplace=False)[source]

Increase the dimension of all or some of the bonds in this tensor network to at least new_bond_dim, optionally adding some random noise to the new entries.

Parameters:
  • new_bond_dim (int) – The minimum bond dimension to expand to, if the bond dimension is already larger than this it will be left unchanged.

  • rand_strength (float, optional) – The strength of random noise to add to the new array entries, if any. The noise is drawn from a normal distribution with standard deviation rand_strength.

  • inds_to_expand (sequence of str, optional) – The indices to expand, if not all.

  • inplace (bool, optional) – Whether to expand this tensor network in place, or return a new one.

Return type:

TensorNetwork

expand_bond_dimension_[source]
flip(inds, inplace=False)[source]

Flip the dimension corresponding to indices inds on all tensors that share it.

flip_[source]
rank_simplify(output_inds=None, equalize_norms=False, cache=None, max_combinations=500, inplace=False)[source]

Simplify this tensor network by performing contractions that don’t increase the rank of any tensors.

Parameters:
  • output_inds (sequence of str, optional) – Explicitly set which indices of the tensor network are output indices and thus should not be modified.

  • equalize_norms (bool or float) – Actively renormalize the tensors during the simplification process. Useful for very large TNs. The scaling factor will be stored as an exponent in tn.exponent.

  • cache (None or set) – Persistent cache used to mark already checked tensors.

  • inplace (bool, optional) – Whether to perform the rank simplification inplace.

Return type:

TensorNetwork

rank_simplify_[source]
diagonal_reduce(output_inds=None, atol=1e-12, cache=None, inplace=False)[source]

Find tensors with diagonal structure and collapse those axes. This will create a tensor ‘hyper’ network with indices repeated 2+ times, as such, output indices should be explicitly supplied when contracting, as they can no longer be automatically inferred. For example:

>>> tn_diag = tn.diagonal_reduce()
>>> tn_diag.contract(all, output_inds=[])
Parameters:
  • output_inds (sequence of str, optional) – Which indices to explicitly consider as outer legs of the tensor network and thus not replace. If not given, these will be taken as all the indices that appear once.

  • atol (float, optional) – When identifying diagonal tensors, the absolute tolerance with which to compare entries to zero.

  • cache (None or set) – Persistent cache used to mark already checked tensors.

  • inplace (bool, optional) – Whether to perform the diagonal reduction inplace.

Return type:

TensorNetwork

diagonal_reduce_[source]
antidiag_gauge(output_inds=None, atol=1e-12, cache=None, inplace=False)[source]

Flip the order of any bonds connected to antidiagonal tensors. Whilst this is just a gauge fixing (with the gauge being the flipped identity), it then allows diagonal_reduce to simplify those indices.

Parameters:
  • output_inds (sequence of str, optional) – Which indices to explicitly consider as outer legs of the tensor network and thus not flip. If not given, these will be taken as all the indices that appear once.

  • atol (float, optional) – When identifying antidiagonal tensors, the absolute tolerance with which to compare entries to zero.

  • cache (None or set) – Persistent cache used to mark already checked tensors.

  • inplace (bool, optional) – Whether to perform the antidiagonal gauging inplace.

Return type:

TensorNetwork

antidiag_gauge_[source]
column_reduce(output_inds=None, atol=1e-12, cache=None, inplace=False)[source]

Find bonds on this tensor network which have tensors where all but one column (of the respective index) is zero, allowing the ‘cutting’ of that bond.

Parameters:
  • output_inds (sequence of str, optional) – Which indices to explicitly consider as outer legs of the tensor network and thus not slice. If not given, these will be taken as all the indices that appear once.

  • atol (float, optional) – When identifying singlet column tensors, the absolute tolerance with which to compare entries to zero.

  • cache (None or set) – Persistent cache used to mark already checked tensors.

  • inplace (bool, optional) – Whether to perform the column reductions inplace.

Return type:

TensorNetwork

column_reduce_[source]
split_simplify(atol=1e-12, equalize_norms=False, cache=None, inplace=False, **split_opts)[source]

Find tensors which have low rank SVD decompositions across any combination of bonds and perform them.

Parameters:
  • atol (float, optional) – Cutoff used when attempting low rank decompositions.

  • equalize_norms (bool or float) – Actively renormalize the tensors during the simplification process. Useful for very large TNs. The scaling factor will be stored as an exponent in tn.exponent.

  • cache (None or set) – Persistent cache used to mark already checked tensors.

  • inplace (bool, optional) – Whether to perform the split simplification inplace.

split_simplify_[source]
gen_loops(max_loop_length=None)[source]

Generate sequences of tids that represent loops in the TN.

Parameters:

max_loop_length (None or int) – Set the maximum number of tensors that can appear in a loop. If None, wait until any loop is found and set that as the maximum length.

Yields:

tuple[int]

See also

gen_inds_loops

gen_inds_loops(max_loop_length=None)[source]

Generate all sequences of indices, up to a specified length, that represent loops in this tensor network. Unlike gen_loops this function will return the indices of the tensors in the loop rather than the tensor ids, allowing one to differentiate between e.g. a double loop and a ‘figure of eight’ loop.

Parameters:

max_loop_length (None or int) – Set the maximum number of indices that can appear in a loop. If None, wait until any loop is found and set that as the maximum length.

Yields:

tuple[str]

gen_inds_connected(max_length)[source]

Generate all index ‘patches’ of size up to max_length.

Parameters:

max_length (int) – The maximum number of indices in the patch.

Yields:

tuple[str]

See also

gen_inds_loops

_get_string_between_tids(tida, tidb)[source]
tids_are_connected(tids)[source]

Check whether nodes tids are connected.

Parameters:

tids (sequence of int) – Nodes to check.

Return type:

bool

compute_shortest_distances(tids=None, exclude_inds=())[source]

Compute the minimum graph distances between all or some nodes tids.

compute_hierarchical_linkage(tids=None, method='weighted', optimal_ordering=True, exclude_inds=())[source]
compute_hierarchical_ssa_path(tids=None, method='weighted', optimal_ordering=True, exclude_inds=(), are_sorted=False, linkage=None)[source]

Compute a hierarchical grouping of tids, as a ssa_path.

compute_hierarchical_ordering(tids=None, method='weighted', optimal_ordering=True, exclude_inds=(), linkage=None)[source]
compute_hierarchical_grouping(max_group_size, tids=None, method='weighted', optimal_ordering=True, exclude_inds=(), linkage=None)[source]

Group tids (by default, all tensors) into groups of size max_group_size or less, using a hierarchical clustering.

pair_simplify(cutoff=1e-12, output_inds=None, max_inds=10, cache=None, equalize_norms=False, max_combinations=500, inplace=False, **split_opts)[source]
pair_simplify_[source]
loop_simplify(output_inds=None, max_loop_length=None, max_inds=10, cutoff=1e-12, loops=None, cache=None, equalize_norms=False, inplace=False, **split_opts)[source]

Try to simplify this tensor network by identifying loops and checking for low-rank decompositions across groupings of the loops’ outer indices.

Parameters:
  • max_loop_length (None or int, optional) – Largest length of loop to search for; if not set, the size will be set to the length of the first (and shortest) loop found.

  • cutoff (float, optional) – Cutoff to use for the operator decomposition.

  • loops (None, sequence or callable) – Loops to check, or a function that generates them.

  • cache (set, optional) – For performance reasons can supply a cache for already checked loops.

  • inplace (bool, optional) – Whether to replace the loops inplace.

  • split_opts – Supplied to tensor_split().

Return type:

TensorNetwork

loop_simplify_[source]
full_simplify(seq='ADCR', output_inds=None, atol=1e-12, equalize_norms=False, cache=None, inplace=False, progbar=False, rank_simplify_opts=None, loop_simplify_opts=None, split_simplify_opts=None, custom_methods=(), split_method='svd')[source]

Perform a series of tensor network ‘simplifications’ in a loop until there is no more reduction in the number of tensors or indices. Note that apart from rank-reduction, the simplification methods make use of the non-zero structure of the tensors, and thus changes to this will potentially produce different simplifications.

Parameters:
  • seq (str, optional) –

    Which simplifications and which order to perform them in.

    • 'A' : stands for antidiag_gauge

    • 'D' : stands for diagonal_reduce

    • 'C' : stands for column_reduce

    • 'R' : stands for rank_simplify

    • 'S' : stands for split_simplify

    • 'L' : stands for loop_simplify

    If you want to keep the tensor network ‘simple’, i.e. with no hyperedges, then don’t use 'D' (in which case 'A' is also redundant).

  • output_inds (sequence of str, optional) – Explicitly set which indices of the tensor network are output indices and thus should not be modified. If not specified the tensor network is assumed to be a ‘standard’ one where indices that only appear once are the output indices.

  • atol (float, optional) – The absolute tolerance when identifying zero entries of tensors and performing low-rank decompositions.

  • equalize_norms (bool or float) – Actively renormalize the tensors during the simplification process. Useful for very large TNs. If True, the norms, in the form of stripped exponents, will be redistributed at the end. If an actual number, the final tensors will all have this norm, and the scaling factor will be stored as a base-10 exponent in tn.exponent.

  • cache (None or set) – A persistent cache for each simplification process to mark already processed tensors.

  • progbar (bool, optional) – Show a live progress bar of the simplification process.

  • inplace (bool, optional) – Whether to perform the simplification inplace.

Return type:

TensorNetwork

full_simplify_[source]
hyperinds_resolve(mode='dense', sorter=None, output_inds=None, inplace=False)[source]

Convert this into a regular tensor network, where all indices appear at most twice, by inserting COPY tensors or tensor networks for each hyper index.

Parameters:
  • mode ({'dense', 'mps', 'tree'}, optional) – What type of COPY tensor(s) to insert.

  • sorter (None or callable, optional) – If given, a function to sort the indices that a single hyperindex will be turned into. The function is called like tids.sort(key=sorter).

  • inplace (bool, optional) – Whether to insert the COPY tensors inplace.

Return type:

TensorNetwork

hyperinds_resolve_[source]
compress_simplify(output_inds=None, atol=1e-06, simplify_sequence_a='ADCRS', simplify_sequence_b='RPL', hyperind_resolve_mode='tree', hyperind_resolve_sort='clustering', final_resolve=False, split_method='svd', max_simplification_iterations=100, converged_tol=0.01, equalize_norms=True, progbar=False, inplace=False, **full_simplify_opts)[source]
compress_simplify_[source]
max_bond()[source]

Return the size of the largest bond in this network.

property shape
Actual, i.e. exterior, shape of this TensorNetwork.
property dtype
The dtype of this TensorNetwork, this is the minimal common type
of all the tensors data.
iscomplex()[source]
astype(dtype, inplace=False)[source]

Convert the type of all tensors in this network to dtype.

astype_[source]
__getstate__()[source]

Helper for pickle.

__setstate__(state)[source]
_repr_info()[source]

General info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_str()[source]

Render the general info as a string.

_repr_html_()[source]

Render this TensorNetwork as HTML, for Jupyter notebooks.

__str__()[source]

Return str(self).

__repr__()[source]

Return repr(self).

draw[source]
draw_3d[source]
draw_interactive[source]
draw_3d_interactive[source]
graph[source]
visualize_tensors[source]
quimb.tensor.tensor_2d.bonds(t1, t2)[source]

Get any indices connecting the Tensor(s) or TensorNetwork(s) t1 and t2.

quimb.tensor.tensor_2d.bonds_size(t1, t2)[source]

Get the size of the bonds linking tensors or tensor networks t1 and t2.

class quimb.tensor.tensor_2d.oset(it=())[source]

An ordered set which stores elements as the keys of dict (ordered as of python 3.6). ‘A few times’ slower than using a set directly for small sizes, but makes everything deterministic.

__slots__ = ('_d',)
classmethod _from_dict(d)[source]
classmethod from_dict(d)[source]

Public method makes sure to copy incoming dictionary.

copy()[source]
__deepcopy__(memo)[source]
add(k)[source]
discard(k)[source]
remove(k)[source]
clear()[source]
update(*others)[source]
union(*others)[source]
intersection_update(*others)[source]
intersection(*others)[source]
difference_update(*others)[source]
difference(*others)[source]
popleft()[source]
popright()[source]
pop[source]
__eq__(other)[source]

Return self==value.

__or__(other)[source]
__ior__(other)[source]
__and__(other)[source]
__iand__(other)[source]
__sub__(other)[source]
__isub__(other)[source]
__len__()[source]
__iter__()[source]
__contains__(x)[source]
__repr__()[source]

Return repr(self).

quimb.tensor.tensor_2d.oset_union(xs)[source]

Non-variadic ordered set union taking any sequence of iterables.

quimb.tensor.tensor_2d.rand_uuid(base='')[source]

Return a guaranteed unique, shortish identifier, optionally appended to base.

Examples

>>> rand_uuid()
'_2e1dae1b'
>>> rand_uuid('virt-bond')
'virt-bond_bf342e68'
quimb.tensor.tensor_2d.tags_to_oset(tags)[source]

Parse a tags argument into an ordered set.

quimb.tensor.tensor_2d.tensor_contract(*tensors, output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, drop_tags=False, **contract_opts)[source]

Contract a collection of tensors into a scalar or tensor, automatically aligning their indices and computing an optimized contraction path. The output tensor will have the union of tags from the input tensors.

Parameters:
  • tensors (sequence of Tensor) – The tensors to contract.

  • output_inds (sequence of str) – The output indices. These can be inferred if the contraction has no ‘hyper’ indices, in which case the output indices are those that appear only once in the input indices, and ordered as they appear in the inputs. For hyper indices or a specific ordering, these must be supplied.

  • optimize ({None, str, path_like, PathOptimizer}, optional) –

    The contraction path optimization strategy to use.

    • None: use the default strategy,

    • str: use the preset strategy with the given name,

    • path_like: use this exact path,

    • cotengra.HyperOptimizer: find the contraction using this optimizer, supports slicing,

    • cotengra.ContractionTree: use this exact tree, supports slicing,

    • opt_einsum.PathOptimizer: find the path using this optimizer.

    Contraction with cotengra might be a bit more efficient but the main reason would be to handle sliced contraction automatically, as well as the fact that it uses autoray internally.

  • get (str, optional) –

    What to return. If:

    • None (the default) - return the resulting scalar or Tensor.

    • 'expression' - return a callable expression that performs the contraction and operates on the raw arrays.

    • 'tree' - return the cotengra.ContractionTree describing the contraction.

    • 'path' - return the raw ‘path’ as a list of tuples.

    • 'symbol-map' - return the dict mapping indices to ‘symbols’ (single unicode letters) used internally by cotengra

    • 'path-info' - return the opt_einsum.PathInfo path object with detailed information such as flop cost. The symbol-map is also added to the quimb_symbol_map attribute.

  • backend ({'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional) – Which backend to use to perform the contraction. Supplied to cotengra.

  • preserve_tensor (bool, optional) – Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not.

  • drop_tags (bool, optional) – Whether to drop all tags from the output tensor. By default the output tensor will keep the union of all tags from the input tensors.

  • contract_opts – Passed to cotengra.array_contract.

Return type:

scalar or Tensor

quimb.tensor.tensor_2d.manhattan_distance(coo_a, coo_b)[source]
quimb.tensor.tensor_2d.nearest_neighbors(coo)[source]
quimb.tensor.tensor_2d.gen_2d_bonds(Lx, Ly, steppers=None, coo_filter=None, cyclic=False)[source]

Convenience function for tiling pairs of bond coordinates on a 2D lattice given a function like lambda i, j: (i + 1, j + 1).

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • steppers (callable or sequence of callable, optional) – Function(s) that take args (i, j) and generate another coordinate, thus defining a bond. Only valid steps are taken. If not given, defaults to nearest neighbor bonds.

  • coo_filter (callable) – Function that takes args (i, j) and only returns True if this is to be a valid starting coordinate.

Yields:

bond (tuple[tuple[int, int], tuple[int, int]]) – A pair of coordinates.

Examples

Generate nearest neighbor bonds:

>>> for bond in gen_2d_bonds(2, 2, [lambda i, j: (i, j + 1),
>>>                                 lambda i, j: (i + 1, j)]):
>>>     print(bond)
((0, 0), (0, 1))
((0, 0), (1, 0))
((0, 1), (1, 1))
((1, 0), (1, 1))

Generate next nearest neighbor diagonal bonds:

>>> for bond in gen_2d_bonds(2, 2, [lambda i, j: (i + 1, j + 1),
>>>                                 lambda i, j: (i + 1, j - 1)]):
>>>     print(bond)
((0, 0), (1, 1))
((0, 1), (1, 0))
quimb.tensor.tensor_2d.gen_2d_plaquette(coo0, steps)[source]

Generate a plaquette at site coo0 by stepping first in steps and then the reverse steps.

Parameters:
  • coo0 (tuple) – The coordinate of the first site in the plaquette.

  • steps (tuple) – The steps to take to generate the plaquette. Each element should be one of ('x+', 'x-', 'y+', 'y-').

Yields:

coo (tuple) – The coordinates of the sites in the plaquette, including the last site which will be the same as the first.

quimb.tensor.tensor_2d.gen_2d_plaquettes(Lx, Ly, tiling)[source]

Generate a tiling of plaquettes in a square 2D lattice.

Parameters:
  • Lx (int) – The length of the lattice in the x direction.

  • Ly (int) – The length of the lattice in the y direction.

  • tiling ({'1', '2', 'full'}) –

    The tiling to use:

    • ’1’: plaquettes in a checkerboard pattern, such that each edge is covered by at most one plaquette.

    • ’2’ or ‘full’: dense tiling of plaquettes. All bulk edges will be covered twice.

Yields:

plaquette (tuple[tuple[int]]) – The coordinates of the sites in each plaquette, including the last site which will be the same as the first.

quimb.tensor.tensor_2d.gen_2d_strings(Lx, Ly)[source]

Generate all length-wise strings in a square 2D lattice.

class quimb.tensor.tensor_2d.Rotator2D(tn, xrange, yrange, from_which, stepsize=1)[source]

Object for rotating coordinates and various contraction functions so that the core algorithms only have to be written once, without the actual TN itself having to be modified.

Parameters:
  • tn (TensorNetwork2D) – The tensor network to rotate coordinates for.

  • xrange (tuple[int, int]) – The range of x-coordinates to range over.

  • yrange (tuple[int, int]) – The range of y-coordinates to range over.

  • from_which ({'xmin', 'xmax', 'ymin', 'ymax'}) – The direction to sweep from.

  • stepsize (int, optional) – The step size to use when sweeping.

sweep_other()
cyclic_x()
cyclic_y()
get_jnext(j)[source]
get_opposite_env_fn()[source]

Get the function and location label for contracting boundaries in the opposite direction to main sweep.

quimb.tensor.tensor_2d.BOUNDARY_SEQUENCE_VALID
quimb.tensor.tensor_2d.BOUNDARY_SEQUENCE_MAP
quimb.tensor.tensor_2d.parse_boundary_sequence(sequence)[source]

Ensure sequence is a tuple of boundary sequence strings from {'xmin', 'xmax', 'ymin', 'ymax'}

class quimb.tensor.tensor_2d.TensorNetwork2D(ts=(), *, virtual=False, check_collisions=True)[source]

Bases: quimb.tensor.tensor_arbgeom.TensorNetworkGen

Mixin class for tensor networks with a square lattice two-dimensional structure, indexed by [{row},{column}] so that:

             'Y{j}'
                v

i=Lx-1 ●──●──●──●──●──●──   ──●
       |  |  |  |  |  |       |
             ...
       |  |  |  |  |  | 'I{i},{j}' = 'I3,5' e.g.
i=3    ●──●──●──●──●──●──
       |  |  |  |  |  |       |
i=2    ●──●──●──●──●──●──   ──●    <== 'X{i}'
       |  |  |  |  |  |  ...  |
i=1    ●──●──●──●──●──●──   ──●
       |  |  |  |  |  |       |
i=0    ●──●──●──●──●──●──   ──●

     j=0, 1, 2, 3, 4, 5    j=Ly-1

This implies the following conventions:

  • the ‘up’ bond is coordinates (i, j), (i + 1, j)

  • the ‘down’ bond is coordinates (i, j), (i - 1, j)

  • the ‘right’ bond is coordinates (i, j), (i, j + 1)

  • the ‘left’ bond is coordinates (i, j), (i, j - 1)

_NDIMS = 2
_EXTRA_PROPS = ('_site_tag_id', '_x_tag_id', '_y_tag_id', '_Lx', '_Ly')
_compatible_2d(other)[source]

Check whether self and other are compatible 2D tensor networks such that they can remain a 2D tensor network when combined.

combine(other, *, virtual=False, check_collisions=True)[source]

Combine this tensor network with another, returning a new tensor network. If the two are compatible, cast the resulting tensor network to a TensorNetwork2D instance.

Parameters:
  • other (TensorNetwork2D or TensorNetwork) – The other tensor network to combine with.

  • virtual (bool, optional) – Whether the new tensor network should copy all the incoming tensors (False, the default), or view them as virtual (True).

  • check_collisions (bool, optional) – Whether to check for index collisions between the two tensor networks before combining them. If True (the default), any inner indices that clash will be mangled.

Return type:

TensorNetwork2D or TensorNetwork

property Lx
The number of rows.
property Ly
The number of columns.
property nsites
The total number of sites.
site_tag(i, j=None)[source]

The name of the tag specifying the tensor at site (i, j).

property x_tag_id
The string specifier for tagging each row of this 2D TN.
x_tag(i)[source]
property x_tags
A tuple of all of the ``Lx`` different row tags.
row_tag[source]
row_tags
property y_tag_id
The string specifier for tagging each column of this 2D TN.
y_tag(j)[source]
property y_tags
A tuple of all of the ``Ly`` different column tags.
col_tag[source]
col_tags
maybe_convert_coo(x)[source]

Check if x is a tuple of two ints and convert to the corresponding site tag if so.

_get_tids_from_tags(tags, which='all')[source]

This is the function that lets coordinates such as (i, j) be used for many ‘tag’ based functions.

gen_site_coos()[source]

Generate coordinates for all the sites in this 2D TN.

gen_bond_coos()[source]

Generate pairs of coordinates for all the bonds in this 2D TN.

gen_horizontal_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i, j + 1).

gen_horizontal_even_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i, j + 1) where j is even, which thus don’t overlap at all.

gen_horizontal_odd_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i, j + 1) where j is odd, which thus don’t overlap at all.

gen_vertical_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j).

gen_vertical_even_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j) where i is even, which thus don’t overlap at all.

gen_vertical_odd_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j) where i is odd, which thus don’t overlap at all.

gen_diagonal_left_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j - 1).

gen_diagonal_left_even_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j - 1) where j is even, which thus don’t overlap at all.

gen_diagonal_left_odd_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j - 1) where j is odd, which thus don’t overlap at all.

gen_diagonal_right_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j + 1).

gen_diagonal_right_even_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j + 1) where i is even, which thus don’t overlap at all.

gen_diagonal_right_odd_bond_coos()[source]

Generate all coordinate pairs like (i, j), (i + 1, j + 1) where i is odd, which thus don’t overlap at all.

gen_diagonal_bond_coos()[source]

Generate all next nearest neighbor diagonal coordinate pairs.

valid_coo(coo, xrange=None, yrange=None)[source]

Check whether coo is in-bounds.

Parameters:
  • coo ((int, int)) – The coordinate to check.

  • xrange ((int, int), optional) – The range of allowed values for the x and y coordinates.

  • yrange ((int, int), optional) – The range of allowed values for the x and y coordinates.

Return type:

bool

get_ranges_present()[source]

Return the range of site coordinates present in this TN.

Returns:

xrange, yrange – The minimum and maximum site coordinates present in each direction.

Return type:

tuple[tuple[int, int]]

is_cyclic_x(j=None, imin=None, imax=None)[source]

Check if the x dimension is cyclic (periodic), specifically whether a bond exists between (imin, j) and (imax, j), with default values of imin = 0 and imax = Lx - 1, and j at the center of the lattice. If imin and imax are adjacent then this is considered False, since there is no ‘extra’ connectivity.

is_cyclic_y(i=None, jmin=None, jmax=None)[source]

Check if the y dimension is cyclic (periodic), specifically whether a bond exists between (i, jmin) and (i, jmax), with default values of jmin = 0 and jmax = Ly - 1, and i at the center of the lattice. If jmin and jmax are adjacent then this is considered False, since there is no ‘extra’ connectivity.

__getitem__(key)[source]

Key based tensor selection, checking for integer based shortcut.

show()[source]

Print a unicode schematic of this 2D TN and its bond dimensions.

_repr_info()[source]

General info to show in various reprs. Subclasses can add more relevant info to this dict.

flatten(fuse_multibonds=True, inplace=False)[source]

Contract all tensors corresponding to each site into one.

flatten_[source]
gen_pairs(xrange=None, yrange=None, xreverse=False, yreverse=False, coordinate_order='xy', xstep=None, ystep=None, stepping_order='xy', step_only=None)[source]

Helper function for generating pairs of coordinates for all bonds within a certain range, optionally specifying an order.

Parameters:
  • xrange ((int, int), optional) – The range of allowed values for the x coordinate.

  • yrange ((int, int), optional) – The range of allowed values for the y coordinate.

  • xreverse (bool, optional) – Whether to reverse the order of the x sweep.

  • yreverse (bool, optional) – Whether to reverse the order of the y sweep.

  • coordinate_order (str, optional) – The order in which to sweep the x and y coordinates. Earlier dimensions will change slower. If the corresponding range has size 1 then that dimension doesn’t need to be specified.

  • xstep (int, optional) – When generating a bond, step in this direction along x to yield the neighboring coordinate. By default follows xreverse.

  • ystep (int, optional) – When generating a bond, step in this direction along y to yield the neighboring coordinate. By default follows yreverse.

  • stepping_order (str, optional) – The order in which to step the x and y coordinates to generate bonds. Does not need to include all dimensions.

  • step_only (int, optional) – Only perform the ith steps in stepping_order, used to interleave canonizing and compressing for example.

Yields:

coo_a, coo_b (((int, int), (int, int)))
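The sweep logic can be sketched standalone (a simplified model, not quimb's implementation: steps are fixed at +1, and stepping_order / step_only are omitted):

```python
from itertools import product

def gen_pairs(xrange, yrange, coordinate_order="xy",
              xreverse=False, yreverse=False):
    """Sketch of bond-pair generation over a rectangular patch:
    sweep the coordinates in ``coordinate_order`` (earlier dimensions
    change slower) and yield each coordinate together with its +1
    neighbour in x and in y, when that neighbour lies in range."""
    xs = list(range(xrange[0], xrange[1] + 1))
    ys = list(range(yrange[0], yrange[1] + 1))
    if xreverse:
        xs = xs[::-1]
    if yreverse:
        ys = ys[::-1]
    if coordinate_order == "xy":
        coos = [(i, j) for i, j in product(xs, ys)]
    else:  # "yx": y changes slower
        coos = [(i, j) for j, i in product(ys, xs)]
    for i, j in coos:
        if i + 1 <= xrange[1]:
            yield (i, j), (i + 1, j)  # vertical bond
        if j + 1 <= yrange[1]:
            yield (i, j), (i, j + 1)  # horizontal bond

# a 2x2 patch has exactly 4 bonds
print(list(gen_pairs((0, 1), (0, 1))))
```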

canonize_plane(xrange, yrange, equalize_norms=False, canonize_opts=None, **gen_pair_opts)[source]

Canonize every pair of tensors within a subrange, optionally specifying an order in which to visit those pairs.

canonize_row(i, sweep, yrange=None, **canonize_opts)[source]

Canonize all or part of a row.

If sweep == 'right' then:

 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─       ─●──●──●──●──●──●──●─
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─  ==>  ─●──>──>──>──>──o──●─ row=i
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─       ─●──●──●──●──●──●──●─
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
    .           .               .           .
    jstart      jstop           jstart      jstop

If sweep == 'left' then:

 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─       ─●──●──●──●──●──●──●─
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─  ==>  ─●──o──<──<──<──<──●─ row=i
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
─●──●──●──●──●──●──●─       ─●──●──●──●──●──●──●─
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
    .           .               .           .
    jstop       jstart          jstop       jstart

Does not yield an orthogonal form in the same way as in 1D.

Parameters:
  • i (int) – Which row to canonize.

  • sweep ({'right', 'left'}) – Which direction to sweep in.

  • yrange (None or (int, int), optional) – The range (jstart, jstop) of columns to canonize. Defaults to the whole row.

  • canonize_opts – Supplied to canonize_between.
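The pairwise step underlying such a sweep can be sketched on a plain chain of matrices (numpy only, not the quimb routine; in the 2D network the extra vertical bonds mean this does not produce a true orthogonal form, as noted above):

```python
import numpy as np

def canonize_sweep_right(mats):
    """QR-sweep a chain of matrices left to right: afterwards every
    matrix but the last is an isometry (the '>' in the diagrams)
    and the overall product is unchanged."""
    mats = [m.copy() for m in mats]
    for k in range(len(mats) - 1):
        q, r = np.linalg.qr(mats[k])    # mats[k] = q @ r
        mats[k] = q                     # left tensor becomes isometric
        mats[k + 1] = r @ mats[k + 1]   # absorb r into the neighbour
    return mats

rng = np.random.default_rng(42)
chain = [rng.normal(size=(4, 4)) for _ in range(5)]
canon = canonize_sweep_right(chain)

# the product is preserved ...
assert np.allclose(np.linalg.multi_dot(chain), np.linalg.multi_dot(canon))
# ... and every swept matrix is now an isometry: q.T @ q = I
for q in canon[:-1]:
    assert np.allclose(q.T @ q, np.eye(4))
```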

canonize_column(j, sweep, xrange=None, **canonize_opts)[source]

Canonize all or part of a column.

If sweep='up' then:

 |  |  |         |  |  |
─●──●──●─       ─●──●──●─
 |  |  |         |  |  |
─●──●──●─       ─●──o──●─ istop
 |  |  |   ==>   |  |  |
─●──●──●─       ─●──^──●─
 |  |  |         |  |  |
─●──●──●─       ─●──^──●─ istart
 |  |  |         |  |  |
─●──●──●─       ─●──●──●─
 |  |  |         |  |  |
    .               .
    j               j

If sweep='down' then:

 |  |  |         |  |  |
─●──●──●─       ─●──●──●─
 |  |  |         |  |  |
─●──●──●─       ─●──v──●─ istart
 |  |  |   ==>   |  |  |
─●──●──●─       ─●──v──●─
 |  |  |         |  |  |
─●──●──●─       ─●──o──●─ istop
 |  |  |         |  |  |
─●──●──●─       ─●──●──●─
 |  |  |         |  |  |
    .               .
    j               j

Does not yield an orthogonal form in the same way as in 1D.

Parameters:
  • j (int) – Which column to canonize.

  • sweep ({'up', 'down'}) – Which direction to sweep in.

  • xrange (None or (int, int), optional) – The range of rows to canonize.

  • canonize_opts – Supplied to canonize_between.

canonize_row_around(i, around=(0, 1))[source]
compress_plane(xrange, yrange, max_bond=None, cutoff=1e-10, equalize_norms=False, compress_opts=None, **gen_pair_opts)[source]

Compress every pair of tensors within a subrange, optionally specifying an order in which to visit those pairs.

compress_row(i, sweep, yrange=None, max_bond=None, cutoff=1e-10, equalize_norms=False, compress_opts=None)[source]

Compress all or part of a row.

If sweep == 'right' then:

 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━       ━●━━●━━●━━●━━●━━●━━●━
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━  ━━>  ━●━━>──>──>──>──o━━●━ row=i
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━       ━●━━●━━●━━●━━●━━●━━●━
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
    .           .               .           .
    jstart      jstop           jstart      jstop

If sweep == 'left' then:

 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━       ━●━━●━━●━━●━━●━━●━━●━
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━  ━━>  ━●━━o──<──<──<──<━━●━ row=i
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
━●━━●━━●━━●━━●━━●━━●━       ━●━━●━━●━━●━━●━━●━━●━
 |  |  |  |  |  |  |         |  |  |  |  |  |  |
    .           .               .           .
    jstop       jstart          jstop       jstart

Does not yield an orthogonal form in the same way as in 1D.

Parameters:
  • i (int) – Which row to compress.

  • sweep ({'right', 'left'}) – Which direction to sweep in.

  • yrange (tuple[int, int] or None) – The range of columns to compress.

  • max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff, which is not recommended in 2D.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.

  • compress_opts (None or dict, optional) – Supplied to compress_between().
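The pairwise compression at the heart of this sweep can be sketched at the matrix level (a simplified stand-in for compress_between(), with an absolute cutoff rather than quimb's options):

```python
import numpy as np

def compress_bond(a, b, max_bond, cutoff=1e-8):
    """Sketch of a single pairwise compression: truncate the bond
    between a (m x k) and b (k x n) via an SVD of their product."""
    u, s, vh = np.linalg.svd(a @ b, full_matrices=False)
    # keep at most max_bond singular values above the cutoff
    keep = min(max_bond, int(np.sum(s > cutoff)))
    sq = np.sqrt(s[:keep])
    # split the singular values evenly between the two new tensors
    return u[:, :keep] * sq, sq[:, None] * vh[:keep]

rng = np.random.default_rng(0)
# build a pair whose product has rank 3, joined by a size-8 bond
left = rng.normal(size=(6, 3)) @ rng.normal(size=(3, 8))
right = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 6))
a, b = compress_bond(left, right, max_bond=5)
assert a.shape[1] == 3                   # bond compressed 8 -> 3
assert np.allclose(a @ b, left @ right)  # product preserved
```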

compress_column(j, sweep, xrange=None, max_bond=None, cutoff=1e-10, equalize_norms=False, compress_opts=None)[source]

Compress all or part of a column.

If sweep='up' then:

 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──●──●─
 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──o──●─  .
 ┃  ┃  ┃   ==>   ┃  |  ┃   .
─●──●──●─       ─●──^──●─  . xrange
 ┃  ┃  ┃         ┃  |  ┃   .
─●──●──●─       ─●──^──●─  .
 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──●──●─
 ┃  ┃  ┃         ┃  ┃  ┃
    .               .
    j               j

If sweep='down' then:

 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──●──●─
 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──v──●─ .
 ┃  ┃  ┃   ==>   ┃  |  ┃  .
─●──●──●─       ─●──v──●─ . xrange
 ┃  ┃  ┃         ┃  |  ┃  .
─●──●──●─       ─●──o──●─ .
 ┃  ┃  ┃         ┃  ┃  ┃
─●──●──●─       ─●──●──●─
 ┃  ┃  ┃         ┃  ┃  ┃
    .               .
    j               j

Does not yield an orthogonal form in the same way as in 1D.

Parameters:
  • j (int) – Which column to compress.

  • sweep ({'up', 'down'}) – Which direction to sweep in.

  • xrange (None or (int, int), optional) – The range of rows to compress.

  • max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff, which is not recommended in 2D.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.

  • compress_opts (None or dict, optional) – Supplied to compress_between().

_contract_boundary_core_via_1d(xrange, yrange, from_which, max_bond, cutoff=1e-10, method='dm', layer_tags=None, **compress_opts)[source]
_contract_boundary_core(xrange, yrange, from_which, max_bond, cutoff=1e-10, canonize=True, layer_tags=None, compress_late=True, sweep_reverse=False, equalize_norms=False, compress_opts=None, canonize_opts=None)[source]
_contract_boundary_full_bond(xrange, yrange, from_which, max_bond, cutoff=0.0, method='eigh', renorm=False, optimize='auto-hq', opposite_envs=None, equalize_norms=False, contract_boundary_opts=None)[source]

Contract the boundary of this 2D TN using the ‘full bond’ environment information obtained from a boundary contraction in the opposite direction.

Parameters:
  • xrange ((int, int) or None, optional) – The range of rows to contract and compress.

  • yrange ((int, int)) – The range of columns to contract and compress.

  • from_which ({'xmin', 'ymin', 'xmax', 'ymax'}) – Which direction to contract the rectangular patch from.

  • max_bond (int) – The maximum boundary dimension, AKA ‘chi’. By default used for the opposite direction environment contraction as well.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction, only for the opposite direction environment contraction.

  • method ({'eigh', 'eig', 'svd', 'biorthog'}, optional) – Which similarity decomposition method to use to compress the full bond environment.

  • renorm (bool, optional) – Whether to renormalize the isometric projection or not.

  • optimize (str or PathOptimizer, optional) – Contraction optimizer to use for the exact contractions.

  • opposite_envs (dict, optional) – If supplied, the opposite environments will be fetched or lazily computed into this dict depending on whether they are missing.

  • contract_boundary_opts – Other options given to the opposite direction environment contraction.

_contract_boundary_projector(xrange, yrange, from_which, max_bond=None, cutoff=1e-10, lazy=False, equalize_norms=False, optimize='auto-hq', compress_opts=None)[source]

Contract the boundary of this 2D tensor network by computing and inserting explicit local projector tensors, which can optionally be left uncontracted. Multilayer networks are naturally supported.

Parameters:
  • xrange (tuple) – The range of x indices to contract.

  • yrange (tuple) – The range of y indices to contract.

  • from_which ({'xmin', 'xmax', 'ymin', 'ymax'}) – From which boundary to contract.

  • max_bond (int, optional) – The maximum bond dimension to contract to. If None (default), compression is left to cutoff.

  • cutoff (float, optional) – The cutoff to use for boundary compression.

  • lazy (bool, optional) – Whether to leave the boundary tensors uncontracted. If False (the default), the boundary tensors are contracted and the resulting boundary has a single tensor per site.

  • equalize_norms (bool, optional) – Whether to actively absorb the norm of modified tensors into self.exponent.

  • optimize (str or PathOptimizer, optional) – The contract path optimization to use when forming the projector tensors.

  • compress_opts (dict, optional) – Other options to pass to svd_truncated().
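The projector construction can be sketched at the matrix level (a simplified model of the oblique-projector idea; quimb works with full tensor environments, not bare matrices):

```python
import numpy as np

def make_projectors(a, b, max_bond):
    """Sketch of local projector construction: from the 'environment'
    a @ b across a bond, build pa, pb so that inserting pa @ pb on the
    bond reproduces a @ b on its dominant rank-``max_bond`` subspace."""
    u, s, vh = np.linalg.svd(a @ b, full_matrices=False)
    k = min(max_bond, len(s))
    s_isqrt = 1.0 / np.sqrt(s[:k])
    # oblique projectors: pa @ pb acts as (approximate) identity on
    # the bond, restricted to the dominant singular subspace
    pa = (b @ vh[:k].conj().T) * s_isqrt             # (bond, k)
    pb = (s_isqrt[:, None] * u[:, :k].conj().T) @ a  # (k, bond)
    return pa, pb

rng = np.random.default_rng(1)
# a @ b has rank 2, but the shared bond has size 6
a = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 6))
b = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 5))
pa, pb = make_projectors(a, b, max_bond=2)
# contracting the projectors into a and b shrinks the bond 6 -> 2 ...
assert (a @ pa).shape == (5, 2) and (pb @ b).shape == (2, 5)
# ... while reproducing the original contraction
assert np.allclose((a @ pa) @ (pb @ b), a @ b)
```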

contract_boundary_from(xrange, yrange, from_which, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, sweep_reverse=False, compress_opts=None, inplace=False, **contract_boundary_opts)[source]

Unified entrypoint for contracting any rectangular patch of tensors from any direction, with any boundary method.

contract_boundary_from_[source]
contract_boundary_from_xmin(xrange, yrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, sweep_reverse=False, compress_opts=None, inplace=False, **contract_boundary_opts)[source]

Contract a 2D tensor network inwards from the bottom, canonizing and compressing (left to right) along the way. If layer_tags is None this looks like:

a) contract

│  │  │  │  │
●──●──●──●──●       │  │  │  │  │
│  │  │  │  │  -->  ●══●══●══●══●
●──●──●──●──●

b) optionally canonicalize

│  │  │  │  │
●══●══<══<══<

c) compress in opposite direction

│  │  │  │  │  -->  │  │  │  │  │  -->  │  │  │  │  │
>──●══●══●══●  -->  >──>──●══●══●  -->  >──>──>──●══●
.  .           -->     .  .        -->        .  .

If layer_tags is specified, then each layer is contracted in and compressed separately, generally resulting in a lower memory scaling. For two layer tags this looks like:

a) first flatten the outer boundary only

│ ││ ││ ││ ││ │       │ ││ ││ ││ ││ │
●─○●─○●─○●─○●─○       ●─○●─○●─○●─○●─○
│ ││ ││ ││ ││ │  ==>   ╲│ ╲│ ╲│ ╲│ ╲│
●─○●─○●─○●─○●─○         ●══●══●══●══●

b) contract and compress a single layer only

│ ││ ││ ││ ││ │
│ ○──○──○──○──○
│╱ │╱ │╱ │╱ │╱
●══<══<══<══<

c) contract and compress the next layer

╲│ ╲│ ╲│ ╲│ ╲│
 >══>══>══>══●
Parameters:
  • xrange ((int, int)) – The range of rows to compress (inclusive).

  • yrange ((int, int) or None, optional) – The range of columns to compress (inclusive), sweeping along with canonization and compression. Defaults to all columns.

  • max_bond (int, optional) – The maximum boundary dimension, AKA ‘chi’. The default of None means truncation is left purely to cutoff, which is not recommended in 2D.

  • cutoff (float, optional) – Cut-off value used to truncate singular values in the boundary contraction.

  • canonize (bool, optional) – Whether to sweep one way with canonization before compressing.

  • mode ({'mps', 'full-bond'}, optional) – How to perform the compression on the boundary.

  • layer_tags (None or sequence[str], optional) – If None, all tensors at each coordinate pair [(i, j), (i + 1, j)] will be first contracted. If specified, then the outer tensor at (i, j) will be contracted with the tensor specified by [(i + 1, j), layer_tag], for each layer_tag in layer_tags.

  • sweep_reverse (bool, optional) – Which way to perform the compression sweep, which has an effect on which tensors end up being canonized. Setting this to true sweeps the compression from largest to smallest coordinates.

  • compress_opts (None or dict, optional) – Supplied to compress_between().

  • inplace (bool, optional) – Whether to perform the contraction inplace or not.

contract_boundary_from_xmin_[source]
contract_boundary_from_xmax(xrange, yrange=None, max_bond=None, *, cutoff=1e-10, canonize=True, mode='mps', layer_tags=None, inplace=False, sweep_reverse=False, compress_opts=None, **contract_boundary_opts)[source]

Contract a 2D tensor network inwards from the top, canonizing and compressing (right to left) along the way. If layer_tags is None this looks like:

a) contract

●──●──●──●──●
|  |  |  |  |  -->  ●══●══●══●══●
●──●──●──●──●       |  |  |  |  |
|  |  |  |  |

b) optionally canonicalize

●══●══<══<══<
|  |  |  |  |

c) compress in opposite direction

>──●══●══●══●  -->  >──>──●══●══●  -->  >──>──>──●══●
|  |  |  |  |  -->  |  |  |  |  |  -->  |  |  |  |  |
.  .           -->     .  .        -->        .  .

If layer_tags is specified, then each