quimb.tensor.tensor_dmrg

DMRG-like variational algorithms, but in tensor network language.

Attributes

Exceptions

DMRGError

Common base class for all non-exit exceptions.

Classes

IdentityLinearOperator

Get a LinearOperator representation of the identity operator,

Tensor

A labelled, tagged n-dimensional array. The index labels are used

TNLinearOperator

Get a linear operator - something that replicates the matrix-vector

MovingEnvironment

Helper class for efficiently moving the effective 'environment' of a

DMRG

Density Matrix Renormalization Group variational groundstate search.

DMRG1

Simple alias of one site DMRG.

DMRG2

Simple alias of two site DMRG.

DMRGX

Class implementing DMRG-X [1], whereby local effective energy eigenstates

Functions

prod(iterable)

tensor_contract(*tensors[, output_inds, optimize, ...])

Contract a collection of tensors into a scalar or tensor, automatically

asarray(array)

Maybe convert data for a tensor to use. If array already has a

get_default_opts([cyclic])

Get the default advanced settings for DMRG.

get_cyclic_canonizer(k, b[, inv_tol])

Get a function to use as a callback for MovingEnvironment that

parse_2site_inds_dims(k, b, i)

Sort out the dims and inds of:

Module Contents

quimb.tensor.tensor_dmrg.prod(iterable)
quimb.tensor.tensor_dmrg.eigh
class quimb.tensor.tensor_dmrg.IdentityLinearOperator(size, factor=1)[source]

Bases: scipy.sparse.linalg.LinearOperator

Get a LinearOperator representation of the identity operator, scaled by factor.

Parameters:
  • size (int) – The size of the identity.

  • factor (float) – The coefficient of the identity.

Examples

>>> I3 = IdentityLinearOperator(100, 1/3)
>>> p = rand_ket(100)
>>> np.allclose(I3 @ p, p / 3)
True
_matvec(vec)[source]

Default matrix-vector multiplication handler.

If self is a linear operator of shape (M, N), then this method will be called on a shape (N,) or (N, 1) ndarray, and should return a shape (M,) or (M, 1) ndarray.

This default implementation falls back on _matmat, so defining that will define matrix-vector multiplication as well.

_rmatvec(vec)[source]

Default implementation of _rmatvec; defers to adjoint.

_matmat(mat)[source]

Default matrix-matrix multiplication handler.

Falls back on the user-defined _matvec method, so defining that will define matrix multiplication (though in a very suboptimal way).

class quimb.tensor.tensor_dmrg.Tensor(data=1.0, inds=(), tags=None, left_inds=None)[source]

A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations. The tags are used to identify the tensor within networks, and are combined when tensors are contracted together.

Parameters:
  • data (numpy.ndarray) – The n-dimensional data.

  • inds (sequence of str) – The index labels for each dimension. Must match the number of dimensions of data.

  • tags (sequence of str, optional) – Tags with which to identify and group this tensor. These will be converted into an oset.

  • left_inds (sequence of str, optional) – Which, if any, indices to group as ‘left’ indices of an effective matrix. This can be useful, for example, when automatically applying unitary constraints to impose a certain flow on a tensor network but at the atomistic (Tensor) level.

Examples

Basic construction:

>>> from quimb import randn
>>> from quimb.tensor import Tensor
>>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'})
>>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'})

Indices are automatically aligned, and tags combined, when contracting:

>>> X @ Y
Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'})
__slots__ = ('_data', '_inds', '_tags', '_left_inds', '_owners')
_set_data(data)[source]
_set_inds(inds)[source]
_set_tags(tags)[source]
_set_left_inds(left_inds)[source]
get_params()[source]

A simple function that returns the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

set_params(params)[source]

A simple function that sets the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

copy(deep=False, virtual=False)[source]

Copy this tensor.

Note

By default (deep=False), the underlying array will not be copied.

Parameters:
  • deep (bool, optional) – Whether to copy the underlying data as well.

  • virtual (bool, optional) – To conveniently mimic the behaviour of taking a virtual copy of a tensor network, this simply returns self.

__copy__[source]
property data
property inds
property tags
property left_inds
check()[source]

Do some basic diagnostics on this tensor, raising errors if something is wrong.

property owners
add_owner(tn, tid)[source]

Add tn as owner of this Tensor - its tag and ind maps will be updated whenever this tensor is retagged or reindexed.

remove_owner(tn)[source]

Remove TensorNetwork tn as an owner of this Tensor.

check_owners()[source]

Check if this tensor is ‘owned’ by any alive TensorNetworks. Also trim any weakrefs to dead TensorNetworks.

_apply_function(fn)[source]
modify(**kwargs)[source]

Overwrite the data of this tensor in place.

Parameters:
  • data (array, optional) – New data.

  • apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.

  • inds (sequence of str, optional) – New tuple of indices.

  • tags (sequence of str, optional) – New tags.

  • left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.

apply_to_arrays(fn)[source]

Apply the function fn to the underlying data array(s). This is meant for changing how the raw arrays are backed (e.g. converting between dtypes or libraries) but not their ‘numerical meaning’.

isel(selectors, inplace=False)[source]

Select specific values for some dimensions/indices of this tensor, thereby removing them. Analogous to X[:, :, 3, :, :] with arrays. The indices to select from can be specified either by integer, in which case the corresponding index is removed, or by a slice.

Parameters:
  • selectors (dict[str, int], dict[str, slice]) – Mapping of index(es) to which value to take.

  • inplace (bool, optional) – Whether to select inplace or not.

Return type:

Tensor

Examples

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.isel({'b': -1})
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())

See also

TensorNetwork.isel

isel_[source]
add_tag(tag)[source]

Add a tag or multiple tags to this tensor. Unlike self.tags.add this also updates any TensorNetwork objects viewing this Tensor.

expand_ind(ind, size, mode=None, rand_strength=None, rand_dist='normal')[source]

Inplace increase the size of the dimension of ind; the new array entries will be filled with zeros by default.

Parameters:
  • ind (str) – Name of the index to expand.

  • size (int, optional) – Size of the expanded index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select zeros or random depending on whether rand_strength is non-zero.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.
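
Examples

A minimal sketch (expand_ind acts inplace, so only the resulting shape is shown; rand_tensor as used elsewhere in these docs):

>>> T = rand_tensor((2, 3), inds=('a', 'b'))
>>> T.expand_ind('b', 5)
>>> T.shape
(2, 5)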

new_ind(name, size=1, axis=0, mode=None, rand_strength=None, rand_dist='normal')[source]

Inplace add a new index - a named dimension. If size is specified to be greater than one then the new array entries will be filled with zeros.

Parameters:
  • name (str) – Name of the new index.

  • size (int, optional) – Size of the new index.

  • axis (int, optional) – Position of the new index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select zeros or random depending on whether rand_strength is non-zero.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.

new_bond[source]
new_ind_with_identity(name, left_inds, right_inds, axis=0)[source]

Inplace add a new index, where the newly stacked array entries form the identity from left_inds to right_inds. Selecting 0 or 1 for the new index name is thus like 'turning off' this tensor if viewed as an operator.

Parameters:
  • name (str) – Name of the new index.

  • left_inds (tuple[str]) – Names of the indices forming the left hand side of the operator.

  • right_inds (tuple[str]) – Names of the indices forming the right hand side of the operator. The dimensions of these must match those of left_inds.

  • axis (int, optional) – Position of the new index.

new_ind_pair_with_identity(new_left_ind, new_right_ind, d, inplace=False)[source]

Expand this tensor with two new indices of size d, by taking an (outer) tensor product with the identity operator. The two new indices are added as axes at the start of the tensor.

Parameters:
  • new_left_ind (str) – Name of the new left index.

  • new_right_ind (str) – Name of the new right index.

  • d (int) – Size of the new indices.

  • inplace (bool, optional) – Whether to perform the expansion inplace.

Return type:

Tensor

new_ind_pair_with_identity_[source]
conj(inplace=False)[source]

Conjugate this tensor's data (does nothing to indices).

conj_[source]
property H
Conjugate this tensor's data (does nothing to indices).
property shape
The size of each dimension.
property ndim
The number of dimensions.
property size
The total number of array elements.
property dtype
The data type of the array elements.
property backend
The backend inferred from the data.
iscomplex()[source]
astype(dtype, inplace=False)[source]

Change the type of this tensor to dtype.

astype_[source]
max_dim()[source]

Return the maximum size of any dimension, or 1 if scalar.

ind_size(ind)[source]

Return the size of dimension corresponding to ind.

inds_size(inds)[source]

Return the total size of dimensions corresponding to inds.

shared_bond_size(other)[source]

Get the total size of the shared index(es) with other.

inner_inds()[source]

Get all indices that appear on two or more tensors.

transpose(*output_inds, inplace=False)[source]

Transpose this tensor - permuting the order of both the data and the indices. This operation is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Note that to compute the traditional 'transpose' of an operator within a contraction, for example, you would just use reindexing, not this method.

Parameters:
  • output_inds (sequence of str) – The desired output sequence of indices.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor
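
Examples

A minimal sketch, assuming rand_tensor as used elsewhere in these docs:

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.transpose('c', 'a', 'b').shape
(4, 2, 3)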

transpose_[source]
transpose_like(other, inplace=False)[source]

Transpose this tensor to match the indices of other, allowing for one index to be different. E.g. if self.inds = ('a', 'b', 'c', 'x') and other.inds = ('b', 'a', 'd', 'c') then ‘x’ will be aligned with ‘d’ and the output inds will be ('b', 'a', 'x', 'c')

Parameters:
  • other (Tensor) – The tensor to match.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor

See also

transpose

transpose_like_[source]
moveindex(ind, axis, inplace=False)[source]

Move the index ind to position axis. Like transpose, this permutes the order of both the data and the indices and is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Parameters:
  • ind (str) – The index to move.

  • axis (int) – The new position to move ind to. Can be negative.

  • inplace (bool, optional) – Whether to perform the move inplace or not.

Return type:

Tensor

moveindex_[source]
item()[source]

Return the scalar value of this tensor, if it has a single element.

trace(left_inds, right_inds, preserve_tensor=False, inplace=False)[source]

Trace index or indices left_inds with right_inds, removing them.

Parameters:
  • left_inds (str or sequence of str) – The left indices to trace, order matching right_inds.

  • right_inds (str or sequence of str) – The right indices to trace, order matching left_inds.

  • preserve_tensor (bool, optional) – If True, a tensor will be returned even if no indices remain.

  • inplace (bool, optional) – Perform the trace inplace.

Returns:

z

Return type:

Tensor or scalar
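
Examples

A minimal sketch, tracing a 3x3 identity (the exact scalar type may vary with backend):

>>> T = Tensor(np.eye(3), inds=['a', 'b'])
>>> T.trace('a', 'b')
3.0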

sum_reduce(ind, inplace=False)[source]

Sum over index ind, removing it from this tensor.

Parameters:
  • ind (str) – The index to sum over.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

sum_reduce_[source]
vector_reduce(ind, v, inplace=False)[source]

Contract the vector v with the index ind of this tensor, removing it.

Parameters:
  • ind (str) – The index to contract.

  • v (array_like) – The vector to contract with.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

vector_reduce_[source]
collapse_repeated(inplace=False)[source]

Take the diagonals of any repeated indices, such that each index only appears once.

collapse_repeated_[source]
contract(*others, output_inds=None, **opts)[source]
direct_product(other, sum_inds=(), inplace=False)[source]
direct_product_[source]
split(*args, **kwargs)[source]
compute_reduced_factor(side, left_inds, right_inds, **split_opts)[source]
distance(other, **contract_opts)[source]
distance_normalized[source]
gate(G, ind, preserve_inds=True, inplace=False)[source]

Gate this tensor - contract a matrix into one of its indices without changing its indices. Unlike contract, G is a raw array and the tensor remains with the same set of indices.

Parameters:
  • G (2D array_like) – The matrix to gate the tensor index with.

  • ind (str) – Which index to apply the gate to.

Return type:

Tensor

Examples

Create a random tensor of 4 qubits:

>>> t = qtn.rand_tensor(
...    shape=[2, 2, 2, 2],
...    inds=['k0', 'k1', 'k2', 'k3'],
... )

Create another tensor with an X gate applied to qubit 2:

>>> Gt = t.gate(qu.pauli('X'), 'k2')

The contraction of these two tensors is now the expectation of that operator:

>>> t.H @ Gt
-4.108910576149794
gate_[source]
singular_values(left_inds, method='svd')[source]

Return the singular values associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensor's indices that defines 'left'.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Returns:

The singular values.

Return type:

1d-array
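
Examples

A minimal sketch: splitting off the first index of a (2, 3, 4) tensor gives a 2 x 12 matrix, and thus two singular values:

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> len(T.singular_values(['a']))
2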

entropy(left_inds, method='svd')[source]

Return the entropy associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensor's indices that defines 'left'.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Return type:

float

retag(retag_map, inplace=False)[source]

Rename the tags of this tensor, optionally in-place.

Parameters:
  • retag_map (dict-like) – Mapping of pairs {old_tag: new_tag, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed tags will be returned.

retag_[source]
reindex(index_map, inplace=False)[source]

Rename the indices of this tensor, optionally in-place.

Parameters:
  • index_map (dict-like) – Mapping of pairs {old_ind: new_ind, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed inds will be returned.
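
Examples

A minimal sketch (randn as imported in the Tensor examples above):

>>> T = Tensor(randn((2, 3)), inds=['a', 'b'])
>>> T.reindex({'a': 'x'}).inds
('x', 'b')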

reindex_[source]
fuse(fuse_map, inplace=False)[source]

Combine groups of indices into single indices.

Parameters:

fuse_map (dict_like or sequence of tuples.) – Mapping like: {new_ind: sequence of existing inds, ...} or an ordered mapping like [(new_ind_1, old_inds_1), ...] in which case the output tensor’s fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused.

Returns:

The transposed, reshaped and re-labeled tensor.

Return type:

Tensor
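
Examples

A minimal sketch: fusing 'a' (size 2) and 'c' (size 4) creates a single size 8 index at the position of 'a' (the exact repr may vary between versions):

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.fuse({'d': ('a', 'c')})
Tensor(shape=(8, 3), inds=('d', 'b'), tags=())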

fuse_[source]
unfuse(unfuse_map, shape_map, inplace=False)[source]

Reshape single indices into groups of multiple indices

Parameters:
  • unfuse_map (dict_like or sequence of tuples.) – Mapping like: {existing_ind: sequence of new inds, ...} or an ordered mapping like [(old_ind_1, new_inds_1), ...] in which case the output tensor’s new inds will be ordered. In both cases the new indices are created at the old index’s position of the tensor’s shape

  • shape_map (dict_like or sequence of tuples) – Mapping like: {old_ind: new_ind_sizes, ...} or an ordered mapping like [(old_ind_1, new_ind_sizes_1), ...].

Returns:

The transposed, reshaped and re-labeled tensor

Return type:

Tensor

unfuse_[source]
to_dense(*inds_seq, to_qarray=False)[source]

Convert this Tensor into a dense array, with a single dimension for each group of indices in inds_seq. E.g. to convert several sites into a density matrix: T.to_dense(('k0', 'k1'), ('b0', 'b1')).
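
Examples

A minimal sketch, grouping two 'ket' and two 'bra' indices into a 4 x 4 array:

>>> T = rand_tensor((2, 2, 2, 2), inds=('k0', 'k1', 'b0', 'b1'))
>>> T.to_dense(('k0', 'k1'), ('b0', 'b1')).shape
(4, 4)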

to_qarray[source]
squeeze(include=None, exclude=None, inplace=False)[source]

Drop any singlet dimensions from this tensor.

Parameters:
  • inplace (bool, optional) – Whether to modify the original or return a new tensor.

  • include (sequence of str, optional) – Only squeeze dimensions with indices in this list.

  • exclude (sequence of str, optional) – Squeeze all dimensions except those with indices in this list.

Return type:

Tensor

squeeze_[source]
largest_element()[source]

Return the largest element, in terms of absolute magnitude, of this tensor.

idxmin(f=None)[source]

Get the index configuration of the minimum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the minimum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the minimum element.

Return type:

dict[str, int]

idxmax(f=None)[source]

Get the index configuration of the maximum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the maximum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the maximum element.

Return type:

dict[str, int]

norm()[source]

Frobenius norm of this tensor:

\[\|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)}\]

where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.
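
Examples

A quick numerical check of the definition above (np and rand_tensor as used elsewhere in these docs):

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> np.allclose(T.norm(), np.sqrt((abs(T.data)**2).sum()))
True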

normalize(inplace=False)[source]
normalize_[source]
symmetrize(ind1, ind2, inplace=False)[source]

Hermitian symmetrize this tensor for indices ind1 and ind2. I.e. T = (T + T.conj().T) / 2, where the transpose is taken only over the specified indices.

symmetrize_[source]
isometrize(left_inds=None, method='qr', inplace=False)[source]

Make this tensor unitary (or isometric) with respect to left_inds. The underlying method is set by method.

Parameters:
  • left_inds (sequence of str) – The indices to group together and treat as the left hand side of a matrix.

  • method (str, optional) –

    The method used to generate the isometry. The options are:

    • ”qr”: use the Q factor of the QR decomposition of x with the constraint that the diagonal of R is positive.

    • ”svd”: uses U @ VH of the SVD decomposition of x. This is useful for finding the ‘closest’ isometric matrix to x, such as when it has been expanded with noise etc. But is less stable for differentiation / optimization.

    • ”exp”: use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.

    • ”cayley”: use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with HIPS/autograd, for example), but more expensive for non-square x.

    • ”householder”: use the Householder reflection method directly. This requires that the backend implements “linalg.householder_product”.

    • ”torch_householder”: use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is "torch". This is generally the best parametrizing method for “torch” if available.

    • ”mgs”: use a python implementation of the modified Gram Schmidt method directly. This is slow if not compiled but a useful reference.

    Not all backends support all methods or differentiating through all methods.

  • inplace (bool, optional) – Whether to perform the unitization inplace.

Return type:

Tensor
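
Examples

A minimal sketch, checking the isometric property by densifying into a (6, 4) matrix with orthonormal columns:

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> Q = T.isometrize(left_inds=['a', 'b'])
>>> x = Q.to_dense(['a', 'b'], ['c'])
>>> np.allclose(x.conj().T @ x, np.eye(4))
True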

isometrize_[source]
unitize[source]
unitize_
randomize(dtype=None, inplace=False, **randn_opts)[source]

Randomize the entries of this tensor.

Parameters:
  • dtype ({None, str}, optional) – The data type of the random entries. If left as the default None, then the data type of the current array will be used.

  • inplace (bool, optional) – Whether to perform the randomization inplace, by default False.

  • randn_opts – Supplied to randn().

Return type:

Tensor

randomize_[source]
flip(ind, inplace=False)[source]

Reverse the axis on this tensor corresponding to ind. Like performing e.g. X[:, :, ::-1, :].

flip_[source]
multiply_index_diagonal(ind, x, inplace=False)[source]

Multiply this tensor by 1D array x as if it were a diagonal tensor being contracted into index ind.

multiply_index_diagonal_[source]
almost_equals(other, **kwargs)[source]

Check if this tensor is almost the same as another.

drop_tags(tags=None)[source]

Drop certain tags, defaulting to all, from this tensor.

bonds(other)[source]

Return a tuple of the shared indices between this tensor and other.

filter_bonds(other)[source]

Sort this tensor’s indices into a list of those that it shares and doesn’t share with another tensor.

Parameters:

other (Tensor) – The other tensor.

Returns:

shared, unshared – The shared and unshared indices.

Return type:

(tuple[str], tuple[str])

__imul__(other)[source]
__itruediv__(other)[source]
__and__(other)[source]

Combine with another Tensor or TensorNetwork into a new TensorNetwork.

__or__(other)[source]

Combine virtually (no copies made) with another Tensor or TensorNetwork into a new TensorNetwork.

__matmul__(other)[source]

Explicitly contract with another tensor. Avoids some slight overhead of calling the full tensor_contract().

negate(inplace=False)[source]

Negate this tensor.

negate_[source]
__neg__()[source]

Negate this tensor.

as_network(virtual=True)[source]

Return a TensorNetwork with only this tensor.

draw(*args, **kwargs)[source]

Plot a graph of this tensor and its indices.

graph[source]
visualize[source]
__getstate__()[source]

Helper for pickle.

__setstate__(state)[source]
_repr_info()[source]

General info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_extra()[source]

General detailed info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_str(normal=True, extra=False)[source]

Render the general info as a string.

_repr_html_()[source]

Render this Tensor as HTML, for Jupyter notebooks.

__str__()[source]

Return str(self).

__repr__()[source]

Return repr(self).

quimb.tensor.tensor_dmrg.tensor_contract(*tensors, output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, drop_tags=False, **contract_opts)[source]

Contract a collection of tensors into a scalar or tensor, automatically aligning their indices and computing an optimized contraction path. The output tensor will have the union of tags from the input tensors.

Parameters:
  • tensors (sequence of Tensor) – The tensors to contract.

  • output_inds (sequence of str) – The output indices. These can be inferred if the contraction has no ‘hyper’ indices, in which case the output indices are those that appear only once in the input indices, and ordered as they appear in the inputs. For hyper indices or a specific ordering, these must be supplied.

  • optimize ({None, str, path_like, PathOptimizer}, optional) –

    The contraction path optimization strategy to use.

    • None: use the default strategy,

    • str: use the preset strategy with the given name,

    • path_like: use this exact path,

    • cotengra.HyperOptimizer: find the contraction using this optimizer, supports slicing,

    • cotengra.ContractionTree: use this exact tree, supports slicing,

    • opt_einsum.PathOptimizer: find the path using this optimizer.

    Contraction with cotengra might be a bit more efficient but the main reason would be to handle sliced contraction automatically, as well as the fact that it uses autoray internally.

  • get (str, optional) –

    What to return. If:

    • None (the default) - return the resulting scalar or Tensor.

    • 'expression' - return a callable expression that performs the contraction and operates on the raw arrays.

    • 'tree' - return the cotengra.ContractionTree describing the contraction.

    • 'path' - return the raw ‘path’ as a list of tuples.

    • 'symbol-map' - return the dict mapping indices to ‘symbols’ (single unicode letters) used internally by cotengra

    • 'path-info' - return the opt_einsum.PathInfo path object with detailed information such as flop cost. The symbol-map is also added to the quimb_symbol_map attribute.

  • backend ({'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional) – Which backend to use to perform the contraction. Supplied to cotengra.

  • preserve_tensor (bool, optional) – Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not.

  • drop_tags (bool, optional) – Whether to drop all tags from the output tensor. By default the output tensor will keep the union of all tags from the input tensors.

  • contract_opts – Passed to cotengra.array_contract.

Return type:

scalar or Tensor
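
Examples

A minimal sketch, contracting two tensors over their shared indices (matching the Tensor examples above; the exact repr may vary):

>>> X = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> Y = rand_tensor((3, 4, 5), inds=('b', 'c', 'd'))
>>> tensor_contract(X, Y)
Tensor(shape=(2, 5), inds=('a', 'd'), tags=())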

class quimb.tensor.tensor_dmrg.TNLinearOperator(tns, left_inds, right_inds, ldims=None, rdims=None, optimize=None, backend=None, is_conj=False)[source]

Bases: scipy.sparse.linalg.LinearOperator

Get a linear operator - something that replicates the matrix-vector operation - for an arbitrary uncontracted TensorNetwork, e.g:

         : --O--O--+ +-- :                 --+
         :   |     | |   :                   |
         : --O--O--O-O-- :    acting on    --V
         :   |     |     :                   |
         : --+     +---- :                 --+
left_inds^               ^right_inds

This can then be supplied to scipy’s sparse linear algebra routines. The left_inds / right_inds convention is that the linear operator will have shape matching (*left_inds, *right_inds), so that the right_inds are those that will be contracted in a normal matvec / matmat operation:

_matvec =    --0--v    , _rmatvec =     v--0--
Parameters:
  • tns (sequence of Tensors or TensorNetwork) – A representation of the hamiltonian

  • left_inds (sequence of str) – The ‘left’ inds of the effective hamiltonian network.

  • right_inds (sequence of str) – The ‘right’ inds of the effective hamiltonian network. These should be ordered the same way as left_inds.

  • ldims (tuple of int, or None) – The dimensions corresponding to left_inds. Worked out automatically if None.

  • rdims (tuple of int, or None) – The dimensions corresponding to right_inds. Worked out automatically if None.

  • optimize (str, optional) – The path optimizer to use for the ‘matrix-vector’ contraction.

  • backend (str, optional) – The array backend to use for the ‘matrix-vector’ contraction.

  • is_conj (bool, optional) – Whether this object should represent the adjoint operator.

See also

TNLinearOperator1D
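
Examples

A minimal sketch, building a 4 x 4 linear operator from two tensors sharing a bond 'x' and checking the matvec against the dense form (random values, so only the consistency check is shown):

>>> A = rand_tensor((2, 2, 3), inds=('k0', 'b0', 'x'))
>>> B = rand_tensor((3, 2, 2), inds=('x', 'k1', 'b1'))
>>> Alo = TNLinearOperator((A, B), left_inds=('k0', 'k1'), right_inds=('b0', 'b1'))
>>> Alo.shape
(4, 4)
>>> v = randn((4,))
>>> np.allclose(Alo @ v, Alo.to_dense() @ v)
True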

_matvec(vec)[source]

Default matrix-vector multiplication handler.

If self is a linear operator of shape (M, N), then this method will be called on a shape (N,) or (N, 1) ndarray, and should return a shape (M,) or (M, 1) ndarray.

This default implementation falls back on _matmat, so defining that will define matrix-vector multiplication as well.

_matmat(mat)[source]

Default matrix-matrix multiplication handler.

Falls back on the user-defined _matvec method, so defining that will define matrix multiplication (though in a very suboptimal way).

trace()[source]
copy(conj=False, transpose=False)[source]
conj()[source]
_transpose()[source]

Default implementation of _transpose; defers to rmatvec + conj

_adjoint()[source]

Hermitian conjugate of this TNLO.

to_dense(*inds_seq, to_qarray=False, **contract_opts)[source]

Convert this TNLinearOperator into a dense array, defaulting to grouping the left and right indices respectively.

to_qarray[source]
split(**split_opts)[source]
property A
astype(dtype)[source]

Convert this TNLinearOperator to type dtype.

__array_function__(func, types, args, kwargs)[source]
quimb.tensor.tensor_dmrg.asarray(array)[source]

Maybe convert data for a tensor to use. If array already has a .shape attribute, i.e. looks like an array, it is left as-is. Else the elements are inspected to see which libraries’ array constructor should be used, defaulting to numpy if everything is builtin or numpy numbers.

quimb.tensor.tensor_dmrg.get_default_opts(cyclic=False)[source]

Get the default advanced settings for DMRG.

Returns:

  • default_sweep_sequence (str) – How to sweep. Will be repeated, e.g. “RRL” -> RRLRRLRRL…, default: R.

  • bond_compress_method ({‘svd’, ‘eig’, …}) – Method used to compress sites after update.

  • bond_compress_cutoff_mode ({‘sum2’, ‘abs’, ‘rel’}) – How to perform compression truncation.

  • bond_expand_rand_strength (float) – In DMRG1, strength of randomness to expand bonds with. Needed to avoid singular matrices after expansion.

  • local_eig_tol (float) – Relative tolerance to solve the inner eigenproblem to, larger = quicker but more unstable, default: 1e-3. Note this can be much looser than the overall tolerance: the starting point for each local solve is the previous state, and the overall accuracy comes from multiple sweeps.

  • local_eig_ncv (int) – Number of inner eigenproblem Lanczos vectors. Smaller can mean quicker.

  • local_eig_backend ({None, 'AUTO', 'SCIPY', 'SLEPC'}) – Which backend to use for the inner eigenproblem. None or 'AUTO' to choose the best. Generally 'SLEPC' is best if available for large problems, but it can’t currently handle LinearOperator Neff as well as 'lobpcg'.

  • local_eig_maxiter (int) – Maximum number of inner eigenproblem iterations.

  • local_eig_ham_dense (bool) – Force dense representation of the effective hamiltonian.

  • local_eig_EPSType ({'krylovschur', 'gd', 'jd', …}) – Eigensolver type if local_eig_backend='slepc'.

  • local_eig_norm_dense (bool) – Force dense representation of the effective norm.

  • periodic_segment_size (float or int) – How large (as a proportion if float) to make the ‘segments’ in periodic DMRG. During a sweep everything outside this (the ‘long way round’) is compressed so the effective energy and norm can be efficiently formed. Tradeoff: longer segments mean having to compress less, but also having a shorter ‘long way round’, meaning that it needs a larger bond to represent it and can be ‘pseudo-orthogonalized’ less effectively. 0.5 is the largest fraction that makes sense. Set to >= 1.0 to not use segmentation at all, which is better for small systems.

  • periodic_compress_method ({‘isvd’, ‘svds’}) – Which method to perform the transfer matrix compression with.

  • periodic_compress_norm_eps (float) – Precision to compress the norm transfer matrix in periodic systems.

  • periodic_compress_ham_eps (float) – Precision to compress the energy transfer matrix in periodic systems.

  • periodic_compress_max_bond (int) – The maximum bond to use when compressing transfer matrices.

  • periodic_nullspace_fudge_factor (float) – Factor to add to Heff and Neff to remove nullspace.

  • periodic_canonize_inv_tol (float) – When pseudo-orthogonalizing, an inverse gauge is generated that can be very ill-conditioned. This factor controls cutting off the small singular values of the gauge to stop this.

  • periodic_orthog_tol (float) – When pseudo-orthogonalizing, if the local norm is within this distance to 1 (pseudo-orthogonalized), then the generalized eigendecomposition is not used, which is much more efficient. If set too large the total normalization can become unstable.
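
Examples

These options live in the opts attribute of a DMRG instance and can be tweaked between solves, e.g. (a sketch, assuming ham is an existing MPO):

>>> dmrg = DMRG2(ham, bond_dims=[8, 16, 32])
>>> dmrg.opts['local_eig_tol'] = 1e-5
>>> dmrg.opts['default_sweep_sequence'] = 'RRL'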

class quimb.tensor.tensor_dmrg.MovingEnvironment(tn, begin, bsz, *, cyclic=False, segment_callbacks=None, ssz=0.5, eps=1e-08, method='isvd', max_bond=-1, norm=False)[source]

Helper class for efficiently moving the effective ‘environment’ of a few sites in a 1D tensor network. E.g. for begin='left', bsz=2, this initializes the right environments like so:

n - 1: ●─●─●─     ─●─●─●
       │ │ │       │ │ │
       H─H─H─ ... ─H─H─H
       │ │ │       │ │ │
       ●─●─●─     ─●─●─●

n - 2: ●─●─●─     ─●─●─╮
       │ │ │       │ │ ●
       H─H─H─ ... ─H─H─H
       │ │ │       │ │ ●
       ●─●─●─     ─●─●─╯

n - 3: ●─●─●─     ─●─╮
       │ │ │       │ ●●
       H─H─H─ ... ─H─HH
       │ │ │       │ ●●
       ●─●─●─     ─●─╯

...

0    : ●─●─╮
       │ │ ●●   ●●●
       H─H─HH...HHH
       │ │ ●●   ●●●
       ●─●─╯

which can then be used to efficiently generate the left environments as each site is updated. For example, if bsz=2 and the environments have been shifted many sites into the middle, then MovingEnvironment() returns something like:

     <---> bsz sites
    ╭─●─●─╮
●●●●● │ │ ●●●●●●●
HHHHH─H─H─HHHHHHH
●●●●● │ │ ●●●●●●●
    ╰─●─●─╯
0 ... i i+1 ... n-1

For periodic systems MovingEnvironment approximates the ‘long way round’ transfer matrices. E.g. consider replacing segment B (to arbitrary precision) with an SVD:

╭───────────────────────────────────────────────╮
╰─A─A─A─A─A─A─A─A─A─A─A─A─B─B─B─B─B─B─B─B─B─B─B─╯
  │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │           ==>
╭─A─A─A─A─A─A─A─A─A─A─A─A─B─B─B─B─B─B─B─B─B─B─B─╮
╰───────────────────────────────────────────────╯

╭┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╮
┊   ╭─A─A─A─A─A─A─A─A─A─A─A─A─╮   ┊                       ==>
╰┄<BL │ │ │ │ │ │ │ │ │ │ │ │ BR>┄╯
    ╰─A─A─A─A─A─A─A─A─A─A─A─A─╯
      ^                     ^
segment_start          segment_stop - 1

╭┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╮
┊   ╭─A─A─╮                        ┊                      ==>
╰┄<BL │ │ AAAAAAAAAAAAAAAAAAAAABR>┄╯
    ╰─A─A─╯
      ...
    <-bsz->

╭┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╮
┊               ╭─A─A─╮           ┊                       ==>
╰~<BLAAAAAAAAAAA  │ │ AAAAAAAABR>~╯
                ╰─A─A─╯
                  i i+1
     -----sweep--------->

These can then be contracted and stored as left and right environments for efficient sweeping, just as in the non-periodic case. If the segment is long enough (50+ sites), often only 1 singular value is needed, and thus the efficiency is the same as for OBC.

Parameters:
  • tn (TensorNetwork) – The 1D tensor network, should be closed, i.e. an overlap of some sort.

  • begin ({'left', 'right'}) – Which side to start at and sweep from.

  • bsz (int) – The number of sites that form the ‘non-environment’, e.g. 2 for DMRG2.

  • ssz (float or int, optional) – The size of the segment to use, if float, the proportion. Default: 1/2.

  • eps (float, optional) – The tolerance to approximate the transfer matrix with. See replace_with_svd().

  • cyclic (bool, optional) – Whether this is a periodic MovingEnvironment.

  • segment_callbacks (sequence of callable, optional) – Functions with signature callback(start, stop, self.begin), to be called every time a new segment is initialized.

  • method ({'isvd', 'svds', ...}, optional) – How to perform the transfer matrix compression if PBC. See replace_with_svd().

  • max_bond (int, optional) – If > 0, the maximum bond of the compressed transfer matrix.

  • norm (bool, optional) – If True, treat this MovingEnvironment as the state overlap, which enables a few extra checks.

Notes

Does not necessarily need to be an operator overlap tensor network. Useful for any kind of sweep where only local tensor updates are being made. Note that only the current site is completely up-to-date and can be modified with changes meant to propagate.
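
Examples

A minimal sweeping sketch, assuming tn is a closed (overlap-like) 1D tensor network with n sites; the local update itself is elided:

>>> env = MovingEnvironment(tn, begin='left', bsz=2)
>>> for i in range(n - 1):
...     env.move_to(i)
...     segment = env()
...     # ... locally update the tensors at sites i and i + 1 ...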

site_tag(i)[source]
init_segment(begin, start, stop)[source]

Initialize the environments in range(start, stop) so that one can start sweeping from the side defined by begin.

init_non_segment(start, stop)[source]

Compress and label the effective env not in range(start, stop) if cyclic, else just add some dummy left and right end pieces.

move_right()[source]
move_left()[source]
move_to(i)[source]

Move this effective environment to site i.

__call__()[source]

Get the current environment.

quimb.tensor.tensor_dmrg.get_cyclic_canonizer(k, b, inv_tol=1e-10)[source]

Get a function to use as a callback for MovingEnvironment that approximately orthogonalizes the segments of periodic MPS.

quimb.tensor.tensor_dmrg.parse_2site_inds_dims(k, b, i)[source]

Sort out the dims and inds of:

---O---O---
   |   |

For use in 2 site algorithms.

exception quimb.tensor.tensor_dmrg.DMRGError[source]

Bases: Exception

Common base class for all non-exit exceptions.

class quimb.tensor.tensor_dmrg.DMRG(ham, bond_dims, cutoffs=1e-09, bsz=2, which='SA', p0=None)[source]

Density Matrix Renormalization Group variational groundstate search. Some initialising arguments act as defaults, but can be overridden with each solve or sweep. See get_default_opts() for the list of advanced options initialized in the opts attribute.

Parameters:
  • ham (MatrixProductOperator) – The hamiltonian in MPO form.

  • bond_dims (int or sequence of ints) – The bond-dimension of the MPS to optimize. If bsz > 1, then this corresponds to the maximum bond dimension when splitting the effective local groundstate. If a sequence is supplied then successive sweeps iterate through, then repeat the final value. E.g. [16, 32, 64] -> (16, 32, 64, 64, 64, ...).

  • cutoffs (float or sequence of floats) – The cutoff threshold(s) to use when compressing. If a sequence is supplied then successive sweeps iterate through, then repeat the final value. E.g. [1e-5, 1e-7, 1e-9] -> (1e-5, 1e-7, 1e-9, 1e-9, ...).

  • bsz ({1, 2}) – Number of sites to optimize for locally i.e. DMRG1 or DMRG2.

  • which ({'SA', 'LA'}, optional) – Whether to search for smallest or largest real part eigenvectors.

  • p0 (MatrixProductState, optional) – If given, use as the initial state.

state

The current, optimized state.

Type:

MatrixProductState

energy

The current most optimized energy.

Type:

float

energies

The total energy after each sweep.

Type:

list of float

local_energies

The local energies per sweep: local_energies[i][j] contains the local energy found at the jth step of the (i+1)th sweep.

Type:

list of list of float

total_energies

The total energies per sweep: total_energies[i][j] contains the total energy after the jth step of the (i+1)th sweep.

Type:

list of list of float

opts

Advanced options e.g. relating to the inner eigensolve or compression, see get_default_opts().

Type:

dict

bond_sizes_ham

If cyclic, the sizes of the energy environment transfer matrix bonds, per segment, per sweep.

Type:

list[list[int]]

bond_sizes_norm

If cyclic, the sizes of the norm environment transfer matrix bonds, per segment, per sweep.

Type:

list[list[int]]
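
Examples

A minimal groundstate search sketch, using the Heisenberg MPO builder from quimb.tensor (the system size and bond dimension / cutoff schedules are illustrative only):

>>> from quimb.tensor import MPO_ham_heis
>>> H = MPO_ham_heis(20)
>>> dmrg = DMRG(H, bond_dims=[8, 16, 32, 64], cutoffs=1e-10)
>>> converged = dmrg.solve(tol=1e-6, verbosity=0)
>>> E, psi = dmrg.energy, dmrg.state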

_set_bond_dim_seq(bond_dims)[source]
_set_cutoff_seq(cutoffs)[source]
property energy
property state
_canonize_after_1site_update(direction, i)[source]

Compress a site having updated it. Also serves to move the orthogonality center along.

_eigs(A, B=None, v0=None)[source]

Find single eigenpair, using all the internal settings.

print_energy_info(Heff=None, loc_gs=None)[source]
print_norm_info(i=None)[source]
form_local_ops(i, dims, lix, uix)[source]

Construct the effective Hamiltonian, and if needed, norm.

post_check(i, Neff, loc_gs, loc_en, loc_gs_old)[source]

Perform some checks on the output of the local eigensolve.

_update_local_state_1site(i, direction, **compress_opts)[source]

Find the single site effective tensor groundstate of:

>->->->->-/|\-<-<-<-<-<-<-<-<          /|\       <-- uix
| | | | |  |  | | | | | | | |         / | \
H-H-H-H-H--H--H-H-H-H-H-H-H-H   =    L--H--R
| | | | | i|  | | | | | | | |         \i| /
>->->->->-\|/-<-<-<-<-<-<-<-<          \|/       <-- lix

And insert it back into the states k and b, and thus TN_energy.

_update_local_state_2site(i, direction, **compress_opts)[source]

Find the 2-site effective tensor groundstate of:

>->->->->-/| |\-<-<-<-<-<-<-<-<          /| |\
| | | | |  | |  | | | | | | | |         / | | \
H-H-H-H-H--H-H--H-H-H-H-H-H-H-H   =    L--H-H--R
| | | | |  i i+1| | | | | | | |         \ | | /
>->->->->-\| |/-<-<-<-<-<-<-<-<          \| |/
                                     i i+1

And insert it back into the states k and b, and thus TN_energy.

_update_local_state(i, **update_opts)[source]

Move envs to site i and dispatch to the correct local updater.

sweep(direction, canonize=True, verbosity=0, **update_opts)[source]

Perform a sweep of optimizations, either rightwards:

  optimize -->
    ...
>->-o-<-<-<-<-<-<-<-<-<-<-<-<-<
| | | | | | | | | | | | | | | |
H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H
| | | | | | | | | | | | | | | |
>->-o-<-<-<-<-<-<-<-<-<-<-<-<-<

or leftwards (direction=’L’):

                <-- optimize
                          ...
>->->->->->->->->->->->->-o-<-<
| | | | | | | | | | | | | | | |
H-H-H-H-H-H-H-H-H-H-H-H-H-H-H-H
| | | | | | | | | | | | | | | |
>->->->->->->->->->->->->-o-<-<

After the sweep the state is left or right canonized respectively.

Parameters:
  • direction ({'R', 'L'}) – Sweep from left to right (->) or right to left (<-) respectively.

  • canonize (bool, optional) – Canonize the state first, not needed if doing alternate sweeps.

  • verbosity ({0, 1, 2}, optional) – Show a progress bar for the sweep.

  • update_opts – Supplied to self._update_local_state.

sweep_right(canonize=True, verbosity=0, **update_opts)[source]
sweep_left(canonize=True, verbosity=0, **update_opts)[source]
_print_pre_sweep(i, direction, max_bond, cutoff, verbosity=0)[source]

Print this before each sweep.

_compute_post_sweep()[source]

Compute this after each sweep.

_print_post_sweep(converged, verbosity=0)[source]

Print this after each sweep.

_check_convergence(tol)[source]

By default check the absolute change in energy.

solve(tol=0.0001, bond_dims=None, cutoffs=None, sweep_sequence=None, max_sweeps=10, verbosity=0, suppress_warnings=True)[source]

Solve the system with a sequence of sweeps, up to a certain absolute tolerance in the energy or maximum number of sweeps.

Parameters:
  • tol (float, optional) – The absolute tolerance to converge energy to.

  • bond_dims (int or sequence of int) – Override the initial/current bond_dim sequence.

  • cutoffs (float or sequence of floats) – Override the initial/current cutoff sequence.

  • sweep_sequence (str, optional) – String made of ‘L’ and ‘R’ defining the sweep sequence, e.g ‘RRL’. The sequence will be repeated until max_sweeps is reached.

  • max_sweeps (int, optional) – The maximum number of sweeps to perform.

  • verbosity ({0, 1, 2}, optional) – How much information to print about progress.

  • suppress_warnings (bool, optional) – Whether to suppress warnings about non-convergence, usually due to the intentional low accuracy of the inner eigensolve.

Returns:

converged – Whether the algorithm has converged to tol yet.

Return type:

bool

class quimb.tensor.tensor_dmrg.DMRG1(ham, which='SA', bond_dims=None, cutoffs=1e-08, p0=None)[source]

Bases: DMRG

Simple alias of one site DMRG.

class quimb.tensor.tensor_dmrg.DMRG2(ham, which='SA', bond_dims=None, cutoffs=1e-08, p0=None)[source]

Bases: DMRG

Simple alias of two site DMRG.

class quimb.tensor.tensor_dmrg.DMRGX(ham, p0, bond_dims, cutoffs=1e-08, bsz=1)[source]

Bases: DMRG

Class implementing DMRG-X [1], whereby local effective energy eigenstates are chosen to maximise overlap with the previous step’s state, leading to convergence on a mid-spectrum eigenstate of the full hamiltonian, as long as it is perturbatively close to the original state.

[1] Khemani, V., Pollmann, F. & Sondhi, S. L. Obtaining Highly Excited Eigenstates of Many-Body Localized Hamiltonians by the Density Matrix Renormalization Group Approach. Phys. Rev. Lett. 116, 247204 (2016).

k

The current, optimized state.

Type:

MatrixProductState

energies

The list of energies after each sweep.

Type:

list of float
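
Examples

A minimal sketch, assuming ham is a (disordered, e.g. many-body localized) MPO and psi0 an MPS, such as a product state, close to the targeted eigenstate:

>>> dmrgx = DMRGX(ham, psi0, bond_dims=[16, 32, 64])
>>> converged = dmrgx.solve(tol=1e-6, verbosity=0)
>>> psi = dmrgx.state  # the approximate mid-spectrum eigenstate found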

property variance
form_local_ops(i, dims, lix, uix)[source]

Construct the effective Hamiltonian, and if needed, norm.

_update_local_state_1site_dmrgx(i, direction, **compress_opts)[source]

Like _update_local_state, but re-insert all eigenvectors, then choose the one with best overlap with eff_ovlp.

_update_local_state(i, **update_opts)[source]

Move envs to site i and dispatch to the correct local updater.

sweep(direction, canonize=True, verbosity=0, **update_opts)[source]

Perform a sweep of the algorithm.

Parameters:
  • direction ({'R', 'L'}) – Sweep from left to right (->) or right to left (<-) respectively.

  • canonize (bool, optional) – Canonize the state first, not needed if doing alternate sweeps.

  • verbosity ({0, 1, 2}, optional) – Show a progress bar for the sweep.

  • update_opts – Supplied to self._update_local_state.

_compute_post_sweep()[source]

Compute this after each sweep.

_print_post_sweep(converged, verbosity=0)[source]

Print this after each sweep.

_check_convergence(tol)[source]

By default check the absolute change in energy.