quimb.tensor.tensor_arbgeom_tebd

Tools for performing TEBD like algorithms on arbitrary lattices.

Attributes

eye

Alias for identity().

Classes

qarray

Thin subclass of numpy.ndarray with some convenient quantum linear algebra related methods and attributes.

Tensor

A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations.

LocalHamGen

Representation of a local hamiltonian defined on a general graph. This combines all two site and one site terms into a single interaction per lattice pair.

TEBDGen

Generic class for performing time evolving block decimation on an arbitrary graph.

SimpleUpdateGen

Simple update for arbitrary geometry hamiltonians.

Functions

kron(*ops[, stype, coo_build, parallel, ownership])

Tensor (kronecker) product of variable number of arguments.

ensure_dict(x)

Make sure x is a dict, creating an empty one if x is None.

default_to_neutral_style(fn)

Wrap a function or method to use the neutral style by default.

get_colors(color[, custom_colors, alpha])

Generate a sequence of rgbs for tag(s) color.

get_positions(tn, G, *[, dim, fix, layout, ...])

Module Contents

quimb.tensor.tensor_arbgeom_tebd.eye[source]

Alias for identity().

quimb.tensor.tensor_arbgeom_tebd.kron(*ops, stype=None, coo_build=False, parallel=False, ownership=None)[source]

Tensor (kronecker) product of variable number of arguments.

Parameters:
  • ops (sequence of vectors or matrices) – Objects to be tensored together.

  • stype (str, optional) – Desired output format if resultant object is sparse. Should be one of {'csr', 'bsr', 'coo', 'csc'}. If None, infer from input matrices.

  • coo_build (bool, optional) – Whether to force sparse construction to use the 'coo' format (only relevant when the inputs are sparse in the first place).

  • parallel (bool, optional) – Perform a parallel reduce on the operators, which can be quicker.

  • ownership ((int, int), optional) – If given, only construct the rows in range(*ownership), such that the final operator is actually X[slice(*ownership), :]. Useful for constructing operators in parallel, e.g. for MPI.

Returns:

X – Tensor product of ops.

Return type:

dense or sparse vector or operator

Notes

  1. The product is performed as (a & (b & (c & ...)))

Examples

Simple example:

>>> a = np.array([[1, 2], [3, 4]])
>>> b = np.array([[1., 1.1], [1.11, 1.111]])
>>> kron(a, b)
qarray([[1.   , 1.1  , 2.   , 2.2  ],
        [1.11 , 1.111, 2.22 , 2.222],
        [3.   , 3.3  , 4.   , 4.4  ],
        [3.33 , 3.333, 4.44 , 4.444]])

Partial construction of rows:

>>> ops = [rand_matrix(2, sparse=True) for _ in range(10)]
>>> kron(*ops, ownership=(256, 512))
<256x1024 sparse matrix of type '<class 'numpy.complex128'>'
        with 13122 stored elements in Compressed Sparse Row format>
class quimb.tensor.tensor_arbgeom_tebd.qarray(shape, dtype=float, buffer=None, offset=0, strides=None, order=None)[source]

Bases: numpy.ndarray

Thin subclass of numpy.ndarray with some convenient quantum linear algebra related methods and attributes (.H, &, etc.), and matrix-like preservation of at least 2-dimensions so as to distinguish kets and bras.

property H
property A
__array__()[source]
__and__(other)[source]
normalize(inplace=True)[source]
nmlz(inplace=True)[source]
chop(inplace=True)[source]
tr()[source]
partial_trace(dims, keep)[source]
ptr(dims, keep)[source]
__str__()[source]

Return str(self).

__repr__()[source]

Return repr(self).

quimb.tensor.tensor_arbgeom_tebd.ensure_dict(x)[source]

Make sure x is a dict, creating an empty one if x is None.

quimb.tensor.tensor_arbgeom_tebd.default_to_neutral_style(fn)[source]

Wrap a function or method to use the neutral style by default.

class quimb.tensor.tensor_arbgeom_tebd.Tensor(data=1.0, inds=(), tags=None, left_inds=None)[source]

A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations. The tags are used to identify the tensor within networks, and are combined when tensors are contracted together.

Parameters:
  • data (numpy.ndarray) – The n-dimensional data.

  • inds (sequence of str) – The index labels for each dimension. Must match the number of dimensions of data.

  • tags (sequence of str, optional) – Tags with which to identify and group this tensor. These will be converted into an oset.

  • left_inds (sequence of str, optional) – Which, if any, indices to group as ‘left’ indices of an effective matrix. This can be useful, for example, when automatically applying unitary constraints to impose a certain flow on a tensor network but at the atomistic (Tensor) level.

Examples

Basic construction:

>>> from quimb import randn
>>> from quimb.tensor import Tensor
>>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'})
>>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'})

Indices are automatically aligned, and tags combined, when contracting:

>>> X @ Y
Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'})
__slots__ = ('_data', '_inds', '_tags', '_left_inds', '_owners')
_set_data(data)[source]
_set_inds(inds)[source]
_set_tags(tags)[source]
_set_left_inds(left_inds)[source]
get_params()[source]

A simple function that returns the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

set_params(params)[source]

A simple function that sets the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

copy(deep=False, virtual=False)[source]

Copy this tensor.

Note

By default (deep=False), the underlying array will not be copied.

Parameters:
  • deep (bool, optional) – Whether to copy the underlying data as well.

  • virtual (bool, optional) – To conveniently mimic the behaviour of taking a virtual copy of tensor network, this simply returns self.

__copy__[source]
property data
property inds
property tags
property left_inds
check()[source]

Do some basic diagnostics on this tensor, raising errors if something is wrong.

property owners
add_owner(tn, tid)[source]

Add tn as owner of this Tensor - its tag and ind maps will be updated whenever this tensor is retagged or reindexed.

remove_owner(tn)[source]

Remove TensorNetwork tn as an owner of this Tensor.

check_owners()[source]

Check if this tensor is ‘owned’ by any alive TensorNetworks. Also trim any weakrefs to dead TensorNetworks.

_apply_function(fn)[source]
modify(**kwargs)[source]

Overwrite the data of this tensor in place.

Parameters:
  • data (array, optional) – New data.

  • apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.

  • inds (sequence of str, optional) – New tuple of indices.

  • tags (sequence of str, optional) – New tags.

  • left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.

apply_to_arrays(fn)[source]

Apply the function fn to the underlying data array(s). This is meant for changing how the raw arrays are backed (e.g. converting between dtypes or libraries) but not their ‘numerical meaning’.

isel(selectors, inplace=False)[source]

Select specific values for some dimensions/indices of this tensor, thereby removing them. Analogous to X[:, :, 3, :, :] with arrays. The indices to select from can be specified either by integer, in which case the corresponding index is removed, or by a slice.

Parameters:
  • selectors (dict[str, int], dict[str, slice]) – Mapping of index(es) to which value to take.

  • inplace (bool, optional) – Whether to select inplace or not.

Return type:

Tensor

Examples

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.isel({'b': -1})
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())

See also

TensorNetwork.isel

isel_[source]
add_tag(tag)[source]

Add a tag or multiple tags to this tensor. Unlike self.tags.add this also updates any TensorNetwork objects viewing this Tensor.

expand_ind(ind, size, mode=None, rand_strength=None, rand_dist='normal')[source]

Inplace increase the size of the dimension of ind; the new array entries will be filled with zeros by default.

Parameters:
  • ind (str) – Name of the index to expand.

  • size (int) – Size of the expanded index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select zeros or random depending on whether rand_strength is non-zero.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.
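
Examples

A minimal sketch (using the rand_tensor helper; by default the new entries are zero padded):

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 3), inds=('a', 'b'))
>>> t.expand_ind('b', 5)
>>> t.shape
(2, 5)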

new_ind(name, size=1, axis=0, mode=None, rand_strength=None, rand_dist='normal')[source]

Inplace add a new index - a named dimension. If size is specified to be greater than one then the new array entries will be filled with zeros.

Parameters:
  • name (str) – Name of the new index.

  • size (int, optional) – Size of the new index.

  • axis (int, optional) – Position of the new index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select zeros or random depending on whether rand_strength is non-zero.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.
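
Examples

A minimal sketch (using the rand_tensor helper); the new index is inserted at the given axis:

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 3), inds=('a', 'b'))
>>> t.new_ind('c', size=4, axis=0)
>>> t.inds
('c', 'a', 'b')
>>> t.shape
(4, 2, 3)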

new_bond[source]
new_ind_with_identity(name, left_inds, right_inds, axis=0)[source]

Inplace add a new index, where the newly stacked array entries form the identity from left_inds to right_inds. Selecting 0 or 1 for the new index is thus like ‘turning off’ this tensor if viewed as an operator.

Parameters:
  • name (str) – Name of the new index.

  • left_inds (tuple[str]) – Names of the indices forming the left hand side of the operator.

  • right_inds (tuple[str]) – Names of the indices forming the right hand side of the operator. The dimensions of these must match those of left_inds.

  • axis (int, optional) – Position of the new index.

new_ind_pair_with_identity(new_left_ind, new_right_ind, d, inplace=False)[source]

Expand this tensor with two new indices of size d, by taking an (outer) tensor product with the identity operator. The two new indices are added as axes at the start of the tensor.

Parameters:
  • new_left_ind (str) – Name of the new left index.

  • new_right_ind (str) – Name of the new right index.

  • d (int) – Size of the new indices.

  • inplace (bool, optional) – Whether to perform the expansion inplace.

Return type:

Tensor

new_ind_pair_with_identity_[source]
conj(inplace=False)[source]

Conjugate this tensor's data (does nothing to indices).

conj_[source]
property H
Conjugate this tensor's data (does nothing to indices).
property shape
The size of each dimension.
property ndim
The number of dimensions.
property size
The total number of array elements.
property dtype
The data type of the array elements.
property backend
The backend inferred from the data.
iscomplex()[source]
astype(dtype, inplace=False)[source]

Change the type of this tensor to dtype.

astype_[source]
max_dim()[source]

Return the maximum size of any dimension, or 1 if scalar.

ind_size(ind)[source]

Return the size of dimension corresponding to ind.

inds_size(inds)[source]

Return the total size of dimensions corresponding to inds.

shared_bond_size(other)[source]

Get the total size of the shared index(es) with other.

inner_inds()[source]

Get all indices that appear on two or more tensors.

transpose(*output_inds, inplace=False)[source]

Transpose this tensor - permuting the order of both the data and the indices. This operation is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Note that to compute the traditional ‘transpose’ of an operator within a contraction, for example, you would just use reindexing, not this.

Parameters:
  • output_inds (sequence of str) – The desired output sequence of indices.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor
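
Examples

A minimal sketch of permuting to a desired index order (using the rand_tensor helper):

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> tt = t.transpose('c', 'a', 'b')
>>> tt.inds
('c', 'a', 'b')
>>> tt.shape
(4, 2, 3)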

transpose_[source]
transpose_like(other, inplace=False)[source]

Transpose this tensor to match the indices of other, allowing for one index to be different. E.g. if self.inds = ('a', 'b', 'c', 'x') and other.inds = ('b', 'a', 'd', 'c') then ‘x’ will be aligned with ‘d’ and the output inds will be ('b', 'a', 'x', 'c').

Parameters:
  • other (Tensor) – The tensor to match.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor

See also

transpose

transpose_like_[source]
moveindex(ind, axis, inplace=False)[source]

Move the index ind to position axis. Like transpose, this permutes the order of both the data and the indices and is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Parameters:
  • ind (str) – The index to move.

  • axis (int) – The new position to move ind to. Can be negative.

  • inplace (bool, optional) – Whether to perform the move inplace or not.

Return type:

Tensor

moveindex_[source]
item()[source]

Return the scalar value of this tensor, if it has a single element.

trace(left_inds, right_inds, preserve_tensor=False, inplace=False)[source]

Trace index or indices left_inds with right_inds, removing them.

Parameters:
  • left_inds (str or sequence of str) – The left indices to trace, order matching right_inds.

  • right_inds (str or sequence of str) – The right indices to trace, order matching left_inds.

  • preserve_tensor (bool, optional) – If True, a tensor will be returned even if no indices remain.

  • inplace (bool, optional) – Perform the trace inplace.

Returns:

z

Return type:

Tensor or scalar
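
Examples

A minimal sketch, tracing a pair of matching size-2 indices (using the rand_tensor helper):

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 3, 2), inds=('a', 'b', 'c'))
>>> t2 = t.trace('a', 'c')
>>> t2.inds
('b',)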

sum_reduce(ind, inplace=False)[source]

Sum over index ind, removing it from this tensor.

Parameters:
  • ind (str) – The index to sum over.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

sum_reduce_[source]
vector_reduce(ind, v, inplace=False)[source]

Contract the vector v with the index ind of this tensor, removing it.

Parameters:
  • ind (str) – The index to contract.

  • v (array_like) – The vector to contract with.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

vector_reduce_[source]
collapse_repeated(inplace=False)[source]

Take the diagonals of any repeated indices, such that each index only appears once.

collapse_repeated_[source]
contract(*others, output_inds=None, **opts)[source]
direct_product(other, sum_inds=(), inplace=False)[source]
direct_product_[source]
split(*args, **kwargs)[source]
compute_reduced_factor(side, left_inds, right_inds, **split_opts)[source]
distance(other, **contract_opts)[source]
distance_normalized[source]
gate(G, ind, preserve_inds=True, inplace=False)[source]

Gate this tensor - contract a matrix into one of its indices without changing its indices. Unlike contract, G is a raw array and the tensor remains with the same set of indices.

Parameters:
  • G (2D array_like) – The matrix to gate the tensor index with.

  • ind (str) – Which index to apply the gate to.

Return type:

Tensor

Examples

Create a random tensor of 4 qubits:

>>> t = qtn.rand_tensor(
...    shape=[2, 2, 2, 2],
...    inds=['k0', 'k1', 'k2', 'k3'],
... )

Create another tensor with an X gate applied to qubit 2:

>>> Gt = t.gate(qu.pauli('X'), 'k2')

The contraction of these two tensors is now the expectation of that operator:

>>> t.H @ Gt
-4.108910576149794
gate_[source]
singular_values(left_inds, method='svd')[source]

Return the singular values associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensor's indices that defines ‘left’.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Returns:

The singular values.

Return type:

1d-array
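
Examples

A minimal sketch (using the rand_tensor helper); splitting ('a', 'b') from ('c',) corresponds to a 6 x 4 matrix, so 4 singular values are expected:

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> s = t.singular_values(('a', 'b'))
>>> len(s)
4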

entropy(left_inds, method='svd')[source]

Return the entropy associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensor's indices that defines ‘left’.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Return type:

float

retag(retag_map, inplace=False)[source]

Rename the tags of this tensor, optionally in-place.

Parameters:
  • retag_map (dict-like) – Mapping of pairs {old_tag: new_tag, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed tags will be returned.

retag_[source]
reindex(index_map, inplace=False)[source]

Rename the indices of this tensor, optionally in-place.

Parameters:
  • index_map (dict-like) – Mapping of pairs {old_ind: new_ind, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed inds will be returned.

reindex_[source]
fuse(fuse_map, inplace=False)[source]

Combine groups of indices into single indices.

Parameters:

fuse_map (dict_like or sequence of tuples.) – Mapping like: {new_ind: sequence of existing inds, ...} or an ordered mapping like [(new_ind_1, old_inds_1), ...] in which case the output tensor’s fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused.

Returns:

The transposed, reshaped and re-labeled tensor.

Return type:

Tensor
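
Examples

A minimal sketch (using the rand_tensor helper); the fused index is created at the minimum axis of the fused indices, here axis 0:

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> tf = t.fuse({'ab': ('a', 'b')})
>>> tf.inds
('ab', 'c')
>>> tf.shape
(6, 4)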

fuse_[source]
unfuse(unfuse_map, shape_map, inplace=False)[source]

Reshape single indices into groups of multiple indices.

Parameters:
  • unfuse_map (dict_like or sequence of tuples.) – Mapping like: {existing_ind: sequence of new inds, ...} or an ordered mapping like [(old_ind_1, new_inds_1), ...] in which case the output tensor’s new inds will be ordered. In both cases the new indices are created at the old index’s position in the tensor’s shape.

  • shape_map (dict_like or sequence of tuples) – Mapping like: {old_ind: new_ind_sizes, ...} or an ordered mapping like [(old_ind_1, new_ind_sizes_1), ...].

Returns:

The transposed, reshaped and re-labeled tensor

Return type:

Tensor
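
Examples

A minimal sketch reversing a fuse (using the rand_tensor helper):

>>> import quimb.tensor as qtn
>>> tf = qtn.rand_tensor((6, 4), inds=('ab', 'c'))
>>> tu = tf.unfuse({'ab': ('a', 'b')}, {'ab': (2, 3)})
>>> tu.inds
('a', 'b', 'c')
>>> tu.shape
(2, 3, 4)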

unfuse_[source]
to_dense(*inds_seq, to_qarray=False)[source]

Convert this Tensor into a dense array, with a single dimension for each group of indices in inds_seq. E.g. to convert several sites into a density matrix: T.to_dense(('k0', 'k1'), ('b0', 'b1')).
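
Examples

A minimal sketch (using the rand_tensor helper), grouping the ‘ket’ and ‘bra’ indices into a 4 x 4 operator:

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 2, 2, 2), inds=('k0', 'k1', 'b0', 'b1'))
>>> A = t.to_dense(('k0', 'k1'), ('b0', 'b1'))
>>> A.shape
(4, 4)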

to_qarray[source]
squeeze(include=None, exclude=None, inplace=False)[source]

Drop any singlet dimensions from this tensor.

Parameters:
  • include (sequence of str, optional) – Only squeeze dimensions with indices in this list.

  • exclude (sequence of str, optional) – Squeeze all dimensions except those with indices in this list.

  • inplace (bool, optional) – Whether to perform the squeeze inplace or not.

Return type:

Tensor

squeeze_[source]
largest_element()[source]

Return the largest element, in terms of absolute magnitude, of this tensor.

idxmin(f=None)[source]

Get the index configuration of the minimum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the minimum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the minimum element.

Return type:

dict[str, int]

idxmax(f=None)[source]

Get the index configuration of the maximum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the maximum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the maximum element.

Return type:

dict[str, int]

norm()[source]

Frobenius norm of this tensor:

\[\|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)}\]

where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.

normalize(inplace=False)[source]
normalize_[source]
symmetrize(ind1, ind2, inplace=False)[source]

Hermitian symmetrize this tensor for indices ind1 and ind2. I.e. T = (T + T.conj().T) / 2, where the transpose is taken only over the specified indices.

symmetrize_[source]
isometrize(left_inds=None, method='qr', inplace=False)[source]

Make this tensor unitary (or isometric) with respect to left_inds. The underlying method is set by method.

Parameters:
  • left_inds (sequence of str) – The indices to group together and treat as the left hand side of a matrix.

  • method (str, optional) –

    The method used to generate the isometry. The options are:

    • ”qr”: use the Q factor of the QR decomposition of x with the constraint that the diagonal of R is positive.

    • ”svd”: use U @ VH from the SVD of x. This is useful for finding the ‘closest’ isometric matrix to x, such as when it has been expanded with noise etc., but it is less stable for differentiation / optimization.

    • ”exp”: use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.

    • ”cayley”: use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with e.g. HIPS/autograd), but more expensive for non-square x.

    • ”householder”: use the Householder reflection method directly. This requires that the backend implements “linalg.householder_product”.

    • ”torch_householder”: use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is "torch". This is generally the best parametrizing method for “torch” if available.

    • ”mgs”: use a python implementation of the modified Gram-Schmidt method directly. This is slow if not compiled but a useful reference.

    Not all backends support all methods or differentiating through all methods.

  • inplace (bool, optional) – Whether to perform the unitization inplace.

Return type:

Tensor
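
Examples

A minimal sketch (using the rand_tensor helper): after isometrizing with respect to ('a', 'b'), contracting the tensor with its conjugate over those indices should give the identity on the remaining index, up to numerical precision:

>>> import numpy as np
>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> ti = t.isometrize(left_inds=('a', 'b'), method='qr')
>>> ident = ti.conj().reindex({'c': 'c*'}) @ ti
>>> np.allclose(ident.data, np.eye(4))
True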

isometrize_[source]
unitize[source]
unitize_
randomize(dtype=None, inplace=False, **randn_opts)[source]

Randomize the entries of this tensor.

Parameters:
  • dtype ({None, str}, optional) – The data type of the random entries. If left as the default None, then the data type of the current array will be used.

  • inplace (bool, optional) – Whether to perform the randomization inplace, by default False.

  • randn_opts – Supplied to randn().

Return type:

Tensor

randomize_[source]
flip(ind, inplace=False)[source]

Reverse the axis on this tensor corresponding to ind. Like performing e.g. X[:, :, ::-1, :].

flip_[source]
multiply_index_diagonal(ind, x, inplace=False)[source]

Multiply this tensor by 1D array x as if it were a diagonal tensor being contracted into index ind.

multiply_index_diagonal_[source]
almost_equals(other, **kwargs)[source]

Check if this tensor is almost the same as another.

drop_tags(tags=None)[source]

Drop certain tags, defaulting to all, from this tensor.

bonds(other)[source]

Return a tuple of the shared indices between this tensor and other.

filter_bonds(other)[source]

Sort this tensor’s indices into a list of those that it shares and doesn’t share with another tensor.

Parameters:

other (Tensor) – The other tensor.

Returns:

shared, unshared – The shared and unshared indices.

Return type:

(tuple[str], tuple[str])

__imul__(other)[source]
__itruediv__(other)[source]
__and__(other)[source]

Combine with another Tensor or TensorNetwork into a new TensorNetwork.

__or__(other)[source]

Combine virtually (no copies made) with another Tensor or TensorNetwork into a new TensorNetwork.

__matmul__(other)[source]

Explicitly contract with another tensor. Avoids some slight overhead of calling the full tensor_contract().

negate(inplace=False)[source]

Negate this tensor.

negate_[source]
__neg__()[source]

Negate this tensor.

as_network(virtual=True)[source]

Return a TensorNetwork with only this tensor.

draw(*args, **kwargs)[source]

Plot a graph of this tensor and its indices.

graph[source]
visualize[source]
__getstate__()[source]

Helper for pickle.

__setstate__(state)[source]
_repr_info()[source]

General info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_extra()[source]

General detailed info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_str(normal=True, extra=False)[source]

Render the general info as a string.

_repr_html_()[source]

Render this Tensor as HTML, for Jupyter notebooks.

__str__()[source]

Return str(self).

__repr__()[source]

Return repr(self).

quimb.tensor.tensor_arbgeom_tebd.get_colors(color, custom_colors=None, alpha=None)[source]

Generate a sequence of rgbs for tag(s) color.

quimb.tensor.tensor_arbgeom_tebd.get_positions(tn, G, *, dim=2, fix=None, layout='auto', initial_layout='auto', refine_layout='auto', iterations='auto', k=None)[source]
class quimb.tensor.tensor_arbgeom_tebd.LocalHamGen(H2, H1=None)[source]

Representation of a local hamiltonian defined on a general graph. This combines all two site and one site terms into a single interaction per lattice pair, and caches operations on the terms such as getting their exponential. The sites (nodes) should be hashable and comparable.

Parameters:
  • H2 (dict[tuple[node], array_like]) – The interaction terms, with each key being a tuple of nodes defining an edge and each value the local hamiltonian term for those two nodes.

  • H1 (array_like or dict[node, array_like], optional) – The one site term(s). If a single array is given, it is assumed to be the default onsite term for all sites. If a dict is supplied, the keys should represent specific coordinates like (i, j) with the values the array representing the local term for that site. A default term for all remaining sites can still be supplied with the key None.

terms

The total effective local term for each interaction (with single site terms appropriately absorbed). Each key is a pair of coordinates site_a, site_b with site_a < site_b.

Type:

dict[tuple, array_like]
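
Examples

A minimal construction sketch for a Heisenberg model with a field on a small graph, assuming the standard quimb builders qu.ham_heis (two site term) and qu.pauli (one site term):

>>> import quimb as qu
>>> from quimb.tensor.tensor_arbgeom_tebd import LocalHamGen
>>> edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
>>> H2 = {edge: qu.ham_heis(2) for edge in edges}
>>> H1 = 0.5 * qu.pauli('Z')
>>> ham = LocalHamGen(H2, H1=H1)
>>> ham.nsites
4
>>> ordering = ham.get_auto_ordering('sort')  # terms grouped into commuting sets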

property nsites
The number of sites in the system.
items()[source]

Iterate over all terms in the hamiltonian. This is mostly for convenient compatibility with compute_local_expectation.

_convert_from_qarray_cached(x)[source]
_flip_cached(x)[source]
_add_cached(x, y)[source]
_div_cached(x, y)[source]
_op_id_cached(x)[source]
_id_op_cached(x)[source]
_expm_cached(x, y)[source]
get_gate(where)[source]

Get the local term for pair where, cached.

get_gate_expm(where, x)[source]

Get the local term for pair where, matrix exponentiated by x, and cached.

apply_to_arrays(fn)[source]

Apply the function fn to all the arrays representing terms.

_nx_color_ordering(strategy='smallest_first', interchange=True)[source]

Generate a term ordering based on a coloring on the line graph.

get_auto_ordering(order='sort', **kwargs)[source]

Get an ordering of the terms to use with TEBD, for example. The default is to sort the coordinates then greedily group them into commuting sets.

Parameters:

order ({'sort', None, 'random', str}) –

How to order the terms before greedily grouping them into commuting (non-coordinate overlapping) sets:

  • 'sort' will sort the coordinate pairs first.

  • None will use the current order of terms which should match the order they were supplied to this LocalHamGen instance.

  • 'random' will randomly shuffle the coordinate pairs before grouping them - not the same as returning a completely random order.

  • 'random-ungrouped' will randomly shuffle the coordinate pairs but not group them at all with respect to commutation.

Any other option will be passed as a strategy to networkx.coloring.greedy_color to generate the ordering.

Returns:

Sequence of coordinate pairs.

Return type:

list[tuple[node]]

__repr__()[source]

Return repr(self).

draw(ordering='sort', show_norm=True, figsize=None, fontsize=8, legend=True, ax=None, **kwargs)[source]

Plot this Hamiltonian as a network.

Parameters:
  • ordering ({'sort', None, 'random'}, optional) – An ordering of the terms, or an argument to be supplied to quimb.tensor.tensor_arbgeom_tebd.LocalHamGen.get_auto_ordering() to generate this automatically.

  • show_norm (bool, optional) – Show the norm of each term as edge labels.

  • figsize (None or tuple[int], optional) – Size of the figure, defaults to size of Hamiltonian.

  • fontsize (int, optional) – Font size for norm labels.

  • legend (bool, optional) – Whether to show the legend of which terms are in which group.

  • ax (None or matplotlib.Axes, optional) – Add to an existing set of axes.

graph[source]
class quimb.tensor.tensor_arbgeom_tebd.TEBDGen(psi0, ham, tau=0.01, D=None, imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]

Generic class for performing time evolving block decimation on an arbitrary graph, i.e. applying the exponential of a Hamiltonian using a product formula that involves applying local exponentiated gates only.

sweep(tau)[source]

Perform a full sweep of gates at every pair.

\[\psi \rightarrow \prod_{\{ij\}} \exp(-\tau H_{ij}) \psi\]
_update_progbar(pbar)[source]
evolve(steps, tau=None, progbar=None)[source]

Evolve the state with the local Hamiltonian for steps steps with time step tau.

property state
Return a copy of the current state.
property n
The number of sweeps performed.
property D
The maximum bond dimension.
_check_energy()[source]

Logic for maybe computing the energy if needed.

property energy
Return the energy of current state, computing it only if necessary.
get_state()[source]

The default method for retrieving the current state - simply a copy. Subclasses can override this to perform additional transformations.

set_state(psi)[source]

The default method for setting the current state - simply a copy. Subclasses can override this to perform additional transformations.

presweep(i)[source]

Perform any computations required before the sweep (and energy computation). For the basic TEBD this is nothing.

gate(U, where)[source]

Perform single gate U at coordinate pair where. This is the most common method to override.

compute_energy()[source]

Compute and return the energy of the current state. Subclasses can override this with a custom method to compute the energy.

__repr__()[source]

Return repr(self).

class quimb.tensor.tensor_arbgeom_tebd.SimpleUpdateGen(psi0, ham, tau=0.01, D=None, imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]

Bases: TEBDGen

Simple update for arbitrary geometry hamiltonians.
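
Examples

A minimal imaginary time evolution sketch for a Heisenberg model on a small arbitrary graph. This assumes the random network builder quimb.tensor.TN_from_edges_rand (with its default site index and tag conventions) for the initial state:

>>> import quimb as qu
>>> import quimb.tensor as qtn
>>> from quimb.tensor.tensor_arbgeom_tebd import LocalHamGen, SimpleUpdateGen
>>> edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
>>> ham = LocalHamGen({edge: qu.ham_heis(2) for edge in edges})
>>> psi0 = qtn.TN_from_edges_rand(edges, D=2, phys_dim=2)  # assumed builder
>>> su = SimpleUpdateGen(psi0, ham, D=4, compute_energy_every=10, progbar=False)
>>> su.evolve(100, tau=0.1)
>>> psi = su.get_state()  # gauges absorbed into the tensors by default
>>> energy = su.energy    # energy estimate of the final state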

gate(U, where)[source]

Perform single gate U at coordinate pair where. This is the most common method to override.

compute_energy()[source]

Compute and return the energy of the current state. Subclasses can override this with a custom method to compute the energy.

get_state(absorb_gauges=True)[source]

The default method for retrieving the current state - simply a copy. Subclasses can override this to perform additional transformations.

set_state(psi, gauges=None)[source]

The default method for setting the current state - simply a copy. Subclasses can override this to perform additional transformations.