quimb.experimental.merabuilder

Tools for constructing MERA for arbitrary geometry.

TODO:

   - [ ] 2D, 3D MERA classes
   - [ ] general strategies for arbitrary geometries
   - [ ] layer_tag? and handling of other attributes
   - [ ] handle dangling case
   - [ ] invariant generators?

DONE:

   - [x] layer_gate methods for arbitrary geometry
   - [x] 1D: generic way to handle finite and open boundary conditions
   - [x] hook into other arbgeom infrastructure for computing rdms etc

Classes

Tensor

A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions.

IsoTensor

A Tensor subclass which keeps its left_inds by default even when its data is changed.

TensorNetworkGenVector

A tensor network which notionally has a single tensor and outer index per 'site'.

oset

An ordered set which stores elements as the keys of a dict (ordered as of python 3.6).

TensorNetwork1DVector

1D Tensor network which overall is like a vector with a single type of site ind.

TensorNetworkGenIso

A class for building generic 'isometric' or MERA like tensor networks.

MERA

Replacement class for MERA which uses the new infrastructure.

Functions

oset_union(xs)

Non-variadic ordered set union taking any sequence of iterables.

prod(iterable)

tags_to_oset(tags)

Parse a tags argument into an ordered set.

rand_uuid([base])

Return a guaranteed unique, shortish identifier, optionally appended to base.

_compute_expecs_maybe_in_parallel(fn, tn, terms[, ...])

Unified helper function for the various methods that compute many expectations.

_tn_local_expectation(tn, *args, **kwargs)

Define as function for pickleability.

calc_1d_unis_isos(sites, block_size, cyclic, ...)

Given sites, assumed to be in a 1D order, though not necessarily contiguous.

TTN_randtree_rand(sites, D[, phys_dim, group_size, ...])

Return a randomly constructed tree tensor network.

Module Contents

class quimb.experimental.merabuilder.Tensor(data=1.0, inds=(), tags=None, left_inds=None)

A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations. The tags are used to identify the tensor within networks, and are combined when tensors are contracted together.

Parameters:
  • data (numpy.ndarray) – The n-dimensional data.

  • inds (sequence of str) – The index labels for each dimension. Must match the number of dimensions of data.

  • tags (sequence of str, optional) – Tags with which to identify and group this tensor. These will be converted into an oset.

  • left_inds (sequence of str, optional) – Which, if any, indices to group as ‘left’ indices of an effective matrix. This can be useful, for example, when automatically applying unitary constraints to impose a certain flow on a tensor network but at the atomistic (Tensor) level.

Examples

Basic construction:

>>> from quimb import randn
>>> from quimb.tensor import Tensor
>>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'})
>>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'})

Indices are automatically aligned, and tags combined, when contracting:

>>> X @ Y
Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'})
__slots__ = ('_data', '_inds', '_tags', '_left_inds', '_owners')
_set_data(data)
_set_inds(inds)
_set_tags(tags)
_set_left_inds(left_inds)
get_params()

A simple function that returns the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

set_params(params)

A simple function that sets the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

copy(deep=False, virtual=False)

Copy this tensor.

Note

By default (deep=False), the underlying array will not be copied.

Parameters:
  • deep (bool, optional) – Whether to copy the underlying data as well.

  • virtual (bool, optional) – To conveniently mimic the behaviour of taking a virtual copy of a tensor network, this simply returns self.

__copy__
property data
property inds
property tags
property left_inds
check()

Do some basic diagnostics on this tensor, raising errors if something is wrong.

property owners
add_owner(tn, tid)

Add tn as owner of this Tensor - its tag and ind maps will be updated whenever this tensor is retagged or reindexed.

remove_owner(tn)

Remove TensorNetwork tn as an owner of this Tensor.

check_owners()

Check if this tensor is ‘owned’ by any alive TensorNetworks. Also trim any weakrefs to dead TensorNetworks.

_apply_function(fn)
modify(**kwargs)

Overwrite the data of this tensor in place.

Parameters:
  • data (array, optional) – New data.

  • apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.

  • inds (sequence of str, optional) – New tuple of indices.

  • tags (sequence of str, optional) – New tags.

  • left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.

apply_to_arrays(fn)

Apply the function fn to the underlying data array(s). This is meant for changing how the raw arrays are backed (e.g. converting between dtypes or libraries) but not their ‘numerical meaning’.

isel(selectors, inplace=False)

Select specific values for some dimensions/indices of this tensor, thereby removing them. Analogous to X[:, :, 3, :, :] with arrays. The indices to select from can be specified either by integer, in which case the corresponding index is removed, or by a slice.

Parameters:
  • selectors (dict[str, int], dict[str, slice]) – Mapping of index(es) to which value to take.

  • inplace (bool, optional) – Whether to select inplace or not.

Return type:

Tensor

Examples

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.isel({'b': -1})
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())

See also

TensorNetwork.isel

isel_
add_tag(tag)

Add a tag or multiple tags to this tensor. Unlike self.tags.add this also updates any TensorNetwork objects viewing this Tensor.

expand_ind(ind, size, mode=None, rand_strength=None, rand_dist='normal')

Inplace increase the size of the dimension of ind; the new array entries will be filled with zeros by default.

Parameters:
  • ind (str) – Name of the index to expand.

  • size (int, optional) – Size of the expanded index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select from zeros or random depending on non-zero rand_strength.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.

new_ind(name, size=1, axis=0, mode=None, rand_strength=None, rand_dist='normal')

Inplace add a new index - a named dimension. If size is specified to be greater than one then the new array entries will be filled with zeros.

Parameters:
  • name (str) – Name of the new index.

  • size (int, optional) – Size of the new index.

  • axis (int, optional) – Position of the new index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select from zeros or random depending on non-zero rand_strength.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.

new_bond
new_ind_with_identity(name, left_inds, right_inds, axis=0)

Inplace add a new index, where the newly stacked array entries form the identity from left_inds to right_inds. Selecting 0 or 1 for the new index name thus is like ‘turning off’ this tensor if viewed as an operator.

Parameters:
  • name (str) – Name of the new index.

  • left_inds (tuple[str]) – Names of the indices forming the left hand side of the operator.

  • right_inds (tuple[str]) – Names of the indices forming the right hand side of the operator. The dimensions of these must match those of left_inds.

  • axis (int, optional) – Position of the new index.

new_ind_pair_with_identity(new_left_ind, new_right_ind, d, inplace=False)

Expand this tensor with two new indices of size d, by taking an (outer) tensor product with the identity operator. The two new indices are added as axes at the start of the tensor.

Parameters:
  • new_left_ind (str) – Name of the new left index.

  • new_right_ind (str) – Name of the new right index.

  • d (int) – Size of the new indices.

  • inplace (bool, optional) – Whether to perform the expansion inplace.

Return type:

Tensor

new_ind_pair_with_identity_
conj(inplace=False)

Conjugate this tensor's data (does nothing to indices).

conj_
property H
Conjugate this tensor's data (does nothing to indices).
property shape
The size of each dimension.
property ndim
The number of dimensions.
property size
The total number of array elements.
property dtype
The data type of the array elements.
property backend
The backend inferred from the data.
iscomplex()
astype(dtype, inplace=False)

Change the type of this tensor to dtype.

astype_
max_dim()

Return the maximum size of any dimension, or 1 if scalar.

ind_size(ind)

Return the size of dimension corresponding to ind.

inds_size(inds)

Return the total size of dimensions corresponding to inds.

shared_bond_size(other)

Get the total size of the shared index(es) with other.

inner_inds()

Get all indices that appear on two or more tensors.

transpose(*output_inds, inplace=False)

Transpose this tensor - permuting the order of both the data and the indices. This operation is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Note that to compute the traditional 'transpose' of an operator within a contraction, for example, you would just use reindexing, not this.

Parameters:
  • output_inds (sequence of str) – The desired output sequence of indices.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor

transpose_
transpose_like(other, inplace=False)

Transpose this tensor to match the indices of other, allowing for one index to be different. E.g. if self.inds = ('a', 'b', 'c', 'x') and other.inds = ('b', 'a', 'd', 'c') then ‘x’ will be aligned with ‘d’ and the output inds will be ('b', 'a', 'x', 'c')

Parameters:
  • other (Tensor) – The tensor to match.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor

See also

transpose

transpose_like_
moveindex(ind, axis, inplace=False)

Move the index ind to position axis. Like transpose, this permutes the order of both the data and the indices and is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Parameters:
  • ind (str) – The index to move.

  • axis (int) – The new position to move ind to. Can be negative.

  • inplace (bool, optional) – Whether to perform the move inplace or not.

Return type:

Tensor

moveindex_
item()

Return the scalar value of this tensor, if it has a single element.

trace(left_inds, right_inds, preserve_tensor=False, inplace=False)

Trace index or indices left_inds with right_inds, removing them.

Parameters:
  • left_inds (str or sequence of str) – The left indices to trace, order matching right_inds.

  • right_inds (str or sequence of str) – The right indices to trace, order matching left_inds.

  • preserve_tensor (bool, optional) – If True, a tensor will be returned even if no indices remain.

  • inplace (bool, optional) – Perform the trace inplace.

Returns:

z

Return type:

Tensor or scalar

sum_reduce(ind, inplace=False)

Sum over index ind, removing it from this tensor.

Parameters:
  • ind (str) – The index to sum over.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

sum_reduce_
vector_reduce(ind, v, inplace=False)

Contract the vector v with the index ind of this tensor, removing it.

Parameters:
  • ind (str) – The index to contract.

  • v (array_like) – The vector to contract with.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

vector_reduce_
collapse_repeated(inplace=False)

Take the diagonals of any repeated indices, such that each index only appears once.

collapse_repeated_
contract(*others, output_inds=None, **opts)
direct_product(other, sum_inds=(), inplace=False)
direct_product_
split(*args, **kwargs)
compute_reduced_factor(side, left_inds, right_inds, **split_opts)
distance(other, **contract_opts)
distance_normalized
gate(G, ind, preserve_inds=True, inplace=False)

Gate this tensor - contract a matrix into one of its indices without changing its indices. Unlike contract, G is a raw array and the tensor remains with the same set of indices.

Parameters:
  • G (2D array_like) – The matrix to gate the tensor index with.

  • ind (str) – Which index to apply the gate to.

Return type:

Tensor

Examples

Create a random tensor of 4 qubits:

>>> t = qtn.rand_tensor(
...    shape=[2, 2, 2, 2],
...    inds=['k0', 'k1', 'k2', 'k3'],
... )

Create another tensor with an X gate applied to qubit 2:

>>> Gt = t.gate(qu.pauli('X'), 'k2')

The contraction of these two tensors is now the expectation of that operator:

>>> t.H @ Gt
-4.108910576149794
gate_
singular_values(left_inds, method='svd')

Return the singular values associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensor's indices that defines 'left'.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Returns:

The singular values.

Return type:

1d-array

entropy(left_inds, method='svd')

Return the entropy associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensor's indices that defines 'left'.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Return type:

float

retag(retag_map, inplace=False)

Rename the tags of this tensor, optionally, in-place.

Parameters:
  • retag_map (dict-like) – Mapping of pairs {old_tag: new_tag, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed tags will be returned.

retag_
reindex(index_map, inplace=False)

Rename the indices of this tensor, optionally in-place.

Parameters:
  • index_map (dict-like) – Mapping of pairs {old_ind: new_ind, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed inds will be returned.

reindex_
fuse(fuse_map, inplace=False)

Combine groups of indices into single indices.

Parameters:

fuse_map (dict_like or sequence of tuples.) – Mapping like: {new_ind: sequence of existing inds, ...} or an ordered mapping like [(new_ind_1, old_inds_1), ...] in which case the output tensor’s fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused.

Returns:

The transposed, reshaped and re-labeled tensor.

Return type:

Tensor

fuse_
unfuse(unfuse_map, shape_map, inplace=False)

Reshape single indices into groups of multiple indices

Parameters:
  • unfuse_map (dict_like or sequence of tuples.) – Mapping like: {existing_ind: sequence of new inds, ...} or an ordered mapping like [(old_ind_1, new_inds_1), ...] in which case the output tensor’s new inds will be ordered. In both cases the new indices are created at the old index’s position of the tensor’s shape

  • shape_map (dict_like or sequence of tuples) – Mapping like: {old_ind: new_ind_sizes, ...} or an ordered mapping like [(old_ind_1, new_ind_sizes_1), ...].

Returns:

The transposed, reshaped and re-labeled tensor

Return type:

Tensor

unfuse_
to_dense(*inds_seq, to_qarray=False)

Convert this Tensor into a dense array, with a single dimension for each group of inds in inds_seq. E.g. to convert several sites into a density matrix: T.to_dense(('k0', 'k1'), ('b0', 'b1')).

to_qarray
squeeze(include=None, exclude=None, inplace=False)

Drop any singlet dimensions from this tensor.

Parameters:
  • include (sequence of str, optional) – Only squeeze dimensions with indices in this list.

  • exclude (sequence of str, optional) – Squeeze all dimensions except those with indices in this list.

  • inplace (bool, optional) – Whether to perform the squeeze inplace or not.

Return type:

Tensor

squeeze_
largest_element()

Return the largest element, in terms of absolute magnitude, of this tensor.

idxmin(f=None)

Get the index configuration of the minimum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the minimum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the minimum element.

Return type:

dict[str, int]

idxmax(f=None)

Get the index configuration of the maximum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the maximum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the maximum element.

Return type:

dict[str, int]

norm()

Frobenius norm of this tensor:

\[\|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)}\]

where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.

normalize(inplace=False)
normalize_
symmetrize(ind1, ind2, inplace=False)

Hermitian symmetrize this tensor for indices ind1 and ind2. I.e. T = (T + T.conj().T) / 2, where the transpose is taken only over the specified indices.

symmetrize_
isometrize(left_inds=None, method='qr', inplace=False)

Make this tensor unitary (or isometric) with respect to left_inds. The underlying method is set by method.

Parameters:
  • left_inds (sequence of str) – The indices to group together and treat as the left hand side of a matrix.

  • method (str, optional) –

    The method used to generate the isometry. The options are:

    • 'qr': use the Q factor of the QR decomposition of x with the constraint that the diagonal of R is positive.

    • 'svd': uses U @ VH of the SVD decomposition of x. This is useful for finding the 'closest' isometric matrix to x, such as when it has been expanded with noise etc. But is less stable for differentiation / optimization.

    • 'exp': use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.

    • 'cayley': use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with e.g. HIPS/autograd), but more expensive for non-square x.

    • 'householder': use the Householder reflection method directly. This requires that the backend implements "linalg.householder_product".

    • 'torch_householder': use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is "torch". This is generally the best parametrizing method for "torch" if available.

    • 'mgs': use a python implementation of the modified Gram Schmidt method directly. This is slow if not compiled but a useful reference.

    Not all backends support all methods or differentiating through all methods.

  • inplace (bool, optional) – Whether to perform the unitization inplace.

Return type:

Tensor

isometrize_
unitize
unitize_
randomize(dtype=None, inplace=False, **randn_opts)

Randomize the entries of this tensor.

Parameters:
  • dtype ({None, str}, optional) – The data type of the random entries. If left as the default None, then the data type of the current array will be used.

  • inplace (bool, optional) – Whether to perform the randomization inplace, by default False.

  • randn_opts – Supplied to randn().

Return type:

Tensor

randomize_
flip(ind, inplace=False)

Reverse the axis on this tensor corresponding to ind. Like performing e.g. X[:, :, ::-1, :].

flip_
multiply_index_diagonal(ind, x, inplace=False)

Multiply this tensor by 1D array x as if it were a diagonal tensor being contracted into index ind.

multiply_index_diagonal_
almost_equals(other, **kwargs)

Check if this tensor is almost the same as another.

drop_tags(tags=None)

Drop certain tags, defaulting to all, from this tensor.

bonds(other)

Return a tuple of the shared indices between this tensor and other.

filter_bonds(other)

Sort this tensor’s indices into a list of those that it shares and doesn’t share with another tensor.

Parameters:

other (Tensor) – The other tensor.

Returns:

shared, unshared – The shared and unshared indices.

Return type:

(tuple[str], tuple[str])

__imul__(other)
__itruediv__(other)
__and__(other)

Combine with another Tensor or TensorNetwork into a new TensorNetwork.

__or__(other)

Combine virtually (no copies made) with another Tensor or TensorNetwork into a new TensorNetwork.

__matmul__(other)

Explicitly contract with another tensor. Avoids some slight overhead of calling the full tensor_contract().

negate(inplace=False)

Negate this tensor.

negate_
__neg__()

Negate this tensor.

as_network(virtual=True)

Return a TensorNetwork with only this tensor.

draw(*args, **kwargs)

Plot a graph of this tensor and its indices.

graph
visualize
__getstate__()

Helper for pickle.

__setstate__(state)
_repr_info()

General info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_extra()

General detailed info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_str(normal=True, extra=False)

Render the general info as a string.

_repr_html_()

Render this Tensor as HTML, for Jupyter notebooks.

__str__()

Return str(self).

__repr__()

Return repr(self).

class quimb.experimental.merabuilder.IsoTensor(data=1.0, inds=(), tags=None, left_inds=None)

Bases: Tensor

A Tensor subclass which keeps its left_inds by default even when its data is changed.

__slots__ = ('_data', '_inds', '_tags', '_left_inds', '_owners')
modify(**kwargs)

Overwrite the data of this tensor in place.

Parameters:
  • data (array, optional) – New data.

  • apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.

  • inds (sequence of str, optional) – New tuple of indices.

  • tags (sequence of str, optional) – New tags.

  • left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.

fuse(*args, inplace=False, **kwargs)

Combine groups of indices into single indices.

Parameters:

fuse_map (dict_like or sequence of tuples.) – Mapping like: {new_ind: sequence of existing inds, ...} or an ordered mapping like [(new_ind_1, old_inds_1), ...] in which case the output tensor’s fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused.

Returns:

The transposed, reshaped and re-labeled tensor.

Return type:

Tensor

quimb.experimental.merabuilder.oset_union(xs)

Non-variadic ordered set union taking any sequence of iterables.

quimb.experimental.merabuilder.prod(iterable)
class quimb.experimental.merabuilder.TensorNetworkGenVector(ts=(), *, virtual=False, check_collisions=True)

Bases: TensorNetworkGen

A tensor network which notionally has a single tensor and outer index per ‘site’, though these could be labelled arbitrarily and could also be linked in an arbitrary geometry by bonds.

_EXTRA_PROPS = ('_sites', '_site_tag_id', '_site_ind_id')
property site_ind_id
The string specifier for the physical indices.
site_ind(site)
property site_inds
Return a tuple of all site indices.
property site_inds_present
All of the site inds still present in this tensor network.
reset_cached_properties()

Reset any cached properties, one should call this when changing the actual geometry of a TN inplace, for example.

reindex_sites(new_id, where=None, inplace=False)

Modify the site indices for all or some tensors in this vector tensor network (without changing the site_ind_id).

Parameters:
  • new_id (str) – A string with a format placeholder to accept a site, e.g. “ket{}”.

  • where (None or sequence) – Which sites to update the index labels on. If None (default) all sites.

  • inplace (bool) – Whether to reindex in place.

reindex_sites_
reindex_all(new_id, inplace=False)

Reindex all physical sites and change the site_ind_id.

reindex_all_
gen_inds_from_coos(coos)

Generate the site inds corresponding to the given coordinates.

phys_dim(site=None)

Get the physical dimension of site, defaulting to the first site if not specified.

to_dense(*inds_seq, to_qarray=False, to_ket=None, **contract_opts)

Contract this tensor network ‘vector’ into a dense array. By default, turn into a ‘ket’ qarray, i.e. column vector of shape (d, 1).

Parameters:
  • inds_seq (sequence of sequences of str) – How to group the site indices into the dense array. By default, use a single group ordered like sites, but only containing those sites which are still present.

  • to_qarray (bool) – Whether to turn the dense array into a qarray, if the backend would otherwise be 'numpy'.

  • to_ket (None or str) – Whether to reshape the dense array into a ket (shape (d, 1) array). If None (default), do this only if the inds_seq is not supplied.

  • contract_opts – Options to pass to contract().

Return type:

array

to_qarray
gate_with_op_lazy(A, transpose=False, inplace=False, **kwargs)

Act lazily with the operator tensor network A, which should have matching structure, on this vector/state tensor network, like A @ x. The returned tensor network will have the same structure as this one, but with the operator gated in lazily, i.e. uncontracted.

\[| x \rangle \rightarrow A | x \rangle\]

or (if transpose=True):

\[| x \rangle \rightarrow A^T | x \rangle\]
Parameters:
  • A (TensorNetworkGenOperator) – The operator tensor network to gate with, or apply to this tensor network.

  • transpose (bool, optional) – Whether to contract the lower or upper indices of A with the site indices of x. If False (the default), the lower indices of A will be contracted with the site indices of x, if True the upper indices of A will be contracted with the site indices of x, which is like applying A.T @ x.

  • inplace (bool, optional) – Whether to perform the gate operation inplace on this tensor network.

Return type:

TensorNetworkGenVector

gate_with_op_lazy_
gate(G, where, contract=False, tags=None, propagate_tags=False, info=None, inplace=False, **compress_opts)

Apply a gate to this vector tensor network at sites where. This is essentially a wrapper around gate_inds(), except that where can be specified as a list of sites, and tags can be optionally, intelligently propagated to the new gate tensor.

\[| \psi \rangle \rightarrow G_\mathrm{where} | \psi \rangle\]
Parameters:
  • G (array_like) – The gate array to apply, should match or be factorable into the shape (*phys_dims, *phys_dims).

  • where (node or sequence[node]) – The sites to apply the gate to.

  • contract ({False, True, 'split', 'reduce-split', 'split-gate', 'swap-split-gate', 'auto-split-gate'}, optional) – How to apply the gate, see gate_inds().

  • tags (str or sequence of str, optional) – Tags to add to the new gate tensor.

  • propagate_tags ({False, True, 'register', 'sites'}, optional) –

    Whether to propagate tags to the new gate tensor:

    - False: no tags are propagated
    - True: all tags are propagated
    - 'register': only site tags corresponding to ``where`` are
      added.
    - 'sites': all site tags on the current sites are propagated,
      resulting in a lightcone like tagging.
    

  • info (None or dict, optional) – Used to store extra optional information such as the singular values if not absorbed.

  • inplace (bool, optional) – Whether to perform the gate operation inplace on the tensor network or not.

  • compress_opts – Supplied to tensor_split() for any contract methods that involve splitting. Ignored otherwise.

Return type:

TensorNetworkGenVector

See also

TensorNetwork.gate_inds

gate_
gate_simple_(G, where, gauges, renorm=True, **gate_opts)

Apply a gate to this vector tensor network at sites where, using simple update style gauging of the tensors first, as supplied in gauges. The new singular values for the bond are reinserted into gauges.

Parameters:
  • G (array_like) – The gate to be applied.

  • where (node or sequence[node]) – The sites to apply the gate to.

  • gauges (dict[str, array_like]) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be used.

  • renorm (bool, optional) – Whether to renormalise the singular values after the gate is applied, before reinserting them into gauges.

gate_fit_local_(G, where, max_distance=0, fillin=0, gauges=None, **fit_opts)
local_expectation_cluster(G, where, normalized=True, max_distance=0, fillin=False, gauges=None, optimize='auto', max_bond=None, rehearse=False, **contract_opts)

Approximately compute a single local expectation value of the gate G at sites where, either treating the environment beyond max_distance as the identity, or using simple update style bond gauges as supplied in gauges.

This selects a local neighbourhood of tensors up to distance max_distance away from where, then traces over dangling bonds after potentially inserting the bond gauges, to form an approximate version of the reduced density matrix.

\[\langle \psi | G | \psi \rangle \approx \frac{ \mathrm{Tr} [ G \tilde{\rho}_\mathrm{where} ] }{ \mathrm{Tr} [ \tilde{\rho}_\mathrm{where} ] }\]

assuming normalized==True.

Parameters:
  • G (array_like) – The gate to compute the expectation of.

  • where (node or sequence[node]) – The sites to compute the expectation at.

  • normalized (bool, optional) – Whether to locally normalize the result, i.e. divide by the expectation value of the identity.

  • max_distance (int, optional) – The maximum graph distance to include tensors neighboring where when computing the expectation. The default 0 means only the tensors at sites where are used.

  • fillin (bool or int, optional) – When selecting the local tensors, whether and how many times to ‘fill-in’ corner tensors attached multiple times to the local region. On a lattice this fills in the corners. See select_local().

  • gauges (dict[str, array_like], optional) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be used.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, when exactly contracting the local tensors.

  • max_bond (None or int, optional) – If specified, use compressed contraction.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computations or not:

    - False: perform the computation.
    - 'tn': return the tensor networks of each local expectation,
      without running the path optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree`` for each local expectation.
    - True: run the path optimizer and return the ``PathInfo`` for
      each local expectation.
    

Returns:

expectation

Return type:

float
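The normalized estimate Tr[G ρ̃]/Tr[ρ̃] can be illustrated with a plain numpy sketch (this shows the formula only, not quimb's local cluster selection), using a single site of a two-site state:

```python
import numpy as np

rng = np.random.default_rng(0)

# an unnormalized 2-site state |psi>, one index per site
psi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# reduced density matrix on site 0: trace out site 1
rho = np.einsum("ab,cb->ac", psi, psi.conj())

# a local operator G, here pauli Z
G = np.diag([1.0, -1.0])

# normalized local expectation: Tr[G rho] / Tr[rho]
expec = np.trace(G @ rho) / np.trace(rho)

# agrees with the direct value <psi| G x I |psi> / <psi|psi>
direct = (
    np.einsum("ab,ac,cb->", psi.conj(), G, psi)
    / np.einsum("ab,ab->", psi.conj(), psi)
)
```

In the method itself ρ̃ is only approximate, formed from the tensors within max_distance of where, which is what makes the division by Tr[ρ̃] (the expectation of the identity) important.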

local_expectation_simple
compute_local_expectation_cluster(terms, *, max_distance=0, fillin=False, normalized=True, gauges=None, optimize='auto', max_bond=None, return_all=False, rehearse=False, executor=None, progbar=False, **contract_opts)

Compute all local expectations of the given terms, either treating the environment beyond max_distance as the identity, or using simple update style bond gauges as supplied in gauges.

This selects a local neighbourhood of tensors up to distance max_distance away from each term’s sites, then traces over dangling bonds after potentially inserting the bond gauges, to form an approximate version of the reduced density matrix.

\[\sum_\mathrm{i} \langle \psi | G_\mathrm{i} | \psi \rangle \approx \sum_\mathrm{i} \frac{ \mathrm{Tr} [ G_\mathrm{i} \tilde{\rho}_\mathrm{i} ] }{ \mathrm{Tr} [ \tilde{\rho}_\mathrm{i} ] }\]

assuming normalized==True.

Parameters:
  • terms (dict[node or (node, node), array_like]) – The terms to compute the expectation of, with keys being the sites and values being the local operators.

  • max_distance (int, optional) – The maximum graph distance to include tensors neighboring each term’s sites when computing the expectation. The default 0 means only the tensors at sites of each term are used.

  • fillin (bool or int, optional) – When selecting the local tensors, whether and how many times to ‘fill-in’ corner tensors attached multiple times to the local region. On a lattice this fills in the corners. See select_local().

  • normalized (bool, optional) – Whether to locally normalize the result, i.e. divide by the expectation value of the identity. This implies that a different normalization factor is used for each term.

  • gauges (dict[str, array_like], optional) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be used.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, when exactly contracting the local tensors.

  • max_bond (None or int, optional) – If specified, use compressed contraction.

  • return_all (bool, optional) – Whether to return all results, or just the summed expectation.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computations or not:

    - False: perform the computation.
    - 'tn': return the tensor networks of each local expectation,
      without running the path optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree`` for each local expectation.
    - True: run the path optimizer and return the ``PathInfo`` for
      each local expectation.
    

  • executor (Executor, optional) – If supplied compute the terms in parallel using this executor.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Supplied to contract().

Returns:

expecs – If return_all==False, return the summed expectation value of the given terms. Otherwise, return a dictionary mapping each term’s location to the expectation value.

Return type:

float or dict[node or (node, node), float]
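The summed, per-term-normalized quantity can be sketched in plain numpy (illustrative only; rho_site is a hypothetical helper standing in for quimb's cluster machinery):

```python
import numpy as np

rng = np.random.default_rng(1)

# unnormalized real 2-site state, indices (site0, site1)
psi = rng.normal(size=(2, 2))

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# operators keyed by the site they act on, like the ``terms`` argument
terms = {0: Z, 1: X}

def rho_site(psi, site):
    # hypothetical helper: exact single-site reduced density matrix
    mat = psi if site == 0 else psi.T
    return mat @ mat.conj().T

# sum of locally normalized expectations: a separate normalization
# factor Tr[rho_i] is used for each term
total = sum(
    np.trace(G @ rho_site(psi, s)) / np.trace(rho_site(psi, s))
    for s, G in terms.items()
)
```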

compute_local_expectation_simple
local_expectation_exact(G, where, optimize='auto-hq', normalized=True, rehearse=False, **contract_opts)

Compute the local expectation of operator G at site(s) where by exactly contracting the full overlap tensor network.

compute_local_expectation_exact(terms, optimize='auto-hq', *, normalized=True, return_all=False, rehearse=False, executor=None, progbar=False, **contract_opts)

Compute the local expectations of many operators, by exactly contracting the full overlap tensor network.

Parameters:
  • terms (dict[node or (node, node), array_like]) – The terms to compute the expectation of, with keys being the sites and values being the local operators.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, when exactly contracting the full tensor network.

  • normalized (bool, optional) – Whether to normalize the result.

  • return_all (bool, optional) – Whether to return all results, or just the summed expectation.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computations or not:

    - False: perform the computation.
    - 'tn': return the tensor networks of each local expectation,
      without running the path optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree`` for each local expectation.
    - True: run the path optimizer and return the ``PathInfo`` for
      each local expectation.
    

  • executor (Executor, optional) – If supplied compute the terms in parallel using this executor.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Supplied to contract().

Returns:

expecs – If return_all==False, return the summed expectation value of the given terms. Otherwise, return a dictionary mapping each term’s location to the expectation value.

Return type:

float or dict[node or (node, node), float]

partial_trace(keep, max_bond, optimize, flatten=True, reduce=False, normalized=True, symmetrized='auto', rehearse=False, method='contract_compressed', **contract_compressed_opts)

Partially trace this tensor network state, keeping only the sites in keep, using compressed contraction.

Parameters:
  • keep (iterable of hashable) – The sites to keep.

  • max_bond (int) – The maximum bond dimensions to use while compressed contracting.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use; it should specifically generate contraction paths designed for compressed contraction.

  • flatten ({False, True, 'all'}, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction. Whilst this makes the TN generally more complex to contract, the accuracy is usually improved. If 'all', also flatten the tensors in keep.

  • reduce (bool, optional) – Whether to first ‘pull’ the physical indices off their respective tensors using QR reduction. Experimental.

  • normalized (bool, optional) – Whether to normalize the reduced density matrix at the end.

  • symmetrized ({'auto', True, False}, optional) – Whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computation or not:

    - False: perform the computation.
    - 'tn': return the tensor network without running the path
      optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree``.
    - True: run the path optimizer and return the ``PathInfo``.
    

  • contract_compressed_opts (dict, optional) – Additional keyword arguments to pass to contract_compressed().

Returns:

rho – The reduced density matrix of the sites in keep.

Return type:

array_like
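The symmetrized and normalized post-processing options correspond conceptually to the following numpy steps on the raw contracted density matrix (a sketch, not quimb's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

# a raw, approximately contracted reduced density matrix: in general
# it may come out slightly non-hermitian and unnormalized
rho = np.eye(4) + 0.1 * (
    rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
)

# symmetrized=True: restore hermiticity by averaging with the
# conjugate transpose
rho = (rho + rho.conj().T) / 2

# normalized=True: scale so that Tr[rho] == 1
rho = rho / np.trace(rho)
```

With flatten=True the ket and bra layers are contracted together first, which is why the symmetrization step is then usually unnecessary.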

local_expectation(G, where, max_bond, optimize, flatten=True, normalized=True, symmetrized='auto', reduce=False, rehearse=False, **contract_compressed_opts)

Compute the local expectation of operator G at site(s) where by approximately contracting the full overlap tensor network.

Parameters:
  • G (array_like) – The local operator to compute the expectation of.

  • where (node or sequence of nodes) – The sites to compute the expectation for.

  • max_bond (int) – The maximum bond dimensions to use while compressed contracting.

  • optimize (str or PathOptimizer, optional) – The contraction path optimizer to use; it should specifically generate contraction paths designed for compressed contraction.

  • method ({'rho', 'rho-reduced'}, optional) – The method to use to compute the expectation value.

  • flatten (bool, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction. Whilst this makes the TN generally more complex to contract, the accuracy is usually much improved.

  • normalized (bool, optional) – If computing via partial_trace, whether to normalize the reduced density matrix at the end.

  • symmetrized ({'auto', True, False}, optional) – If computing via partial_trace, whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computation or not:

    - False: perform the computation.
    - 'tn': return the tensor network without running the path
      optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree``.
    - True: run the path optimizer and return the ``PathInfo``.
    

  • contract_compressed_opts (dict, optional) – Additional keyword arguments to pass to contract_compressed().

Returns:

expec

Return type:

float

compute_local_expectation(terms, max_bond, optimize, *, flatten=True, normalized=True, symmetrized='auto', reduce=False, return_all=False, rehearse=False, executor=None, progbar=False, **contract_compressed_opts)

Compute the local expectations of many local operators, by approximately contracting the full overlap tensor network.

Parameters:
  • terms (dict[node or (node, node), array_like]) – The terms to compute the expectation of, with keys being the sites and values being the local operators.

  • max_bond (int) – The maximum bond dimension to use during contraction.

  • optimize (str or PathOptimizer) – The compressed contraction path optimizer to use.

  • method ({'rho', 'rho-reduced'}, optional) –

    The method to use to compute the expectation value.

    • ’rho’: compute the expectation value via the reduced density matrix.

    • ’rho-reduced’: compute the expectation value via the reduced density matrix, having reduced the physical indices onto the bonds first.

  • flatten (bool, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction. Whilst this makes the TN generally more complex to contract, the accuracy can often be much improved.

  • normalized (bool, optional) – Whether to locally normalize the result.

  • symmetrized ({'auto', True, False}, optional) – Whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.

  • return_all (bool, optional) – Whether to return all results, or just the summed expectation. If rehearse is not False, this is ignored and a dict is always returned.

  • rehearse ({False, 'tn', 'tree', True}, optional) –

    Whether to perform the computations or not:

    - False: perform the computation.
    - 'tn': return the tensor networks of each local expectation,
      without running the path optimizer.
    - 'tree': run the path optimizer and return the
      ``cotengra.ContractionTree`` for each local expectation.
    - True: run the path optimizer and return the ``PathInfo`` for
      each local expectation.
    

  • executor (Executor, optional) – If supplied compute the terms in parallel using this executor.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_compressed_opts – Supplied to contract_compressed().

Returns:

expecs – If return_all==False, return the summed expectation value of the given terms. Otherwise, return a dictionary mapping each term’s location to the expectation value.

Return type:

float or dict[node or (node, node), float]

compute_local_expectation_rehearse
compute_local_expectation_tn
class quimb.experimental.merabuilder.oset(it=())

An ordered set which stores elements as the keys of a dict (ordered as of python 3.6). ‘A few times’ slower than using a set directly for small sizes, but makes everything deterministic.

__slots__ = ('_d',)
classmethod _from_dict(d)
classmethod from_dict(d)

Public method makes sure to copy incoming dictionary.

copy()
__deepcopy__(memo)
add(k)
discard(k)
remove(k)
clear()
update(*others)
union(*others)
intersection_update(*others)
intersection(*others)
difference_update(*others)
difference(*others)
popleft()
popright()
pop
__eq__(other)

Return self==value.

__or__(other)
__ior__(other)
__and__(other)
__iand__(other)
__sub__(other)
__isub__(other)
__len__()
__iter__()
__contains__(x)
__repr__()

Return repr(self).
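The dict-backed idea can be sketched minimally as follows (MiniOset is illustrative only; the real oset implements the much larger API listed above, including the set operations and popleft/popright):

```python
class MiniOset:
    """Minimal sketch of an ordered set backed by a dict."""

    __slots__ = ("_d",)

    def __init__(self, it=()):
        # dicts preserve insertion order as of python 3.6, and
        # repeated keys keep their first position
        self._d = dict.fromkeys(it)

    def add(self, k):
        self._d[k] = None

    def discard(self, k):
        self._d.pop(k, None)

    def __contains__(self, x):
        return x in self._d

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)
```

Iterating such a set always yields elements in first-insertion order, which is what makes downstream tensor network operations deterministic.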

quimb.experimental.merabuilder.tags_to_oset(tags)

Parse a tags argument into an ordered set.

quimb.experimental.merabuilder.rand_uuid(base='')

Return a guaranteed unique, shortish identifier, optionally appended to base.

Examples

>>> rand_uuid()
'_2e1dae1b'
>>> rand_uuid('virt-bond')
'virt-bond_bf342e68'
quimb.experimental.merabuilder._compute_expecs_maybe_in_parallel(fn, tn, terms, return_all=False, executor=None, progbar=False, **kwargs)

Unified helper function for the various methods that compute many expectations, possibly in parallel, possibly with a progress bar.

quimb.experimental.merabuilder._tn_local_expectation(tn, *args, **kwargs)

Define as function for pickleability.

class quimb.experimental.merabuilder.TensorNetwork1DVector(ts=(), *, virtual=False, check_collisions=True)

Bases: TensorNetwork1D, quimb.tensor.tensor_arbgeom.TensorNetworkGenVector

1D Tensor network which overall is like a vector with a single type of site ind.

_EXTRA_PROPS = ('_site_tag_id', '_site_ind_id', '_L')
reindex_sites(new_id, where=None, inplace=False)

Update the physical site index labels to a new string specifier. Note that this doesn’t change the id string stored with the TN.

Parameters:
  • new_id (str) – A string with a format placeholder to accept an int, e.g. “ket{}”.

  • where (None or slice) – Which sites to update the index labels on. If None (default) all sites.

  • inplace (bool) – Whether to reindex in place.

reindex_sites_
site_ind(i)

Get the physical index name of site i.

gate(*args, inplace=False, **kwargs)

Apply a gate to this vector tensor network at sites where. This is essentially a wrapper around gate_inds(), except that where can be specified as a list of sites, and tags can optionally be intelligently propagated to the new gate tensor.

\[| \psi \rangle \rightarrow G_\mathrm{where} | \psi \rangle\]
Parameters:
  • G (array_like) – The gate array to apply, should match or be factorable into the shape (*phys_dims, *phys_dims).

  • where (node or sequence[node]) – The sites to apply the gate to.

  • contract ({False, True, 'split', 'reduce-split', 'split-gate', 'swap-split-gate', 'auto-split-gate'}, optional) – How to apply the gate, see gate_inds().

  • tags (str or sequence of str, optional) – Tags to add to the new gate tensor.

  • propagate_tags ({False, True, 'register', 'sites'}, optional) –

    Whether to propagate tags to the new gate tensor:

    - False: no tags are propagated
    - True: all tags are propagated
    - 'register': only site tags corresponding to ``where`` are
      added.
    - 'sites': all site tags on the current sites are propagated,
      resulting in a lightcone like tagging.
    

  • info (None or dict, optional) – Used to store extra optional information such as the singular values if not absorbed.

  • inplace (bool, optional) – Whether to perform the gate operation inplace on the tensor network or not.

  • compress_opts – Supplied to tensor_split() for any contract methods that involve splitting. Ignored otherwise.

Return type:

TensorNetworkGenVector

See also

TensorNetwork.gate_inds
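For a single-site gate, the contraction underlying the formula above can be sketched densely with numpy (illustrative only; the real method acts locally on the tensor network rather than on a dense state):

```python
import numpy as np

rng = np.random.default_rng(3)

# dense 3-site state with one physical index per site
psi = rng.normal(size=(2, 2, 2))

# single-site gate, shape (*phys_dims, *phys_dims) == (2, 2)
G = np.array([[0.0, 1.0], [1.0, 0.0]])  # pauli X

# |psi> -> G_where |psi> with where = site 1: contract G's rightmost
# index with that site's physical index, keeping the others open
psi_g = np.einsum("xb,abc->axc", G, psi)
```

For a pauli X this simply flips the site-1 index, i.e. psi_g equals psi with that axis reversed.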

gate_
expec(*args, **kwargs)
correlation(A, i, j, B=None, **expec_opts)

Correlation of operator A between i and j.

Parameters:
  • A (array) – The operator to act with, can be multi site.

  • i (int or sequence of int) – The first site(s).

  • j (int or sequence of int) – The second site(s).

  • expec_opts – Supplied to expec_TN_1D().

Returns:

C – The correlation <A(i) B(j)> - <A(i)> <B(j)>, with B defaulting to A.

Return type:

float

Examples

>>> ghz = (MPS_computational_state('0000') +
...        MPS_computational_state('1111')) / 2**0.5
>>> ghz.correlation(pauli('Z'), 0, 1)
1.0
>>> ghz.correlation(pauli('Z'), 0, 1, B=pauli('X'))
0.0
class quimb.experimental.merabuilder.TensorNetworkGenIso(ts=(), *, virtual=False, check_collisions=True)

Bases: quimb.tensor.tensor_arbgeom.TensorNetworkGenVector

A class for building generic ‘isometric’ or MERA like tensor network states with arbitrary geometry. After supplying the underlying sites of the problem - which can be an arbitrary sequence of hashable objects - one places either unitaries, isometries or tree tensors layered above groups of sites. The isometric and tree tensors effectively coarse grain blocks into a single new site, and the unitaries generally ‘disentangle’ between blocks.

_EXTRA_PROPS = ('_site_tag_id', '_sites', '_site_ind_id', '_layer_ind_id')
classmethod empty(sites, phys_dim=2, site_tag_id='I{}', site_ind_id='k{}', layer_ind_id='l{}')
property layer_ind_id
layer_ind(site)
layer_gate_raw(G, where, iso=True, new_sites=None, tags=None, all_site_tags=None)

Build out this MERA by placing either a new unitary, isometry or tree tensor, given by G, at the sites given by where. This handles propagating the lightcone of tags and marking the correct indices of the IsoTensor as left_inds.

Parameters:
  • G (array_like) – The raw array to place at the sites. Its shape determines whether it is a unitary or isometry/tree. It should have k + len(where) dimensions. For a unitary k == len(where). If it is an isometry/tree, k will generally be 1, or 0 to ‘cap’ the MERA. The rightmost indices are those attached to the current open layer indices.

  • where (sequence of hashable) – The sites to layer the tensor above.

  • iso (bool, optional) – Whether to declare the tensor as a unitary/isometry by marking the left indices. If iso = False (a ‘tree’ tensor) then one should have k <= 1. Once you have such a ‘tree’ tensor you cannot place isometries or unitaries above it. It will also have the lightcone tags of every site. Technically one could place a ‘PEPS’ style tensor with iso = False and k > 1 but some methods might break.

  • new_sites (sequence of hashable, optional) – Which sites to make new open sites. If not given, defaults to the first k sites in where.

  • tags (sequence of str, optional) – Custom tags to add to the new tensor, in addition to the automatically generated site tags.

  • all_site_tags (sequence of str, optional) – For performance, supply all site tags to avoid recomputing them.

layer_gate_fill_fn(fill_fn, operation, where, max_bond, new_sites=None, tags=None, all_site_tags=None)

Build out this MERA by placing either a new unitary, isometry or tree tensor at sites where, generating the data array using fill_fn and maximum bond dimension max_bond.

Parameters:
  • fill_fn (callable) – A function with signature fill_fn(shape) -> array_like.

  • operation ({"iso", "uni", "cap", "tree", "treecap"}) – The type of tensor to place.

  • where (sequence of hashable) – The sites to layer the tensor above.

  • max_bond (int) – The maximum bond dimension of the tensor. This only applies for isometries and trees and when the product of the lower dimensions is greater than max_bond.

  • new_sites (sequence of hashable, optional) – Which sites to make new open sites. If not given, defaults to the first k sites in where.

  • tags (sequence of str, optional) – Custom tags to add to the new tensor, in addition to the automatically generated site tags.

  • all_site_tags (sequence of str, optional) – For performance, supply all site tags to avoid recomputing them.

See also

layer_gate_raw
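A fill_fn matching the required signature, and the max_bond capping rule described above (the new bond takes the product of the lower dimensions unless that exceeds max_bond), can be sketched as follows; iso_shape is a hypothetical helper, not part of the quimb API:

```python
import numpy as np

rng = np.random.default_rng(4)

def fill_fn(shape):
    # matches the required signature: fill_fn(shape) -> array_like
    return rng.normal(size=shape)

def iso_shape(lower_dims, max_bond):
    # hypothetical helper: an isometry coarse grains len(lower_dims)
    # sites into one, with the new bond dimension capped at max_bond
    new_dim = min(max_bond, int(np.prod(lower_dims)))
    return (new_dim, *lower_dims)

# an isometry above three physical sites of dimension 2, capped at 4
data = fill_fn(iso_shape((2, 2, 2), max_bond=4))
```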

partial_trace(keep, optimize='auto-hq', rehearse=False, preserve_tensor=False, **contract_opts)

Partial trace out all sites except those in keep, making use of the lightcone structure of the MERA.

Parameters:
  • keep (sequence of hashable) – The sites to keep.

  • optimize (str or PathOptimizer, optional) – The contraction ordering strategy to use.

  • rehearse ({False, "tn", "tree"}, optional) –

    Whether to rehearse the contraction rather than actually performing it. If:

    • False: perform the contraction and return the reduced density matrix,

    • ”tn”: just the lightcone tensor network is returned,

    • ”tree”: just the contraction tree that will be used is returned.

  • contract_opts – Additional options to pass to tensor_contract().

Returns:

The reduced density matrix on sites keep.

Return type:

array_like

local_expectation(G, where, optimize='auto-hq', rehearse=False, **contract_opts)

Compute the expectation value of a local operator G at sites where. This is done by contracting the lightcone tensor network to form the reduced density matrix, before taking the trace with G.

Parameters:
  • G (array_like) – The local operator to compute the expectation value of.

  • where (sequence of hashable) – The sites to compute the expectation value at.

  • optimize (str or PathOptimizer, optional) – The contraction ordering strategy to use.

  • rehearse ({False, "tn", "tree"}, optional) – Whether to rehearse the contraction rather than actually performing it. See partial_trace() for details.

  • contract_opts – Additional options to pass to tensor_contract().

Returns:

The expectation value of G at sites where.

Return type:

float

See also

partial_trace

compute_local_expectation(terms, optimize='auto-hq', return_all=False, rehearse=False, executor=None, progbar=False, **contract_opts)

Compute the expectation value of a collection of local operators terms at sites where. This is done by contracting the lightcone tensor network to form the reduced density matrices, before taking the trace with each G in terms.

Parameters:
  • terms (dict[tuple[hashable], array_like]) – The local operators to compute the expectation value of, keyed by the sites they act on.

  • optimize (str or PathOptimizer, optional) – The contraction ordering strategy to use.

  • return_all (bool, optional) – Whether to return all the expectation values, or just the sum.

  • rehearse ({False, "tn", "tree"}, optional) – Whether to rehearse the contraction rather than actually performing it. See partial_trace() for details.

  • executor (Executor, optional) – The executor to use for parallelism.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Additional options to pass to tensor_contract().

expand_bond_dimension(new_bond_dim, rand_strength=0.0, inds_to_expand=None, inplace=False)

Expand the maximum bond dimension of this isometric tensor network to new_bond_dim. Unlike expand_bond_dimension() this proceeds from the physical indices upwards, and only increases a bond’s size if new_bond_dim is larger than the product of the lower indices’ dimensions.

Parameters:
  • new_bond_dim (int) – The new maximum bond dimension to expand to.

  • rand_strength (float, optional) – The strength of random noise to add to the new array entries, if any.

  • inds_to_expand (sequence of str, optional) – The indices to expand, if not all.

  • inplace (bool, optional) – Whether to expand this tensor network in place, or return a new one.

Return type:

TensorNetworkGenIso

expand_bond_dimension_
quimb.experimental.merabuilder.calc_1d_unis_isos(sites, block_size, cyclic, group_from_right)

Given sites, assumed to be in a 1D order, though not necessarily contiguous, calculate unitary and isometry groupings:

       │         │ <- new grouped site
┐   ┌─────┐   ┌─────┐   ┌
│   │ ISO │   │ ISO │   │
┘   └─────┘   └─────┘   └
│   │..│..│   │..│..│   │
┌───┐  │  ┌───┐  │  ┌───┐
│UNI│  │  │UNI│  │  │UNI│
└───┘  │  └───┘  │  └───┘
│   │ ... │   │ ... │   │
    ^^^^^^^ <- isometry groupings of size, block_size
^^^^^     ^^^^^ <- unitary groupings of size 2
Parameters:
  • sites (sequence of hashable) – The sites to apply a layer to.

  • block_size (int) – How many sites to group together per isometry block. Note that currently the unitaries will only ever act on blocks of size 2 across isometry block boundaries.

  • cyclic (bool) – Whether to apply disentangler / unitaries across the boundary. The isometries will never be applied across the boundary, but since they always form a tree such a bipartition is natural.

  • group_from_right (bool) – Whether to group the sites starting from the left or right. This only matters if block_size does not divide the number of sites. Alternating between left and right more evenly tiles the unitaries and isometries, especially at lower layers.

Returns:

  • unis (list[tuple]) – The unitary groupings.

  • isos (list[tuple]) – The isometry groupings.
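For contiguous sites with open boundaries and block_size=2, the groupings in the diagram above can be sketched in plain python (illustrative only; the real function also handles cyclic boundaries, uneven block sizes, non-contiguous site labels and the group_from_right option):

```python
def simple_1d_unis_isos(sites, block_size=2):
    # illustrative sketch for contiguous, open-boundary sites only
    isos = [
        tuple(sites[i:i + block_size])
        for i in range(0, len(sites), block_size)
    ]
    # size-2 unitaries act across neighboring isometry block
    # boundaries: last site of one block, first site of the next
    unis = [
        (block[-1], nxt[0])
        for block, nxt in zip(isos, isos[1:])
    ]
    return unis, isos

unis, isos = simple_1d_unis_isos(list(range(8)))
```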

class quimb.experimental.merabuilder.MERA(*args, **kwargs)

Bases: quimb.tensor.tensor_1d.TensorNetwork1DVector, TensorNetworkGenIso

Replacement class for MERA which uses the new infrastructure and thus has methods like compute_local_expectation.

_EXTRA_PROPS
_CONTRACT_STRUCTURED = False
classmethod from_fill_fn(fill_fn, L, D, phys_dim=2, block_size=2, cyclic=True, uni_fill_fn=None, iso_fill_fn=None, cap_fill_fn=None, **kwargs)

Create a 1D MERA using fill_fn(shape) -> array_like to fill the tensors.

Parameters:
  • fill_fn (callable) – A function which takes a shape and returns an array_like of that shape. You can override this specifically for the unitaries, isometries and cap tensors using the kwargs uni_fill_fn, iso_fill_fn and cap_fill_fn.

  • L (int) – The number of sites.

  • D (int) – The maximum bond dimension.

  • phys_dim (int, optional) – The dimension of the physical indices.

  • block_size (int, optional) – The size of the isometry blocks. Binary MERA is the default, ternary MERA is block_size=3.

  • cyclic (bool, optional) – Whether to apply disentangler / unitaries across the boundary. The isometries will never be applied across the boundary, but since they always form a tree such a bipartition is natural.

  • uni_fill_fn (callable, optional) – A function which takes a shape and returns an array_like of that shape. This is used to fill the unitary tensors. If None then fill_fn is used.

  • iso_fill_fn (callable, optional) – A function which takes a shape and returns an array_like of that shape. This is used to fill the isometry tensors. If None then fill_fn is used.

  • cap_fill_fn (callable, optional) – A function which takes a shape and returns an array_like of that shape. This is used to fill the cap tensors. If None then fill_fn is used.

  • kwargs – Supplied to TensorNetworkGenIso.__init__.

classmethod rand(L, D, seed=None, block_size=2, phys_dim=2, cyclic=True, isometrize_method='svd', **kwargs)

Return a random (optionally isometrized) MERA.

Parameters:
  • L (int) – The number of sites.

  • D (int) – The maximum bond dimension.

  • seed (int, optional) – A random seed.

  • block_size (int, optional) – The size of the isometry blocks. Binary MERA is the default, ternary MERA is block_size=3.

  • phys_dim (int, optional) – The dimension of the physical indices.

  • cyclic (bool, optional) – Whether to apply disentangler / unitaries across the boundary. The isometries will never be applied across the boundary, but since they always form a tree such a bipartition is natural.

  • isometrize_method (str or None, optional) – If given, the method to use to isometrize the MERA. If None then the MERA is not isometrized.

property num_layers
quimb.experimental.merabuilder.TTN_randtree_rand(sites, D, phys_dim=2, group_size=2, iso=False, seed=None, **kwargs)

Return a randomly constructed tree tensor network.

Parameters:
  • sites (list of hashable) – The sites of the tensor network.

  • D (int) – The maximum bond dimension.

  • phys_dim (int, optional) – The dimension of the physical indices.

  • group_size (int, optional) – How many sites to group together in each tensor.

  • iso (bool, optional) – Whether to build the tree with an isometric flow towards the top.

  • seed (int, optional) – A random seed.

  • kwargs – Supplied to TensorNetworkGenIso.empty.

Returns:

ttn – The tree tensor network.

Return type:

TensorNetworkGenIso