quimb.experimental.merabuilder¶
Tools for constructing MERA for arbitrary geometry.
TODO:
 [ ] 2D, 3D MERA classes
 [ ] general strategies for arbitrary geometries
 [ ] layer_tag? and handling of other attributes
 [ ] handle dangling case
 [ ] invariant generators?
DONE:
 [x] layer_gate methods for arbitrary geometry
 [x] 1D: generic way to handle finite and open boundary conditions
 [x] hook into other arbgeom infrastructure for computing rdms etc
Classes¶
A labelled, tagged n-dimensional array. The index labels are used 

A 

A tensor network which notionally has a single tensor and outer index 

An ordered set which stores elements as the keys of dict (ordered as of 

1D Tensor network which overall is like a vector with a single type of 

A class for building generic 'isometric' or MERA like tensor network 

Replacement class for 
Functions¶

Nonvariadic ordered set union taking any sequence of iterables. 



Parse a 

Return a guaranteed unique, shortish identifier, optional appended 

Unified helper function for the various methods that compute many 

Define as function for pickleability. 

Given 

Return a randomly constructed tree tensor network. 
Module Contents¶
 class quimb.experimental.merabuilder.Tensor(data=1.0, inds=(), tags=None, left_inds=None)¶
A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations. The tags are used to identify the tensor within networks, and are combined when tensors are contracted together.
 Parameters:
data (numpy.ndarray) – The n-dimensional data.
inds (sequence of str) – The index labels for each dimension. Must match the number of dimensions of data.
tags (sequence of str, optional) – Tags with which to identify and group this tensor. These will be converted into an oset.
left_inds (sequence of str, optional) – Which, if any, indices to group as 'left' indices of an effective matrix. This can be useful, for example, when automatically applying unitary constraints to impose a certain flow on a tensor network but at the atomistic (Tensor) level.
Examples
Basic construction:
>>> from quimb import randn
>>> from quimb.tensor import Tensor
>>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'})
>>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'})
Indices are automatically aligned, and tags combined, when contracting:
>>> X @ Y
Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'})
 __slots__ = ('_data', '_inds', '_tags', '_left_inds', '_owners')¶
 _set_data(data)¶
 _set_inds(inds)¶
 _set_tags(tags)¶
 _set_left_inds(left_inds)¶
 get_params()¶
A simple function that returns the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.
 set_params(params)¶
A simple function that sets the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.
 copy(deep=False, virtual=False)¶
Copy this tensor.
Note
By default (deep=False), the underlying array will not be copied.
 __copy__¶
 property data¶
 property inds¶
 property tags¶
 property left_inds¶
 check()¶
Do some basic diagnostics on this tensor, raising errors if something is wrong.
 property owners¶
 add_owner(tn, tid)¶
Add tn as an owner of this Tensor - its tag and ind maps will be updated whenever this tensor is retagged or reindexed.
 remove_owner(tn)¶
Remove TensorNetwork tn as an owner of this Tensor.
 check_owners()¶
Check if this tensor is ‘owned’ by any alive TensorNetworks. Also trim any weakrefs to dead TensorNetworks.
 _apply_function(fn)¶
 modify(**kwargs)¶
Overwrite the data of this tensor in place.
 Parameters:
data (array, optional) – New data.
apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.
inds (sequence of str, optional) – New tuple of indices.
tags (sequence of str, optional) – New tags.
left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.
 apply_to_arrays(fn)¶
Apply the function fn to the underlying data array(s). This is meant for changing how the raw arrays are backed (e.g. converting between dtypes or libraries) but not their 'numerical meaning'.
 isel(selectors, inplace=False)¶
Select specific values for some dimensions/indices of this tensor, thereby removing them. Analogous to X[:, :, 3, :, :] with arrays. The indices to select from can be specified either by integer, in which case the corresponding index is removed, or by a slice.
 Parameters:
 Return type:
Examples
>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.isel({'b': 1})
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())
See also
TensorNetwork.isel
 isel_¶
 add_tag(tag)¶
Add a tag or multiple tags to this tensor. Unlike self.tags.add this also updates any TensorNetwork objects viewing this Tensor.
 expand_ind(ind, size, mode=None, rand_strength=None, rand_dist='normal')¶
Inplace increase the size of the dimension of ind, the new array entries will be filled with zeros by default.
 Parameters:
name (str) – Name of the index to expand.
size (int, optional) – Size of the expanded index.
mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select from zeros or random depending on nonzero rand_strength.
rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a nonzero value here triggers mode='random'.
rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.
 new_ind(name, size=1, axis=0, mode=None, rand_strength=None, rand_dist='normal')¶
Inplace add a new index - a named dimension. If size is specified to be greater than one then the new array entries will be filled with zeros.
 Parameters:
name (str) – Name of the new index.
size (int, optional) – Size of the new index.
axis (int, optional) – Position of the new index.
mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select from zeros or random depending on nonzero rand_strength.
rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a nonzero value here triggers mode='random'.
rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.
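The zero-fill behaviour of expand_ind / new_ind can be sketched in plain numpy (expand_axis_with_zeros is a hypothetical helper for illustration, not part of the quimb API):

```python
import numpy as np

def expand_axis_with_zeros(x, axis, new_size):
    """Pad one axis of ``x`` up to ``new_size``, filling the new entries
    with zeros - illustrating expand_ind with mode='zeros'."""
    pad = [(0, 0)] * x.ndim
    pad[axis] = (0, new_size - x.shape[axis])
    return np.pad(x, pad)  # np.pad defaults to constant zero padding

x = np.ones((2, 3))
y = expand_axis_with_zeros(x, axis=1, new_size=5)
# y has shape (2, 5): original entries preserved, new columns all zero
```

With mode='random' the new entries would instead be drawn from rand_dist and scaled by rand_strength.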
See also
 new_bond¶
 new_ind_with_identity(name, left_inds, right_inds, axis=0)¶
Inplace add a new index, where the newly stacked array entries form the identity from left_inds to right_inds. Selecting 0 or 1 for the new index name thus is like 'turning off' this tensor if viewed as an operator.
 Parameters:
name (str) – Name of the new index.
left_inds (tuple[str]) – Names of the indices forming the left hand side of the operator.
right_inds (tuple[str]) – Names of the indices forming the right hand side of the operator. The dimensions of these must match those of left_inds.
axis (int, optional) – Position of the new index.
 new_ind_pair_with_identity(new_left_ind, new_right_ind, d, inplace=False)¶
Expand this tensor with two new indices of size d, by taking an (outer) tensor product with the identity operator. The two new indices are added as axes at the start of the tensor.
 new_ind_pair_with_identity_¶
 conj(inplace=False)¶
Conjugate this tensor's data (does nothing to indices).
 conj_¶
 property H¶
 Conjugate this tensor's data (does nothing to indices).
 property shape¶
 The size of each dimension.
 property ndim¶
 The number of dimensions.
 property size¶
 The total number of array elements.
 property dtype¶
 The data type of the array elements.
 property backend¶
 The backend inferred from the data.
 iscomplex()¶
 astype(dtype, inplace=False)¶
Change the type of this tensor to dtype.
 astype_¶
 max_dim()¶
Return the maximum size of any dimension, or 1 if scalar.
 ind_size(ind)¶
Return the size of the dimension corresponding to ind.
 inds_size(inds)¶
Return the total size of dimensions corresponding to inds.
Get the total size of the shared index(es) with other.
 inner_inds()¶
Get all indices that appear on two or more tensors.
 transpose(*output_inds, inplace=False)¶
Transpose this tensor - permuting the order of both the data and the indices. This operation is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn't matter.
Note: to compute the traditional 'transpose' of an operator within a contraction, for example, you would just use reindexing, not this.
 Parameters:
 Returns:
tt – The transposed tensor.
 Return type:
See also
 transpose_¶
 transpose_like(other, inplace=False)¶
Transpose this tensor to match the indices of other, allowing for one index to be different. E.g. if self.inds = ('a', 'b', 'c', 'x') and other.inds = ('b', 'a', 'd', 'c') then 'x' will be aligned with 'd' and the output inds will be ('b', 'a', 'x', 'c').
 Parameters:
 Returns:
tt – The transposed tensor.
 Return type:
See also
 transpose_like_¶
 moveindex(ind, axis, inplace=False)¶
Move the index ind to position axis. Like transpose, this permutes the order of both the data and the indices and is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn't matter.
 moveindex_¶
 item()¶
Return the scalar value of this tensor, if it has a single element.
 trace(left_inds, right_inds, preserve_tensor=False, inplace=False)¶
Trace index or indices left_inds with right_inds, removing them.
 Parameters:
left_inds (str or sequence of str) – The left indices to trace, order matching right_inds.
right_inds (str or sequence of str) – The right indices to trace, order matching left_inds.
preserve_tensor (bool, optional) – If True, a tensor will be returned even if no indices remain.
inplace (bool, optional) – Perform the trace inplace.
 Returns:
z
 Return type:
Tensor or scalar
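As an illustrative plain-numpy analogue of what trace computes (a sketch of the semantics, not quimb's implementation):

```python
import numpy as np

# Contract a pair of matching indices against each other, removing both,
# e.g. tracing the first index with the last on a rank-3 tensor.
t = np.random.rand(3, 4, 3)
z = np.einsum('iji->j', t)  # trace axes 0 and 2, leaving axis 1

# Equivalent to summing the 'diagonal' slices over the traced axes:
check = sum(t[i, :, i] for i in range(3))
```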
 sum_reduce(ind, inplace=False)¶
Sum over index ind, removing it from this tensor.
 sum_reduce_¶
 vector_reduce(ind, v, inplace=False)¶
Contract the vector v with the index ind of this tensor, removing it.
 vector_reduce_¶
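A minimal numpy sketch of the vector_reduce operation - contracting a 1D array into one index, which removes that index (illustrative only, assuming a plain tensordot captures the semantics):

```python
import numpy as np

t = np.arange(24.0).reshape(2, 3, 4)   # a tensor with inds, say, ('a', 'b', 'c')
v = np.array([1.0, 0.0, -1.0])         # vector to contract into index 'b'

# Contract axis 1 of t with v, removing that dimension.
r = np.tensordot(t, v, axes=([1], [0]))
# r now has shape (2, 4), i.e. only inds ('a', 'c') remain
```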
 collapse_repeated(inplace=False)¶
Take the diagonals of any repeated indices, such that each index only appears once.
 collapse_repeated_¶
 contract(*others, output_inds=None, **opts)¶
 direct_product(other, sum_inds=(), inplace=False)¶
 direct_product_¶
 split(*args, **kwargs)¶
 compute_reduced_factor(side, left_inds, right_inds, **split_opts)¶
 distance(other, **contract_opts)¶
 distance_normalized¶
 gate(G, ind, preserve_inds=True, inplace=False)¶
Gate this tensor - contract a matrix into one of its indices without changing its indices. Unlike contract, G is a raw array and the tensor remains with the same set of indices.
 Parameters:
G (2D array_like) – The matrix to gate the tensor index with.
ind (str) – Which index to apply the gate to.
 Return type:
Examples
Create a random tensor of 4 qubits:
>>> t = qtn.rand_tensor(
...     shape=[2, 2, 2, 2],
...     inds=['k0', 'k1', 'k2', 'k3'],
... )
Create another tensor with an X gate applied to qubit 2:
>>> Gt = t.gate(qu.pauli('X'), 'k2')
The contraction of these two tensors is now the expectation of that operator:
>>> t.H @ Gt
4.108910576149794
 gate_¶
 singular_values(left_inds, method='svd')¶
Return the singular values associated with splitting this tensor according to left_inds.
 Parameters:
left_inds (sequence of str) – A subset of this tensor's indices that defines 'left'.
method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.
 Returns:
The singular values.
 Return type:
1darray
 entropy(left_inds, method='svd')¶
Return the entropy associated with splitting this tensor according to left_inds.
 retag(retag_map, inplace=False)¶
Rename the tags of this tensor, optionally inplace.
 Parameters:
retag_map (dict-like) – Mapping of pairs {old_tag: new_tag, ...}.
inplace (bool, optional) – If False (the default), a copy of this tensor with the changed tags will be returned.
 retag_¶
 reindex(index_map, inplace=False)¶
Rename the indices of this tensor, optionally inplace.
 Parameters:
index_map (dict-like) – Mapping of pairs {old_ind: new_ind, ...}.
inplace (bool, optional) – If False (the default), a copy of this tensor with the changed inds will be returned.
 reindex_¶
 fuse(fuse_map, inplace=False)¶
Combine groups of indices into single indices.
 Parameters:
fuse_map (dict_like or sequence of tuples) – Mapping like: {new_ind: sequence of existing inds, ...} or an ordered mapping like [(new_ind_1, old_inds_1), ...] in which case the output tensor's fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused.
 Returns:
The transposed, reshaped and relabeled tensor.
 fuse_¶
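What fuse does can be pictured in plain numpy as a transpose followed by a reshape that merges adjacent axes (an illustrative sketch, not quimb's actual code path):

```python
import numpy as np

t = np.random.rand(2, 3, 4)   # inds, say, ('a', 'b', 'c')

# Fusing {'bc': ('b', 'c')}: the axes are already adjacent and in order,
# so a reshape alone merges them into one index of size 3 * 4 = 12.
fused = t.reshape(2, 12)       # inds ('a', 'bc')

# Fusing in the other order, {'cb': ('c', 'b')}, requires a transpose
# first to bring the axes into the requested order before merging.
fused_cb = t.transpose(0, 2, 1).reshape(2, 12)  # inds ('a', 'cb')
```

unfuse is the reverse: reshape the merged axis back into its constituent sizes (and transpose if needed).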
 unfuse(unfuse_map, shape_map, inplace=False)¶
Reshape single indices into groups of multiple indices.
 Parameters:
unfuse_map (dict_like or sequence of tuples) – Mapping like: {existing_ind: sequence of new inds, ...} or an ordered mapping like [(old_ind_1, new_inds_1), ...] in which case the output tensor's new inds will be ordered. In both cases the new indices are created at the old index's position of the tensor's shape.
shape_map (dict_like or sequence of tuples) – Mapping like: {old_ind: new_ind_sizes, ...} or an ordered mapping like [(old_ind_1, new_ind_sizes_1), ...].
 Returns:
The transposed, reshaped and relabeled tensor.
 Return type:
 unfuse_¶
 to_dense(*inds_seq, to_qarray=False)¶
Convert this Tensor into a dense array, with a single dimension for each group of inds in inds_seq. E.g. to convert several sites into a density matrix: T.to_dense(('k0', 'k1'), ('b0', 'b1')).
 to_qarray¶
 squeeze(include=None, exclude=None, inplace=False)¶
Drop any singlet dimensions from this tensor.
 Parameters:
include (sequence of str, optional) – Only squeeze dimensions with indices in this list.
exclude (sequence of str, optional) – Squeeze all dimensions except those with indices in this list.
inplace (bool, optional) – Whether to perform the squeeze inplace or not.
 Return type:
 squeeze_¶
 largest_element()¶
Return the largest element, in terms of absolute magnitude, of this tensor.
 idxmin(f=None)¶
Get the index configuration of the minimum element of this tensor, optionally applying
f
first.
 idxmax(f=None)¶
Get the index configuration of the maximum element of this tensor, optionally applying
f
first.
 norm()¶
Frobenius norm of this tensor:
\[\|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)}\]
where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.
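The stated equivalence between the Frobenius norm and the singular values can be checked numerically in plain numpy (an illustrative sketch):

```python
import numpy as np

t = np.random.rand(2, 3, 4)

# Frobenius norm: square root of the sum of squared entries.
norm = np.sqrt((t**2).sum())

# Singular values across the bipartition ('a', 'b') | ('c'):
s = np.linalg.svd(t.reshape(6, 4), compute_uv=False)
# np.sqrt((s**2).sum()) agrees with norm, for any choice of partition
```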
 normalize(inplace=False)¶
 normalize_¶
 symmetrize(ind1, ind2, inplace=False)¶
Hermitian symmetrize this tensor for indices ind1 and ind2. I.e. T = (T + T.conj().T) / 2, where the transpose is taken only over the specified indices.
 symmetrize_¶
 isometrize(left_inds=None, method='qr', inplace=False)¶
Make this tensor unitary (or isometric) with respect to left_inds. The underlying method is set by method.
 Parameters:
left_inds (sequence of str) – The indices to group together and treat as the left hand side of a matrix.
method (str, optional) – The method used to generate the isometry. The options are:
 "qr": use the Q factor of the QR decomposition of x, with the constraint that the diagonal of R is positive.
 "svd": use U @ VH of the SVD decomposition of x. This is useful for finding the 'closest' isometric matrix to x, such as when it has been expanded with noise etc., but is less stable for differentiation / optimization.
 "exp": use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.
 "cayley": use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with HIPS/autograd e.g.), but more expensive for non-square x.
 "householder": use the Householder reflection method directly. This requires that the backend implements "linalg.householder_product".
 "torch_householder": use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is "torch". This is generally the best parametrizing method for "torch" if available.
 "mgs": use a python implementation of the modified Gram-Schmidt method directly. This is slow if not compiled but a useful reference.
Not all backends support all methods or differentiating through all methods.
inplace (bool, optional) – Whether to perform the unitization inplace.
 Return type:
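As an illustrative sketch of the method='qr' option, here is a plain-numpy version that fixes the gauge by making the diagonal of R positive (isometrize_qr is a hypothetical helper for illustration; quimb's actual implementation is backend-agnostic):

```python
import numpy as np

def isometrize_qr(x):
    """Project matrix ``x`` to an isometry via QR, with the sign gauge
    fixed so that the diagonal of R would be positive."""
    Q, R = np.linalg.qr(x)
    d = np.sign(np.diag(R))
    d[d == 0] = 1
    return Q * d  # flip column signs to absorb negative diagonal entries

# left_inds fused to an effective row dimension of 6, right to 3:
x = np.random.rand(6, 3)
iso = isometrize_qr(x)
# iso.conj().T @ iso is now the 3x3 identity
```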
 isometrize_¶
 unitize¶
 unitize_¶
 randomize(dtype=None, inplace=False, **randn_opts)¶
Randomize the entries of this tensor.
 Parameters:
 Return type:
 randomize_¶
 flip(ind, inplace=False)¶
Reverse the axis on this tensor corresponding to ind. Like performing e.g. X[:, :, ::-1, :].
 flip_¶
 multiply_index_diagonal(ind, x, inplace=False)¶
Multiply this tensor by 1D array x as if it were a diagonal tensor being contracted into index ind.
 multiply_index_diagonal_¶
 almost_equals(other, **kwargs)¶
Check if this tensor is almost the same as another.
 drop_tags(tags=None)¶
Drop certain tags, defaulting to all, from this tensor.
 bonds(other)¶
Return a tuple of the shared indices between this tensor and other.
 filter_bonds(other)¶
Sort this tensor’s indices into a list of those that it shares and doesn’t share with another tensor.
 __imul__(other)¶
 __itruediv__(other)¶
 __and__(other)¶
Combine with another Tensor or TensorNetwork into a new TensorNetwork.
 __or__(other)¶
Combine virtually (no copies made) with another Tensor or TensorNetwork into a new TensorNetwork.
 __matmul__(other)¶
Explicitly contract with another tensor. Avoids some slight overhead of calling the full tensor_contract().
 negate(inplace=False)¶
Negate this tensor.
 negate_¶
 __neg__()¶
Negate this tensor.
 as_network(virtual=True)¶
Return a TensorNetwork with only this tensor.
 draw(*args, **kwargs)¶
Plot a graph of this tensor and its indices.
 graph¶
 visualize¶
 __getstate__()¶
Helper for pickle.
 __setstate__(state)¶
 _repr_info()¶
General info to show in various reprs. Subclasses can add more relevant info to this dict.
 _repr_info_extra()¶
General detailed info to show in various reprs. Subclasses can add more relevant info to this dict.
 _repr_info_str(normal=True, extra=False)¶
Render the general info as a string.
 _repr_html_()¶
Render this Tensor as HTML, for Jupyter notebooks.
 __str__()¶
Return str(self).
 __repr__()¶
Return repr(self).
 class quimb.experimental.merabuilder.IsoTensor(data=1.0, inds=(), tags=None, left_inds=None)¶
Bases: Tensor
A Tensor subclass which keeps its left_inds by default even when its data is changed.
 __slots__ = ('_data', '_inds', '_tags', '_left_inds', '_owners')¶
 modify(**kwargs)¶
Overwrite the data of this tensor in place.
 Parameters:
data (array, optional) – New data.
apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.
inds (sequence of str, optional) – New tuple of indices.
tags (sequence of str, optional) – New tags.
left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.
 fuse(*args, inplace=False, **kwargs)¶
Combine groups of indices into single indices.
 Parameters:
fuse_map (dict_like or sequence of tuples) – Mapping like: {new_ind: sequence of existing inds, ...} or an ordered mapping like [(new_ind_1, old_inds_1), ...] in which case the output tensor's fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused.
 Returns:
The transposed, reshaped and relabeled tensor.
 Return type:
 quimb.experimental.merabuilder.oset_union(xs)¶
Nonvariadic ordered set union taking any sequence of iterables.
 quimb.experimental.merabuilder.prod(iterable)¶
 class quimb.experimental.merabuilder.TensorNetworkGenVector(ts=(), *, virtual=False, check_collisions=True)¶
Bases: TensorNetworkGen
A tensor network which notionally has a single tensor and outer index per ‘site’, though these could be labelled arbitrarily and could also be linked in an arbitrary geometry by bonds.
 _EXTRA_PROPS = ('_sites', '_site_tag_id', '_site_ind_id')¶
 property site_ind_id¶
 The string specifier for the physical indices.
 site_ind(site)¶
 property site_inds¶
 Return a tuple of all site indices.
 property site_inds_present¶
 All of the site inds still present in this tensor network.
 reset_cached_properties()¶
Reset any cached properties; one should call this when changing the actual geometry of a TN inplace, for example.
 reindex_sites(new_id, where=None, inplace=False)¶
Modify the site indices for all or some tensors in this vector tensor network (without changing the site_ind_id).
 reindex_sites_¶
 reindex_all(new_id, inplace=False)¶
Reindex all physical sites and change the site_ind_id.
 reindex_all_¶
 gen_inds_from_coos(coos)¶
Generate the site inds corresponding to the given coordinates.
 phys_dim(site=None)¶
Get the physical dimension of site, defaulting to the first site if not specified.
 to_dense(*inds_seq, to_qarray=False, to_ket=None, **contract_opts)¶
Contract this tensor network 'vector' into a dense array. By default, turn into a 'ket' qarray, i.e. a column vector of shape (d, 1).
 Parameters:
inds_seq (sequence of sequences of str) – How to group the site indices into the dense array. By default, use a single group ordered like sites, but only containing those sites which are still present.
to_qarray (bool) – Whether to turn the dense array into a qarray, if the backend would otherwise be 'numpy'.
to_ket (None or str) – Whether to reshape the dense array into a ket (shape (d, 1) array). If None (default), do this only if inds_seq is not supplied.
contract_opts – Options to pass to contract().
 Return type:
array
 to_qarray¶
 gate_with_op_lazy(A, transpose=False, inplace=False, **kwargs)¶
Act lazily with the operator tensor network A, which should have matching structure, on this vector/state tensor network, like A @ x. The returned tensor network will have the same structure as this one, but with the operator gated in lazily, i.e. uncontracted.
\[|x\rangle \rightarrow A |x\rangle\]
or (if transpose=True):
\[|x\rangle \rightarrow A^T |x\rangle\]
 Parameters:
A (TensorNetworkGenOperator) – The operator tensor network to gate with, or apply to this tensor network.
transpose (bool, optional) – Whether to contract the lower or upper indices of A with the site indices of x. If False (the default), the lower indices of A will be contracted with the site indices of x, if True the upper indices of A will be contracted with the site indices of x, which is like applying A.T @ x.
inplace (bool, optional) – Whether to perform the gate operation inplace on this tensor network.
 Return type:
 gate_with_op_lazy_¶
 gate(G, where, contract=False, tags=None, propagate_tags=False, info=None, inplace=False, **compress_opts)¶
Apply a gate to this vector tensor network at sites where. This is essentially a wrapper around gate_inds() apart from that where can be specified as a list of sites, and tags can be optionally, intelligently propagated to the new gate tensor.
\[|\psi\rangle \rightarrow G_\mathrm{where} |\psi\rangle\]
 Parameters:
G (array_like) – The gate array to apply, should match or be factorable into the shape (*phys_dims, *phys_dims).
where (node or sequence[node]) – The sites to apply the gate to.
contract ({False, True, 'split', 'reduce-split', 'split-gate', 'swap-split-gate', 'auto-split-gate'}, optional) – How to apply the gate, see gate_inds().
tags (str or sequence of str, optional) – Tags to add to the new gate tensor.
propagate_tags ({False, True, 'register', 'sites'}, optional) – Whether to propagate tags to the new gate tensor:
 False: no tags are propagated.
 True: all tags are propagated.
 'register': only site tags corresponding to ``where`` are added.
 'sites': all site tags on the current sites are propagated, resulting in a lightcone like tagging.
info (None or dict, optional) – Used to store extra optional information such as the singular values if not absorbed.
inplace (bool, optional) – Whether to perform the gate operation inplace on the tensor network or not.
compress_opts – Supplied to tensor_split() for any contract methods that involve splitting. Ignored otherwise.
 Return type:
See also
TensorNetwork.gate_inds
 gate_¶
 gate_simple_(G, where, gauges, renorm=True, **gate_opts)¶
Apply a gate to this vector tensor network at sites where, using simple update style gauging of the tensors first, as supplied in gauges. The new singular values for the bond are reinserted into gauges.
 Parameters:
G (array_like) – The gate to be applied.
where (node or sequence[node]) – The sites to apply the gate to.
gauges (dict[str, array_like]) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be used.
renorm (bool, optional) – Whether to renormalise the singular values after the gate is applied, before reinserting them into gauges.
 gate_fit_local_(G, where, max_distance=0, fillin=0, gauges=None, **fit_opts)¶
 local_expectation_cluster(G, where, normalized=True, max_distance=0, fillin=False, gauges=None, optimize='auto', max_bond=None, rehearse=False, **contract_opts)¶
Approximately compute a single local expectation value of the gate G at sites where, either treating the environment beyond max_distance as the identity, or using simple update style bond gauges as supplied in gauges.
This selects a local neighbourhood of tensors up to distance max_distance away from where, then traces over dangling bonds after potentially inserting the bond gauges, to form an approximate version of the reduced density matrix.
\[\langle \psi | G | \psi \rangle \approx \frac{ \mathrm{Tr} [ G \tilde{\rho}_\mathrm{where} ] }{ \mathrm{Tr} [ \tilde{\rho}_\mathrm{where} ] }\]
assuming normalized==True.
 Parameters:
G (array_like) – The gate to compute the expectation of.
where (node or sequence[node]) – The sites to compute the expectation at.
normalized (bool, optional) – Whether to locally normalize the result, i.e. divide by the expectation value of the identity.
max_distance (int, optional) – The maximum graph distance to include tensors neighboring where when computing the expectation. The default 0 means only the tensors at sites where are used.
fillin (bool or int, optional) – When selecting the local tensors, whether and how many times to 'fill-in' corner tensors attached multiple times to the local region. On a lattice this fills in the corners. See select_local().
gauges (dict[str, array_like], optional) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be used.
optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, when exactly contracting the local tensors.
max_bond (None or int, optional) – If specified, use compressed contraction.
rehearse ({False, 'tn', 'tree', True}, optional) – Whether to perform the computations or not:
 False: perform the computation.
 'tn': return the tensor networks of each local expectation, without running the path optimizer.
 'tree': run the path optimizer and return the ``cotengra.ContractionTree`` for each local expectation.
 True: run the path optimizer and return the ``PathInfo`` for each local expectation.
 Returns:
expectation
 Return type:
 local_expectation_simple¶
 compute_local_expectation_cluster(terms, *, max_distance=0, fillin=False, normalized=True, gauges=None, optimize='auto', max_bond=None, return_all=False, rehearse=False, executor=None, progbar=False, **contract_opts)¶
Compute all local expectations of the given terms, either treating the environment beyond max_distance as the identity, or using simple update style bond gauges as supplied in gauges.
This selects a local neighbourhood of tensors up to distance max_distance away from each term's sites, then traces over dangling bonds after potentially inserting the bond gauges, to form an approximate version of the reduced density matrix.
\[\sum_\mathrm{i} \langle \psi | G_\mathrm{i} | \psi \rangle \approx \sum_\mathrm{i} \frac{ \mathrm{Tr} [ G_\mathrm{i} \tilde{\rho}_\mathrm{i} ] }{ \mathrm{Tr} [ \tilde{\rho}_\mathrm{i} ] }\]
assuming normalized==True.
 Parameters:
terms (dict[node or (node, node), array_like]) – The terms to compute the expectation of, with keys being the sites and values being the local operators.
max_distance (int, optional) – The maximum graph distance to include tensors neighboring each term's sites when computing the expectation. The default 0 means only the tensors at sites of each term are used.
fillin (bool or int, optional) – When selecting the local tensors, whether and how many times to 'fill-in' corner tensors attached multiple times to the local region. On a lattice this fills in the corners. See select_local().
normalized (bool, optional) – Whether to locally normalize the result, i.e. divide by the expectation value of the identity. This implies that a different normalization factor is used for each term.
gauges (dict[str, array_like], optional) – The store of gauge bonds, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be used.
optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, when exactly contracting the local tensors.
max_bond (None or int, optional) – If specified, use compressed contraction.
return_all (bool, optional) – Whether to return all results, or just the summed expectation.
rehearse ({False, 'tn', 'tree', True}, optional) – Whether to perform the computations or not:
 False: perform the computation.
 'tn': return the tensor networks of each local expectation, without running the path optimizer.
 'tree': run the path optimizer and return the ``cotengra.ContractionTree`` for each local expectation.
 True: run the path optimizer and return the ``PathInfo`` for each local expectation.
executor (Executor, optional) – If supplied compute the terms in parallel using this executor.
progbar (bool, optional) – Whether to show a progress bar.
contract_opts – Supplied to contract().
 Returns:
expecs – If return_all==False, return the summed expectation value of the given terms. Otherwise, return a dictionary mapping each term's location to the expectation value.
 Return type:
 compute_local_expectation_simple¶
 local_expectation_exact(G, where, optimize='autohq', normalized=True, rehearse=False, **contract_opts)¶
Compute the local expectation of operator G at site(s) where by exactly contracting the full overlap tensor network.
 compute_local_expectation_exact(terms, optimize='autohq', *, normalized=True, return_all=False, rehearse=False, executor=None, progbar=False, **contract_opts)¶
Compute the local expectations of many operators, by exactly contracting the full overlap tensor network.
 Parameters:
terms (dict[node or (node, node), array_like]) – The terms to compute the expectation of, with keys being the sites and values being the local operators.
optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, when exactly contracting the full tensor network.
normalized (bool, optional) – Whether to normalize the result.
return_all (bool, optional) – Whether to return all results, or just the summed expectation.
rehearse ({False, 'tn', 'tree', True}, optional) –
Whether to perform the computations or not:
False: perform the computation.
'tn': return the tensor networks of each local expectation, without running the path optimizer.
'tree': run the path optimizer and return the ``cotengra.ContractionTree`` for each local expectation.
True: run the path optimizer and return the ``PathInfo`` for each local expectation.
executor (Executor, optional) – If supplied compute the terms in parallel using this executor.
progbar (bool, optional) – Whether to show a progress bar.
contract_opts – Supplied to contract().
Returns:
expecs – If return_all==False, return the summed expectation value of the given terms. Otherwise, return a dictionary mapping each term’s location to the expectation value.
Return type:
partial_trace(keep, max_bond, optimize, flatten=True, reduce=False, normalized=True, symmetrized='auto', rehearse=False, method='contract_compressed', **contract_compressed_opts)¶
Partially trace this tensor network state, keeping only the sites in keep, using compressed contraction.
Parameters:
keep (iterable of hashable) – The sites to keep.
max_bond (int) – The maximum bond dimension to use while compressed contracting.
optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, which should specifically generate contraction paths designed for compressed contraction.
flatten ({False, True, 'all'}, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction. Whilst this makes the TN generally more complex to contract, the accuracy is usually improved. If 'all', also flatten the tensors in keep.
reduce (bool, optional) – Whether to first ‘pull’ the physical indices off their respective tensors using QR reduction. Experimental.
normalized (bool, optional) – Whether to normalize the reduced density matrix at the end.
symmetrized ({'auto', True, False}, optional) – Whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.
rehearse ({False, 'tn', 'tree', True}, optional) –
Whether to perform the computation or not:
False: perform the computation.
'tn': return the tensor network without running the path optimizer.
'tree': run the path optimizer and return the ``cotengra.ContractionTree``.
True: run the path optimizer and return the ``PathInfo``.
contract_compressed_opts (dict, optional) – Additional keyword arguments to pass to contract_compressed().
Returns:
rho – The reduced density matrix of sites in keep.
Return type:
array_like
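As a minimal sketch of the post-processing the ``normalized`` and ``symmetrized`` options perform on an approximately contracted reduced density matrix (a hypothetical standalone helper, not quimb's internal code), assuming hermiticity and unit trace are restored after compression errors:

```python
import numpy as np

def postprocess_rho(rho, symmetrize=True, normalize=True):
    """Restore hermiticity and unit trace of an approximate rdm."""
    if symmetrize:
        # average with the conjugate transpose to enforce hermiticity
        rho = (rho + rho.conj().T) / 2
    if normalize:
        # rescale so that the trace (total probability) is one
        rho = rho / np.trace(rho)
    return rho

# example: a slightly non-hermitian, unnormalized 2x2 'rdm'
rho = np.array([[0.7, 0.1 + 0.05j],
                [0.1 - 0.02j, 0.4]])
rho = postprocess_rho(rho)
```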
local_expectation(G, where, max_bond, optimize, flatten=True, normalized=True, symmetrized='auto', reduce=False, rehearse=False, **contract_compressed_opts)¶
Compute the local expectation of operator G at site(s) where by approximately contracting the full overlap tensor network.
Parameters:
G (array_like) – The local operator to compute the expectation of.
where (node or sequence of nodes) – The sites to compute the expectation for.
max_bond (int) – The maximum bond dimension to use while compressed contracting.
optimize (str or PathOptimizer, optional) – The contraction path optimizer to use, which should specifically generate contraction paths designed for compressed contraction.
method ({'rho', 'rho-reduced'}, optional) – The method to use to compute the expectation value.
flatten (bool, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction. Whilst this makes the TN generally more complex to contract, the accuracy is usually much improved.
normalized (bool, optional) – If computing via partial_trace, whether to normalize the reduced density matrix at the end.
symmetrized ({'auto', True, False}, optional) – If computing via partial_trace, whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.
rehearse ({False, 'tn', 'tree', True}, optional) –
Whether to perform the computation or not:
False: perform the computation.
'tn': return the tensor network without running the path optimizer.
'tree': run the path optimizer and return the ``cotengra.ContractionTree``.
True: run the path optimizer and return the ``PathInfo``.
contract_compressed_opts (dict, optional) – Additional keyword arguments to pass to contract_compressed().
 Returns:
expec
 Return type:
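The final step here reduces to ordinary linear algebra: once the reduced density matrix of the sites in ``where`` has been formed, the expectation value is a trace against the operator. A dense sketch (illustrative values only):

```python
import numpy as np

# example local operator and an already-normalized reduced density matrix
Z = np.diag([1.0, -1.0])
rho = np.diag([0.75, 0.25])

# expectation value is the trace of rho @ G
expec = np.trace(rho @ Z)
```

Here the result is 0.75 - 0.25 = 0.5.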
 compute_local_expectation(terms, max_bond, optimize, *, flatten=True, normalized=True, symmetrized='auto', reduce=False, return_all=False, rehearse=False, executor=None, progbar=False, **contract_compressed_opts)¶
Compute the local expectations of many local operators, by approximately contracting the full overlap tensor network.
 Parameters:
terms (dict[node or (node, node), array_like]) – The terms to compute the expectation of, with keys being the sites and values being the local operators.
max_bond (int) – The maximum bond dimension to use during contraction.
optimize (str or PathOptimizer) – The compressed contraction path optimizer to use.
method ({'rho', 'rho-reduced'}, optional) –
The method to use to compute the expectation value:
'rho': compute the expectation value via the reduced density matrix.
'rho-reduced': compute the expectation value via the reduced density matrix, having reduced the physical indices onto the bonds first.
flatten (bool, optional) – Whether to force ‘flattening’ (contracting all physical indices) of the tensor network before contraction. Whilst this makes the TN generally more complex to contract, the accuracy can often be much improved.
normalized (bool, optional) – Whether to locally normalize the result.
symmetrized ({'auto', True, False}, optional) – Whether to symmetrize the reduced density matrix at the end. This should be unnecessary if flatten is set to True.
return_all (bool, optional) – Whether to return all results, or just the summed expectation. If rehearse is not False, this is ignored and a dict is always returned.
rehearse ({False, 'tn', 'tree', True}, optional) –
Whether to perform the computations or not:
False: perform the computation.
'tn': return the tensor networks of each local expectation, without running the path optimizer.
'tree': run the path optimizer and return the ``cotengra.ContractionTree`` for each local expectation.
True: run the path optimizer and return the ``PathInfo`` for each local expectation.
executor (Executor, optional) – If supplied compute the terms in parallel using this executor.
progbar (bool, optional) – Whether to show a progress bar.
contract_compressed_opts – Supplied to contract_compressed().
Returns:
expecs – If return_all==False, return the summed expectation value of the given terms. Otherwise, return a dictionary mapping each term’s location to the expectation value.
Return type:
 compute_local_expectation_rehearse¶
 compute_local_expectation_tn¶
 class quimb.experimental.merabuilder.oset(it=())¶
An ordered set which stores elements as the keys of dict (ordered as of python 3.6). ‘A few times’ slower than using a set directly for small sizes, but makes everything deterministic.
 __slots__ = ('_d',)¶
 classmethod _from_dict(d)¶
 classmethod from_dict(d)¶
Public method makes sure to copy incoming dictionary.
 copy()¶
 __deepcopy__(memo)¶
 add(k)¶
 discard(k)¶
 remove(k)¶
 clear()¶
 update(*others)¶
 union(*others)¶
 intersection_update(*others)¶
 intersection(*others)¶
 difference_update(*others)¶
 difference(*others)¶
 popleft()¶
 popright()¶
 pop¶
 __eq__(other)¶
Return self==value.
 __or__(other)¶
 __ior__(other)¶
 __and__(other)¶
 __iand__(other)¶
 __sub__(other)¶
 __isub__(other)¶
 __len__()¶
 __iter__()¶
 __contains__(x)¶
 __repr__()¶
Return repr(self).
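The technique oset relies on can be illustrated standalone: store elements as the keys of a dict, whose insertion order is guaranteed as of Python 3.6, so iteration is deterministic unlike with a builtin set. A minimal sketch (not the full oset API):

```python
class MiniOset:
    """Dict-backed ordered set: keys carry the elements, values are unused."""

    __slots__ = ("_d",)

    def __init__(self, it=()):
        self._d = dict.fromkeys(it)

    def add(self, k):
        self._d[k] = None

    def discard(self, k):
        self._d.pop(k, None)

    def __contains__(self, x):
        return x in self._d

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)

o = MiniOset(["b", "a", "c", "a"])   # duplicates collapse, order is kept
o.add("d")
o.discard("a")
```

Iterating ``o`` now yields "b", "c", "d" in that order, every time.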
 quimb.experimental.merabuilder.tags_to_oset(tags)¶
Parse a tags argument into an ordered set.
 quimb.experimental.merabuilder.rand_uuid(base='')¶
Return a guaranteed unique, shortish identifier, optionally appended to base.
Examples

>>> rand_uuid()
'_2e1dae1b'

>>> rand_uuid('virt-bond')
'virt-bond_bf342e68'
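A rough stdlib sketch of producing identifiers with this output format; quimb's actual implementation guarantees per-process uniqueness and may differ, so treat this purely as an illustration:

```python
import uuid

def sketch_rand_uuid(base=""):
    # base, an underscore, then 8 random hex characters
    return base + "_" + uuid.uuid4().hex[:8]

bond = sketch_rand_uuid("virt-bond")   # e.g. 'virt-bond_3f2a9c01'
anon = sketch_rand_uuid()              # e.g. '_7be410dd'
```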
 quimb.experimental.merabuilder._compute_expecs_maybe_in_parallel(fn, tn, terms, return_all=False, executor=None, progbar=False, **kwargs)¶
Unified helper function for the various methods that compute many expectations, possibly in parallel, possibly with a progress bar.
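The dispatch pattern such a helper can use is sketched below: map a per-term function over all terms, either serially or via a supplied executor, then optionally sum the results. Both the helper name and ``compute_one`` are hypothetical stand-ins, not the library's internals:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_expecs(compute_one, terms, return_all=False, executor=None):
    if executor is None:
        # serial path: just evaluate each term in turn
        results = {where: compute_one(G) for where, G in terms.items()}
    else:
        # parallel path: submit all terms, then gather the futures
        futures = {where: executor.submit(compute_one, G)
                   for where, G in terms.items()}
        results = {where: f.result() for where, f in futures.items()}
    return results if return_all else sum(results.values())

terms = {(0, 1): 1.5, (1, 2): -0.5}          # dummy stand-ins for operators
with ThreadPoolExecutor(max_workers=2) as ex:
    total = compute_expecs(lambda G: 2 * G, terms, executor=ex)
```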
 quimb.experimental.merabuilder._tn_local_expectation(tn, *args, **kwargs)¶
Define as function for pickleability.
 class quimb.experimental.merabuilder.TensorNetwork1DVector(ts=(), *, virtual=False, check_collisions=True)¶
Bases:
TensorNetwork1D
,quimb.tensor.tensor_arbgeom.TensorNetworkGenVector
1D Tensor network which overall is like a vector with a single type of site ind.
 _EXTRA_PROPS = ('_site_tag_id', '_site_ind_id', '_L')¶
 reindex_sites(new_id, where=None, inplace=False)¶
Update the physical site index labels to a new string specifier. Note that this doesn’t change the stored id string with the TN.
 reindex_sites_¶
 site_ind(i)¶
Get the physical index name of site i.
gate(*args, inplace=False, **kwargs)¶
Apply a gate to this vector tensor network at sites where. This is essentially a wrapper around gate_inds() apart from that where can be specified as a list of sites, and tags can be optionally, intelligently propagated to the new gate tensor.

\[|\psi\rangle \rightarrow G_\mathrm{where} |\psi\rangle\]

Parameters:
G (array_like) – The gate array to apply, should match or be factorable into the shape (*phys_dims, *phys_dims).
where (node or sequence[node]) – The sites to apply the gate to.
contract ({False, True, 'split', 'reduce-split', 'split-gate', 'swap-split-gate', 'auto-split-gate'}, optional) – How to apply the gate, see gate_inds().
tags (str or sequence of str, optional) – Tags to add to the new gate tensor.
propagate_tags ({False, True, 'register', 'sites'}, optional) –
Whether to propagate tags to the new gate tensor:
False: no tags are propagated.
True: all tags are propagated.
'register': only site tags corresponding to ``where`` are added.
'sites': all site tags on the current sites are propagated, resulting in a lightcone like tagging.
info (None or dict, optional) – Used to store extra optional information such as the singular values if not absorbed.
inplace (bool, optional) – Whether to perform the gate operation inplace on the tensor network or not.
compress_opts – Supplied to tensor_split() for any contract methods that involve splitting. Ignored otherwise.
 Return type:
See also
TensorNetwork.gate_inds
 gate_¶
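At the dense level, applying a gate at chosen sites amounts to contracting the gate's right legs with those physical indices and restoring the axis order, which the TN method performs lazily via gate_inds. A hypothetical dense-vector sketch of that contraction (not the library routine):

```python
import numpy as np

def apply_gate_dense(psi, G, where, phys_dim=2):
    # infer the number of sites from the state's size
    n = int(round(np.log(psi.size) / np.log(phys_dim)))
    psi = np.asarray(psi).reshape((phys_dim,) * n)
    k = len(where)
    G = np.asarray(G).reshape((phys_dim,) * (2 * k))
    # contract the gate's rightmost k legs with the sites in `where`
    psi = np.tensordot(G, psi, axes=(list(range(k, 2 * k)), list(where)))
    # tensordot puts the new legs first - move them back to `where`
    psi = np.moveaxis(psi, list(range(k)), list(where))
    return psi.reshape(-1)

X = np.array([[0.0, 1.0], [1.0, 0.0]])
psi00 = np.array([1.0, 0.0, 0.0, 0.0])        # |00>
psi10 = apply_gate_dense(psi00, X, where=[0])  # X on site 0 gives |10>
```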
 expec(*args, **kwargs)¶
 correlation(A, i, j, B=None, **expec_opts)¶
Correlation of operator A between i and j.
Parameters:
A (array) – The operator to act with, can be multi site.
expec_opts – Supplied to expec_TN_1D().
Returns:
C – The correlation ``<A(i) B(j)> - <A(i)> <B(j)>``.
Return type:
Examples

>>> ghz = (MPS_computational_state('0000') +
...        MPS_computational_state('1111')) / 2**0.5
>>> ghz.correlation(pauli('Z'), 0, 1)
1.0
>>> ghz.correlation(pauli('Z'), 0, 1, B=pauli('X'))
0.0
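The doctest values can be checked with dense vectors, assuming the connected-correlator convention <A(i) B(j)> - <A(i)> <B(j)>, for the 2-site GHZ (Bell) state (|00> + |11>) / sqrt(2):

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

# Bell state (|00> + |11>) / sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 2 ** -0.5

def ev(op):
    # expectation value <psi| op |psi>
    return psi.conj() @ (op @ psi)

C_zz = ev(np.kron(Z, Z)) - ev(np.kron(Z, I)) * ev(np.kron(I, Z))  # 1.0
C_zx = ev(np.kron(Z, X)) - ev(np.kron(Z, I)) * ev(np.kron(I, X))  # 0.0
```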
 class quimb.experimental.merabuilder.TensorNetworkGenIso(ts=(), *, virtual=False, check_collisions=True)¶
Bases:
quimb.tensor.tensor_arbgeom.TensorNetworkGenVector
A class for building generic ‘isometric’ or MERA like tensor network states with arbitrary geometry. After supplying the underlying sites of the problem (which can be an arbitrary sequence of hashable objects) one places either unitaries, isometries or tree tensors layered above groups of sites. The isometric and tree tensors effectively coarse grain blocks into a single new site, and the unitaries generally ‘disentangle’ between blocks.
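The isometric constraint these tensors satisfy can be sketched with plain numpy: viewed as a matrix from its lower (left) indices to its new coarse-grained index, an isometry W obeys W† W = identity, and a random one can be drawn via a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(42)
phys_dim, block, D = 2, 2, 4     # coarse-grain a block of 2 sites into bond D

# random matrix from the block's combined physical space to the new bond
A = rng.normal(size=(phys_dim ** block, D))
W, _ = np.linalg.qr(A)           # columns of W are orthonormal
```

Since the columns are orthonormal, W.conj().T @ W is the D x D identity, which is exactly the left_inds flow condition imposed at the Tensor level.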
 _EXTRA_PROPS = ('_site_tag_id', '_sites', '_site_ind_id', '_layer_ind_id')¶
 classmethod empty(sites, phys_dim=2, site_tag_id='I{}', site_ind_id='k{}', layer_ind_id='l{}')¶
 property layer_ind_id¶
 layer_ind(site)¶
 layer_gate_raw(G, where, iso=True, new_sites=None, tags=None, all_site_tags=None)¶
Build out this MERA by placing either a new unitary, isometry or tree tensor, given by G, at the sites given by where. This handles propagating the lightcone of tags and marking the correct indices of the IsoTensor as left_inds.
Parameters:
G (array_like) – The raw array to place at the sites. Its shape determines whether it is a unitary or isometry/tree. It should have k + len(where) dimensions. For a unitary k == len(where). If it is an isometry/tree, k will generally be 1, or 0 to ‘cap’ the MERA. The rightmost indices are those attached to the current open layer indices.
where (sequence of hashable) – The sites to layer the tensor above.
iso (bool, optional) – Whether to declare the tensor as a unitary/isometry by marking the left indices. If iso=False (a ‘tree’ tensor) then one should have k <= 1. Once you have such a ‘tree’ tensor you cannot place isometries or unitaries above it. It will also have the lightcone tags of every site. Technically one could place a ‘PEPS’ style tensor with iso=False and k > 1 but some methods might break.
new_sites (sequence of hashable, optional) – Which sites to make new open sites. If not given, defaults to the first k sites in where.
tags (sequence of str, optional) – Custom tags to add to the new tensor, in addition to the automatically generated site tags.
all_site_tags (sequence of str, optional) – For performance, supply all site tags to avoid recomputing them.
 layer_gate_fill_fn(fill_fn, operation, where, max_bond, new_sites=None, tags=None, all_site_tags=None)¶
Build out this MERA by placing either a new unitary, isometry or tree tensor at sites where, generating the data array using fill_fn and maximum bond dimension max_bond.
Parameters:
fill_fn (callable) – A function with signature fill_fn(shape) -> array_like.
operation ({"iso", "uni", "cap", "tree", "tree-cap"}) – The type of tensor to place.
where (sequence of hashable) – The sites to layer the tensor above.
max_bond (int) – The maximum bond dimension of the tensor. This only applies for isometries and trees and when the product of the lower dimensions is greater than max_bond.
new_sites (sequence of hashable, optional) – Which sites to make new open sites. If not given, defaults to the first k sites in where.
tags (sequence of str, optional) – Custom tags to add to the new tensor, in addition to the automatically generated site tags.
all_site_tags (sequence of str, optional) – For performance, supply all site tags to avoid recomputing them.
See also
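How the tensor shapes plausibly follow from ``operation``, the lower dimensions and ``max_bond`` can be sketched with a hypothetical helper (not the library's): unitaries are square, isometries/trees cap their single upper bond at the product of the lower dimensions, and caps have no upper index at all.

```python
from math import prod

def sketch_shape(operation, lower_dims, max_bond):
    if operation == "uni":
        # unitary: k == len(where), upper legs mirror the lower legs
        return (*lower_dims, *lower_dims)
    if operation in ("cap", "tree-cap"):
        # cap: k == 0, no new upper index
        return tuple(lower_dims)
    # "iso" / "tree": k == 1 new upper index, capped at max_bond
    return (min(max_bond, prod(lower_dims)), *lower_dims)

s_uni = sketch_shape("uni", (2, 2), max_bond=8)   # (2, 2, 2, 2)
s_iso = sketch_shape("iso", (2, 2), max_bond=8)   # (4, 2, 2)
s_cap = sketch_shape("cap", (4, 4), max_bond=8)   # (4, 4)
```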
partial_trace(keep, optimize='auto-hq', rehearse=False, preserve_tensor=False, **contract_opts)¶
Partial trace out all sites except those in keep, making use of the lightcone structure of the MERA.
Parameters:
keep (sequence of hashable) – The sites to keep.
optimize (str or PathOptimizer, optional) – The contraction ordering strategy to use.
rehearse ({False, "tn", "tree"}, optional) –
Whether to rehearse the contraction rather than actually performing it. If:
False: perform the contraction and return the reduced density matrix.
"tn": just the lightcone tensor network is returned.
"tree": just the contraction tree that will be used is returned.
contract_opts – Additional options to pass to tensor_contract().
Returns:
The reduced density matrix on sites keep.
Return type:
array_like
local_expectation(G, where, optimize='auto-hq', rehearse=False, **contract_opts)¶
Compute the expectation value of a local operator G at sites where. This is done by contracting the lightcone tensor network to form the reduced density matrix, before taking the trace with G.
Parameters:
G (array_like) – The local operator to compute the expectation value of.
where (sequence of hashable) – The sites to compute the expectation value at.
optimize (str or PathOptimizer, optional) – The contraction ordering strategy to use.
rehearse ({False, "tn", "tree"}, optional) – Whether to rehearse the contraction rather than actually performing it. See partial_trace() for details.
contract_opts – Additional options to pass to tensor_contract().
 Returns:
The expectation value of G at sites where.
Return type:
See also
compute_local_expectation(terms, optimize='auto-hq', return_all=False, rehearse=False, executor=None, progbar=False, **contract_opts)¶
Compute the expectation value of a collection of local operators terms. This is done by contracting the lightcone tensor network to form the reduced density matrices, before taking the trace with each G in terms.
Parameters:
terms (dict[tuple[hashable], array_like]) – The local operators to compute the expectation value of, keyed by the sites they act on.
optimize (str or PathOptimizer, optional) – The contraction ordering strategy to use.
return_all (bool, optional) – Whether to return all the expectation values, or just the sum.
rehearse ({False, "tn", "tree"}, optional) – Whether to rehearse the contraction rather than actually performing it. See partial_trace() for details.
executor (Executor, optional) – The executor to use for parallelism.
progbar (bool, optional) – Whether to show a progress bar.
contract_opts – Additional options to pass to tensor_contract().
 expand_bond_dimension(new_bond_dim, rand_strength=0.0, inds_to_expand=None, inplace=False)¶
Expand the maximum bond dimension of this isometric tensor network to new_bond_dim. Unlike expand_bond_dimension() this proceeds from the physical indices upwards, and only increases a bond's size if new_bond_dim is larger than the product of the lower indices' dimensions.
Parameters:
new_bond_dim (int) – The new maximum bond dimension to expand to.
rand_strength (float, optional) – The strength of random noise to add to the new array entries, if any.
inds_to_expand (sequence of str, optional) – The indices to expand, if not all.
inplace (bool, optional) – Whether to expand this tensor network in place, or return a new one.
 Return type:
 expand_bond_dimension_¶
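The per-tensor operation this implies can be sketched in numpy (a hypothetical helper, not the library routine): pad the chosen axis up to new_bond_dim, optionally perturb the expanded array with weak noise, and do nothing if the bond is already large enough:

```python
import numpy as np

def expand_axis(data, axis, new_bond_dim, rand_strength=0.0, seed=0):
    extra = new_bond_dim - data.shape[axis]
    if extra <= 0:
        # bond already at least this large: nothing to do
        return data
    pad = [(0, 0)] * data.ndim
    pad[axis] = (0, extra)
    out = np.pad(data, pad)          # zero-pad the chosen axis
    if rand_strength:
        # optionally perturb the expanded array with weak random noise
        rng = np.random.default_rng(seed)
        out = out + rand_strength * rng.normal(size=out.shape)
    return out

T = np.ones((2, 3))
T2 = expand_axis(T, axis=1, new_bond_dim=5)   # shape (2, 5)
```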
 quimb.experimental.merabuilder.calc_1d_unis_isos(sites, block_size, cyclic, group_from_right)¶
Given sites, assumed to be in a 1D order, though not necessarily contiguous, calculate unitary and isometry groupings:

      │         │          <- new grouped site
    ┐ ┌─────┐ ┌─────┐ ┌
    │ │ ISO │ │ ISO │ │
    ┘ └─────┘ └─────┘ └
    │ │..│..│ │..│..│ │
    ┌───┐ │ ┌───┐ │ ┌───┐
    │UNI│ │ │UNI│ │ │UNI│
    └───┘ │ └───┘ │ └───┘
    │ │  ...  │ │  ...  │
      ^^^^^^^              <- isometry groupings of size block_size
    ^^^^^         ^^^^^    <- unitary groupings of size 2
 Parameters:
sites (sequence of hashable) – The sites to apply a layer to.
block_size (int) – How many sites to group together per isometry block. Note that currently the unitaries will only ever act on blocks of size 2 across isometry block boundaries.
cyclic (bool) – Whether to apply disentangler / unitaries across the boundary. The isometries will never be applied across the boundary, but since they always form a tree such a bipartition is natural.
group_from_right (bool) – Whether to group the sites starting from the left or right. This only matters if block_size does not divide the number of sites. Alternating between left and right more evenly tiles the unitaries and isometries, especially at lower layers.
 Returns:
unis (list[tuple]) – The unitary groupings.
isos (list[tuple]) – The isometry groupings.
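A simplified sketch of the kind of groupings this produces (left-grouped, block_size=2 only, not the library's full logic with group_from_right handling): isometries tile contiguous blocks while the size-2 unitaries straddle the block boundaries, wrapping around when cyclic:

```python
def sketch_unis_isos(sites, cyclic):
    # isometries: contiguous non-overlapping blocks of 2
    isos = [tuple(sites[i:i + 2]) for i in range(0, len(sites), 2)]
    # unitaries: size-2 groups straddling each block boundary
    unis = [(sites[i], sites[i + 1]) for i in range(1, len(sites) - 1, 2)]
    if cyclic:
        # disentangle across the boundary as well
        unis.append((sites[-1], sites[0]))
    return unis, isos

unis, isos = sketch_unis_isos(list(range(8)), cyclic=True)
```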
 class quimb.experimental.merabuilder.MERA(*args, **kwargs)¶
Bases:
quimb.tensor.tensor_1d.TensorNetwork1DVector
,TensorNetworkGenIso
Replacement class for MERA which uses the new infrastructure and thus has methods like compute_local_expectation.
 _EXTRA_PROPS¶
 _CONTRACT_STRUCTURED = False¶
 classmethod from_fill_fn(fill_fn, L, D, phys_dim=2, block_size=2, cyclic=True, uni_fill_fn=None, iso_fill_fn=None, cap_fill_fn=None, **kwargs)¶
Create a 1D MERA using fill_fn(shape) -> array_like to fill the tensors.
Parameters:
fill_fn (callable) – A function which takes a shape and returns an array_like of that shape. You can override this specifically for the unitaries, isometries and cap tensors using the kwargs uni_fill_fn, iso_fill_fn and cap_fill_fn.
L (int) – The number of sites.
D (int) – The maximum bond dimension.
phys_dim (int, optional) – The dimension of the physical indices.
block_size (int, optional) – The size of the isometry blocks. Binary MERA is the default, ternary MERA is block_size=3.
cyclic (bool, optional) – Whether to apply disentangler / unitaries across the boundary. The isometries will never be applied across the boundary, but since they always form a tree such a bipartition is natural.
uni_fill_fn (callable, optional) – A function which takes a shape and returns an array_like of that shape. This is used to fill the unitary tensors. If None then fill_fn is used.
iso_fill_fn (callable, optional) – A function which takes a shape and returns an array_like of that shape. This is used to fill the isometry tensors. If None then fill_fn is used.
cap_fill_fn (callable, optional) – A function which takes a shape and returns an array_like of that shape. This is used to fill the cap tensors. If None then fill_fn is used.
kwargs – Supplied to TensorNetworkGenIso.__init__.
 classmethod rand(L, D, seed=None, block_size=2, phys_dim=2, cyclic=True, isometrize_method='svd', **kwargs)¶
Return a random (optionally isometrized) MERA.
 Parameters:
L (int) – The number of sites.
D (int) – The maximum bond dimension.
seed (int, optional) – A random seed.
block_size (int, optional) – The size of the isometry blocks. Binary MERA is the default, ternary MERA is block_size=3.
phys_dim (int, optional) – The dimension of the physical indices.
cyclic (bool, optional) – Whether to apply disentangler / unitaries across the boundary. The isometries will never be applied across the boundary, but since they always form a tree such a bipartition is natural.
isometrize_method (str or None, optional) – If given, the method to use to isometrize the MERA. If None then the MERA is not isometrized.
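The layer count relates to L and block_size by simple arithmetic: each layer coarse-grains roughly block_size sites into one, so about log base block_size of L layers reduce the chain to a single cap. A sketch of that counting (a hypothetical helper, not the num_layers property itself):

```python
def sketch_num_layers(L, block_size=2):
    layers = 0
    while L > 1:
        # each layer coarse-grains blocks of `block_size` into one site
        L = -(-L // block_size)   # ceiling division
        layers += 1
    return layers

n_binary = sketch_num_layers(8)                   # binary MERA on 8 sites
n_ternary = sketch_num_layers(27, block_size=3)   # ternary MERA on 27 sites
```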
 property num_layers¶
 quimb.experimental.merabuilder.TTN_randtree_rand(sites, D, phys_dim=2, group_size=2, iso=False, seed=None, **kwargs)¶
Return a randomly constructed tree tensor network.
 Parameters:
sites (list of hashable) – The sites of the tensor network.
D (int) – The maximum bond dimension.
phys_dim (int, optional) – The dimension of the physical indices.
group_size (int, optional) – How many sites to group together in each tensor.
iso (bool, optional) – Whether to build the tree with an isometric flow towards the top.
seed (int, optional) – A random seed.
kwargs – Supplied to TensorNetworkGenIso.empty.
 Returns:
ttn – The tree tensor network.
 Return type:
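The random-tree idea can be sketched without any tensors at all: repeatedly pick group_size remaining sites at random, merge them under a new tensor, and treat the merged result as a single new site, until one site remains. A simplified stand-in (integer labels for the merged sites are an assumption here):

```python
import random

def sketch_tree_groupings(sites, group_size=2, seed=42):
    rng = random.Random(seed)
    sites = list(sites)
    groupings = []
    next_label = len(sites)
    while len(sites) > 1:
        # pick `group_size` random remaining sites to merge
        group = [sites.pop(rng.randrange(len(sites)))
                 for _ in range(min(group_size, len(sites)))]
        groupings.append(tuple(group))
        # the merged block becomes a single new coarse-grained site
        sites.append(next_label)
        next_label += 1
    return groupings

groups = sketch_tree_groupings(range(6))
```

Each grouping corresponds to one tree tensor; for 6 sites with group_size=2 this always yields 5 tensors, since every merge reduces the site count by one.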