quimb.tensor.tensor_1d_compress
===============================
.. py:module:: quimb.tensor.tensor_1d_compress
.. autoapi-nested-parse::
Generic methods for compressing 1D-like tensor networks, where the tensor
network can locally have arbitrary structure and outer indices.
- [x] the direct method
- [x] the density matrix method
- [x] the zip-up method
- [x] the zip-up first method
- [x] the 1-site variational fit method, including sums of tensor networks
- [x] the 2-site variational fit method, including sums of tensor networks
- [x] the local projector method (CTMRG and HOTRG style)
- [x] the autofit method (via non-1d specific ALS or autodiff)
Attributes
----------
.. autoapisummary::
quimb.tensor.tensor_1d_compress._TN1D_COMPRESS_METHODS
Classes
-------
.. autoapisummary::
quimb.tensor.tensor_1d_compress.Tensor
quimb.tensor.tensor_1d_compress.TensorNetwork
Functions
---------
.. autoapisummary::
quimb.tensor.tensor_1d_compress.tensor_network_apply_op_vec
quimb.tensor.tensor_1d_compress.tensor_network_ag_compress
quimb.tensor.tensor_1d_compress.TN_matching
quimb.tensor.tensor_1d_compress.ensure_dict
quimb.tensor.tensor_1d_compress.rand_uuid
quimb.tensor.tensor_1d_compress.tensor_contract
quimb.tensor.tensor_1d_compress.enforce_1d_like
quimb.tensor.tensor_1d_compress.possibly_permute_
quimb.tensor.tensor_1d_compress.tensor_network_1d_compress_direct
quimb.tensor.tensor_1d_compress.tensor_network_1d_compress_dm
quimb.tensor.tensor_1d_compress.tensor_network_1d_compress_zipup
quimb.tensor.tensor_1d_compress.tensor_network_1d_compress_zipup_first
quimb.tensor.tensor_1d_compress._tn1d_fit_sum_sweep_1site
quimb.tensor.tensor_1d_compress._tn1d_fit_sum_sweep_2site
quimb.tensor.tensor_1d_compress.tensor_network_1d_compress_fit
quimb.tensor.tensor_1d_compress.tensor_network_1d_compress
quimb.tensor.tensor_1d_compress.mps_gate_with_mpo_lazy
quimb.tensor.tensor_1d_compress.mps_gate_with_mpo_direct
quimb.tensor.tensor_1d_compress.mps_gate_with_mpo_dm
quimb.tensor.tensor_1d_compress.mps_gate_with_mpo_zipup
quimb.tensor.tensor_1d_compress.mps_gate_with_mpo_zipup_first
quimb.tensor.tensor_1d_compress.mps_gate_with_mpo_fit
quimb.tensor.tensor_1d_compress.mps_gate_with_mpo_autofit
quimb.tensor.tensor_1d_compress.mps_gate_with_mpo_projector
Module Contents
---------------
.. py:function:: tensor_network_apply_op_vec(A, x, which_A='lower', contract=False, fuse_multibonds=True, compress=False, inplace=False, inplace_A=False, **compress_opts)
Apply a general tensor network representing an operator (has
``upper_ind_id`` and ``lower_ind_id``) to a tensor network representing a
vector (has ``site_ind_id``), by contracting each pair of tensors at each
site then compressing the resulting tensor network. How the compression
takes place is determined by the type of tensor network passed in. The
returned tensor network has the same site indices as ``x``, and it is
the ``lower_ind_id`` of ``A`` that is contracted.
This is like performing ``A.to_dense() @ x.to_dense()``, or the transpose
thereof, depending on the value of ``which_A``.
:param A: The tensor network representing the operator.
:type A: TensorNetworkGenOperator
:param x: The tensor network representing the vector.
:type x: TensorNetworkGenVector
:param which_A: Whether to contract the lower or upper indices of ``A`` with the site
indices of ``x``.
:type which_A: {"lower", "upper"}, optional
:param contract: Whether to contract the tensors at each site after applying the
operator, yielding a single tensor at each site.
:type contract: bool
:param fuse_multibonds: If ``contract=True``, whether to fuse any multibonds after contracting
the tensors at each site.
:type fuse_multibonds: bool
:param compress: Whether to compress the resulting tensor network.
:type compress: bool
:param inplace: Whether to modify ``x``, the input vector tensor network inplace.
:type inplace: bool
:param inplace_A: Whether to modify ``A``, the operator tensor network inplace.
:type inplace_A: bool
:param compress_opts: Options to pass to ``tn.compress``, where ``tn`` is the resulting
tensor network, if ``compress=True``.
:returns: The same type as ``x``.
:rtype: TensorNetworkGenVector
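The per-site step can be illustrated with plain numpy (a minimal sketch with hypothetical shapes, not the quimb implementation): contract the operator's lower index with the vector's site index, then fuse each pair of horizontal bonds into a single larger bond.

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.normal(size=(2, 2, 3, 3))  # inds: (upper, lower, left_A, right_A)
x = rng.normal(size=(2, 4, 4))     # inds: (site, left_x, right_x)

# contract the lower index of A with the site index of x
Ax = np.einsum('udLR,dlr->uLlRr', A, x)   # shape (2, 3, 4, 3, 4)

# fuse the multibond pairs (left_A, left_x) and (right_A, right_x)
Ax = Ax.reshape(2, 3 * 4, 3 * 4)
assert Ax.shape == (2, 12, 12)
```

With ``contract=False`` the pair of tensors is instead kept separate at each site, and the compression step (if requested) is deferred to ``tn.compress``.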
.. py:function:: tensor_network_ag_compress(tn, max_bond, cutoff=1e-10, method='local-early', site_tags=None, canonize=True, optimize='auto-hq', equalize_norms=False, inplace=False, **kwargs)
Compress an arbitrary geometry tensor network, with potentially multiple
tensors per site.
:param tn: The tensor network to compress. Every tensor should have exactly one of
the site tags. Each site can have multiple tensors and output indices.
:type tn: TensorNetwork
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int
:param cutoff: A dynamic threshold for discarding singular values when compressing.
:type cutoff: float, optional
:param method: The compression method to use:
- 'local-early': explicitly contract each site and interleave with
immediate compression, see
:func:`~quimb.tensor.tensor_arbgeom_compress.tensor_network_ag_compress_local_early`.
- 'local-late': explicitly contract all sites and then compress, see
:func:`~quimb.tensor.tensor_arbgeom_compress.tensor_network_ag_compress_local_late`.
- 'projector': use locally computed projectors, see
:func:`~quimb.tensor.tensor_arbgeom_compress.tensor_network_ag_compress_projector`.
- 'superorthogonal': use the 'superorthogonal' gauge, see
:func:`~quimb.tensor.tensor_arbgeom_compress.tensor_network_ag_compress_superorthogonal`.
- 'l2bp': use lazy 2-norm belief propagation, see
:func:`~quimb.tensor.tensor_arbgeom_compress.tensor_network_ag_compress_l2bp`.
:type method: {'local-early', 'local-late', 'projector', 'superorthogonal', 'l2bp'}, optional
:param site_tags: The tags to use to group the tensors from ``tn``. If not
given, uses ``tn.site_tags``. The tensor network built will have one
tensor per site.
:type site_tags: sequence of str, optional
:param canonize: Whether to perform canonicalization, pseudo or otherwise depending on
the method, before compressing.
:type canonize: bool, optional
:param optimize: The contraction path optimizer to use.
:type optimize: str, optional
:param equalize_norms: Whether to equalize the norms of the tensors after compression. If an
explicit value is given, then the norms will be set to that value, and
the overall scaling factor will be accumulated into `.exponent`.
:type equalize_norms: bool or float, optional
:param inplace: Whether to perform the compression inplace.
:type inplace: bool, optional
:param kwargs: Supplied to the chosen compression method.
.. py:function:: TN_matching(tn, max_bond, site_tags=None, fill_fn=None, dtype=None, **randn_opts)
Create a tensor network with the same outer indices as ``tn`` but
with a single tensor per site with bond dimension ``max_bond`` between
each connected site. Generally to be used as an initial guess for fitting.
:param tn: The tensor network to match, it can have arbitrary local structure and
output indices, as long as ``site_tags`` effectively partitions it.
:type tn: TensorNetwork
:param max_bond: The bond dimension to use between each site.
:type max_bond: int
:param site_tags: The tags to use to select the tensors from ``tn``. If not given, uses
``tn.site_tags``. The tensor network built will have one tensor per
site, in the order given by ``site_tags``.
:type site_tags: sequence of str, optional
:param dtype: The data type to use for the new tensors, if not given uses the same as
the original tensors.
:type dtype: dtype, optional
:param randn_opts: Supplied to :func:`~quimb.gen.rand.randn`.
:rtype: TensorNetwork
.. py:class:: Tensor(data=1.0, inds=(), tags=None, left_inds=None)
A labelled, tagged n-dimensional array. The index labels are used
instead of axis numbers to identify dimensions, and are preserved through
operations. The tags are used to identify the tensor within networks, and
are combined when tensors are contracted together.
:param data: The n-dimensional data.
:type data: numpy.ndarray
:param inds: The index labels for each dimension. Must match the number of
dimensions of ``data``.
:type inds: sequence of str
:param tags: Tags with which to identify and group this tensor. These will
be converted into a ``oset``.
:type tags: sequence of str, optional
:param left_inds: Which, if any, indices to group as 'left' indices of an effective
matrix. This can be useful, for example, when automatically applying
unitary constraints to impose a certain flow on a tensor network but at
the atomistic (Tensor) level.
:type left_inds: sequence of str, optional
.. rubric:: Examples
Basic construction:
>>> from quimb import randn
>>> from quimb.tensor import Tensor
>>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'})
>>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'})
Indices are automatically aligned, and tags combined, when contracting:
>>> X @ Y
Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'})
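In plain numpy terms, the contraction above is an einsum over the shared index labels (an illustrative sketch, not how quimb dispatches internally):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3, 4))   # inds ('a', 'b', 'c')
y = rng.normal(size=(3, 4, 5))   # inds ('b', 'c', 'd')

# shared inds 'b' and 'c' are summed over, leaving ('a', 'd')
z = np.einsum('abc,bcd->ad', x, y)
assert z.shape == (2, 5)
```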
.. py:attribute:: __slots__
:value: ('_data', '_inds', '_tags', '_left_inds', '_owners')
.. py:method:: _set_data(data)
.. py:method:: _set_inds(inds)
.. py:method:: _set_tags(tags)
.. py:method:: _set_left_inds(left_inds)
.. py:method:: get_params()
A simple function that returns the 'parameters' of the underlying
data array. This is mainly for providing an interface for 'structured'
arrays e.g. with block sparsity to interact with optimization.
.. py:method:: set_params(params)
A simple function that sets the 'parameters' of the underlying
data array. This is mainly for providing an interface for 'structured'
arrays e.g. with block sparsity to interact with optimization.
.. py:method:: copy(deep=False, virtual=False)
Copy this tensor.
.. note::
By default (``deep=False``), the underlying array will *not* be
copied.
:param deep: Whether to copy the underlying data as well.
:type deep: bool, optional
:param virtual: To conveniently mimic the behaviour of taking a virtual copy of
tensor network, this simply returns ``self``.
:type virtual: bool, optional
.. py:attribute:: __copy__
.. py:property:: data
.. py:property:: inds
.. py:property:: tags
.. py:property:: left_inds
.. py:method:: check()
Do some basic diagnostics on this tensor, raising errors if
something is wrong.
.. py:property:: owners
.. py:method:: add_owner(tn, tid)
Add ``tn`` as owner of this Tensor - its tag and ind maps will
be updated whenever this tensor is retagged or reindexed.
.. py:method:: remove_owner(tn)
Remove TensorNetwork ``tn`` as an owner of this Tensor.
.. py:method:: check_owners()
Check if this tensor is 'owned' by any alive TensorNetworks. Also
trim any weakrefs to dead TensorNetworks.
.. py:method:: _apply_function(fn)
.. py:method:: modify(**kwargs)
Overwrite the data of this tensor in place.
:param data: New data.
:type data: array, optional
:param apply: A function to apply to the current data. If `data` is also given
this is applied subsequently.
:type apply: callable, optional
:param inds: New tuple of indices.
:type inds: sequence of str, optional
:param tags: New tags.
:type tags: sequence of str, optional
:param left_inds: New grouping of indices to be 'on the left'.
:type left_inds: sequence of str, optional
.. py:method:: apply_to_arrays(fn)
Apply the function ``fn`` to the underlying data array(s). This
is meant for changing how the raw arrays are backed (e.g. converting
between dtypes or libraries) but not their 'numerical meaning'.
.. py:method:: isel(selectors, inplace=False)
Select specific values for some dimensions/indices of this tensor,
thereby removing them. Analogous to ``X[:, :, 3, :, :]`` with arrays.
The indices to select from can be specified either by integer, in which
case the corresponding index is removed, or by a slice.
:param selectors: Mapping of index(es) to which value to take.
:type selectors: dict[str, int], dict[str, slice]
:param inplace: Whether to select inplace or not.
:type inplace: bool, optional
:rtype: Tensor
.. rubric:: Examples
>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.isel({'b': -1})
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())
.. seealso:: :obj:`TensorNetwork.isel`
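For reference, the array-level operation ``isel`` performs can be sketched with plain numpy indexing:

```python
import numpy as np

# integer-indexing an axis removes it, matching ``isel({'b': -1})``
data = np.arange(24).reshape(2, 3, 4)   # inds ('a', 'b', 'c')
selected = data[:, -1, :]               # select 'b' = -1
assert selected.shape == (2, 4)         # 'b' has been removed
```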
.. py:attribute:: isel_
.. py:method:: add_tag(tag)
Add a tag or multiple tags to this tensor. Unlike ``self.tags.add``
this also updates any ``TensorNetwork`` objects viewing this
``Tensor``.
.. py:method:: expand_ind(ind, size, mode=None, rand_strength=None, rand_dist='normal')
Inplace increase the size of the dimension of ``ind``; the new array
entries will be filled with zeros by default.
:param name: Name of the index to expand.
:type name: str
:param size: Size of the expanded index.
:type size: int, optional
:param mode: How to fill any new array entries. If ``'zeros'`` then fill with
zeros, if ``'repeat'`` then repeatedly tile the existing entries.
If ``'random'`` then fill with random entries drawn from
``rand_dist``, multiplied by ``rand_strength``. If ``None`` then
select from zeros or random depending on non-zero ``rand_strength``.
:type mode: {None, 'zeros', 'repeat', 'random'}, optional
:param rand_strength: If ``mode='random'``, a multiplicative scale for the random
entries, defaulting to 1.0. If ``mode is None`` then supplying a
non-zero value here triggers ``mode='random'``.
:type rand_strength: float, optional
:param rand_dist: If ``mode='random'``, the distribution to draw the random entries
from.
:type rand_dist: {'normal', 'uniform', 'exp'}, optional
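The default zero-filled expansion corresponds to zero-padding the matching axis of the raw array (a plain-numpy sketch, not the quimb internals):

```python
import numpy as np

data = np.ones((2, 3))          # index to expand currently has size 3
new_size = 5
pad = [(0, 0), (0, new_size - data.shape[1])]  # pad only the target axis
expanded = np.pad(data, pad)

assert expanded.shape == (2, 5)
assert np.all(expanded[:, 3:] == 0)    # new entries are zero-filled
```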
.. py:method:: new_ind(name, size=1, axis=0, mode=None, rand_strength=None, rand_dist='normal')
Inplace add a new index - a named dimension. If ``size`` is
specified to be greater than one then the new array entries will be
filled with zeros.
:param name: Name of the new index.
:type name: str
:param size: Size of the new index.
:type size: int, optional
:param axis: Position of the new index.
:type axis: int, optional
:param mode: How to fill any new array entries. If ``'zeros'`` then fill with
zeros, if ``'repeat'`` then repeatedly tile the existing entries.
If ``'random'`` then fill with random entries drawn from
``rand_dist``, multiplied by ``rand_strength``. If ``None`` then
select from zeros or random depending on non-zero ``rand_strength``.
:type mode: {None, 'zeros', 'repeat', 'random'}, optional
:param rand_strength: If ``mode='random'``, a multiplicative scale for the random
entries, defaulting to 1.0. If ``mode is None`` then supplying a
non-zero value here triggers ``mode='random'``.
:type rand_strength: float, optional
:param rand_dist: If ``mode='random'``, the distribution to draw the random entries
from.
:type rand_dist: {'normal', 'uniform', 'exp'}, optional
.. seealso:: :obj:`Tensor.expand_ind`, :obj:`new_bond`
.. py:attribute:: new_bond
.. py:method:: new_ind_with_identity(name, left_inds, right_inds, axis=0)
Inplace add a new index, where the newly stacked array entries form
the identity from ``left_inds`` to ``right_inds``. Selecting 0 or 1 for
the new index ``name`` thus is like 'turning off' this tensor if viewed
as an operator.
:param name: Name of the new index.
:type name: str
:param left_inds: Names of the indices forming the left hand side of the operator.
:type left_inds: tuple[str]
:param right_inds: Names of the indices forming the right hand side of the operator.
The dimensions of these must match those of ``left_inds``.
:type right_inds: tuple[str]
:param axis: Position of the new index.
:type axis: int, optional
.. py:method:: new_ind_pair_with_identity(new_left_ind, new_right_ind, d, inplace=False)
Expand this tensor with two new indices of size ``d``, by taking an
(outer) tensor product with the identity operator. The two new indices
are added as axes at the start of the tensor.
:param new_left_ind: Name of the new left index.
:type new_left_ind: str
:param new_right_ind: Name of the new right index.
:type new_right_ind: str
:param d: Size of the new indices.
:type d: int
:param inplace: Whether to perform the expansion inplace.
:type inplace: bool, optional
:rtype: Tensor
.. py:attribute:: new_ind_pair_with_identity_
.. py:method:: conj(inplace=False)
Conjugate this tensor's data (does nothing to the indices).
.. py:attribute:: conj_
.. py:property:: H
Conjugate this tensor's data (does nothing to the indices).
.. py:property:: shape
The size of each dimension.
.. py:property:: ndim
The number of dimensions.
.. py:property:: size
The total number of array elements.
.. py:property:: dtype
The data type of the array elements.
.. py:property:: backend
The backend inferred from the data.
.. py:method:: iscomplex()
.. py:method:: astype(dtype, inplace=False)
Change the type of this tensor to ``dtype``.
.. py:attribute:: astype_
.. py:method:: max_dim()
Return the maximum size of any dimension, or 1 if scalar.
.. py:method:: ind_size(ind)
Return the size of dimension corresponding to ``ind``.
.. py:method:: inds_size(inds)
Return the total size of dimensions corresponding to ``inds``.
.. py:method:: shared_bond_size(other)
Get the total size of the shared index(es) with ``other``.
.. py:method:: inner_inds()
Get all indices that appear on two or more tensors.
.. py:method:: transpose(*output_inds, inplace=False)
Transpose this tensor - permuting the order of both the data *and*
the indices. This operation is mainly for ensuring a certain data
layout since for most operations the specific order of indices doesn't
matter.
Note that to compute the traditional 'transpose' of an operator within a
contraction, for example, you would just use reindexing, not this method.
:param output_inds: The desired output sequence of indices.
:type output_inds: sequence of str
:param inplace: Perform the transposition inplace.
:type inplace: bool, optional
:returns: **tt** -- The transposed tensor.
:rtype: Tensor
.. seealso:: :obj:`transpose_like`, :obj:`reindex`
.. py:attribute:: transpose_
.. py:method:: transpose_like(other, inplace=False)
Transpose this tensor to match the indices of ``other``, allowing
for one index to be different. E.g. if
``self.inds = ('a', 'b', 'c', 'x')`` and
``other.inds = ('b', 'a', 'd', 'c')`` then 'x' will be aligned with 'd'
and the output inds will be ``('b', 'a', 'x', 'c')``.
:param other: The tensor to match.
:type other: Tensor
:param inplace: Perform the transposition inplace.
:type inplace: bool, optional
:returns: **tt** -- The transposed tensor.
:rtype: Tensor
.. seealso:: :obj:`transpose`
.. py:attribute:: transpose_like_
.. py:method:: moveindex(ind, axis, inplace=False)
Move the index ``ind`` to position ``axis``. Like ``transpose``,
this permutes the order of both the data *and* the indices and is
mainly for ensuring a certain data layout since for most operations the
specific order of indices doesn't matter.
:param ind: The index to move.
:type ind: str
:param axis: The new position to move ``ind`` to. Can be negative.
:type axis: int
:param inplace: Whether to perform the move inplace or not.
:type inplace: bool, optional
:rtype: Tensor
.. py:attribute:: moveindex_
.. py:method:: item()
Return the scalar value of this tensor, if it has a single element.
.. py:method:: trace(left_inds, right_inds, preserve_tensor=False, inplace=False)
Trace index or indices ``left_inds`` with ``right_inds``, removing
them.
:param left_inds: The left indices to trace, order matching ``right_inds``.
:type left_inds: str or sequence of str
:param right_inds: The right indices to trace, order matching ``left_inds``.
:type right_inds: str or sequence of str
:param preserve_tensor: If ``True``, a tensor will be returned even if no indices remain.
:type preserve_tensor: bool, optional
:param inplace: Perform the trace inplace.
:type inplace: bool, optional
:returns: **z**
:rtype: Tensor or scalar
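At the array level, tracing one index with another corresponds to summing over matched diagonal entries, which can be sketched with a numpy einsum:

```python
import numpy as np

# for inds ('a', 'b', 'x'), tracing 'a' with 'x' (both size 2) leaves 'b'
data = np.arange(2 * 3 * 2, dtype=float).reshape(2, 3, 2)
traced = np.einsum('aba->b', data)    # sum the diagonal of axes 0 and 2

assert traced.shape == (3,)
assert np.allclose(traced, data[0, :, 0] + data[1, :, 1])
```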
.. py:method:: sum_reduce(ind, inplace=False)
Sum over index ``ind``, removing it from this tensor.
:param ind: The index to sum over.
:type ind: str
:param inplace: Whether to perform the reduction inplace.
:type inplace: bool, optional
:rtype: Tensor
.. py:attribute:: sum_reduce_
.. py:method:: vector_reduce(ind, v, inplace=False)
Contract the vector ``v`` with the index ``ind`` of this tensor,
removing it.
:param ind: The index to contract.
:type ind: str
:param v: The vector to contract with.
:type v: array_like
:param inplace: Whether to perform the reduction inplace.
:type inplace: bool, optional
:rtype: Tensor
.. py:attribute:: vector_reduce_
.. py:method:: collapse_repeated(inplace=False)
Take the diagonals of any repeated indices, such that each index
only appears once.
.. py:attribute:: collapse_repeated_
.. py:method:: contract(*others, output_inds=None, **opts)
.. py:method:: direct_product(other, sum_inds=(), inplace=False)
.. py:attribute:: direct_product_
.. py:method:: split(*args, **kwargs)
.. py:method:: compute_reduced_factor(side, left_inds, right_inds, **split_opts)
.. py:method:: distance(other, **contract_opts)
.. py:attribute:: distance_normalized
.. py:method:: gate(G, ind, preserve_inds=True, inplace=False)
Gate this tensor - contract a matrix into one of its indices without
changing its indices. Unlike ``contract``, ``G`` is a raw array and the
tensor remains with the same set of indices.
:param G: The matrix to gate the tensor index with.
:type G: 2D array_like
:param ind: Which index to apply the gate to.
:type ind: str
:rtype: Tensor
.. rubric:: Examples
Create a random tensor of 4 qubits:
>>> t = qtn.rand_tensor(
... shape=[2, 2, 2, 2],
... inds=['k0', 'k1', 'k2', 'k3'],
... )
Create another tensor with an X gate applied to qubit 2:
>>> Gt = t.gate(qu.pauli('X'), 'k2')
The contraction of these two tensors is now the expectation of that
operator:
>>> t.H @ Gt
-4.108910576149794
.. py:attribute:: gate_
.. py:method:: singular_values(left_inds, method='svd')
Return the singular values associated with splitting this tensor
according to ``left_inds``.
:param left_inds: A subset of this tensor's indices that defines 'left'.
:type left_inds: sequence of str
:param method: Whether to use the SVD or eigenvalue decomposition to get the
singular values.
:type method: {'svd', 'eig'}
:returns: The singular values.
:rtype: 1d-array
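These are the singular values of the matrix obtained by fusing ``left_inds`` into rows and the remaining indices into columns, as this plain-numpy sketch illustrates:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(2, 3, 4))          # inds ('a', 'b', 'c')
mat = data.reshape(2 * 3, 4)               # left_inds = ('a', 'b')
s = np.linalg.svd(mat, compute_uv=False)

assert s.shape == (4,)                      # min(6, 4) singular values
assert np.all(s[:-1] >= s[1:])              # returned in decreasing order
```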
.. py:method:: entropy(left_inds, method='svd')
Return the entropy associated with splitting this tensor
according to ``left_inds``.
:param left_inds: A subset of this tensor's indices that defines 'left'.
:type left_inds: sequence of str
:param method: Whether to use the SVD or eigenvalue decomposition to get the
singular values.
:type method: {'svd', 'eig'}
:rtype: float
.. py:method:: retag(retag_map, inplace=False)
Rename the tags of this tensor, optionally in-place.
:param retag_map: Mapping of pairs ``{old_tag: new_tag, ...}``.
:type retag_map: dict-like
:param inplace: If ``False`` (the default), a copy of this tensor with the changed
tags will be returned.
:type inplace: bool, optional
.. py:attribute:: retag_
.. py:method:: reindex(index_map, inplace=False)
Rename the indices of this tensor, optionally in-place.
:param index_map: Mapping of pairs ``{old_ind: new_ind, ...}``.
:type index_map: dict-like
:param inplace: If ``False`` (the default), a copy of this tensor with the changed
inds will be returned.
:type inplace: bool, optional
.. py:attribute:: reindex_
.. py:method:: fuse(fuse_map, inplace=False)
Combine groups of indices into single indices.
:param fuse_map: Mapping like: ``{new_ind: sequence of existing inds, ...}`` or an
ordered mapping like ``[(new_ind_1, old_inds_1), ...]`` in which
case the output tensor's fused inds will be ordered. In both cases
the new indices are created at the minimum axis of any of the
indices that will be fused.
:type fuse_map: dict_like or sequence of tuples.
:returns: The transposed, reshaped and re-labeled tensor.
:rtype: Tensor
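At the array level, fusing corresponds to moving the grouped axes together and reshaping them into one (a plain-numpy sketch with hypothetical index names):

```python
import numpy as np

data = np.arange(24).reshape(2, 3, 4)    # inds ('a', 'b', 'c')

# fuse {'d': ('c', 'a')}: move axes (2, 0) to the front in that order,
# then combine them into a single axis of size 4 * 2
fused = np.moveaxis(data, (2, 0), (0, 1)).reshape(4 * 2, 3)
assert fused.shape == (8, 3)             # new ind 'd', remaining ind 'b'
```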
.. py:attribute:: fuse_
.. py:method:: unfuse(unfuse_map, shape_map, inplace=False)
Reshape single indices into groups of multiple indices.
:param unfuse_map: Mapping like: ``{existing_ind: sequence of new inds, ...}`` or an
ordered mapping like ``[(old_ind_1, new_inds_1), ...]`` in which
case the output tensor's new inds will be ordered. In both cases
the new indices are created at the old index's position of the
tensor's shape.
:type unfuse_map: dict_like or sequence of tuples.
:param shape_map: Mapping like: ``{old_ind: new_ind_sizes, ...}`` or an
ordered mapping like ``[(old_ind_1, new_ind_sizes_1), ...]``.
:type shape_map: dict_like or sequence of tuples
:returns: The transposed, reshaped and re-labeled tensor
:rtype: Tensor
.. py:attribute:: unfuse_
.. py:method:: to_dense(*inds_seq, to_qarray=False)
Convert this Tensor into a dense array, with a single dimension
for each group of indices in ``inds_seq``. E.g. to convert several sites
into a density matrix: ``T.to_dense(('k0', 'k1'), ('b0', 'b1'))``.
.. py:attribute:: to_qarray
.. py:method:: squeeze(include=None, exclude=None, inplace=False)
Drop any singlet dimensions from this tensor.
:param include: Only squeeze dimensions with indices in this list.
:type include: sequence of str, optional
:param exclude: Squeeze all dimensions except those with indices in this list.
:type exclude: sequence of str, optional
:param inplace: Whether to perform the squeeze inplace or not.
:type inplace: bool, optional
:rtype: Tensor
.. py:attribute:: squeeze_
.. py:method:: largest_element()
Return the largest element, in terms of absolute magnitude, of this
tensor.
.. py:method:: idxmin(f=None)
Get the index configuration of the minimum element of this tensor,
optionally applying ``f`` first.
:param f: If a callable, apply this function to the tensor data before
finding the minimum element. If a string, apply
``autoray.do(f, data)``.
:type f: callable or str, optional
:returns: Mapping of index names to their values at the minimum element.
:rtype: dict[str, int]
.. py:method:: idxmax(f=None)
Get the index configuration of the maximum element of this tensor,
optionally applying ``f`` first.
:param f: If a callable, apply this function to the tensor data before
finding the maximum element. If a string, apply
``autoray.do(f, data)``.
:type f: callable or str, optional
:returns: Mapping of index names to their values at the maximum element.
:rtype: dict[str, int]
.. py:method:: norm()
Frobenius norm of this tensor:
.. math::
\|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)}
where the trace is taken over all indices. Equivalent to the square
root of the sum of squared singular values across any partition.
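This identity is easy to check numerically with plain numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(2, 3, 4))

# Frobenius norm: square root of the sum of squared entries
norm = np.sqrt(np.sum(np.abs(data) ** 2))

# equals sqrt of the sum of squared singular values across a bipartition
s = np.linalg.svd(data.reshape(6, 4), compute_uv=False)
assert np.allclose(norm, np.sqrt(np.sum(s ** 2)))
```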
.. py:method:: normalize(inplace=False)
.. py:attribute:: normalize_
.. py:method:: symmetrize(ind1, ind2, inplace=False)
Hermitian symmetrize this tensor for indices ``ind1`` and ``ind2``.
I.e. ``T = (T + T.conj().T) / 2``, where the transpose is taken only
over the specified indices.
.. py:attribute:: symmetrize_
.. py:method:: isometrize(left_inds=None, method='qr', inplace=False)
Make this tensor unitary (or isometric) with respect to
``left_inds``. The underlying method is set by ``method``.
:param left_inds: The indices to group together and treat as the left hand side of a
matrix.
:type left_inds: sequence of str
:param method: The method used to generate the isometry. The options are:
- "qr": use the Q factor of the QR decomposition of ``x`` with the
constraint that the diagonal of ``R`` is positive.
- "svd": uses ``U @ VH`` of the SVD decomposition of ``x``. This is
useful for finding the 'closest' isometric matrix to ``x``, such
as when it has been expanded with noise etc. But is less stable
for differentiation / optimization.
- "exp": use the matrix exponential of ``x - dag(x)``, first
completing ``x`` with zeros if it is rectangular. This is a good
parametrization for optimization, but more expensive for
non-square ``x``.
- "cayley": use the Cayley transform of ``x - dag(x)``, first
completing ``x`` with zeros if it is rectangular. This is a good
parametrization for optimization (one of the few compatible with
`HIPS/autograd` e.g.), but more expensive for non-square ``x``.
- "householder": use the Householder reflection method directly.
This requires that the backend implements
"linalg.householder_product".
- "torch_householder": use the Householder reflection method
directly, using the ``torch_householder`` package. This requires
that the package is installed and that the backend is
``"torch"``. This is generally the best parametrizing method for
"torch" if available.
- "mgs": use a python implementation of the modified Gram-Schmidt
method directly. This is slow if not compiled but a useful
reference.
Not all backends support all methods or differentiating through all
methods.
:type method: str, optional
:param inplace: Whether to perform the unitization inplace.
:type inplace: bool, optional
:rtype: Tensor
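The default ``"qr"`` method can be sketched in plain numpy (an illustration of the idea, not the quimb implementation): take the thin QR of the fused matrix and absorb the signs of ``diag(R)`` into ``Q`` to fix the gauge:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(6, 4))               # fused left size 6, right size 4

q, r = np.linalg.qr(x)
signs = np.sign(np.diag(r))
signs[signs == 0] = 1.0
q = q * signs                             # absorb signs so diag(R) >= 0

assert np.allclose(q.T @ q, np.eye(4))    # columns are orthonormal
```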
.. py:attribute:: isometrize_
.. py:attribute:: unitize
.. py:attribute:: unitize_
.. py:method:: randomize(dtype=None, inplace=False, **randn_opts)
Randomize the entries of this tensor.
:param dtype: The data type of the random entries. If left as the default
``None``, then the data type of the current array will be used.
:type dtype: {None, str}, optional
:param inplace: Whether to perform the randomization inplace, by default ``False``.
:type inplace: bool, optional
:param randn_opts: Supplied to :func:`~quimb.gen.rand.randn`.
:rtype: Tensor
.. py:attribute:: randomize_
.. py:method:: flip(ind, inplace=False)
Reverse the axis on this tensor corresponding to ``ind``. Like
performing e.g. ``X[:, :, ::-1, :]``.
.. py:attribute:: flip_
.. py:method:: multiply_index_diagonal(ind, x, inplace=False)
Multiply this tensor by 1D array ``x`` as if it were a diagonal
tensor being contracted into index ``ind``.
.. py:attribute:: multiply_index_diagonal_
.. py:method:: almost_equals(other, **kwargs)
Check if this tensor is almost the same as another.
.. py:method:: drop_tags(tags=None)
Drop certain tags, defaulting to all, from this tensor.
.. py:method:: bonds(other)
Return a tuple of the shared indices between this tensor
and ``other``.
.. py:method:: filter_bonds(other)
Sort this tensor's indices into a list of those that it shares and
doesn't share with another tensor.
:param other: The other tensor.
:type other: Tensor
:returns: **shared, unshared** -- The shared and unshared indices.
:rtype: (tuple[str], tuple[str])
.. py:method:: __imul__(other)
.. py:method:: __itruediv__(other)
.. py:method:: __and__(other)
Combine with another ``Tensor`` or ``TensorNetwork`` into a new
``TensorNetwork``.
.. py:method:: __or__(other)
Combine virtually (no copies made) with another ``Tensor`` or
``TensorNetwork`` into a new ``TensorNetwork``.
.. py:method:: __matmul__(other)
Explicitly contract with another tensor. Avoids some slight overhead
of calling the full :func:`~quimb.tensor.tensor_core.tensor_contract`.
.. py:method:: negate(inplace=False)
Negate this tensor.
.. py:attribute:: negate_
.. py:method:: __neg__()
Negate this tensor.
.. py:method:: as_network(virtual=True)
Return a ``TensorNetwork`` with only this tensor.
.. py:method:: draw(*args, **kwargs)
Plot a graph of this tensor and its indices.
.. py:attribute:: graph
.. py:attribute:: visualize
.. py:method:: __getstate__()
Helper for pickle.
.. py:method:: __setstate__(state)
.. py:method:: _repr_info()
General info to show in various reprs. Subclasses can add more
relevant info to this dict.
.. py:method:: _repr_info_extra()
General detailed info to show in various reprs. Subclasses can add
more relevant info to this dict.
.. py:method:: _repr_info_str(normal=True, extra=False)
Render the general info as a string.
.. py:method:: _repr_html_()
Render this Tensor as HTML, for Jupyter notebooks.
.. py:method:: __str__()
Return str(self).
.. py:method:: __repr__()
Return repr(self).
.. py:class:: TensorNetwork(ts=(), *, virtual=False, check_collisions=True)
Bases: :py:obj:`object`
A collection of (as yet uncontracted) Tensors.
:param ts: The objects to combine. The new network will copy these (but not the
underlying data) by default. For a *view* set ``virtual=True``.
:type ts: sequence of Tensor or TensorNetwork
:param virtual: Whether the TensorNetwork should be a *view* onto the tensors it is
given, or a copy of them. E.g. if a virtual TN is constructed, any
changes to a Tensor's indices or tags will propagate to all TNs viewing
that Tensor.
:type virtual: bool, optional
:param check_collisions: If True (the default), then when this ``TensorNetwork`` is combined
with another whose double (inner) indices clash with its own, those
indices' names will be mangled. Can be explicitly turned off when it
is known that no collisions will take place -- i.e. when not adding
any new tensors.
:type check_collisions: bool, optional
.. attribute:: tensor_map
Mapping of unique ids to tensors, like ``{tensor_id: tensor, ...}``.
I.e. this is where the tensors are 'stored' by the network.
:type: dict
.. attribute:: tag_map
Mapping of tags to a set of tensor ids which have those tags. I.e.
``{tag: {tensor_id_1, tensor_id_2, ...}}``. Thus to select those
tensors could do: ``map(tensor_map.__getitem__, tag_map[tag])``.
:type: dict
.. attribute:: ind_map
Like ``tag_map`` but for indices. So ``ind_map[ind]`` returns the
tensor ids of those tensors with ``ind``.
:type: dict
.. attribute:: exponent
A scalar prefactor for the tensor network, stored in base 10 like
``10**exponent``. This is mostly for conditioning purposes and will be
``0.0`` unless you use ``equalize_norms(value)`` or
``tn.strip_exponent(tid_or_tensor)``.
:type: float
.. py:attribute:: _EXTRA_PROPS
:value: ()
.. py:attribute:: _CONTRACT_STRUCTURED
:value: False
.. py:method:: combine(other, *, virtual=False, check_collisions=True)
Combine this tensor network with another, returning a new tensor
network. This can be overridden by subclasses to check for a compatible
structured type.
:param other: The other tensor network to combine with.
:type other: TensorNetwork
:param virtual: Whether the new tensor network should copy all the incoming tensors
(``False``, the default), or view them as virtual (``True``).
:type virtual: bool, optional
:param check_collisions: Whether to check for index collisions between the two tensor
networks before combining them. If ``True`` (the default), any
inner indices that clash will be mangled.
:type check_collisions: bool, optional
:rtype: TensorNetwork
.. py:method:: __and__(other)
Combine this tensor network with more tensors, without contracting.
Copies the tensors.
.. py:method:: __or__(other)
Combine this tensor network with more tensors, without contracting.
Views the constituent tensors.
.. py:method:: _update_properties(cls, like=None, current=None, **kwargs)
.. py:method:: new(like=None, **kwargs)
:classmethod:
Create a new tensor network, without any tensors, of type ``cls``,
with all the requisite properties specified by ``kwargs`` or inherited
from ``like``.
.. py:method:: from_TN(tn, like=None, inplace=False, **kwargs)
:classmethod:
Construct a specific tensor network subclass (i.e. one with some
promise about structure/geometry and tags/inds such as an MPS) from
a generic tensor network which should have that structure already.
:param cls: The TensorNetwork subclass to convert ``tn`` to.
:type cls: class
:param tn: The TensorNetwork to convert.
:type tn: TensorNetwork
:param like: If specified, try and retrieve the necessary attribute values from
this tensor network.
:type like: TensorNetwork, optional
:param inplace: Whether to perform the conversion inplace or not.
:type inplace: bool, optional
:param kwargs: Extra properties of the TN subclass that should be specified.
.. py:method:: view_as(cls, inplace=False, **kwargs)
View this tensor network as subclass ``cls``.
.. py:attribute:: view_as_
.. py:method:: view_like(like, inplace=False, **kwargs)
View this tensor network as the same subclass ``cls`` as ``like``
inheriting its extra properties as well.
.. py:attribute:: view_like_
.. py:method:: copy(virtual=False, deep=False)
Copy this ``TensorNetwork``. If ``deep=False``, (the default), then
everything but the actual numeric data will be copied.
.. py:attribute:: __copy__
.. py:method:: get_params()
Get a pytree of the 'parameters', i.e. all underlying data arrays.
.. py:method:: set_params(params)
Take a pytree of the 'parameters', i.e. all underlying data arrays,
as returned by ``get_params`` and set them.
.. py:method:: _link_tags(tags, tid)
Link ``tid`` to each of ``tags``.
.. py:method:: _unlink_tags(tags, tid)
Unlink ``tid`` from each of ``tags``.
.. py:method:: _link_inds(inds, tid)
Link ``tid`` to each of ``inds``.
.. py:method:: _unlink_inds(inds, tid)
Unlink ``tid`` from each of ``inds``.
.. py:method:: _reset_inner_outer(inds)
.. py:method:: _next_tid()
.. py:method:: add_tensor(tensor, tid=None, virtual=False)
Add a single tensor to this network - mangle its tid if necessary.
.. py:method:: add_tensor_network(tn, virtual=False, check_collisions=True)
.. py:method:: add(t, virtual=False, check_collisions=True)
Add Tensor, TensorNetwork or sequence thereof to self.
.. py:method:: make_tids_consecutive(tid0=0)
Reset the `tids` - node identifiers - to be consecutive integers.
.. py:method:: __iand__(tensor)
Inplace, but non-virtual, addition of a Tensor or TensorNetwork to
this network. It should not have any conflicting indices.
.. py:method:: __ior__(tensor)
Inplace, virtual, addition of a Tensor or TensorNetwork to this
network. It should not have any conflicting indices.
.. py:method:: _modify_tensor_tags(old, new, tid)
.. py:method:: _modify_tensor_inds(old, new, tid)
.. py:property:: num_tensors
The total number of tensors in the tensor network.
.. py:property:: num_indices
The total number of indices in the tensor network.
.. py:method:: pop_tensor(tid)
Remove tensor with ``tid`` from this network, and return it.
.. py:method:: remove_all_tensors()
Remove all tensors from this network.
.. py:attribute:: _pop_tensor
.. py:method:: delete(tags, which='all')
Delete any tensors which match all or any of ``tags``.
:param tags: The tags to match.
:type tags: str or sequence of str
:param which: Whether to match all or any of the tags.
:type which: {'all', 'any'}, optional
.. py:method:: check()
Check some basic diagnostics of the tensor network.
.. py:method:: add_tag(tag, where=None, which='all')
Add tag to every tensor in this network, or if ``where`` is
specified, the tensors matching those tags -- i.e. adds the tag to
all tensors in ``self.select_tensors(where, which=which)``.
.. py:method:: drop_tags(tags=None)
Remove a tag or tags from this tensor network, defaulting to all.
This is an inplace operation.
:param tags: The tag or tags to drop. If ``None``, drop all tags.
:type tags: str or sequence of str or None, optional
.. py:method:: retag(tag_map, inplace=False)
Rename tags for all tensors in this network, optionally in-place.
:param tag_map: Mapping of pairs ``{old_tag: new_tag, ...}``.
:type tag_map: dict-like
:param inplace: Perform operation inplace or return copy (default).
:type inplace: bool, optional
.. py:attribute:: retag_
.. py:method:: reindex(index_map, inplace=False)
Rename indices for all tensors in this network, optionally in-place.
:param index_map: Mapping of pairs ``{old_ind: new_ind, ...}``.
:type index_map: dict-like
.. py:attribute:: reindex_
.. py:method:: mangle_inner_(append=None, which=None)
Generate new index names for internal bonds, meaning that when this
tensor network is combined with another, there should be no collisions.
:param append: Whether and what to append to the indices to perform the mangling.
If ``None`` a whole new random UUID will be generated.
:type append: None or str, optional
:param which: Which indices to rename, if ``None`` (the default), all inner
indices.
:type which: sequence of str, optional
.. py:method:: conj(mangle_inner=False, inplace=False)
Conjugate all the tensors in this network (leaves all indices).
.. py:attribute:: conj_
.. py:property:: H
Conjugate all the tensors in this network (leaves all indices).
.. py:method:: item()
Return the scalar value of this tensor network, if it is a scalar.
.. py:method:: largest_element()
Return the 'largest element', in terms of absolute magnitude, of
this tensor network. This is defined as the product of the largest
elements of each tensor in the network, which would be the largest
single term occurring if the TN was summed explicitly.
.. py:method:: norm(**contract_opts)
Frobenius norm of this tensor network. Computed by exactly
contracting the TN with its conjugate:
.. math::
\|T\|_F = \sqrt{\mathrm{Tr} \left(T^{\dagger} T\right)}
where the trace is taken over all indices. Equivalent to the square
root of the sum of squared singular values across any partition.
.. py:method:: make_norm(mangle_append='*', layer_tags=('KET', 'BRA'), return_all=False)
Make the norm tensor network of this tensor network ``tn.H & tn``.
:param mangle_append: How to mangle the inner indices of the bra.
:type mangle_append: {str, False or None}, optional
:param layer_tags: The tags to identify the top and bottom.
:type layer_tags: (str, str), optional
:param return_all: Return the norm, the ket and the bra.
:type return_all: bool, optional
.. py:method:: multiply(x, inplace=False, spread_over=8)
Scalar multiplication of this tensor network with ``x``.
:param x: The number to multiply this tensor network by.
:type x: scalar
:param inplace: Whether to perform the multiplication inplace.
:type inplace: bool, optional
:param spread_over: How many tensors to try and spread the multiplication over, in
order that the effect of multiplying by a very large or small
scalar is not concentrated.
:type spread_over: int, optional
.. py:attribute:: multiply_
.. py:method:: multiply_each(x, inplace=False)
Scalar multiplication of each tensor in this
tensor network with ``x``. If trying to spread a
multiplicative factor ``fac`` uniformly over all tensors in the
network and the number of tensors is large, then calling
``multiply(fac)`` can be inaccurate due to precision loss.
If one has a routine that can precisely compute the ``x``
to be applied to each tensor, then this function avoids
the potential inaccuracies in ``multiply()``.
:param x: The number that multiplies each tensor in the network
:type x: scalar
:param inplace: Whether to perform the multiplication inplace.
:type inplace: bool, optional
.. py:attribute:: multiply_each_
.. py:method:: negate(inplace=False)
Negate this tensor network.
.. py:attribute:: negate_
.. py:method:: __mul__(other)
Scalar multiplication.
.. py:method:: __rmul__(other)
Right side scalar multiplication.
.. py:method:: __imul__(other)
Inplace scalar multiplication.
.. py:method:: __truediv__(other)
Scalar division.
.. py:method:: __itruediv__(other)
Inplace scalar division.
.. py:method:: __neg__()
Negate this tensor network.
.. py:method:: __iter__()
.. py:property:: tensors
Get the tuple of tensors in this tensor network.
.. py:property:: arrays
Get the tuple of raw arrays containing all the tensor network data.
.. py:method:: get_symbol_map()
Get the mapping of the current indices to ``einsum`` style single
unicode characters. The symbols are generated in the order they appear
on the tensors.
.. seealso:: :obj:`get_equation`, :obj:`get_inputs_output_size_dict`
.. py:method:: get_equation(output_inds=None)
Get the 'equation' describing this tensor network, in ``einsum``
style with a single unicode letter per index. The symbols are generated
in the order they appear on the tensors.
:param output_inds: Manually specify which are the output indices.
:type output_inds: None or sequence of str, optional
:returns: **eq**
:rtype: str
.. rubric:: Examples
>>> tn = qtn.TN_rand_reg(10, 3, 2)
>>> tn.get_equation()
'abc,dec,fgb,hia,jke,lfk,mnj,ing,omd,ohl->'
.. seealso:: :obj:`get_symbol_map`, :obj:`get_inputs_output_size_dict`
.. py:method:: get_inputs_output_size_dict(output_inds=None)
Get a tuple of ``inputs``, ``output`` and ``size_dict`` suitable for
e.g. passing to path optimizers. The symbols are generated in the order
they appear on the tensors.
:param output_inds: Manually specify which are the output indices.
:type output_inds: None or sequence of str, optional
:returns: * **inputs** (*tuple[str]*)
* **output** (*str*)
* **size_dict** (*dict[str, ix]*)
.. seealso:: :obj:`get_symbol_map`, :obj:`get_equation`
.. py:method:: geometry_hash(output_inds=None, strict_index_order=False)
A hash of this tensor network's shapes & geometry. A useful check
for determinism. Moreover, if this matches for two tensor networks then
they can be contracted using the same tree for the same cost. Order of
tensors matters for this - two isomorphic tensor networks with shuffled
tensor order will not have the same hash value. Permuting the indices
of individual tensors or the output does not matter unless you set
``strict_index_order=True``.
:param output_inds: Manually specify which indices are output indices and their order,
otherwise assumed to be all indices that appear once.
:type output_inds: None or sequence of str, optional
:param strict_index_order: If ``False``, then the permutation of the indices of each tensor
and the output does not matter.
:type strict_index_order: bool, optional
:rtype: str
.. rubric:: Examples
If we transpose some indices, then only the strict hash changes:
>>> tn = qtn.TN_rand_reg(100, 3, 2, seed=0)
>>> tn.geometry_hash()
'18c702b2d026dccb1a69d640b79d22f3e706b6ad'
>>> tn.geometry_hash(strict_index_order=True)
'c109fdb43c5c788c0aef7b8df7bb83853cf67ca1'
>>> t = tn['I0']
>>> t.transpose_(t.inds[2], t.inds[1], t.inds[0])
>>> tn.geometry_hash()
'18c702b2d026dccb1a69d640b79d22f3e706b6ad'
>>> tn.geometry_hash(strict_index_order=True)
'52c32c1d4f349373f02d512f536b1651dfe25893'
.. py:method:: tensors_sorted()
Return a tuple of tensors sorted by their respective tags, such that
the tensors of two networks with the same tag structure can be
iterated over pairwise.
.. py:method:: apply_to_arrays(fn)
Modify every tensor's array inplace by applying ``fn`` to it. This
is meant for changing how the raw arrays are backed (e.g. converting
between dtypes or libraries) but not their 'numerical meaning'.
.. py:method:: _get_tids_from(xmap, xs, which)
.. py:method:: _get_tids_from_tags(tags, which='all')
Return the set of tensor ids that match ``tags``.
:param tags: Tag specifier(s).
:type tags: seq or str, str, None, ..., int, slice
:param which: How to select based on the tags, if:
- 'all': get ids of tensors matching all tags
- 'any': get ids of tensors matching any tags
- '!all': get ids of tensors *not* matching all tags
- '!any': get ids of tensors *not* matching any tags
:type which: {'all', 'any', '!all', '!any'}
:rtype: set[str]
.. py:method:: _get_tids_from_inds(inds, which='all')
Like ``_get_tids_from_tags`` but specify inds instead.
.. py:method:: _tids_get(*tids)
Convenience function that generates unique tensors from tids.
.. py:method:: _inds_get(*inds)
Convenience function that generates unique tensors from inds.
.. py:method:: _tags_get(*tags)
Convenience function that generates unique tensors from tags.
.. py:method:: select_tensors(tags, which='all')
Return the sequence of tensors that match ``tags``. If
``which='all'``, each tensor must contain every tag. If
``which='any'``, each tensor can contain any of the tags.
:param tags: The tag or tag sequence.
:type tags: str or sequence of str
:param which: Whether to require matching all or any of the tags.
:type which: {'all', 'any'}
:returns: **tagged_tensors** -- The tagged tensors.
:rtype: tuple of Tensor
.. seealso:: :obj:`select`, :obj:`select_neighbors`, :obj:`partition`, :obj:`partition_tensors`
.. py:method:: _select_tids(tids, virtual=True)
Get a copy or a virtual copy (doesn't copy the tensors) of this
``TensorNetwork``, only with the tensors corresponding to ``tids``.
.. py:method:: _select_without_tids(tids, virtual=True)
Get a copy or a virtual copy (doesn't copy the tensors) of this
``TensorNetwork``, without the tensors corresponding to ``tids``.
.. py:method:: select(tags, which='all', virtual=True)
Get a TensorNetwork comprising tensors that match all or any of
``tags``, inherit the network properties/structure from ``self``.
This returns a view of the tensors not a copy.
:param tags: The tag or tag sequence.
:type tags: str or sequence of str
:param which: Whether to require matching all or any of the tags.
:type which: {'all', 'any'}
:param virtual: Whether the returned tensor network views the same tensors (the
default) or takes copies (``virtual=False``) from ``self``.
:type virtual: bool, optional
:returns: **tagged_tn** -- A tensor network containing the tagged tensors.
:rtype: TensorNetwork
.. seealso:: :obj:`select_tensors`, :obj:`select_neighbors`, :obj:`partition`, :obj:`partition_tensors`
.. py:attribute:: select_any
.. py:attribute:: select_all
.. py:method:: select_neighbors(tags, which='any')
Select any neighbouring tensors to those specified by ``tags``.
:param tags: Tags specifying tensors.
:type tags: sequence of str, int
:param which: How to select tensors based on ``tags``.
:type which: {'any', 'all'}, optional
:returns: The neighbouring tensors.
:rtype: tuple[Tensor]
.. seealso:: :obj:`select_tensors`, :obj:`partition_tensors`
.. py:method:: _select_local_tids(tids, max_distance=1, fillin=False, reduce_outer=None, inwards=False, virtual=True, include=None, exclude=None)
.. py:method:: select_local(tags, which='all', max_distance=1, fillin=False, reduce_outer=None, virtual=True, include=None, exclude=None)
Select a local region of tensors, based on graph distance
``max_distance`` to any tagged tensors.
:param tags: The tag or tag sequence defining the initial region.
:type tags: str or sequence of str
:param which: Whether to require matching all or any of the tags.
:type which: {'all', 'any', '!all', '!any'}, optional
:param max_distance: The maximum distance to the initial tagged region.
:type max_distance: int, optional
:param fillin: Once the local region has been selected based on graph distance,
whether and how many times to 'fill-in' corners by adding tensors
connected multiple times. For example, if ``R`` is an initially
tagged tensor and ``x`` are locally selected tensors::
fillin=0 fillin=1 fillin=2
| | | | | | | | | | | | | | |
-o-o-x-o-o- -o-x-x-x-o- -x-x-x-x-x-
| | | | | | | | | | | | | | |
-o-x-x-x-o- -x-x-x-x-x- -x-x-x-x-x-
| | | | | | | | | | | | | | |
-x-x-R-x-x- -x-x-R-x-x- -x-x-R-x-x-
:type fillin: bool or int, optional
:param reduce_outer: Whether and how to reduce any outer indices of the selected region.
:type reduce_outer: {'sum', 'svd', 'svd-sum', 'reflect'}, optional
:param virtual: Whether the returned tensor network should be a view of the tensors
or a copy (``virtual=False``).
:type virtual: bool, optional
:param include: Only include tensors with these ``tids``.
:type include: sequence of int, optional
:param exclude: Only include tensors without these ``tids``.
:type exclude: sequence of int, optional
:rtype: TensorNetwork
.. py:method:: __getitem__(tags)
Get the tensor(s) associated with ``tags``.
:param tags: The tags used to select the tensor(s).
:type tags: str or sequence of str
:rtype: Tensor or sequence of Tensors
.. py:method:: __setitem__(tags, tensor)
Set the single tensor uniquely associated with ``tags``.
.. py:method:: __delitem__(tags)
Delete any tensors which have all of ``tags``.
.. py:method:: partition_tensors(tags, inplace=False, which='any')
Split this TN into a list of tensors containing any or all of
``tags`` and a ``TensorNetwork`` of the rest.
:param tags: The list of tags to filter the tensors by. Use ``...``
(``Ellipsis``) to filter all.
:type tags: sequence of str
:param inplace: If true, remove tagged tensors from self, else create a new network
with the tensors removed.
:type inplace: bool, optional
:param which: Whether to require matching all or any of the tags.
:type which: {'all', 'any'}
:returns: **(u_tn, t_ts)** -- The untagged tensor network, and the sequence of tagged Tensors.
:rtype: (TensorNetwork, tuple of Tensors)
.. seealso:: :obj:`partition`, :obj:`select`, :obj:`select_tensors`
.. py:method:: partition(tags, which='any', inplace=False)
Split this TN into two, based on which tensors have any or all of
``tags``. Unlike ``partition_tensors``, both results are TNs which
inherit the structure of the initial TN.
:param tags: The tags to split the network with.
:type tags: sequence of str
:param which: Whether to split based on matching any or all of the tags.
:type which: {'any', 'all'}
:param inplace: If True, actually remove the tagged tensors from self.
:type inplace: bool
:returns: **untagged_tn, tagged_tn** -- The untagged and tagged tensor networks.
:rtype: (TensorNetwork, TensorNetwork)
.. seealso:: :obj:`partition_tensors`, :obj:`select`, :obj:`select_tensors`
.. py:method:: _split_tensor_tid(tid, left_inds, **split_opts)
.. py:method:: split_tensor(tags, left_inds, **split_opts)
Split the single tensor uniquely identified by ``tags``, adding the
resulting tensors from the decomposition back into the network. Inplace
operation.
.. py:method:: replace_with_identity(where, which='any', inplace=False)
Replace all tensors marked by ``where`` with an
identity. E.g. if ``X`` denote ``where`` tensors::
---1 X--X--2--- ---1---2---
| | | | ==> |
X--X--X | |
:param where: Tags specifying the tensors to replace.
:type where: tag or seq of tags
:param which: Whether to replace tensors matching any or all the tags ``where``.
:type which: {'any', 'all'}
:param inplace: Perform operation in place.
:type inplace: bool
:returns: The TN, with section replaced with identity.
:rtype: TensorNetwork
.. seealso:: :obj:`replace_with_svd`
.. py:method:: replace_with_svd(where, left_inds, eps, *, which='any', right_inds=None, method='isvd', max_bond=None, absorb='both', cutoff_mode='rel', renorm=None, ltags=None, rtags=None, keep_tags=True, start=None, stop=None, inplace=False)
Replace all tensors marked by ``where`` with an iteratively
constructed SVD. E.g. if ``X`` denote ``where`` tensors::
:__ ___:
---X X--X X--- : \ / :
| | | | ==> : U~s~VH---:
---X--X--X--X--- :__/ \ :
| +--- : \__:
X left_inds :
right_inds
:param where: Tags specifying the tensors to replace.
:type where: tag or seq of tags
:param left_inds: The indices defining the left hand side of the SVD.
:type left_inds: ind or sequence of inds
:param eps: The tolerance to perform the SVD with, affects the number of
singular values kept. See
:func:`quimb.linalg.rand_linalg.estimate_rank`.
:type eps: float
:param which: Whether to replace tensors matching any or all the tags ``where``,
prefix with '!' to invert the selection.
:type which: {'any', 'all', '!any', '!all'}, optional
:param right_inds: The indices defining the right hand side of the SVD, these can be
automatically worked out, but for hermitian decompositions the
order is important and thus can be given here explicitly.
:type right_inds: ind or sequence of inds, optional
:param method: How to perform the decomposition, if not an iterative method
the subnetwork dense tensor will be formed first, see
:func:`~quimb.tensor.tensor_core.tensor_split` for options.
:type method: str, optional
:param max_bond: The maximum bond to keep, defaults to no maximum (-1).
:type max_bond: int, optional
:param ltags: Tags to add to the left tensor.
:type ltags: sequence of str, optional
:param rtags: Tags to add to the right tensor.
:type rtags: sequence of str, optional
:param keep_tags: Whether to propagate tags found in the subnetwork to both new
tensors or drop them, defaults to ``True``.
:type keep_tags: bool, optional
:param start: If given, assume can use ``TNLinearOperator1D``.
:type start: int, optional
:param stop: If given, assume can use ``TNLinearOperator1D``.
:type stop: int, optional
:param inplace: Perform operation in place.
:type inplace: bool, optional
:rtype: TensorNetwork
.. seealso:: :obj:`replace_with_identity`
.. py:attribute:: replace_with_svd_
.. py:method:: replace_section_with_svd(start, stop, eps, **replace_with_svd_opts)
Take a 1D tensor network, and replace a section with a SVD.
See :meth:`~quimb.tensor.tensor_core.TensorNetwork.replace_with_svd`.
:param start: Section start index.
:type start: int
:param stop: Section stop index, not included itself.
:type stop: int
:param eps: Precision of SVD.
:type eps: float
:param replace_with_svd_opts: Supplied to
:meth:`~quimb.tensor.tensor_core.TensorNetwork.replace_with_svd`.
:rtype: TensorNetwork
.. py:method:: convert_to_zero()
Inplace conversion of this network to an all zero tensor network.
.. py:method:: _contract_between_tids(tid1, tid2, equalize_norms=False, gauges=None, output_inds=None, **contract_opts)
.. py:method:: contract_between(tags1, tags2, **contract_opts)
Contract the two tensors specified by ``tags1`` and ``tags2``
respectively. This is an inplace operation. No-op if the tensors
specified by ``tags1`` and ``tags2`` are the same tensor.
:param tags1: Tags uniquely identifying the first tensor.
:param tags2: Tags uniquely identifying the second tensor.
:type tags2: str or sequence of str
:param contract_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_contract`.
.. py:method:: contract_ind(ind, output_inds=None, **contract_opts)
Contract tensors connected by ``ind``.
.. py:attribute:: gate_inds
.. py:attribute:: gate_inds_
.. py:method:: gate_inds_with_tn(inds, gate, gate_inds_inner, gate_inds_outer, inplace=False)
Gate some indices of this tensor network with another tensor
network. That is, rewire and then combine them such that the new tensor
network has the same outer indices as before, but now includes gate::
gate_inds_outer
:
: gate_inds_inner
: :
: : inds inds
: ┌────┐ : : ┌────┬─── : ┌───────┬───
───┤ ├── a──┤ │ a──┤ │
│ │ │ ├─── │ ├───
───┤gate├── b──┤self│ --> b──┤ new │
│ │ │ ├─── │ ├───
───┤ ├── c──┤ │ c──┤ │
└────┘ └────┴─── └───────┴───
Where there can be arbitrary structure of tensors within both ``self``
and ``gate``.
The case where some of the target ``inds`` are not present is handled
as follows (here 'c' is missing so 'x' and 'y' are kept)::
gate_inds_outer
:
: gate_inds_inner
: :
: : inds inds
: ┌────┐ : : ┌────┬─── : ┌───────┬───
───┤ ├── a──┤ │ a──┤ │
│ │ │ ├─── │ ├───
───┤gate├── b──┤self│ --> b──┤ new │
│ │ │ ├─── │ ├───
x───┤ ├──y └────┘ x──┤ ┌──┘
└────┘ └────┴───y
Which enables convenient construction of various tensor networks, for
example propagators, from scratch.
:param inds: The current indices to gate. If an index is not present on the
target tensor network, it is ignored and instead the resulting
tensor network will have both the corresponding inner and outer
index of the gate tensor network.
:type inds: str or sequence of str
:param gate: The tensor network to gate with.
:type gate: Tensor or TensorNetwork
:param gate_inds_inner: The indices of ``gate`` to join to the old ``inds``, must be the
same length as ``inds``.
:type gate_inds_inner: sequence of str
:param gate_inds_outer: The indices of ``gate`` to make the new outer ``inds``, must be the
same length as ``inds``.
:type gate_inds_outer: sequence of str
:returns: **tn_gated**
:rtype: TensorNetwork
.. seealso:: :obj:`TensorNetwork.gate_inds`
.. py:attribute:: gate_inds_with_tn_
.. py:method:: _compute_tree_gauges(tree, outputs)
Given a ``tree`` of connected tensors, absorb the gauges from
outside inwards, finally outputting the gauges associated with the
``outputs``.
:param tree: The tree of connected tensors, see :meth:`get_tree_span`.
:type tree: sequence of (tid_outer, tid_inner, distance)
:param outputs: Each output is specified by a tensor id and an index, such that
having absorbed all gauges in the tree, the effective reduced
factor of the tensor with respect to the index is returned.
:type outputs: sequence of (tid, ind)
:returns: **Gouts** -- The effective reduced factors of the tensor index pairs specified
in ``outputs``, each a matrix.
:rtype: sequence of array
.. py:method:: _compress_between_virtual_tree_tids(tidl, tidr, max_bond, cutoff, r, absorb='both', include=None, exclude=None, span_opts=None, **compress_opts)
.. py:method:: _compute_bond_env(tid1, tid2, select_local_distance=None, select_local_opts=None, max_bond=None, cutoff=None, method='contract_around', contract_around_opts=None, contract_compressed_opts=None, optimize='auto-hq', include=None, exclude=None)
Compute the local tensor environment of the bond(s), if cut,
between two tensors.
.. py:method:: _compress_between_full_bond_tids(tid1, tid2, max_bond, cutoff=0.0, absorb='both', renorm=False, method='eigh', select_local_distance=None, select_local_opts=None, env_max_bond='max_bond', env_cutoff='cutoff', env_method='contract_around', contract_around_opts=None, contract_compressed_opts=None, env_optimize='auto-hq', include=None, exclude=None)
.. py:method:: _compress_between_local_fit(tid1, tid2, max_bond, cutoff=0.0, absorb='both', method='als', select_local_distance=1, select_local_opts=None, include=None, exclude=None, **fit_opts)
.. py:method:: _compress_between_tids(tid1, tid2, max_bond=None, cutoff=1e-10, absorb='both', canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, mode='basic', equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback=None, **compress_opts)
.. py:method:: compress_between(tags1, tags2, max_bond=None, cutoff=1e-10, absorb='both', canonize_distance=0, canonize_opts=None, equalize_norms=False, **compress_opts)
Compress the bond between the two single tensors in this network
specified by ``tags1`` and ``tags2`` using
:func:`~quimb.tensor.tensor_core.tensor_compress_bond`::
| | | | | | | |
==●====●====●====●== ==●====●====●====●==
/| /| /| /| /| /| /| /|
| | | | | | | |
==●====1====2====●== ==> ==●====L----R====●==
/| /| /| /| /| /| /| /|
| | | | | | | |
==●====●====●====●== ==●====●====●====●==
/| /| /| /| /| /| /| /|
This is an inplace operation. The compression is unlikely to be optimal
with respect to the Frobenius norm, unless the TN is already
canonicalized at the two tensors. The ``absorb`` kwarg can be
specified to yield an isometry on either the left or right resulting
tensors.
:param tags1: Tags uniquely identifying the first ('left') tensor.
:param tags2: Tags uniquely identifying the second ('right') tensor.
:type tags2: str or sequence of str
:param max_bond: The maximum bond dimension.
:type max_bond: int or None, optional
:param cutoff: The singular value cutoff to use.
:type cutoff: float, optional
:param canonize_distance: How far to locally canonize around the target tensors first.
:type canonize_distance: int, optional
:param canonize_opts: Other options for the local canonization.
:type canonize_opts: None or dict, optional
:param equalize_norms: If set, rescale the norms of all tensors modified to this value,
stripping the rescaling factor into the ``exponent`` attribute.
:type equalize_norms: bool or float, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_compress_bond`.
.. seealso:: :obj:`canonize_between`
.. py:method:: compress_all(max_bond=None, cutoff=1e-10, canonize=True, tree_gauge_distance=None, canonize_distance=None, canonize_after_distance=None, mode='auto', inplace=False, **compress_opts)
Compress all bonds one by one in this network.
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int or None, optional
:param cutoff: The singular value cutoff to use.
:type cutoff: float, optional
:param tree_gauge_distance: How far to include local tree gauge information when compressing.
If the local geometry is a tree, then each compression will be
locally optimal up to this distance.
:type tree_gauge_distance: int, optional
:param canonize_distance: How far to locally canonize around the target tensors first, this
is set automatically by ``tree_gauge_distance`` if not specified.
:type canonize_distance: int, optional
:param canonize_after_distance: How far to locally canonize around the target tensors after, this
is set automatically by ``tree_gauge_distance``, depending on
``mode`` if not specified.
:type canonize_after_distance: int, optional
:param mode: The mode to use for compressing the bonds. If 'auto', will use
'basic' if ``tree_gauge_distance == 0`` else 'virtual-tree'.
:type mode: {'auto', 'basic', 'virtual-tree'}, optional
:param inplace: Whether to perform the compression inplace.
:type inplace: bool, optional
:param compress_opts: Supplied to
:func:`~quimb.tensor.tensor_core.TensorNetwork.compress_between`.
:rtype: TensorNetwork
.. seealso:: :obj:`compress_between`, :obj:`canonize_all`
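As a rough illustration of the primitive each bond compression reduces to, here is a minimal plain ``numpy`` sketch (not quimb's actual implementation) of truncating a single bond via SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two tensors sharing a bond of size 8: A[left, bond] and B[bond, right].
A = rng.normal(size=(4, 8))
B = rng.normal(size=(8, 4))
max_bond = 3

# Contract over the shared bond, SVD, and keep only the largest
# `max_bond` singular values.
theta = A @ B
U, s, Vh = np.linalg.svd(theta, full_matrices=False)
U, s, Vh = U[:, :max_bond], s[:max_bond], Vh[:max_bond]

# Split the kept singular values symmetrically back onto both tensors.
A_new = U * np.sqrt(s)
B_new = np.sqrt(s)[:, None] * Vh

# By Eckart-Young this is the best Frobenius-norm rank-3 approximation
# of the pair's joint contraction.
err = np.linalg.norm(theta - A_new @ B_new)
```

In the methods above the bond is compressed without forming the full pairwise contraction, and gauging (``canonize`` / ``tree_gauge_distance``) is used to make each local truncation closer to globally optimal.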
.. py:attribute:: compress_all_
.. py:method:: compress_all_tree(inplace=False, **compress_opts)
Canonically compress this tensor network, assuming it to be a tree.
This generates a tree spanning out from the most central tensor, then
compresses all bonds inwards in a depth-first manner, using an infinite
``canonize_distance`` to shift the orthogonality center.
.. py:attribute:: compress_all_tree_
.. py:method:: compress_all_1d(max_bond=None, cutoff=1e-10, canonize=True, inplace=False, **compress_opts)
Compress a tensor network that you know has a 1D topology. This
proceeds by generating a spanning 'tree' around the least central
tensor, then optionally canonicalizing all bonds outwards and
compressing inwards.
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int, optional
:param cutoff: The singular value cutoff to use.
:type cutoff: float, optional
:param canonize: Whether to canonize all bonds outwards first.
:type canonize: bool, optional
:param inplace: Whether to perform the compression inplace.
:type inplace: bool, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_compress_bond`.
:rtype: TensorNetwork
.. py:attribute:: compress_all_1d_
.. py:method:: compress_all_simple(max_bond=None, cutoff=1e-10, gauges=None, max_iterations=5, tol=0.0, smudge=1e-12, power=1.0, inplace=False, **gauge_simple_opts)
.. py:attribute:: compress_all_simple_
.. py:method:: _canonize_between_tids(tid1, tid2, absorb='right', gauges=None, gauge_smudge=1e-06, equalize_norms=False, **canonize_opts)
.. py:method:: canonize_between(tags1, tags2, absorb='right', **canonize_opts)
'Canonize' the bond between the two single tensors in this network
specified by ``tags1`` and ``tags2`` using ``tensor_canonize_bond``::
| | | | | | | |
--●----●----●----●-- --●----●----●----●--
/| /| /| /| /| /| /| /|
| | | | | | | |
--●----1----2----●-- ==> --●---->~~~~R----●--
/| /| /| /| /| /| /| /|
| | | | | | | |
--●----●----●----●-- --●----●----●----●--
/| /| /| /| /| /| /| /|
This is an inplace operation. This can only be used to put a TN into
truly canonical form if the geometry is a tree, such as an MPS.
:param tags1: Tags uniquely identifying the first ('left') tensor, which will
become an isometry.
:type tags1: str or sequence of str
:param tags2: Tags uniquely identifying the second ('right') tensor.
:type tags2: str or sequence of str
:param absorb: Which side of the bond to absorb the non-isometric operator.
:type absorb: {'left', 'both', 'right'}, optional
:param canonize_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_canonize_bond`.
.. seealso:: :obj:`compress_between`
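A minimal plain ``numpy`` sketch (not quimb's implementation) of what canonizing a bond with ``absorb='right'`` does: QR-factor the left tensor and absorb the non-isometric factor into the right tensor:

```python
import numpy as np

rng = np.random.default_rng(1)
# T1[left, bond] and T2[bond, right] share a bond of size 5.
T1 = rng.normal(size=(6, 5))
T2 = rng.normal(size=(5, 6))

# QR-factor T1 and absorb the non-isometric R into T2.
Q, R = np.linalg.qr(T1)
T1_new, T2_new = Q, R @ T2

# T1_new is now an isometry, while the overall network is unchanged.
```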
.. py:method:: reduce_inds_onto_bond(inda, indb, tags=None, drop_tags=False, combine=True, ndim_cutoff=3)
Use QR factorization to 'pull' the indices ``inda`` and ``indb`` off
of their respective tensors and onto the bond between them. This is an
inplace operation.
.. py:method:: _get_neighbor_tids(tids, exclude_inds=())
Get the tids of tensors connected to the tensor(s) at ``tids``.
:param tids: The tensor identifier(s) to get the neighbors of.
:type tids: int or sequence of int
:param exclude_inds: Exclude these indices from being considered as connections.
:type exclude_inds: sequence of str, optional
:rtype: oset[int]
.. py:method:: _get_neighbor_inds(inds)
Get the indices connected to the index(es) at ``inds``.
:param inds: The index(es) to get the neighbors of.
:type inds: str or sequence of str
:rtype: oset[str]
.. py:method:: _get_subgraph_tids(tids)
Get the tids of tensors connected, by any distance, to the tensor or
region of tensors ``tids``.
.. py:method:: _ind_to_subgraph_tids(ind)
Get the tids of tensors connected, by any distance, to the index
``ind``.
.. py:method:: istree()
Check if this tensor network has a tree structure (treating
multibonds as a single edge).
.. rubric:: Examples
>>> MPS_rand_state(10, 7).istree()
True
>>> MPS_rand_state(10, 7, cyclic=True).istree()
False
.. py:method:: isconnected()
Check whether this tensor network is connected, i.e. whether
there is a path between any two tensors (including size 1 indices).
.. py:method:: subgraphs(virtual=False)
Split this tensor network into disconnected subgraphs.
:param virtual: Whether the tensor networks should view the original tensors or
not - by default take copies.
:type virtual: bool, optional
:rtype: list[TensorNetwork]
.. py:method:: get_tree_span(tids, min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', sorter=None, weight_bonds=True, inwards=True)
Generate a tree on the tensor network graph, fanning out from the
tensors identified by ``tids``, up to a maximum of ``max_distance``
away. The tree can be visualized with
:meth:`~quimb.tensor.tensor_core.TensorNetwork.draw_tree_span`.
:param tids: The nodes that define the region to span out of.
:type tids: sequence of int
:param min_distance: Don't add edges to the tree until this far from the region. For
example, ``1`` will not include the last merges from neighboring
tensors in the region defined by ``tids``.
:type min_distance: int, optional
:param max_distance: Terminate branches once they reach this far away. If ``None`` there
is no limit.
:type max_distance: None or int, optional
:param include: If specified, only ``tids`` specified here can be part of the tree.
:type include: sequence of str, optional
:param exclude: If specified, ``tids`` specified here cannot be part of the tree.
:type exclude: sequence of str, optional
:param ndim_sort: When expanding the tree, how to choose what nodes to expand to
next, once connectivity to the current surface has been taken into
account.
:type ndim_sort: {'min', 'max', 'none'}, optional
:param distance_sort: When expanding the tree, how to choose what nodes to expand to
next, once connectivity to the current surface has been taken into
account.
:type distance_sort: {'min', 'max', 'none'}, optional
:param weight_bonds: Whether to weight the 'connection' of a candidate tensor to expand
out to using bond size as well as number of bonds.
:type weight_bonds: bool, optional
:returns: The ordered list of merges, each given as tuple ``(tid1, tid2, d)``
indicating merge ``tid1 -> tid2`` at distance ``d``.
:rtype: list[(int, int, int)]
.. seealso:: :obj:`draw_tree_span`
.. py:method:: _draw_tree_span_tids(tids, span=None, min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', sorter=None, weight_bonds=True, color='order', colormap='Spectral', **draw_opts)
.. py:method:: draw_tree_span(tags, which='all', min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', weight_bonds=True, color='order', colormap='Spectral', **draw_opts)
Visualize a generated tree span out of the tensors tagged by
``tags``.
:param tags: Tags specifying a region of tensors to span out of.
:type tags: str or sequence of str
:param which: How to select tensors based on the tags.
:type which: {'all', 'any', '!all', '!any'}, optional
:param min_distance: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type min_distance: int, optional
:param max_distance: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type max_distance: None or int, optional
:param include: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type include: sequence of str, optional
:param exclude: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type exclude: sequence of str, optional
:param distance_sort: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type distance_sort: {'min', 'max'}, optional
:param color: Whether to color nodes based on the order of the contraction or the
graph distance from the specified region.
:type color: {'order', 'distance'}, optional
:param colormap: The name of a ``matplotlib`` colormap to use.
:type colormap: str
.. seealso:: :obj:`get_tree_span`
.. py:attribute:: graph_tree_span
.. py:method:: _canonize_around_tids(tids, min_distance=0, max_distance=None, include=None, exclude=None, span_opts=None, absorb='right', gauge_links=False, link_absorb='both', inwards=True, gauges=None, gauge_smudge=1e-06, **canonize_opts)
.. py:method:: canonize_around(tags, which='all', min_distance=0, max_distance=None, include=None, exclude=None, span_opts=None, absorb='right', gauge_links=False, link_absorb='both', equalize_norms=False, inplace=False, **canonize_opts)
Expand a locally canonical region around ``tags``::
--●---●--
| | | |
--●---v---v---●--
| | | | | |
--●--->---v---v---<---●--
| | | | | | | |
●--->--->---O---O---<---<---●
| | | | | | | |
--●--->---^---^---^---●--
| | | | | |
--●---^---^---●--
| | | |
--●---●--
<=====>
max_distance = 2 e.g.
Shown on a grid here but applicable to arbitrary geometry. This is a
way of gauging a tensor network that results in a canonical form if the
geometry is described by a tree (e.g. an MPS or TTN). The canonizations
proceed inwards via QR decompositions.
The sequence is generated by round-robin expansion of the boundary of
the originally specified tensors; it will only be unique for trees.
:param tags: Tags defining which set of tensors to locally canonize around.
:type tags: str or sequence of str
:param which: How to select the tensors based on the tags.
:type which: {'all', 'any', '!all', '!any'}, optional
:param min_distance: How close, in terms of graph distance, to canonize tensors away.
See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type min_distance: int, optional
:param max_distance: How far, in terms of graph distance, to canonize tensors away.
See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type max_distance: None or int, optional
:param include: How to build the spanning tree to canonize along.
See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type include: sequence of str, optional
:param exclude: How to build the spanning tree to canonize along.
See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type exclude: sequence of str, optional
:param distance_sort: How to build the spanning tree to canonize along.
See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`.
:type distance_sort: {'min', 'max'}, optional
:param absorb: As we canonize inwards from tensor A to tensor B, which side to
absorb the singular values into.
:type absorb: {'right', 'left', 'both'}, optional
:param gauge_links: Whether to gauge the links *between* branches of the spanning tree
generated (in a Simple Update like fashion).
:type gauge_links: bool, optional
:param link_absorb: If performing the link gauging, how to absorb the singular values.
:type link_absorb: {'both', 'right', 'left'}, optional
:param equalize_norms: Scale the norms of tensors acted on to this value, accumulating the
log10 scaled factors in ``self.exponent``.
:type equalize_norms: bool or float, optional
:param inplace: Whether to perform the canonization inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. seealso:: :obj:`get_tree_span`
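The inwards QR sweep that canonization performs can be sketched in plain ``numpy`` on a simple chain (an illustrative toy assuming an MPS-like geometry; the method itself works on arbitrary graphs via the spanning tree):

```python
import numpy as np

rng = np.random.default_rng(2)
d, chi = 2, 3
# A 4-site MPS-like chain: each tensor has shape (left, phys, right).
mps = [rng.normal(size=(1, d, chi)),
       rng.normal(size=(chi, d, chi)),
       rng.normal(size=(chi, d, chi)),
       rng.normal(size=(chi, d, 1))]

# Sweep QR decompositions inwards (left to right), shifting the
# orthogonality center onto the last tensor.
for i in range(len(mps) - 1):
    l, p, r = mps[i].shape
    Q, R = np.linalg.qr(mps[i].reshape(l * p, r))
    mps[i] = Q.reshape(l, p, Q.shape[1])
    mps[i + 1] = np.einsum('ab,bpc->apc', R, mps[i + 1])

# With everything left of the center isometric, the norm of the whole
# state equals the norm of just the center tensor.
norm_center = np.linalg.norm(mps[-1])
```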
.. py:attribute:: canonize_around_
.. py:method:: gauge_all_canonize(max_iterations=5, absorb='both', gauges=None, gauge_smudge=1e-06, equalize_norms=False, inplace=False, **canonize_opts)
Iteratively gauge all the bonds in this tensor network with a basic
'canonization' strategy.
.. py:attribute:: gauge_all_canonize_
.. py:method:: gauge_all_simple(max_iterations=5, tol=0.0, smudge=1e-12, power=1.0, gauges=None, equalize_norms=False, progbar=False, inplace=False)
Iteratively gauge all the bonds in this tensor network with a 'simple
update' like strategy.
.. py:attribute:: gauge_all_simple_
.. py:method:: gauge_all_random(max_iterations=1, unitary=True, seed=None, inplace=False)
Gauge all the bonds in this network randomly. This is largely for
testing purposes.
.. py:attribute:: gauge_all_random_
.. py:method:: gauge_all(method='canonize', **gauge_opts)
Gauge all bonds in this network using one of several strategies.
:param method: The method to use for gauging. One of "canonize", "simple", or
"random". Default is "canonize".
:type method: str, optional
:param gauge_opts: Additional keyword arguments to pass to the chosen method.
:type gauge_opts: dict, optional
.. seealso:: :obj:`gauge_all_canonize`, :obj:`gauge_all_simple`, :obj:`gauge_all_random`
.. py:attribute:: gauge_all_
.. py:method:: _gauge_local_tids(tids, max_distance=1, max_iterations='max_distance', method='canonize', inwards=False, include=None, exclude=None, **gauge_local_opts)
Iteratively gauge all bonds in the local tensor network defined by
``tids`` according to one of several strategies.
.. py:method:: gauge_local(tags, which='all', max_distance=1, max_iterations='max_distance', method='canonize', inplace=False, **gauge_local_opts)
Iteratively gauge all bonds in the tagged sub tensor network
according to one of several strategies.
.. py:attribute:: gauge_local_
.. py:method:: gauge_simple_insert(gauges, remove=False, smudge=0.0, power=1.0)
Insert the simple update style bond gauges found in ``gauges`` if
they are present in this tensor network. The gauges inserted are also
returned so that they can be removed later.
:param gauges: The store of bond gauges, the keys being indices and the values
being the vectors. Only bonds present in this dictionary will be
gauged.
:type gauges: dict[str, array_like]
:param remove: Whether to remove the gauges from the store after inserting them.
:type remove: bool, optional
:param smudge: A small value to add to the gauge vectors to avoid singularities.
:type smudge: float, optional
:returns: * **outer** (*list[(Tensor, str, array_like)]*) -- The sequence of gauges applied to outer indices, each a tuple of
the tensor, the index and the gauge vector.
* **inner** (*list[((Tensor, Tensor), str, array_like)]*) -- The sequence of gauges applied to inner indices, each a tuple of
the two inner tensors, the inner bond and the gauge vector applied.
.. py:method:: gauge_simple_remove(outer=None, inner=None)
:staticmethod:
Remove the simple update style bond gauges inserted by
``gauge_simple_insert``.
.. py:method:: gauge_simple_temp(gauges, smudge=1e-12, ungauge_outer=True, ungauge_inner=True)
Context manager that temporarily inserts simple update style bond
gauges into this tensor network, before optionally ungauging them.
:param self: The tensor network to temporarily insert the gauges into.
:type self: TensorNetwork
:param gauges: The store of gauge bonds, the keys being indices and the values
being the vectors. Only bonds present in this dictionary will be
gauged.
:type gauges: dict[str, array_like]
:param ungauge_outer: Whether to ungauge the outer bonds.
:type ungauge_outer: bool, optional
:param ungauge_inner: Whether to ungauge the inner bonds.
:type ungauge_inner: bool, optional
:Yields: * **outer** (*list[(Tensor, str, array_like)]*) -- The tensors, indices and gauges that were applied to outer
indices.
* **inner** (*list[((Tensor, Tensor), str, array_like)]*) -- The tensors, indices and gauges that were applied to inner bonds.
.. rubric:: Examples
>>> tn = TN_rand_reg(10, 4, 3)
>>> tn ^ all
-51371.66630218866
>>> gauges = {}
>>> tn.gauge_all_simple_(gauges=gauges)
>>> len(gauges)
20
>>> tn ^ all
28702551.673767876
>>> with tn.gauge_simple_temp(gauges):
... # temporarily insert gauges
... print(tn ^ all)
-51371.66630218887
>>> tn ^ all
28702551.67376789
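The insert/remove roundtrip that ``gauge_simple_temp`` wraps can be sketched in plain ``numpy`` for a single bond (illustrative only; the real methods also handle outer indices, smudging and powers):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 5))
B = rng.normal(size=(5, 4))

# A positive gauge vector for the shared bond of size 5.
g = rng.uniform(0.5, 2.0, size=5)

# Insert: absorb sqrt(g) into the bond index on each side.
Ag = A * np.sqrt(g)             # scales the columns of A
Bg = np.sqrt(g)[:, None] * B    # scales the rows of B

# Remove: divide sqrt(g) back out, exactly restoring the tensors.
A_back = Ag / np.sqrt(g)
B_back = Bg / np.sqrt(g)[:, None]
```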
.. py:method:: _contract_compressed_tid_sequence(seq, max_bond=None, cutoff=1e-10, output_inds=None, tree_gauge_distance=1, canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, gauge_boundary_only=True, compress_opts=None, compress_late=True, compress_mode='auto', compress_min_size=None, compress_span=False, compress_matrices=True, compress_exclude=None, equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback_pre_contract=None, callback_post_contract=None, callback_pre_compress=None, callback_post_compress=None, callback=None, preserve_tensor=False, progbar=False, inplace=False)
.. py:method:: _contract_around_tids(tids, seq=None, min_distance=0, max_distance=None, span_opts=None, max_bond=None, cutoff=1e-10, canonize_opts=None, **kwargs)
Contract around ``tids``, by following a greedily generated
spanning tree, and compressing whenever two tensors in the outer
'boundary' share more than one index.
.. py:method:: compute_centralities()
.. py:method:: most_central_tid()
.. py:method:: least_central_tid()
.. py:method:: contract_around_center(**opts)
.. py:method:: contract_around_corner(**opts)
.. py:method:: contract_around(tags, which='all', min_distance=0, max_distance=None, span_opts=None, max_bond=None, cutoff=1e-10, tree_gauge_distance=1, canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, gauge_boundary_only=True, compress_late=True, compress_min_size=None, compress_opts=None, compress_span=False, compress_matrices=True, equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback_pre_contract=None, callback_post_contract=None, callback_pre_compress=None, callback_post_compress=None, callback=None, inplace=False, **kwargs)
Perform a compressed contraction inwards towards the tensors
identified by ``tags``.
.. py:attribute:: contract_around_
.. py:method:: contract_compressed(optimize, output_inds=None, max_bond=None, cutoff=1e-10, tree_gauge_distance=1, canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, gauge_boundary_only=True, compress_late=True, compress_min_size=None, compress_opts=None, compress_span=True, compress_matrices=True, compress_exclude=None, equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback_pre_contract=None, callback_post_contract=None, callback_pre_compress=None, callback_post_compress=None, callback=None, progbar=False, **kwargs)
.. py:attribute:: contract_compressed_
.. py:method:: new_bond(tags1, tags2, **opts)
Inplace addition of a dummy (size 1) bond between the single
tensors specified by ``tags1`` and ``tags2``.
:param tags1: Tags identifying the first tensor.
:type tags1: sequence of str
:param tags2: Tags identifying the second tensor.
:type tags2: sequence of str
:param opts: Supplied to :func:`~quimb.tensor.tensor_core.new_bond`.
.. seealso:: :obj:`new_bond`
.. py:method:: _cut_between_tids(tid1, tid2, left_ind, right_ind)
.. py:method:: cut_between(left_tags, right_tags, left_ind, right_ind)
Cut the bond between the tensors specified by ``left_tags`` and
``right_tags``, giving them the new inds ``left_ind`` and
``right_ind`` respectively.
.. py:method:: cut_bond(bond, new_left_ind=None, new_right_ind=None)
Cut the bond index specified by ``bond`` between the tensors it
connects. Use ``cut_between`` for control over which tensor gets which
new index ``new_left_ind`` or ``new_right_ind``. The index must
connect exactly two tensors.
:param bond: The index to cut.
:type bond: str
:param new_left_ind: The new index to give to the left tensor (lowest ``tid`` value).
:type new_left_ind: str, optional
:param new_right_ind: The new index to give to the right tensor (highest ``tid`` value).
:type new_right_ind: str, optional
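What cutting a bond means numerically can be sketched in plain ``numpy``: the shared index becomes two open indices, and summing over their paired values recovers the original contraction (the same principle underlies ``cut_iter``):

```python
import numpy as np

rng = np.random.default_rng(4)
# A[i, b] and B[b, j] joined by bond 'b' of size 3.
A = rng.normal(size=(2, 3))
B = rng.normal(size=(3, 2))
full = A @ B

# 'Cut' the bond: for each value k of the bond, take the slices with
# the new open indices fixed to k, then sum the resulting networks.
cut_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(3))
```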
.. py:method:: drape_bond_between(tagsa, tagsb, tags_target, left_ind=None, right_ind=None, inplace=False)
Take the bond(s) connecting the tensors tagged at ``tagsa`` and
``tagsb``, and 'drape' it through the tensor tagged at ``tags_target``,
effectively adding an identity tensor between the two and contracting
it with the third::
┌─┐ ┌─┐ ┌─┐ ┌─┐
─┤A├─Id─┤B├─ ─┤A├─┐ ┌─┤B├─
└─┘ └─┘ └─┘ │ │ └─┘
left_ind│ │right_ind
┌─┐ --> ├─┤
─┤C├─ ─┤D├─
└┬┘ └┬┘ where D = C ⊗ Id
│ │
This increases the size of the target tensor by a factor of ``d**2``, and
disconnects the tensors at ``tagsa`` and ``tagsb``.
:param tagsa: The tag(s) identifying the first tensor.
:type tagsa: str or sequence of str
:param tagsb: The tag(s) identifying the second tensor.
:type tagsb: str or sequence of str
:param tags_target: The tag(s) identifying the target tensor.
:type tags_target: str or sequence of str
:param left_ind: The new index to give to the left tensor.
:type left_ind: str, optional
:param right_ind: The new index to give to the right tensor.
:type right_ind: str, optional
:param inplace: Whether to perform the draping inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. py:attribute:: drape_bond_between_
.. py:method:: isel(selectors, inplace=False)
Select specific values for some dimensions/indices of this tensor
network, thereby removing them.
:param selectors: Mapping of index(es) to which value to take.
:type selectors: dict[str, int]
:param inplace: Whether to select inplace or not.
:type inplace: bool, optional
:rtype: TensorNetwork
.. seealso:: :obj:`Tensor.isel`
.. py:attribute:: isel_
.. py:method:: sum_reduce(ind, inplace=False)
Sum over the index ``ind`` of this tensor network, removing it. This
is like contracting a vector of ones in, or marginalizing a classical
probability distribution.
:param ind: The index to sum over.
:type ind: str
:param inplace: Whether to perform the reduction inplace.
:type inplace: bool, optional
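A minimal ``numpy`` sketch of the equivalence stated above, summing over an index versus contracting in a vector of ones:

```python
import numpy as np

rng = np.random.default_rng(6)
T = rng.normal(size=(3, 4, 5))

# Summing over the middle index directly...
summed = T.sum(axis=1)
# ...is the same as contracting a vector of ones into that index.
via_ones = np.einsum('iaj,a->ij', T, np.ones(4))
```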
.. py:attribute:: sum_reduce_
.. py:method:: vector_reduce(ind, v, inplace=False)
Contract the vector ``v`` with the index ``ind`` of this tensor
network, removing it.
:param ind: The index to contract.
:type ind: str
:param v: The vector to contract with.
:type v: array_like
:param inplace: Whether to perform the reduction inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. py:attribute:: vector_reduce_
.. py:method:: cut_iter(*inds)
Cut and iterate over one or more indices in this tensor network.
Each network yielded will have that index removed, and the sum of all
networks will equal the original network. This works by iterating over
the product of all combinations of each bond supplied to ``isel``.
As such, the number of networks produced is exponential in the number
of bonds cut.
:param inds: The bonds to cut.
:type inds: sequence of str
:Yields: *TensorNetwork*
.. rubric:: Examples
Here we'll cut the two extra bonds of a cyclic MPS and sum the
contraction of the resulting 49 OBC MPS norms:
>>> psi = MPS_rand_state(10, bond_dim=7, cyclic=True)
>>> norm = psi.H & psi
>>> bnds = bonds(norm[0], norm[-1])
>>> sum(tn ^ all for tn in norm.cut_iter(*bnds))
1.0
.. seealso:: :obj:`TensorNetwork.isel`, :obj:`TensorNetwork.cut_between`
.. py:method:: insert_operator(A, where1, where2, tags=None, inplace=False)
Insert an operator on the bond between the specified tensors,
e.g.::
| | | |
--1---2-- -> --1-A-2--
| |
:param A: The operator to insert.
:type A: array
:param where1: The tags defining the 'left' tensor.
:type where1: str, sequence of str, or int
:param where2: The tags defining the 'right' tensor.
:type where2: str, sequence of str, or int
:param tags: Tags to add to the new operator's tensor.
:type tags: str or sequence of str
:param inplace: Whether to perform the insertion inplace.
:type inplace: bool, optional
.. py:attribute:: insert_operator_
.. py:method:: _insert_gauge_tids(U, tid1, tid2, Uinv=None, tol=1e-10, bond=None)
.. py:method:: insert_gauge(U, where1, where2, Uinv=None, tol=1e-10)
Insert the gauge transformation ``U^-1 @ U`` into the bond between
the tensors, ``T1`` and ``T2``, defined by ``where1`` and ``where2``.
The resulting tensors at those locations will be ``T1 @ U^-1`` and
``U @ T2``.
:param U: The gauge to insert.
:type U: array
:param where1: Tags defining the location of the 'left' tensor.
:type where1: str, sequence of str, or int
:param where2: Tags defining the location of the 'right' tensor.
:type where2: str, sequence of str, or int
:param Uinv: The inverse gauge, ``U @ Uinv == Uinv @ U == eye``, to insert.
If not given will be calculated using :func:`numpy.linalg.inv`.
:type Uinv: array
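The invariance that makes this a gauge transformation can be checked directly in a plain ``numpy`` sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
T1 = rng.normal(size=(4, 3))
T2 = rng.normal(size=(3, 4))

# Any invertible U inserted as U^-1 @ U on the bond leaves the network
# unchanged: T1 -> T1 @ U^-1 and T2 -> U @ T2.
U = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # well-conditioned
Uinv = np.linalg.inv(U)
T1g, T2g = T1 @ Uinv, U @ T2
```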
.. py:method:: contract_tags(tags, which='any', output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, inplace=False, **contract_opts)
Contract the tensors that match any or all of ``tags``.
:param tags: The list of tags to filter the tensors by. Use ``all`` or ``...``
(``Ellipsis``) to contract all tensors.
:type tags: sequence of str
:param which: Whether to require matching all or any of the tags.
:type which: {'all', 'any'}
:param output_inds: The indices to specify as outputs of the contraction. If not given,
and the tensor network has no hyper-indices, these are computed
automatically as every index appearing once.
:type output_inds: sequence of str, optional
:param optimize:
The contraction path optimization strategy to use.
- ``None``: use the default strategy,
- str: use the preset strategy with the given name,
- path_like: use this exact path,
- ``cotengra.HyperOptimizer``: find the contraction using this
optimizer, supports slicing,
- ``cotengra.ContractionTree``: use this exact tree, supports
slicing,
- ``opt_einsum.PathOptimizer``: find the path using this
optimizer.
Contraction with ``cotengra`` might be a bit more efficient, but the
main reason to use it is automatic handling of sliced contractions.
:type optimize: {None, str, path_like, PathOptimizer}, optional
:param get:
What to return. If:
* ``None`` (the default) - return the resulting scalar or
Tensor.
* ``'expression'`` - return a callable expression that
performs the contraction and operates on the raw arrays.
* ``'tree'`` - return the ``cotengra.ContractionTree``
describing the contraction.
* ``'path'`` - return the raw 'path' as a list of tuples.
* ``'symbol-map'`` - return the dict mapping indices to
'symbols' (single unicode letters) used internally by
``cotengra``.
* ``'path-info'`` - return the ``opt_einsum.PathInfo`` path
object with detailed information such as flop cost. The
symbol-map is also added to the ``quimb_symbol_map``
attribute.
:type get: str, optional
:param backend: Which backend to use to perform the contraction. Supplied to
`cotengra`.
:type backend: {'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional
:param preserve_tensor: Whether to return a tensor regardless of whether the output object
is a scalar (has no indices) or not.
:type preserve_tensor: bool, optional
:param inplace: Whether to perform the contraction inplace.
:type inplace: bool, optional
:param contract_opts: Passed to :func:`~quimb.tensor.tensor_core.tensor_contract`.
:returns: The result of the contraction, still a ``TensorNetwork`` if the
contraction was only partial.
:rtype: TensorNetwork, Tensor or scalar
.. seealso:: :obj:`contract`, :obj:`contract_cumulative`
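The default output index rule ('every index appearing once') can be sketched in plain ``numpy`` (illustrative only, not quimb's internals):

```python
import numpy as np
from collections import Counter

A = np.arange(6.0).reshape(2, 3)    # indices ('i', 'b')
B = np.arange(12.0).reshape(3, 4)   # indices ('b', 'j')
inds = [('i', 'b'), ('b', 'j')]

# With no hyper-indices, the default output indices are exactly those
# appearing once across all terms: here 'i' and 'j', while 'b' appears
# twice and is summed over.
counts = Counter(ix for term in inds for ix in term)
output_inds = [ix for ix, c in counts.items() if c == 1]
result = np.einsum('ib,bj->ij', A, B)
```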
.. py:attribute:: contract_tags_
.. py:method:: contract(tags=..., output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, max_bond=None, inplace=False, **opts)
Contract some, or all, of the tensors in this network. This method
dispatches to ``contract_tags``, ``contract_structured``, or
``contract_compressed`` based on the various arguments.
:param tags: Any tensors with any of these tags with be contracted. Use ``all``
or ``...`` (``Ellipsis``) to contract all tensors. ``...`` will try
and use a 'structured' contract method if possible.
:type tags: sequence of str, all, or Ellipsis, optional
:param output_inds: The indices to specify as outputs of the contraction. If not given,
and the tensor network has no hyper-indices, these are computed
automatically as every index appearing once.
:type output_inds: sequence of str, optional
:param optimize:
The contraction path optimization strategy to use.
- ``None``: use the default strategy,
- str: use the preset strategy with the given name,
- path_like: use this exact path,
- ``cotengra.HyperOptimizer``: find the contraction using this
optimizer, supports slicing,
- ``cotengra.ContractionTree``: use this exact tree, supports
slicing,
- ``opt_einsum.PathOptimizer``: find the path using this
optimizer.
Contraction with ``cotengra`` might be a bit more efficient, but the
main reason to use it is automatic handling of sliced contractions.
:type optimize: {None, str, path_like, PathOptimizer}, optional
:param get:
What to return. If:
* ``None`` (the default) - return the resulting scalar or
Tensor.
* ``'expression'`` - return a callable expression that
performs the contraction and operates on the raw arrays.
* ``'tree'`` - return the ``cotengra.ContractionTree``
describing the contraction.
* ``'path'`` - return the raw 'path' as a list of tuples.
* ``'symbol-map'`` - return the dict mapping indices to
'symbols' (single unicode letters) used internally by
``cotengra``.
* ``'path-info'`` - return the ``opt_einsum.PathInfo`` path
object with detailed information such as flop cost. The
symbol-map is also added to the ``quimb_symbol_map``
attribute.
:type get: str, optional
:param backend: Which backend to use to perform the contraction. Supplied to
`cotengra`.
:type backend: {'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional
:param preserve_tensor: Whether to return a tensor regardless of whether the output object
is a scalar (has no indices) or not.
:type preserve_tensor: bool, optional
:param inplace: Whether to perform the contraction inplace. This is only valid
if not all tensors are contracted, since full contraction does not
produce a TensorNetwork.
:type inplace: bool, optional
:param opts: Passed to :func:`~quimb.tensor.tensor_core.tensor_contract` or
:meth:`~quimb.tensor.tensor_core.TensorNetwork.contract_compressed`.
:returns: The result of the contraction, still a ``TensorNetwork`` if the
contraction was only partial.
:rtype: TensorNetwork, Tensor or scalar
.. seealso:: :obj:`contract_tags`, :obj:`contract_cumulative`
.. py:attribute:: contract_
.. py:method:: contract_cumulative(tags_seq, output_inds=None, preserve_tensor=False, equalize_norms=False, inplace=False, **opts)
Cumulative contraction of tensor network. Contract the first set of
tags, then that set with the next set, then both of those with the next
and so forth. Could also be described as a manually ordered
contraction of all tags in ``tags_seq``.
:param tags_seq: The list of tag-groups to cumulatively contract.
:type tags_seq: sequence of sequence of str
:param output_inds: The indices to specify as outputs of the contraction. If not given,
and the tensor network has no hyper-indices, these are computed
automatically as every index appearing once.
:type output_inds: sequence of str, optional
:param preserve_tensor: Whether to return a tensor regardless of whether the output object
is a scalar (has no indices) or not.
:type preserve_tensor: bool, optional
:param inplace: Whether to perform the contraction inplace.
:type inplace: bool, optional
:param opts: Passed to :func:`~quimb.tensor.tensor_core.tensor_contract`.
:returns: The result of the contraction, still a ``TensorNetwork`` if the
contraction was only partial.
:rtype: TensorNetwork, Tensor or scalar
.. seealso:: :obj:`contract`, :obj:`contract_tags`
.. py:method:: contraction_path(optimize=None, **contract_opts)
Compute the contraction path, a sequence of (int, int), for
the contraction of this entire tensor network using path optimizer
``optimize``.
.. py:method:: contraction_info(optimize=None, **contract_opts)
Compute the ``opt_einsum.PathInfo`` object describing the
contraction of this entire tensor network using path optimizer
``optimize``.
.. py:method:: contraction_tree(optimize=None, output_inds=None, **kwargs)
Return the :class:`cotengra.ContractionTree` corresponding to
contracting this entire tensor network with path finder ``optimize``.
.. py:method:: contraction_width(optimize=None, **contract_opts)
Compute the 'contraction width' of this tensor network. This
is defined as log2 of the maximum tensor size produced during the
contraction sequence. If every index in the network has dimension 2
this corresponds to the maximum rank tensor produced.
.. py:method:: contraction_cost(optimize=None, **contract_opts)
Compute the 'contraction cost' of this tensor network. This
is defined as log10 of the total number of scalar operations during the
contraction sequence.
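As a rough sketch of what these two quantities measure, here they are computed by hand for a hypothetical three-matrix chain contracted left to right (all shapes are made up for illustration; the actual methods delegate to the path optimizer):

```python
import math

# hypothetical matrix chain A(2,4) @ B(4,8) @ C(8,2), contracted left to right
shapes = [(2, 4), (4, 8), (8, 2)]

max_size = max(m * n for m, n in shapes)  # inputs count towards the width too
total_ops = 0

m, k = shapes[0]
for k2, n in shapes[1:]:
    assert k == k2, "bond dimensions must match"
    total_ops += m * k * n  # cost of an (m, k) @ (k, n) product
    m, k = m, n             # the new intermediate has shape (m, n)
    max_size = max(max_size, m * k)

width = math.log2(max_size)   # 'contraction width'
cost = math.log10(total_ops)  # 'contraction cost'
print(width, round(cost, 3))
```

A different contraction order would generally give different values, which is exactly what the path optimizer trades off.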
.. py:method:: __rshift__(tags_seq)
Overload of '>>' for TensorNetwork.contract_cumulative.
.. py:method:: __irshift__(tags_seq)
Overload of '>>=' for inplace TensorNetwork.contract_cumulative.
.. py:method:: __xor__(tags)
Overload of '^' for TensorNetwork.contract.
.. py:method:: __ixor__(tags)
Overload of '^=' for inplace TensorNetwork.contract.
.. py:method:: __matmul__(other)
Overload "@" to mean full contraction with another network.
.. py:method:: as_network(virtual=True)
Matching method (for ensuring object is a tensor network) to
:meth:`~quimb.tensor.tensor_core.Tensor.as_network`, which simply
returns ``self`` if ``virtual=True``.
.. py:method:: aslinearoperator(left_inds, right_inds, ldims=None, rdims=None, backend=None, optimize=None)
View this ``TensorNetwork`` as a
:class:`~quimb.tensor.tensor_core.TNLinearOperator`.
.. py:method:: split(left_inds, right_inds=None, **split_opts)
Decompose this tensor network across a bipartition of outer indices.
This method matches ``Tensor.split`` by converting to a
``TNLinearOperator`` first. Note that unless an iterative method is passed
to ``method``, the full dense tensor will be contracted.
.. py:method:: trace(left_inds, right_inds, **contract_opts)
Trace over ``left_inds`` joined with ``right_inds``
.. py:method:: to_dense(*inds_seq, to_qarray=False, **contract_opts)
Convert this network into a dense array, with a single dimension
for each group of indices in ``inds_seq``. E.g. to convert several sites
into a density matrix: ``TN.to_dense(('k0', 'k1'), ('b0', 'b1'))``.
.. py:attribute:: to_qarray
.. py:method:: compute_reduced_factor(side, left_inds, right_inds, optimize='auto-hq', **contract_opts)
Compute either the left or right 'reduced factor' of this tensor
network. I.e., view as an operator, ``X``, mapping ``left_inds`` to
``right_inds`` and compute ``L`` or ``R`` such that ``X = U_R @ R`` or
``X = L @ U_L``, with ``U_R`` and ``U_L`` unitary operators that are
not computed. Only ``dag(X) @ X`` or ``X @ dag(X)`` is contracted,
which is generally cheaper than contracting ``X`` itself.
:param self: The tensor network to compute the reduced factor of.
:type self: TensorNetwork
:param side: Whether to compute the left or right reduced factor. If 'right'
then ``dag(X) @ X`` is contracted, otherwise ``X @ dag(X)``.
:type side: {'left', 'right'}
:param left_inds: The indices forming the left side of the operator.
:type left_inds: sequence of str
:param right_inds: The indices forming the right side of the operator.
:type right_inds: sequence of str
:param contract_opts: Options to pass to
:meth:`~quimb.tensor.tensor_core.TensorNetwork.to_dense`.
:type contract_opts: dict, optional
:rtype: array_like
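A minimal numpy sketch of the underlying identity, with made-up shapes: since ``dag(X) @ X = dag(R) @ R``, one way to recover ``R`` is a Cholesky decomposition of the (cheaply contracted) product, never forming ``X`` or ``U_R`` themselves. Here ``U_R`` is reconstructed only to verify the factorization:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(6, 3))  # hypothetical operator: 3 'right' inds -> 6 'left' inds

# only dag(X) @ X is ever formed, which is cheaper than contracting X itself
XdX = X.T @ X

# Cholesky: dag(X) @ X = L @ dag(L), so R = dag(L) satisfies X = U_R @ R
R = np.linalg.cholesky(XdX).T

# U_R is the isometry that is *not* normally computed; rebuild it to check
U_R = X @ np.linalg.inv(R)
print(np.allclose(U_R.T @ U_R, np.eye(3)))  # orthonormal columns
print(np.allclose(U_R @ R, X))
```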
.. py:method:: insert_compressor_between_regions(ltags, rtags, max_bond=None, cutoff=1e-10, select_which='any', insert_into=None, new_tags=None, new_ltags=None, new_rtags=None, bond_ind=None, optimize='auto-hq', inplace=False, **compress_opts)
Compute and insert a pair of 'oblique' projection tensors (see for
example https://arxiv.org/abs/1905.02351) that effectively compresses
between two regions of the tensor network. Useful for various
approximate contraction methods such as HOTRG and CTMRG.
:param ltags: The tags of the tensors in the left region.
:type ltags: sequence of str
:param rtags: The tags of the tensors in the right region.
:type rtags: sequence of str
:param max_bond: The maximum bond dimension to use for the compression (i.e. shared
by the two projection tensors). If ``None`` then the maximum
is controlled by ``cutoff``.
:type max_bond: int or None, optional
:param cutoff: The cutoff to use for the compression.
:type cutoff: float, optional
:param select_which: How to select the regions based on the tags, see
:meth:`~quimb.tensor.tensor_core.TensorNetwork.select`.
:type select_which: {'any', 'all', 'none'}, optional
:param insert_into: If given, insert the new tensors into this tensor network, assumed
to have the same relevant indices as ``self``.
:type insert_into: TensorNetwork, optional
:param new_tags: The tag(s) to add to both the new tensors.
:type new_tags: str or sequence of str, optional
:param new_ltags: The tag(s) to add to the new left projection tensor.
:type new_ltags: str or sequence of str, optional
:param new_rtags: The tag(s) to add to the new right projection tensor.
:type new_rtags: str or sequence of str, optional
:param optimize: How to optimize the contraction of the projection tensors.
:type optimize: str or PathOptimizer, optional
:param inplace: Whether perform the insertion in-place. If ``insert_into`` is
supplied then this doesn't matter, and that tensor network will
be modified and returned.
:type inplace: bool, optional
:rtype: TensorNetwork
.. seealso:: :obj:`compute_reduced_factor`, :obj:`select`
.. py:attribute:: insert_compressor_between_regions_
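A minimal numpy sketch of the oblique projector construction (shapes and the conditioning shift are made up for illustration): SVD the product of the two regions' reduced factors, then absorb ``s**-0.5`` into each half to form the projector pair:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical reduced factors of the two regions across a bond of size 4
# (shifted by the identity just to keep them well conditioned)
R_l = rng.normal(size=(4, 4)) + 4 * np.eye(4)
R_r = rng.normal(size=(4, 4)) + 4 * np.eye(4)

# SVD the bond 'environment' and split s between the two projectors
U, s, VH = np.linalg.svd(R_l @ R_r)
s_isqrt = 1 / np.sqrt(s)
P_l = R_r @ VH.conj().T * s_isqrt            # inserted on the left region's side
P_r = (s_isqrt[:, None] * U.conj().T) @ R_l  # inserted on the right region's side

# at full bond dimension the insertion is exact; truncating s compresses it
print(np.allclose(R_l @ P_l @ P_r @ R_r, R_l @ R_r))  # True
```

Truncating ``U, s, VH`` to ``max_bond`` columns before forming ``P_l`` and ``P_r`` gives the compressed version.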
.. py:method:: distance(*args, **kwargs)
.. py:attribute:: distance_normalized
.. py:method:: fit(tn_target, method='als', tol=1e-09, inplace=False, progbar=False, **fitting_opts)
Optimize the entries of this tensor network with respect to a least
squares fit of ``tn_target`` which should have the same outer indices.
Depending on ``method`` this calls
:func:`~quimb.tensor.tensor_core.tensor_network_fit_als` or
:func:`~quimb.tensor.tensor_core.tensor_network_fit_autodiff`. The
quantity minimized is:
.. math::
D(A, B)
= | A - B |_{\mathrm{fro}}
= \mathrm{Tr} [(A - B)^{\dagger}(A - B)]^{1/2}
= ( \langle A | A \rangle - 2 \mathrm{Re} \langle A | B \rangle
+ \langle B | B \rangle ) ^{1/2}
:param tn_target: The target tensor network to try and fit the current one to.
:type tn_target: TensorNetwork
:param method: Whether to use alternating least squares (ALS) or automatic
differentiation to perform the optimization. Generally ALS is
better for simple geometries, autodiff better for complex ones.
:type method: {'als', 'autodiff'}, optional
:param tol: The target norm distance.
:type tol: float, optional
:param inplace: Update the current tensor network in place.
:type inplace: bool, optional
:param progbar: Show a live progress bar of the fitting process.
:type progbar: bool, optional
:param fitting_opts: Supplied to either
:func:`~quimb.tensor.tensor_core.tensor_network_fit_als` or
:func:`~quimb.tensor.tensor_core.tensor_network_fit_autodiff`.
:returns: **tn_opt** -- The optimized tensor network.
:rtype: TensorNetwork
.. seealso:: :obj:`tensor_network_fit_als`, :obj:`tensor_network_fit_autodiff`, :obj:`tensor_network_distance`
.. py:attribute:: fit_
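The overlap identity being minimized can be checked directly with numpy (hypothetical dense arrays standing in for the two tensor networks):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
B = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))

# direct Frobenius distance
d_direct = np.linalg.norm(A - B)

# via the three overlaps, as in the formula above
aa = np.vdot(A, A).real
ab = np.vdot(A, B)
bb = np.vdot(B, B).real
d_overlap = np.sqrt(aa - 2 * ab.real + bb)

print(np.allclose(d_direct, d_overlap))  # True
```

For tensor networks the point is that each overlap is itself a cheap network contraction, so the distance never requires forming dense arrays.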
.. py:property:: tags
.. py:method:: all_inds()
Return a tuple of all indices in this network.
.. py:method:: ind_size(ind)
Find the size of ``ind``.
.. py:method:: inds_size(inds)
Return the total size of dimensions corresponding to ``inds``.
.. py:method:: ind_sizes()
Get dict of each index mapped to its size.
.. py:method:: inner_inds()
Tuple of interior indices, assumed to be any indices that appear
twice or more (this only holds generally for non-hyper tensor
networks).
.. py:method:: outer_inds()
Tuple of exterior indices, assumed to be any lone indices (this only
holds generally for non-hyper tensor networks).
.. py:method:: outer_dims_inds()
Get the 'outer' pairs of dimension and indices, i.e. as if this
tensor network was fully contracted.
.. py:method:: outer_size()
Get the total size of the 'outer' indices, i.e. as if this tensor
network was fully contracted.
.. py:method:: get_multibonds(include=None, exclude=None)
Get a dict of 'multibonds' in this tensor network, i.e. groups of
two or more indices that appear on exactly the same tensors and thus
could be fused, for example.
:param include: Only consider these indices, by default all indices.
:type include: sequence of str, optional
:param exclude: Ignore these indices, by default the outer indices of this TN.
:type exclude: sequence of str, optional
:returns: A dict mapping the tuple of indices that could be fused to the
tuple of tensor ids they appear on.
:rtype: dict[tuple[str], tuple[int]]
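A rough pure-Python sketch of the grouping logic, on a made-up map of tensor ids to their indices (not the quimb API itself):

```python
from collections import defaultdict

# hypothetical map of tensor id -> indices on that tensor
tensor_inds = {
    0: ("a", "b", "k0"),
    1: ("a", "b", "c", "k1"),  # 'a' and 'b' both join tensors 0 and 1
    2: ("c", "k2"),
}

# invert: index -> tuple of tensor ids it appears on
ind_to_tids = defaultdict(list)
for tid, inds in tensor_inds.items():
    for ix in inds:
        ind_to_tids[ix].append(tid)

# group inner indices by their tensor-id signature; 2+ sharing one signature
# is a 'multibond' that could be fused
groups = defaultdict(list)
for ix, tids in ind_to_tids.items():
    if len(tids) >= 2:  # inner index
        groups[tuple(tids)].append(ix)

multibonds = {tuple(ixs): tids for tids, ixs in groups.items() if len(ixs) >= 2}
print(multibonds)  # {('a', 'b'): (0, 1)}
```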
.. py:method:: get_hyperinds(output_inds=None)
Get a tuple of all 'hyperinds', defined as those indices which don't
appear exactly twice on either the tensors *or* in the 'outer' (i.e.
output) indices.
Note the default set of 'outer' indices is calculated as only those
indices that appear once on the tensors, so these likely need to be
manually specified, otherwise, for example, an index that appears on
two tensors *and* the output will incorrectly be identified as
non-hyper.
:param output_inds: The outer or output index or indices. If not specified then taken
as every index that appears only once on the tensors (and thus
non-hyper).
:type output_inds: None, str or sequence of str, optional
:returns: The tensor network hyperinds.
:rtype: tuple[str]
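A rough sketch of the counting rule, on made-up index data: an appearance in ``output_inds`` counts alongside appearances on tensors, and anything not totalling exactly two is a hyperind:

```python
from collections import Counter

# hypothetical tensors' indices; 'b' appears on three tensors
tensor_inds = [("a", "b"), ("b", "c"), ("b", "d")]
output_inds = ("a", "c", "d")

counts = Counter(ix for inds in tensor_inds for ix in inds)
for ix in output_inds:
    counts[ix] += 1  # an output occurrence counts as one more appearance

hyperinds = tuple(ix for ix, c in counts.items() if c != 2)
print(hyperinds)  # ('b',)
```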
.. py:method:: compute_contracted_inds(*tids, output_inds=None)
Get the indices describing the tensor contraction of tensors
corresponding to ``tids``.
.. py:method:: squeeze(fuse=False, include=None, exclude=None, inplace=False)
Drop singlet bonds and dimensions from this tensor network. If
``fuse=True`` also fuse all multibonds between tensors.
:param fuse: Whether to fuse multibonds between tensors as well as squeezing.
:type fuse: bool, optional
:param include: Only squeeze these indices, by default all indices.
:type include: sequence of str, optional
:param exclude: Ignore these indices, by default the outer indices of this TN.
:type exclude: sequence of str, optional
:param inplace: Whether to perform the squeeze and optional fuse inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. py:attribute:: squeeze_
.. py:method:: isometrize(method='qr', allow_no_left_inds=False, inplace=False)
Project every tensor in this network into an isometric form,
assuming they have ``left_inds`` marked.
:param method: The method used to generate the isometry. The options are:
- "qr": use the Q factor of the QR decomposition of ``x`` with the
constraint that the diagonal of ``R`` is positive.
- "svd": uses ``U @ VH`` of the SVD decomposition of ``x``. This is
useful for finding the 'closest' isometric matrix to ``x``, such
as when it has been expanded with noise etc. But is less stable
for differentiation / optimization.
- "exp": use the matrix exponential of ``x - dag(x)``, first
completing ``x`` with zeros if it is rectangular. This is a good
parametrization for optimization, but more expensive for
non-square ``x``.
- "cayley": use the Cayley transform of ``x - dag(x)``, first
completing ``x`` with zeros if it is rectangular. This is a good
parametrization for optimization (one of the few compatible with
`HIPS/autograd` e.g.), but more expensive for non-square ``x``.
- "householder": use the Householder reflection method directly.
This requires that the backend implements
"linalg.householder_product".
- "torch_householder": use the Householder reflection method
directly, using the ``torch_householder`` package. This requires
that the package is installed and that the backend is
``"torch"``. This is generally the best parametrizing method for
"torch" if available.
- "mgs": use a python implementation of the modified Gram Schmidt
method directly. This is slow if not compiled but a useful
reference.
Not all backends support all methods or differentiating through all
methods.
:type method: str, optional
:param allow_no_left_inds: If ``True`` then allow tensors with no ``left_inds`` to be
left alone, rather than raising an error.
:type allow_no_left_inds: bool, optional
:param inplace: If ``True`` then perform the operation in-place.
:type inplace: bool, optional
:rtype: TensorNetwork
.. py:attribute:: isometrize_
.. py:attribute:: unitize
.. py:attribute:: unitize_
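A minimal numpy sketch of the 'qr' option on a single hypothetical tensor, with the left indices fused into the rows (an illustration of the idea, not the quimb implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(5, 3))  # left_inds fused to size 5, remaining inds to size 3

# 'qr' style: keep the Q factor, fixing signs so that diag(R) would be
# positive, which makes the decomposition unique and smoother to optimize
Q, R = np.linalg.qr(x)
signs = np.sign(np.diag(R))
Q = Q * signs

print(np.allclose(Q.T @ Q, np.eye(3)))  # isometric: dag(Q) @ Q = identity
```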
.. py:method:: randomize(dtype=None, seed=None, inplace=False, **randn_opts)
Randomize every tensor in this TN - see
:meth:`quimb.tensor.tensor_core.Tensor.randomize`.
:param dtype: The data type of the random entries. If left as the default
``None``, then the data type of the current array will be used.
:type dtype: {None, str}, optional
:param seed: Seed for the random number generator.
:type seed: None or int, optional
:param inplace: Whether to perform the randomization inplace, by default ``False``.
:type inplace: bool, optional
:param randn_opts: Supplied to :func:`~quimb.gen.rand.randn`.
:rtype: TensorNetwork
.. py:attribute:: randomize_
.. py:method:: strip_exponent(tid_or_tensor, value=None)
Scale the elements of the tensor corresponding to ``tid`` so that the
norm of the array is some value, which defaults to ``1``. The log of
the scaling factor, base 10, is then accumulated in the ``exponent``
attribute.
:param tid: The tensor identifier or actual tensor.
:type tid: str or Tensor
:param value: The value to scale the norm of the tensor to.
:type value: None or float, optional
.. py:method:: distribute_exponent()
Distribute the exponent ``p`` of this tensor network (i.e.
corresponding to ``tn * 10**p``) equally among all tensors.
.. py:method:: equalize_norms(value=None, inplace=False)
Make the Frobenius norm of every tensor in this TN equal without
changing the overall value if ``value=None``, or set the norm of every
tensor to ``value`` by scalar multiplication only.
:param value: Set the norm of each tensor to this value specifically. If supplied
the change in overall scaling will be accumulated in
``tn.exponent`` in the form of a base 10 power.
:type value: None or float, optional
:param inplace: Whether to perform the norm equalization inplace or not.
:type inplace: bool, optional
:rtype: TensorNetwork
.. py:attribute:: equalize_norms_
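The exponent bookkeeping shared by ``strip_exponent``, ``distribute_exponent`` and ``equalize_norms`` can be sketched with plain numpy (hypothetical tensors; not the quimb API itself):

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical tensors with wildly different scales
tensors = [rng.normal(size=(2, 3)) * 10.0**k for k in (-3, 0, 5)]

exponent = 0.0  # overall scale, tracked as tn * 10**exponent
for i, t in enumerate(tensors):
    nrm = np.linalg.norm(t)
    tensors[i] = t / nrm           # set this tensor's Frobenius norm to 1
    exponent += np.log10(nrm)      # accumulate the stripped scale, base 10

print(all(np.isclose(np.linalg.norm(t), 1.0) for t in tensors))  # True
```

Tracking the scale as a base-10 exponent avoids overflow/underflow when many large or small tensors are multiplied together.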
.. py:method:: balance_bonds(inplace=False)
Apply :func:`~quimb.tensor.tensor_contract.tensor_balance_bond` to
all bonds in this tensor network.
:param inplace: Whether to perform the bond balancing inplace or not.
:type inplace: bool, optional
:rtype: TensorNetwork
.. py:attribute:: balance_bonds_
.. py:method:: fuse_multibonds(gauges=None, include=None, exclude=None, inplace=False)
Fuse any multi-bonds (more than one index shared by the same pair
of tensors) into a single bond.
:param gauges: If supplied, also fuse the gauges contained in this dict.
:type gauges: None or dict[str, array_like], optional
:param include: Only consider these indices, by default all indices.
:type include: sequence of str, optional
:param exclude: Ignore these indices, by default the outer indices of this TN.
:type exclude: sequence of str, optional
.. py:attribute:: fuse_multibonds_
.. py:method:: expand_bond_dimension(new_bond_dim, mode=None, rand_strength=None, rand_dist='normal', inds_to_expand=None, inplace=False)
Increase the dimension of all or some of the bonds in this tensor
network to at least ``new_bond_dim``, optionally adding some random
noise to the new entries.
:param new_bond_dim: The minimum bond dimension to expand to, if the bond dimension is
already larger than this it will be left unchanged.
:type new_bond_dim: int
:param rand_strength: The strength of random noise to add to the new array entries,
if any. The noise is drawn from a normal distribution with
standard deviation ``rand_strength``.
:type rand_strength: float, optional
:param inds_to_expand: The indices to expand, if not all.
:type inds_to_expand: sequence of str, optional
:param inplace: Whether to expand this tensor network in place, or return a new
one.
:type inplace: bool, optional
:rtype: TensorNetwork
.. py:attribute:: expand_bond_dimension_
.. py:method:: flip(inds, inplace=False)
Flip the dimension corresponding to indices ``inds`` on all tensors
that share it.
.. py:attribute:: flip_
.. py:method:: rank_simplify(output_inds=None, equalize_norms=False, cache=None, max_combinations=500, inplace=False)
Simplify this tensor network by performing contractions that don't
increase the rank of any tensors.
:param output_inds: Explicitly set which indices of the tensor network are output
indices and thus should not be modified.
:type output_inds: sequence of str, optional
:param equalize_norms: Actively renormalize the tensors during the simplification process.
Useful for very large TNs. The scaling factor will be stored as an
exponent in ``tn.exponent``.
:type equalize_norms: bool or float
:param cache: Persistent cache used to mark already checked tensors.
:type cache: None or set
:param inplace: Whether to perform the rank simplification inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. seealso:: :obj:`full_simplify`, :obj:`column_reduce`, :obj:`diagonal_reduce`
.. py:attribute:: rank_simplify_
.. py:method:: diagonal_reduce(output_inds=None, atol=1e-12, cache=None, inplace=False)
Find tensors with diagonal structure and collapse those axes. This
will create a tensor 'hyper' network with indices repeated 2+ times, as
such, output indices should be explicitly supplied when contracting, as
they can no longer be automatically inferred. For example:
>>> tn_diag = tn.diagonal_reduce()
>>> tn_diag.contract(all, output_inds=[])
:param output_inds: Which indices to explicitly consider as outer legs of the tensor
network and thus not replace. If not given, these will be taken as
all the indices that appear once.
:type output_inds: sequence of str, optional
:param atol: When identifying diagonal tensors, the absolute tolerance with
which to compare to zero.
:type atol: float, optional
:param cache: Persistent cache used to mark already checked tensors.
:type cache: None or set
:param inplace: Whether to perform the diagonal reduction inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. seealso:: :obj:`full_simplify`, :obj:`rank_simplify`, :obj:`antidiag_gauge`, :obj:`column_reduce`
.. py:attribute:: diagonal_reduce_
.. py:method:: antidiag_gauge(output_inds=None, atol=1e-12, cache=None, inplace=False)
Flip the order of any bonds connected to antidiagonal tensors.
Whilst this is just a gauge fixing (with the gauge being the flipped
identity) it then allows ``diagonal_reduce`` to then simplify those
indices.
:param output_inds: Which indices to explicitly consider as outer legs of the tensor
network and thus not flip. If not given, these will be taken as
all the indices that appear once.
:type output_inds: sequence of str, optional
:param atol: When identifying antidiagonal tensors, the absolute tolerance with
which to compare to zero.
:type atol: float, optional
:param cache: Persistent cache used to mark already checked tensors.
:type cache: None or set
:param inplace: Whether to perform the antidiagonal gauging inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. seealso:: :obj:`full_simplify`, :obj:`rank_simplify`, :obj:`diagonal_reduce`, :obj:`column_reduce`
.. py:attribute:: antidiag_gauge_
.. py:method:: column_reduce(output_inds=None, atol=1e-12, cache=None, inplace=False)
Find bonds on this tensor network which have tensors where all but
one column (of the respective index) is zero, allowing the
'cutting' of that bond.
:param output_inds: Which indices to explicitly consider as outer legs of the tensor
network and thus not slice. If not given, these will be taken as
all the indices that appear once.
:type output_inds: sequence of str, optional
:param atol: When identifying singlet column tensors, the absolute tolerance
with which to compare to zero.
:type atol: float, optional
:param cache: Persistent cache used to mark already checked tensors.
:type cache: None or set
:param inplace: Whether to perform the column reductions inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. seealso:: :obj:`full_simplify`, :obj:`rank_simplify`, :obj:`diagonal_reduce`, :obj:`antidiag_gauge`
.. py:attribute:: column_reduce_
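A rough numpy sketch of the detection step on a made-up tensor: an axis along which all but one slice is (numerically) zero can be sliced down to that single column, i.e. the bond can be cut:

```python
import numpy as np

# hypothetical tensor where, along axis 1, only column 2 is non-zero
T = np.zeros((3, 4, 2))
T[:, 2, :] = np.arange(1, 7).reshape(3, 2)

# find axes where exactly one slice is non-zero -> that bond can be 'cut'
for ax in range(T.ndim):
    flat = np.moveaxis(T, ax, 0).reshape(T.shape[ax], -1)
    nonzero = np.flatnonzero(np.abs(flat).max(axis=1) > 1e-12)
    if len(nonzero) == 1:
        print(f"axis {ax} reduces to column {nonzero[0]}")
```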
.. py:method:: split_simplify(atol=1e-12, equalize_norms=False, cache=None, inplace=False, **split_opts)
Find tensors which have low rank SVD decompositions across any
combination of bonds and perform them.
:param atol: Cutoff used when attempting low rank decompositions.
:type atol: float, optional
:param equalize_norms: Actively renormalize the tensors during the simplification process.
Useful for very large TNs. The scaling factor will be stored as an
exponent in ``tn.exponent``.
:type equalize_norms: bool or float
:param cache: Persistent cache used to mark already checked tensors.
:type cache: None or set
:param inplace: Whether to perform the split simplification inplace.
:type inplace: bool, optional
.. py:attribute:: split_simplify_
.. py:method:: gen_loops(max_loop_length=None)
Generate sequences of tids that represent loops in the TN.
:param max_loop_length: Set the maximum number of tensors that can appear in a loop. If
``None``, wait until any loop is found and set that as the
maximum length.
:type max_loop_length: None or int
:Yields: *tuple[int]*
.. seealso:: :obj:`gen_inds_loops`
.. py:method:: gen_inds_loops(max_loop_length=None)
Generate all sequences of indices, up to a specified length, that
represent loops in this tensor network. Unlike ``gen_loops`` this
function will return the indices of the tensors in the loop rather
than the tensor ids, allowing one to differentiate between e.g. a
double loop and a 'figure of eight' loop.
:param max_loop_length: Set the maximum number of indices that can appear in a loop. If
``None``, wait until any loop is found and set that as the
maximum length.
:type max_loop_length: None or int
:Yields: *tuple[str]*
.. seealso:: :obj:`gen_loops`, :obj:`gen_inds_connected`
.. py:method:: gen_inds_connected(max_length)
Generate all index 'patches' of size up to ``max_length``.
:param max_length: The maximum number of indices in the patch.
:type max_length: int
:Yields: *tuple[str]*
.. seealso:: :obj:`gen_inds_loops`
.. py:method:: _get_string_between_tids(tida, tidb)
.. py:method:: tids_are_connected(tids)
Check whether nodes ``tids`` are connected.
:param tids: Nodes to check.
:type tids: sequence of int
:rtype: bool
.. py:method:: compute_shortest_distances(tids=None, exclude_inds=())
Compute the minimum graph distances between all or some nodes
``tids``.
.. py:method:: compute_hierarchical_linkage(tids=None, method='weighted', optimal_ordering=True, exclude_inds=())
.. py:method:: compute_hierarchical_ssa_path(tids=None, method='weighted', optimal_ordering=True, exclude_inds=(), are_sorted=False, linkage=None)
Compute a hierarchical grouping of ``tids``, as a ``ssa_path``.
.. py:method:: compute_hierarchical_ordering(tids=None, method='weighted', optimal_ordering=True, exclude_inds=(), linkage=None)
.. py:method:: compute_hierarchical_grouping(max_group_size, tids=None, method='weighted', optimal_ordering=True, exclude_inds=(), linkage=None)
Group ``tids`` (by default, all tensors) into groups of size
``max_group_size`` or less, using a hierarchical clustering.
.. py:method:: pair_simplify(cutoff=1e-12, output_inds=None, max_inds=10, cache=None, equalize_norms=False, max_combinations=500, inplace=False, **split_opts)
.. py:attribute:: pair_simplify_
.. py:method:: loop_simplify(output_inds=None, max_loop_length=None, max_inds=10, cutoff=1e-12, loops=None, cache=None, equalize_norms=False, inplace=False, **split_opts)
Try and simplify this tensor network by identifying loops and
checking for low-rank decompositions across groupings of the loops
outer indices.
:param max_loop_length: Largest length of loop to search for, if not set, the size will be
set to the length of the first (and shortest) loop found.
:type max_loop_length: None or int, optional
:param cutoff: Cutoff to use for the operator decomposition.
:type cutoff: float, optional
:param loops: Loops to check, or a function that generates them.
:type loops: None, sequence or callable
:param cache: For performance reasons can supply a cache for already checked
loops.
:type cache: set, optional
:param inplace: Whether to replace the loops inplace.
:type inplace: bool, optional
:param split_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_split`.
:rtype: TensorNetwork
.. py:attribute:: loop_simplify_
.. py:method:: full_simplify(seq='ADCR', output_inds=None, atol=1e-12, equalize_norms=False, cache=None, inplace=False, progbar=False, rank_simplify_opts=None, loop_simplify_opts=None, split_simplify_opts=None, custom_methods=(), split_method='svd')
Perform a series of tensor network 'simplifications' in a loop until
there is no more reduction in the number of tensors or indices. Note
that apart from rank-reduction, the simplification methods make use of
the non-zero structure of the tensors, and thus changes to this will
potentially produce different simplifications.
:param seq:
Which simplifications and which order to perform them in.
* ``'A'`` : stands for ``antidiag_gauge``
* ``'D'`` : stands for ``diagonal_reduce``
* ``'C'`` : stands for ``column_reduce``
* ``'R'`` : stands for ``rank_simplify``
* ``'S'`` : stands for ``split_simplify``
* ``'L'`` : stands for ``loop_simplify``
If you want to keep the tensor network 'simple', i.e. with no
hyperedges, then don't use ``'D'`` (moreover ``'A'`` is redundant).
:type seq: str, optional
:param output_inds: Explicitly set which indices of the tensor network are output
indices and thus should not be modified. If not specified the
tensor network is assumed to be a 'standard' one where indices that
only appear once are the output indices.
:type output_inds: sequence of str, optional
:param atol: The absolute tolerance when identifying zero entries of tensors
and performing low-rank decompositions.
:type atol: float, optional
:param equalize_norms: Actively renormalize the tensors during the simplification process.
Useful for very large TNs. If `True`, the norms, in the form of
stripped exponents, will be redistributed at the end. If an actual
number, the final tensors will all have this norm, and the scaling
factor will be stored as a base-10 exponent in ``tn.exponent``.
:type equalize_norms: bool or float
:param cache: A persistent cache for each simplification process to mark
already processed tensors.
:type cache: None or set
:param progbar: Show a live progress bar of the simplification process.
:type progbar: bool, optional
:param inplace: Whether to perform the simplification inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. seealso:: :obj:`diagonal_reduce`, :obj:`rank_simplify`, :obj:`antidiag_gauge`, :obj:`column_reduce`, :obj:`split_simplify`, :obj:`loop_simplify`
.. py:attribute:: full_simplify_
.. py:method:: hyperinds_resolve(mode='dense', sorter=None, output_inds=None, inplace=False)
Convert this into a regular tensor network, where all indices
appear at most twice, by inserting COPY tensors or tensor networks
for each hyper index.
:param mode: What type of COPY tensor(s) to insert.
:type mode: {'dense', 'mps', 'tree'}, optional
:param sorter: If given, a function to sort the indices that a single hyperindex
will be turned into. The function is called like
``tids.sort(key=sorter)``.
:type sorter: None or callable, optional
:param inplace: Whether to insert the COPY tensors inplace.
:type inplace: bool, optional
:rtype: TensorNetwork
.. py:attribute:: hyperinds_resolve_
.. py:method:: compress_simplify(output_inds=None, atol=1e-06, simplify_sequence_a='ADCRS', simplify_sequence_b='RPL', hyperind_resolve_mode='tree', hyperind_resolve_sort='clustering', final_resolve=False, split_method='svd', max_simplification_iterations=100, converged_tol=0.01, equalize_norms=True, progbar=False, inplace=False, **full_simplify_opts)
.. py:attribute:: compress_simplify_
.. py:method:: max_bond()
Return the size of the largest bond in this network.
.. py:property:: shape
Actual, i.e. exterior, shape of this TensorNetwork.
.. py:property:: dtype
The dtype of this TensorNetwork: the minimal common type
of all the tensors' data.
.. py:method:: iscomplex()
.. py:method:: astype(dtype, inplace=False)
Convert the type of all tensors in this network to ``dtype``.
.. py:attribute:: astype_
.. py:method:: __getstate__()
Helper for pickle.
.. py:method:: __setstate__(state)
.. py:method:: _repr_info()
General info to show in various reprs. Subclasses can add more
relevant info to this dict.
.. py:method:: _repr_info_str()
Render the general info as a string.
.. py:method:: _repr_html_()
Render this TensorNetwork as HTML, for Jupyter notebooks.
.. py:method:: __str__()
Return str(self).
.. py:method:: __repr__()
Return repr(self).
.. py:attribute:: draw
.. py:attribute:: draw_3d
.. py:attribute:: draw_interactive
.. py:attribute:: draw_3d_interactive
.. py:attribute:: graph
.. py:attribute:: visualize_tensors
.. py:function:: ensure_dict(x)
Make sure ``x`` is a ``dict``, creating an empty one if ``x is None``.
.. py:function:: rand_uuid(base='')
Return a guaranteed unique, shortish identifier, optionally appended
to ``base``.
.. rubric:: Examples
>>> rand_uuid()
'_2e1dae1b'
>>> rand_uuid('virt-bond')
'virt-bond_bf342e68'
.. py:function:: tensor_contract(*tensors, output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, drop_tags=False, **contract_opts)
Contract a collection of tensors into a scalar or tensor, automatically
aligning their indices and computing an optimized contraction path.
The output tensor will have the union of tags from the input tensors.
:param tensors: The tensors to contract.
:type tensors: sequence of Tensor
:param output_inds: The output indices. These can be inferred if the contraction has no
'hyper' indices, in which case the output indices are those that appear
only once in the input indices, and ordered as they appear in the
inputs. For hyper indices or a specific ordering, these must be
supplied.
:type output_inds: sequence of str
:param optimize:
The contraction path optimization strategy to use.
- ``None``: use the default strategy,
- str: use the preset strategy with the given name,
- path_like: use this exact path,
- ``cotengra.HyperOptimizer``: find the contraction using this
optimizer, supports slicing,
- ``cotengra.ContractionTree``: use this exact tree, supports
slicing,
- ``opt_einsum.PathOptimizer``: find the path using this optimizer.
Contraction with ``cotengra`` might be a bit more efficient but the
main reason would be to handle sliced contraction automatically, as
well as the fact that it uses ``autoray`` internally.
:type optimize: {None, str, path_like, PathOptimizer}, optional
:param get:
What to return. If:
* ``None`` (the default) - return the resulting scalar or Tensor.
* ``'expression'`` - return a callable expression that performs
the contraction and operates on the raw arrays.
* ``'tree'`` - return the ``cotengra.ContractionTree`` describing
the contraction.
* ``'path'`` - return the raw 'path' as a list of tuples.
* ``'symbol-map'`` - return the dict mapping indices to 'symbols'
(single unicode letters) used internally by ``cotengra``.
* ``'path-info'`` - return the ``opt_einsum.PathInfo`` path
object with detailed information such as flop cost. The
symbol-map is also added to the ``quimb_symbol_map`` attribute.
:type get: str, optional
:param backend: Which backend to use to perform the contraction. Supplied to
`cotengra`.
:type backend: {'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional
:param preserve_tensor: Whether to return a tensor regardless of whether the output object
is a scalar (has no indices) or not.
:type preserve_tensor: bool, optional
:param drop_tags: Whether to drop all tags from the output tensor. By default the output
tensor will keep the union of all tags from the input tensors.
:type drop_tags: bool, optional
:param contract_opts: Passed to ``cotengra.array_contract``.
:rtype: scalar or Tensor
.. py:function:: enforce_1d_like(tn, site_tags=None, fix_bonds=True, inplace=False)
Check that ``tn`` is 1D-like with OBC, i.e. that 1) each tensor has
exactly one of the given ``site_tags``, 2) there are no hyper indices, and
3) there are only bonds within sites or between nearest neighbor sites. A
violation of (3) can optionally be fixed automatically by inserting a
string of identity tensors; any other violation raises a ValueError.
:param tn: The tensor network to check.
:type tn: TensorNetwork
:param site_tags: The tags to use to group and order the tensors from ``tn``. If not
given, uses ``tn.site_tags``.
:type site_tags: sequence of str, optional
:param fix_bonds: Whether to fix the bond structure by inserting identity tensors.
:type fix_bonds: bool, optional
:param inplace: Whether to perform the fix inplace or not.
:type inplace: bool, optional
:raises ValueError: If the tensor network is not 1D-like.
.. py:function:: possibly_permute_(tn, permute_arrays)
.. py:function:: tensor_network_1d_compress_direct(tn, max_bond=None, cutoff=1e-10, site_tags=None, normalize=False, canonize=True, cutoff_mode='rsum2', permute_arrays=True, optimize='auto-hq', sweep_reverse=False, equalize_norms=False, inplace=False, **compress_opts)
Compress a 1D-like tensor network using the 'direct' or 'naive' method,
that is, explicitly contracting site-wise to form an MPS-like TN,
canonicalizing in one direction, then compressing in the other. This has
the same scaling as the density matrix (dm) method, but a larger prefactor.
However, it can still be faster for small bond dimensions, and is
potentially higher precision since it works in the space of singular values
directly rather than singular values squared. It is not quite optimal in
terms of error due to the compounding errors of the SVDs.
:param tn: The tensor network to compress. Every tensor should have exactly one of
the site tags. Each site can have multiple tensors and output indices.
:type tn: TensorNetwork
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int
:param cutoff: A dynamic threshold for discarding singular values when compressing.
:type cutoff: float, optional
:param site_tags: The tags to use to group and order the tensors from ``tn``. If not
given, uses ``tn.site_tags``. The tensor network built will have one
tensor per site, in the order given by ``site_tags``.
:type site_tags: sequence of str, optional
:param normalize: Whether to normalize the final tensor network, making use of the fact
that the output tensor network is in right canonical form.
:type normalize: bool, optional
:param canonize: Whether to canonicalize the network in one direction before compressing
in the other.
:type canonize: bool, optional
:param cutoff_mode: The mode to use when truncating the singular values of the decomposed
tensors. See :func:`~quimb.tensor.tensor_split`.
:type cutoff_mode: {"rsum2", "rel", ...}, optional
:param permute_arrays: Whether to permute the array indices of the final tensor network into
canonical order. If ``True`` will use the default order, otherwise if a
string this specifies a custom order.
:type permute_arrays: bool or str, optional
:param optimize: The contraction path optimizer to use.
:type optimize: str, optional
:param sweep_reverse: Whether to sweep in the reverse direction, resulting in a left
canonical form instead of right canonical.
:type sweep_reverse: bool, optional
:param equalize_norms: Whether to renormalize the tensors during the compression procedure.
If ``True`` the gathered exponent will be redistributed equally among
the tensors. If a float, all tensors will be renormalized to this
value, and the gathered exponent is tracked in ``tn.exponent`` of the
returned tensor network.
:type equalize_norms: bool, optional
:param inplace: Whether to perform the compression inplace or not.
:type inplace: bool, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_split`.
:returns: The compressed tensor network, with canonical center at
``site_tags[0]`` ('right canonical' form) or ``site_tags[-1]`` ('left
canonical' form) if ``sweep_reverse``.
:rtype: TensorNetwork
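To make the procedure concrete, here is a minimal, self-contained numpy sketch of the same idea for a plain MPS, with site arrays of shape ``(left_bond, phys, right_bond)``. This is an illustration only, not the quimb implementation, and ``direct_compress`` is a hypothetical helper name:

```python
import numpy as np

def direct_compress(arrays, max_bond):
    """Sketch of the 'direct' method on plain MPS arrays of shape
    (left_bond, phys, right_bond): QR-canonicalize left to right,
    then truncate with SVDs sweeping right to left."""
    arrays = [a.copy() for a in arrays]
    # canonicalization sweep: QR each site, absorb R into the next site
    for i in range(len(arrays) - 1):
        l, p, r = arrays[i].shape
        q, rf = np.linalg.qr(arrays[i].reshape(l * p, r))
        arrays[i] = q.reshape(l, p, -1)
        arrays[i + 1] = np.einsum("ab,bpc->apc", rf, arrays[i + 1])
    # compression sweep: truncated SVD, absorb U @ diag(s) into previous site
    for i in range(len(arrays) - 1, 0, -1):
        l, p, r = arrays[i].shape
        u, s, vh = np.linalg.svd(arrays[i].reshape(l, p * r), full_matrices=False)
        k = min(max_bond, s.size)
        arrays[i] = vh[:k].reshape(k, p, r)
        arrays[i - 1] = np.einsum("apb,bc->apc", arrays[i - 1], u[:, :k] * s[:k])
    return arrays  # right-canonical, all bonds <= max_bond
```

When ``max_bond`` is at least the rank at every cut, the sweep is exact; truncation error enters only through the discarded singular values.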
.. py:function:: tensor_network_1d_compress_dm(tn, max_bond=None, cutoff=1e-10, site_tags=None, normalize=False, cutoff_mode='rsum1', permute_arrays=True, optimize='auto-hq', sweep_reverse=False, canonize=True, equalize_norms=False, inplace=False, **compress_opts)
Compress any 1D-like tensor network using the 'density matrix' method
(https://tensornetwork.org/mps/algorithms/denmat_mpo_mps/).
While this has the same scaling as the direct method, in practice it can
often be faster, especially at large bond dimensions. Potentially there are
some situations where the direct method is more stable with regard to
precision, since the density matrix method works in the 'squared' picture.
:param tn: The tensor network to compress. Every tensor should have exactly one of
the site tags. Each site can have multiple tensors and output indices.
:type tn: TensorNetwork
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int
:param cutoff: The truncation error to use when compressing the double layer tensor
network.
:type cutoff: float, optional
:param site_tags: The tags to use to group and order the tensors from ``tn``. If not
given, uses ``tn.site_tags``. The tensor network built will have one
tensor per site, in the order given by ``site_tags``.
:type site_tags: sequence of str, optional
:param normalize: Whether to normalize the final tensor network, making use of the fact
that the output tensor network is in right canonical form.
:type normalize: bool, optional
:param cutoff_mode: The mode to use when truncating the singular values of the decomposed
tensors. See :func:`~quimb.tensor.tensor_split`. Note for the density
matrix method the default 'rsum1' mode acts like 'rsum2' for the direct
method due to truncating in the squared space.
:type cutoff_mode: {"rsum1", "rel", ...}, optional
:param permute_arrays: Whether to permute the array indices of the final tensor network into
canonical order. If ``True`` will use the default order, otherwise if a
string this specifies a custom order.
:type permute_arrays: bool or str, optional
:param optimize: The contraction path optimizer to use.
:type optimize: str, optional
:param sweep_reverse: Whether to sweep in the reverse direction, resulting in a left
canonical form instead of right canonical.
:type sweep_reverse: bool, optional
:param canonize: Dummy argument to match the signature of other compression methods.
:type canonize: bool, optional
:param equalize_norms: Whether to equalize the norms of the tensors after compression. If an
explicit value is given, then the norms will be set to that value, and
the overall scaling factor will be accumulated into `.exponent`.
:type equalize_norms: bool or float, optional
:param inplace: Whether to perform the compression inplace or not.
:type inplace: bool, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_split`.
:returns: The compressed tensor network, with canonical center at
``site_tags[0]`` ('right canonical' form) or ``site_tags[-1]`` ('left
canonical' form) if ``sweep_reverse``.
:rtype: TensorNetwork
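The core of the method can be sketched in self-contained numpy for a plain MPS (arrays of shape ``(left_bond, phys, right_bond)``): accumulate right environments in the 'squared' picture, then sweep left to right, diagonalizing a local reduced density matrix at each site to obtain the truncated isometry. Again this is only an illustration under those assumptions, and ``dm_compress`` is a hypothetical name:

```python
import numpy as np

def dm_compress(arrays, max_bond):
    """Sketch of the density matrix method on plain MPS arrays."""
    n = len(arrays)
    # right environments in the squared (density matrix) picture
    G = [None] * (n + 1)
    G[n] = np.ones((1, 1))
    for i in range(n - 1, -1, -1):
        G[i] = np.einsum("lpr,rs,mps->lm", arrays[i], G[i + 1], arrays[i].conj())
    out = []
    C = np.ones((1, 1))  # carried left factor: (new_bond, old_bond)
    for i in range(n):
        M = np.einsum("ab,bpc->apc", C, arrays[i])
        l, p, r = M.shape
        # local reduced density matrix on the (left_bond * phys) space
        rho = np.einsum(
            "apr,rs,bqs->apbq", M, G[i + 1], M.conj()
        ).reshape(l * p, l * p)
        w, U = np.linalg.eigh(rho)
        k = min(max_bond, l * p)
        U = U[:, ::-1][:, :k]  # keep eigenvectors of the k largest eigenvalues
        out.append(U.reshape(l, p, k))
        C = np.einsum("lpk,lpr->kr", U.conj().reshape(l, p, k), M)
    # absorb the leftover factor so the state is reproduced
    out[-1] = np.einsum("lpk,kr->lpr", out[-1], C)
    return out  # left-canonical, all bonds <= max_bond
```

Note the eigenvalues of ``rho`` are the squared singular values of the corresponding bipartition, which is why truncating here corresponds to a 'squared' cutoff mode.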
.. py:function:: tensor_network_1d_compress_zipup(tn, max_bond=None, cutoff=1e-10, site_tags=None, canonize=True, normalize=False, cutoff_mode='rsum2', permute_arrays=True, optimize='auto-hq', sweep_reverse=False, equalize_norms=False, inplace=False, **compress_opts)
Compress a 1D-like tensor network using the 'zip-up' algorithm due to
'Minimally Entangled Typical Thermal State Algorithms', E.M. Stoudenmire &
Steven R. White (https://arxiv.org/abs/1002.1305). The returned tensor
network will have one tensor per site, in the order given by ``site_tags``,
with canonical center at ``site_tags[0]`` ('right' canonical form).
The zipup algorithm scales better than the direct and density matrix
methods when multiple tensors are present at each site (such as MPO-MPS
multiplication), but is less accurate due to the compressions taking place
in only a pseudo-canonical gauge. It generally also only makes sense in
the fixed bond dimension case, as opposed to relying on a specific
`cutoff` only.
:param tn: The tensor network to compress. Every tensor should have exactly one of
the site tags. Each site can have multiple tensors and output indices.
:type tn: TensorNetwork
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int
:param cutoff: A dynamic threshold for discarding singular values when compressing.
:type cutoff: float, optional
:param site_tags: The tags to use to group and order the tensors from ``tn``. If not
given, uses ``tn.site_tags``. The tensor network built will have one
tensor per site, in the order given by ``site_tags``.
:type site_tags: sequence of str, optional
:param canonize: Whether to pseudo canonicalize the initial tensor network.
:type canonize: bool, optional
:param normalize: Whether to normalize the final tensor network, making use of the fact
that the output tensor network is in right canonical form.
:type normalize: bool, optional
:param cutoff_mode: The mode to use when truncating the singular values of the decomposed
tensors. See :func:`~quimb.tensor.tensor_split`.
:type cutoff_mode: {"rsum2", "rel", ...}, optional
:param permute_arrays: Whether to permute the array indices of the final tensor network into
canonical order. If ``True`` will use the default order, otherwise if a
string this specifies a custom order.
:type permute_arrays: bool or str, optional
:param optimize: The contraction path optimizer to use.
:type optimize: str, optional
:param sweep_reverse: Whether to sweep in the reverse direction, resulting in a left
canonical form instead of right canonical.
:type sweep_reverse: bool, optional
:param equalize_norms: Whether to equalize the norms of the tensors after compression. If an
explicit value is given, then the norms will be set to that value, and
the overall scaling factor will be accumulated into `.exponent`.
:type equalize_norms: bool or float, optional
:param inplace: Whether to perform the compression inplace or not.
:type inplace: bool, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_split`.
:returns: The compressed tensor network, with canonical center at
``site_tags[0]`` ('right canonical' form) or ``site_tags[-1]`` ('left
canonical' form) if ``sweep_reverse``.
:rtype: TensorNetwork
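The essential move can be sketched in self-contained numpy for a plain MPO-MPS product (MPO arrays of shape ``(left, phys_out, phys_in, right)``, MPS arrays of shape ``(left, phys, right)``): the truncated SVD is 'zipped' along the chain via a small carried tensor, so the full ``chi_mps * chi_mpo`` bond is never formed on more than one site at a time. This is an illustration only, and ``zipup_apply`` is a hypothetical name:

```python
import numpy as np

def zipup_apply(mpo, mps, max_bond):
    """Sketch of zip-up MPO-MPS application on plain arrays."""
    out = []
    C = np.ones((1, 1, 1))  # carried tensor: (new_bond, mpo_bond, mps_bond)
    for W, A in zip(mpo, mps):
        # contract carried tensor with the next MPO and MPS sites
        T = np.einsum("awl,wpqx,lqr->apxr", C, W, A)
        a, p, x, r = T.shape
        # truncated SVD across (new_bond*phys_out | mpo_bond*mps_bond)
        u, s, vh = np.linalg.svd(T.reshape(a * p, x * r), full_matrices=False)
        k = min(max_bond, s.size)
        out.append(u[:, :k].reshape(a, p, k))
        C = (s[:k, None] * vh[:k]).reshape(k, x, r)
    # boundary: remaining carried tensor has trivial bonds, absorb it
    out[-1] = np.einsum("apk,kxr->apr", out[-1], C)
    return out
```

Because the right part of the chain is uncontracted when each SVD happens, the gauge is only pseudo-canonical, which is the accuracy caveat noted above.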
.. py:function:: tensor_network_1d_compress_zipup_first(tn, max_bond=None, max_bond_zipup=None, cutoff=1e-10, cutoff_zipup=None, site_tags=None, canonize=True, normalize=False, cutoff_mode='rsum2', permute_arrays=True, optimize='auto-hq', sweep_reverse=False, equalize_norms=False, inplace=False, **compress_opts)
Compress this 1D-like tensor network using the 'zip-up first' algorithm,
that is, first compressing the tensor network to a larger bond dimension
using the 'zip-up' algorithm, then compressing to the desired bond
dimension using a direct sweep.
Depending on the values of ``max_bond`` and ``max_bond_zipup``, this can
scale better than the direct and density matrix methods, while reaching
close to the same accuracy. As with the 'zip-up' method, there is no advantage
unless there are multiple tensors per site, and it generally only makes
sense in the fixed bond dimension case, as opposed to relying on a
specific `cutoff` only.
:param tn: The tensor network to compress. Every tensor should have exactly one of
the site tags. Each site can have multiple tensors and output indices.
:type tn: TensorNetwork
:param max_bond: The final maximum bond dimension to compress to.
:type max_bond: int
:param max_bond_zipup: The intermediate maximum bond dimension to compress to using the
'zip-up' algorithm. If not given and `max_bond` is, this is set as
twice the target bond dimension, ``2 * max_bond``.
:type max_bond_zipup: int, optional
:param cutoff: A dynamic threshold for discarding singular values when compressing.
:type cutoff: float, optional
:param cutoff_zipup: A dynamic threshold for discarding singular values when compressing to
the intermediate bond dimension using the 'zip-up' algorithm. If not
given, this is set to the same as ``cutoff`` if a maximum bond is
given, else ``cutoff / 10``.
:type cutoff_zipup: float, optional
:param site_tags: The tags to use to group and order the tensors from ``tn``. If not
given, uses ``tn.site_tags``. The tensor network built will have one
tensor per site, in the order given by ``site_tags``.
:type site_tags: sequence of str, optional
:param canonize: Whether to pseudo canonicalize the initial tensor network.
:type canonize: bool, optional
:param normalize: Whether to normalize the final tensor network, making use of the fact
that the output tensor network is in right canonical form.
:type normalize: bool, optional
:param cutoff_mode: The mode to use when truncating the singular values of the decomposed
tensors. See :func:`~quimb.tensor.tensor_split`.
:type cutoff_mode: {"rsum2", "rel", ...}, optional
:param permute_arrays: Whether to permute the array indices of the final tensor network into
canonical order. If ``True`` will use the default order, otherwise if a
string this specifies a custom order.
:type permute_arrays: bool or str, optional
:param optimize: The contraction path optimizer to use.
:type optimize: str, optional
:param sweep_reverse: Whether to sweep in the reverse direction, resulting in a left
canonical form instead of right canonical.
:type sweep_reverse: bool, optional
:param equalize_norms: Whether to equalize the norms of the tensors after compression. If an
explicit value is given, then the norms will be set to that value, and
the overall scaling factor will be accumulated into `.exponent`.
:type equalize_norms: bool or float, optional
:param inplace: Whether to perform the compression inplace or not.
:type inplace: bool, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_split`.
:returns: The compressed tensor network, with canonical center at
``site_tags[0]`` ('right canonical' form) or ``site_tags[-1]`` ('left
canonical' form) if ``sweep_reverse``.
:rtype: TensorNetwork
.. py:function:: _tn1d_fit_sum_sweep_1site(tn_fit, tn_overlaps, site_tags, max_bond=None, cutoff=0.0, envs=None, prepare=True, reverse=False, compute_tdiff=True, optimize='auto-hq')
Core sweep of the 1-site 1D fit algorithm.
.. py:function:: _tn1d_fit_sum_sweep_2site(tn_fit, tn_overlaps, site_tags, max_bond=None, cutoff=1e-10, envs=None, prepare=True, reverse=False, optimize='auto-hq', compute_tdiff=True, **compress_opts)
Core sweep of the 2-site 1D fit algorithm.
.. py:function:: tensor_network_1d_compress_fit(tns, max_bond=None, cutoff=None, tn_fit=None, bsz='auto', initial_bond_dim=8, max_iterations=10, tol=0.0, site_tags=None, cutoff_mode='rsum2', sweep_sequence='RL', normalize=False, permute_arrays=True, optimize='auto-hq', canonize=True, sweep_reverse=False, equalize_norms=False, inplace_fit=False, inplace=False, progbar=False, **compress_opts)
Compress any 1D-like (can have multiple tensors per site) tensor network
or sum of tensor networks to an exactly 1D (one tensor per site) tensor
network of bond dimension `max_bond` using the 1-site or 2-site variational
fitting (or 'DMRG-style') method. The tensor network(s) can have arbitrary
inner and outer structure.
This method has the lowest scaling of the standard 1D compression methods
and can also provide the most accurate compression, but the actual speed
and accuracy depend on the number of iterations required and the initial
guess,
making it a more 'hands-on' method.
It's also the only method to support fitting to a sum of tensor networks
directly, rather than having to form the explicitly summed TN first.
:param tns: The tensor network or tensor networks to compress. Each tensor network
should have the same outer index structure, and within each tensor
network every tensor should have exactly one of the site tags.
:type tns: TensorNetwork or Sequence[TensorNetwork]
:param max_bond: The maximum bond dimension to compress to. If not given, this is set
as the maximum bond dimension of the initial guess tensor network, if
any, else infinite for ``bsz=2``.
:type max_bond: int
:param cutoff: A dynamic threshold for discarding singular values when compressing.
This is only relevant for the 2-site sweeping algorithm (``bsz=2``),
where it defaults to 1e-10.
:type cutoff: float, optional
:param tn_fit: An initial guess for the compressed tensor network. It should have matching
outer indices and site tags with ``tn``. If a `dict`, this is assumed
to be options to supply to `tensor_network_1d_compress` to construct
the initial guess, inheriting various defaults like `initial_bond_dim`.
If a string, e.g. ``"zipup"``, this is shorthand for that compression
method with default settings. If not given, a random 1D tensor network
will be used.
:type tn_fit: TensorNetwork, dict, or str, optional
:param bsz: The size of the block to optimize while sweeping. If ``"auto"``, this
will be inferred from the value of ``max_bond`` and ``cutoff``.
:type bsz: {"auto", 1, 2}, optional
:param initial_bond_dim: The initial bond dimension to use when creating the initial guess. This
is only relevant if ``tn_fit`` is not given. For each sweep the allowed
bond dimension is doubled, up to ``max_bond``. For 1-site this occurs
via explicit bond expansion, while for 2-site it occurs during the
2-site tensor decomposition.
:type initial_bond_dim: int, optional
:param max_iterations: The maximum number of variational sweeps to perform.
:type max_iterations: int, optional
:param tol: The convergence tolerance, in terms of the normalized local tensor
distance. If zero, exactly ``max_iterations`` sweeps will be performed.
:type tol: float, optional
:param site_tags: The tags to use to group and order the tensors from ``tn``. If not
given, uses ``tn.site_tags``. The tensor network built will have one
tensor per site, in the order given by ``site_tags``.
:type site_tags: sequence of str, optional
:param cutoff_mode: The mode to use when truncating the singular values of the decomposed
tensors. See :func:`~quimb.tensor.tensor_split`, if using the 2-site
sweeping algorithm.
:type cutoff_mode: {"rsum2", "rel", ...}, optional
:param sweep_sequence: The sequence of sweeps to perform, e.g. ``"LR"`` means first sweep left
to right, then right to left. The sequence is cycled.
:type sweep_sequence: str, optional
:param normalize: Whether to normalize the final tensor network, making use of the fact
that the output tensor network is in left or right canonical form.
:type normalize: bool, optional
:param permute_arrays: Whether to permute the array indices of the final tensor network into
canonical order. If ``True`` will use the default order, otherwise if a
string this specifies a custom order.
:type permute_arrays: bool or str, optional
:param optimize: The contraction path optimizer to use.
:type optimize: str, optional
:param canonize: Dummy argument to match the signature of other compression methods.
:type canonize: bool, optional
:param sweep_reverse: Whether to sweep in the reverse direction, swapping whether the final
tensor network is in right or left canonical form, which also depends
on the last sweep direction.
:type sweep_reverse: bool, optional
:param equalize_norms: Whether to equalize the norms of the tensors after compression. If an
explicit value is given, then the norms will be set to that value, and
the overall scaling factor will be accumulated into `.exponent`.
:type equalize_norms: bool or float, optional
:param inplace_fit: Whether to perform the compression inplace on the initial guess tensor
network, ``tn_fit``, if supplied.
:type inplace_fit: bool, optional
:param inplace: Whether to perform the compression inplace on the target tensor network
supplied, or ``tns[0]`` if a sequence to sum is supplied.
:type inplace: bool, optional
:param progbar: Whether to show a progress bar. Note the progress bar shows the maximum
change of any single tensor norm, *not* the global change in norm or
truncation error.
:type progbar: bool, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_split`, if using the 2-site
sweeping algorithm.
:returns: The compressed tensor network. Depending on ``sweep_reverse`` and the
last sweep direction, the canonical center will be at either L:
``site_tags[0]`` or R: ``site_tags[-1]``, or the opposite if
``sweep_reverse``.
:rtype: TensorNetwork
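As a simplified illustration of the sweeping structure, the following self-contained numpy sketch fits one plain MPS onto another with 1-site local updates (real arrays only; it omits the bond growth, environment caching, sums of TNs, and convergence checks of the real routine, and ``fit_1site_sweep`` is a hypothetical name):

```python
import numpy as np

def fit_1site_sweep(A, B, n_sweeps=2):
    """Variationally fit guess MPS ``B`` (whose bond dims set the
    compression) to target MPS ``A`` via DMRG-style 1-site updates."""
    n = len(A)
    for _ in range(n_sweeps):
        # right overlap environments <B|A> with the current guess
        R = [None] * (n + 1)
        R[n] = np.ones((1, 1))
        for i in range(n - 1, -1, -1):
            R[i] = np.einsum("xpy,apb,yb->xa", B[i].conj(), A[i], R[i + 1])
        # left-to-right sweep: local update, then QR to shift the center
        L = np.ones((1, 1))
        for i in range(n):
            Bi = np.einsum("xa,apb,yb->xpy", L, A[i], R[i + 1])
            if i < n - 1:
                x, p, y = Bi.shape
                q, _ = np.linalg.qr(Bi.reshape(x * p, y))
                B[i] = q.reshape(x, p, -1)
                L = np.einsum("xpy,xa,apb->yb", B[i].conj(), L, A[i])
            else:
                B[i] = Bi  # canonical center ends at the last site
    return B
```

The maximum bond dimension here is set implicitly by the shapes of the initial guess, mirroring the behavior documented above when ``max_bond`` is not given.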
.. py:data:: _TN1D_COMPRESS_METHODS
.. py:function:: tensor_network_1d_compress(tn, max_bond=None, cutoff=1e-10, method='dm', site_tags=None, canonize=True, permute_arrays=True, optimize='auto-hq', sweep_reverse=False, equalize_norms=False, compress_opts=None, inplace=False, **kwargs)
Compress a 1D-like tensor network using the specified method.
:param tn: The tensor network to compress. Every tensor should have exactly one of
the site tags. Each site can have multiple tensors and output indices.
:type tn: TensorNetwork
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int
:param cutoff: A dynamic threshold for discarding singular values when compressing.
:type cutoff: float, optional
:param method: The compression method to use.
:type method: {"direct", "dm", "zipup", "zipup-first", "fit", "projector"}
:param site_tags: The tags to use to group and order the tensors from ``tn``. If not
given, uses ``tn.site_tags``. The tensor network built will have one
tensor per site, in the order given by ``site_tags``.
:type site_tags: sequence of str, optional
:param canonize: Whether to perform canonicalization, pseudo or otherwise depending on
the method, before compressing. Ignored for ``method='dm'`` and
``method='fit'``.
:type canonize: bool, optional
:param permute_arrays: Whether to permute the array indices of the final tensor network into
canonical order. If ``True`` will use the default order, otherwise if a
string this specifies a custom order.
:type permute_arrays: bool or str, optional
:param optimize: The contraction path optimizer to use.
:type optimize: str, optional
:param sweep_reverse: Whether to sweep in the reverse direction, resulting in a left
canonical form instead of right canonical (for the fit method, this
also depends on the last sweep direction).
:type sweep_reverse: bool, optional
:param equalize_norms: Whether to equalize the norms of the tensors after compression. If an
explicit value is given, then the norms will be set to that value, and
the overall scaling factor will be accumulated into `.exponent`.
:type equalize_norms: bool or float, optional
:param inplace: Whether to perform the compression inplace.
:type inplace: bool, optional
:param kwargs: Supplied to the chosen compression method.
:rtype: TensorNetwork
.. py:function:: mps_gate_with_mpo_lazy(mps, mpo, inplace=False)
Apply an MPO to an MPS lazily, i.e. nothing is contracted, but the new
TN object has the same outer indices as the original MPS.
.. py:function:: mps_gate_with_mpo_direct(mps, mpo, max_bond=None, cutoff=1e-10, inplace=False, **compress_opts)
Apply an MPO to an MPS using the boundary compression method, that is,
explicitly contracting site-wise to form an MPS-like TN, canonicalizing in
one direction, then compressing in the other. This has the same scaling as
the density matrix (dm) method, but a larger prefactor. It can still be
faster for small bond dimensions however, and is potentially higher
precision since it works in the space of singular values directly rather
than singular values squared. It is not quite optimal in terms of error due
to the compounding errors of the SVDs.
:param mps: The MPS to gate.
:type mps: MatrixProductState
:param mpo: The MPO to gate with.
:type mpo: MatrixProductOperator
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int
:param cutoff: A dynamic threshold for discarding singular values when compressing.
:type cutoff: float, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_split`.
.. py:function:: mps_gate_with_mpo_dm(mps, mpo, max_bond=None, cutoff=1e-10, inplace=False, **compress_opts)
Gate this MPS with an MPO, using the density matrix compression method.
:param mps: The MPS to gate.
:type mps: MatrixProductState
:param mpo: The MPO to gate with.
:type mpo: MatrixProductOperator
:param max_bond: The maximum bond dimension to keep when compressing the double layer
tensor network, if any.
:type max_bond: int, optional
:param cutoff: The truncation error to use when compressing the double layer tensor
network, if any.
:type cutoff: float, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_split`.
.. py:function:: mps_gate_with_mpo_zipup(mps, mpo, max_bond=None, cutoff=1e-10, canonize=True, optimize='auto-hq', **compress_opts)
Apply an MPO to an MPS using the 'zip-up' algorithm due to
'Minimally Entangled Typical Thermal State Algorithms', E.M. Stoudenmire &
Steven R. White (https://arxiv.org/abs/1002.1305).
:param mps: The MPS to gate.
:type mps: MatrixProductState
:param mpo: The MPO to gate with.
:type mpo: MatrixProductOperator
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int
:param cutoff: A dynamic threshold for discarding singular values when compressing.
:type cutoff: float, optional
:param canonize: Whether to pseudo canonicalize the initial tensor network.
:type canonize: bool, optional
:param optimize: The contraction path optimizer to use.
:type optimize: str, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_split`.
:returns: The compressed MPS, in right canonical form.
:rtype: MatrixProductState
.. py:function:: mps_gate_with_mpo_zipup_first(mps, mpo, max_bond=None, max_bond_zipup=None, cutoff=1e-10, cutoff_zipup=None, canonize=True, optimize='auto-hq', **compress_opts)
Apply an MPO to an MPS by first using the zip-up method with a larger
bond dimension, then doing a regular compression sweep to the target final
bond dimension. This avoids forming an intermediate MPS with bond dimension
``mps.max_bond() * mpo.max_bond()``.
:param mps: The MPS to gate.
:type mps: MatrixProductState
:param mpo: The MPO to gate with.
:type mpo: MatrixProductOperator
:param max_bond: The target final bond dimension.
:type max_bond: int
:param max_bond_zipup: The maximum bond dimension to use when zip-up compressing the double
layer tensor network. If not given, defaults to ``2 * max_bond``.
Needs to be smaller than ``mpo.max_bond()`` for any savings.
:type max_bond_zipup: int, optional
:param cutoff: The truncation error to use when performing the final regular
compression sweep.
:type cutoff: float, optional
:param cutoff_zipup: The truncation error to use when performing the zip-up compression.
:type cutoff_zipup: float, optional
:param canonize: Whether to pseudo canonicalize the initial tensor network.
:type canonize: bool, optional
:param optimize: The contraction path optimizer to use.
:type optimize: str, optional
:param compress_opts: Supplied to :func:`~quimb.tensor.tensor_split` (both the zip-up and
final sweep).
:returns: The compressed MPS, in right canonical form.
:rtype: MatrixProductState
.. py:function:: mps_gate_with_mpo_fit(mps, mpo, max_bond, **kwargs)
Gate an MPS with an MPO using the variational fitting or DMRG-style
method.
:param mps: The MPS to gate.
:type mps: MatrixProductState
:param mpo: The MPO to gate with.
:type mpo: MatrixProductOperator
:param max_bond: The maximum bond dimension to compress to.
:type max_bond: int
:returns: The gated MPS.
:rtype: MatrixProductState
.. py:function:: mps_gate_with_mpo_autofit(self, mpo, max_bond, cutoff=0.0, init_guess=None, **fit_opts)
Fit an MPS to an MPO applied to an MPS using geometry-generic versions
of either ALS or autodiff. This is usually much less efficient than using
the 1D-specific methods.
Some nice alternatives to the default fit_opts:
- method="autodiff"
- method="als", solver="lstsq"
.. py:function:: mps_gate_with_mpo_projector(self, mpo, max_bond, cutoff=1e-10, canonize=True, canonize_opts=None, inplace=False, **compress_opts)
Apply an MPO to an MPS using local projectors, in the style of CTMRG
or HOTRG, using no information beyond the four neighboring tensors.