quimb.tensor.tensor_core ======================== .. py:module:: quimb.tensor.tensor_core .. autoapi-nested-parse:: Core tensor network tools. Attributes ---------- .. autoapisummary:: quimb.tensor.tensor_core._inds_to_eq quimb.tensor.tensor_core.get_symbol quimb.tensor.tensor_core._VALID_CONTRACT_GET quimb.tensor.tensor_core._RAND_PREFIX quimb.tensor.tensor_core._RAND_ALPHABET quimb.tensor.tensor_core.RAND_UUIDS quimb.tensor.tensor_core._VALID_SPLIT_GET quimb.tensor.tensor_core._SPLIT_FNS quimb.tensor.tensor_core._SPLIT_VALUES_FNS quimb.tensor.tensor_core._FULL_SPLIT_METHODS quimb.tensor.tensor_core._RANK_HIDDEN_METHODS quimb.tensor.tensor_core._DENSE_ONLY_METHODS quimb.tensor.tensor_core._LEFT_ISOM_METHODS quimb.tensor.tensor_core._RIGHT_ISOM_METHODS quimb.tensor.tensor_core._ISOM_METHODS quimb.tensor.tensor_core._CUTOFF_LOOKUP quimb.tensor.tensor_core._ABSORB_LOOKUP quimb.tensor.tensor_core._MAX_BOND_LOOKUP quimb.tensor.tensor_core._CUTOFF_MODES quimb.tensor.tensor_core._RENORM_LOOKUP quimb.tensor.tensor_core._BASIC_GATE_CONTRACT quimb.tensor.tensor_core._SPLIT_GATE_CONTRACT quimb.tensor.tensor_core._VALID_GATE_CONTRACT quimb.tensor.tensor_core.TNLO_HANDLED_FUNCTIONS Classes ------- .. autoapisummary:: quimb.tensor.tensor_core.Tensor quimb.tensor.tensor_core.TensorNetwork quimb.tensor.tensor_core.TNLinearOperator quimb.tensor.tensor_core.PTensor quimb.tensor.tensor_core.IsoTensor Functions --------- .. autoapisummary:: quimb.tensor.tensor_core.oset_union quimb.tensor.tensor_core.oset_intersection quimb.tensor.tensor_core.tags_to_oset quimb.tensor.tensor_core.sortedtuple quimb.tensor.tensor_core._gen_output_inds quimb.tensor.tensor_core._tensor_contract_get_other quimb.tensor.tensor_core.maybe_realify_scalar quimb.tensor.tensor_core.tensor_contract quimb.tensor.tensor_core.rand_uuid quimb.tensor.tensor_core._parse_split_opts quimb.tensor.tensor_core._check_left_right_isom quimb.tensor.tensor_core.tensor_split quimb.tensor.tensor_core.tensor_canonize_bond quimb.tensor.tensor_core.choose_local_compress_gauge_settings quimb.tensor.tensor_core.tensor_compress_bond quimb.tensor.tensor_core.tensor_balance_bond quimb.tensor.tensor_core.tensor_multifuse quimb.tensor.tensor_core.tensor_make_single_bond quimb.tensor.tensor_core.tensor_fuse_squeeze quimb.tensor.tensor_core.new_bond quimb.tensor.tensor_core.rand_padder quimb.tensor.tensor_core.array_direct_product quimb.tensor.tensor_core.tensor_direct_product quimb.tensor.tensor_core.tensor_network_sum quimb.tensor.tensor_core.bonds quimb.tensor.tensor_core.bonds_size quimb.tensor.tensor_core.group_inds quimb.tensor.tensor_core.connect quimb.tensor.tensor_core.get_tags quimb.tensor.tensor_core.maybe_unwrap quimb.tensor.tensor_core._make_copy_ndarray quimb.tensor.tensor_core.COPY_tensor quimb.tensor.tensor_core.COPY_mps_tensors quimb.tensor.tensor_core.COPY_tree_tensors quimb.tensor.tensor_core._make_promote_array_func quimb.tensor.tensor_core._make_rhand_array_promote_func quimb.tensor.tensor_core._tensor_network_gate_inds_basic quimb.tensor.tensor_core._tensor_network_gate_inds_lazy_split quimb.tensor.tensor_core.tensor_network_gate_inds quimb.tensor.tensor_core.tnlo_implements quimb.tensor.tensor_core._tnlo_trace Module Contents --------------- .. py:data:: _inds_to_eq .. py:data:: get_symbol .. py:function:: oset_union(xs) Non-variadic ordered set union taking any sequence of iterables. .. py:function:: oset_intersection(xs) .. py:function:: tags_to_oset(tags) Parse a ``tags`` argument into an ordered set. .. py:function:: sortedtuple(x) .. 
py:function:: _gen_output_inds(all_inds) Generate the output, i.e. unique, indices from the set ``all_inds``. Raise if any index is found more than twice. .. py:data:: _VALID_CONTRACT_GET .. py:function:: _tensor_contract_get_other(arrays, inds, inds_out, shapes, get, **contract_opts) .. py:function:: maybe_realify_scalar(data) If ``data`` is a numpy array, check if it's complex with a small imaginary part, and if so return only the real part; otherwise do nothing. .. py:function:: tensor_contract(*tensors: Tensor, output_inds=None, optimize=None, get=None, backend=None, preserve_tensor=False, drop_tags=False, strip_exponent=False, exponent=None, **contract_opts) Contract a collection of tensors into a scalar or tensor, automatically aligning their indices and computing an optimized contraction path. The output tensor will have the union of tags from the input tensors. :param tensors: The tensors to contract. :type tensors: sequence of Tensor :param output_inds: The output indices. These can be inferred if the contraction has no 'hyper' indices, in which case the output indices are those that appear only once in the input indices, and ordered as they appear in the inputs. For hyper indices or a specific ordering, these must be supplied. :type output_inds: sequence of str :param optimize: The contraction path optimization strategy to use. - ``None``: use the default strategy, - ``str``: use the preset strategy with the given name, - ``cotengra.HyperOptimizer``: find the contraction using this optimizer, supports slicing, - ``opt_einsum.PathOptimizer``: find the path using this optimizer, - ``cotengra.ContractionTree``: use this exact tree, supports slicing, - ``path_like``: use this exact path. Contraction with ``cotengra`` might be a bit more efficient, but the main reasons to use it are to handle sliced contractions automatically and the fact that it uses ``autoray`` internally. :type optimize: str, PathOptimizer, ContractionTree or path_like, optional :param get: What to return. If: - ``None`` (the default) - return the resulting scalar or Tensor. - ``'expression'`` - return a callable expression that performs the contraction and operates on the raw arrays. - ``'tree'`` - return the ``cotengra.ContractionTree`` describing the contraction. - ``'path'`` - return the raw 'path' as a list of tuples. - ``'symbol-map'`` - return the dict mapping indices to 'symbols' (single unicode letters) used internally by ``cotengra``. - ``'path-info'`` - return the ``opt_einsum.PathInfo`` path object with detailed information such as flop cost. The symbol-map is also added to the ``quimb_symbol_map`` attribute. :type get: str, optional :param backend: Which backend to use to perform the contraction. Supplied to ``cotengra``. :type backend: {'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional :param preserve_tensor: Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not. :type preserve_tensor: bool, optional :param drop_tags: Whether to drop all tags from the output tensor. By default the output tensor will keep the union of all tags from the input tensors. :type drop_tags: bool, optional :param strip_exponent: If ``True``, return the exponent of the result, log10, as well as the rescaled 'mantissa'. Useful for very large or small values. :type strip_exponent: bool, optional :param exponent: If supplied, an overall base exponent to scale the result by. :type exponent: float, optional :param contract_opts: Passed to ``cotengra.array_contract``. :rtype: scalar or Tensor
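.. rubric:: Examples

A minimal sketch of typical usage (the shapes and index names here are purely illustrative):

>>> import quimb.tensor as qtn
>>> X = qtn.rand_tensor([2, 3], inds=['a', 'b'])
>>> Y = qtn.rand_tensor([3, 4], inds=['b', 'c'])
>>> qtn.tensor_contract(X, Y)  # 'b' appears twice so is summed over
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())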
.. py:data:: _RAND_PREFIX :value: '' .. py:data:: _RAND_ALPHABET :value: 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' .. py:data:: RAND_UUIDS .. py:function:: rand_uuid(base='') Return a guaranteed unique, shortish identifier, optionally appended to ``base``. .. rubric:: Examples >>> rand_uuid() '_2e1dae1b' >>> rand_uuid('virt-bond') 'virt-bond_bf342e68' .. py:data:: _VALID_SPLIT_GET .. py:data:: _SPLIT_FNS .. py:data:: _SPLIT_VALUES_FNS .. py:data:: _FULL_SPLIT_METHODS .. py:data:: _RANK_HIDDEN_METHODS .. py:data:: _DENSE_ONLY_METHODS .. py:data:: _LEFT_ISOM_METHODS .. py:data:: _RIGHT_ISOM_METHODS .. py:data:: _ISOM_METHODS .. py:data:: _CUTOFF_LOOKUP .. py:data:: _ABSORB_LOOKUP .. py:data:: _MAX_BOND_LOOKUP .. py:data:: _CUTOFF_MODES .. py:data:: _RENORM_LOOKUP .. py:function:: _parse_split_opts(method, cutoff, absorb, max_bond, cutoff_mode, renorm) .. py:function:: _check_left_right_isom(method, absorb) .. py:function:: tensor_split(T: Tensor, left_inds, method='svd', get=None, absorb='both', max_bond=None, cutoff=1e-10, cutoff_mode='rel', renorm=None, ltags=None, rtags=None, stags=None, bond_ind=None, right_inds=None, matrix_svals=False) Decompose this tensor into two tensors. :param T: The tensor (network) to split. :type T: Tensor or TNLinearOperator :param left_inds: The index or sequence of inds, which ``T`` should already have, to split to the 'left'. You can supply ``None`` here if you supply ``right_inds`` instead. :type left_inds: str or sequence of str :param method: How to split the tensor, only some methods allow bond truncation: - ``'svd'``: full SVD, allows truncation. - ``'eig'``: full SVD via eigendecomp, allows truncation. - ``'lu'``: full LU decomposition, allows truncation. This method favors tensor sparsity but is not rank optimal. - ``'svds'``: iterative svd, allows truncation. - ``'isvd'``: iterative svd using interpolative methods, allows truncation. - ``'rsvd'``: randomized iterative svd with truncation. - ``'eigh'``: full eigen-decomposition, tensor must be hermitian. - ``'eigsh'``: iterative eigen-decomposition, tensor must be hermitian. - ``'qr'``: full QR decomposition. - ``'lq'``: full LQ decomposition. - ``'polar_right'``: full polar decomposition as ``A = UP``. - ``'polar_left'``: full polar decomposition as ``A = PU``. - ``'cholesky'``: full cholesky decomposition, tensor must be positive. :type method: str, optional :param get: If given, what to return instead of a TN describing the split: - ``None``: a tensor network of the two (or three) tensors. - ``'arrays'``: the raw data arrays as a tuple ``(l, r)`` or ``(l, s, r)`` depending on ``absorb``. - ``'tensors'``: the new tensors as a tuple ``(Tl, Tr)`` or ``(Tl, Ts, Tr)`` depending on ``absorb``. - ``'values'``: only compute and return the singular values ``s``. :type get: {None, 'arrays', 'tensors', 'values'} :param absorb: Whether to absorb the singular values into both, the left, or the right unitary matrix respectively, or neither. If neither (``absorb=None``) then the singular values will be returned separately in their own 1D tensor or array. In that case if ``get=None`` the tensor network returned will have a hyperedge corresponding to the new bond index connecting three tensors. If ``get='tensors'`` or ``get='arrays'`` then a tuple like ``(left, s, right)`` is returned. :type absorb: {'both', 'left', 'right', None}, optional :param max_bond: If integer, the maximum number of singular values to keep, regardless of ``cutoff``.
:type max_bond: None or int :param cutoff: The threshold below which to discard singular values, only applies to rank revealing methods (not QR, LQ, or cholesky). :type cutoff: float, optional :param cutoff_mode: Method with which to apply the cutoff threshold: - ``'rel'``: values less than ``cutoff * s[0]`` discarded. - ``'abs'``: values less than ``cutoff`` discarded. - ``'sum2'``: sum squared of values discarded must be ``< cutoff``. - ``'rsum2'``: sum squared of values discarded must be less than ``cutoff`` times the total sum of squared values. - ``'sum1'``: sum of values discarded must be ``< cutoff``. - ``'rsum1'``: sum of values discarded must be less than ``cutoff`` times the total sum of values. :type cutoff_mode: {'rel', 'abs', 'sum2', 'rsum2', 'sum1', 'rsum1'}, optional :param renorm: Whether to renormalize the kept singular values, assuming the bond has a canonical environment, corresponding to maintaining the frobenius or nuclear norm. If ``None`` (the default) then this is automatically turned on only for ``cutoff_mode in {'sum2', 'rsum2', 'sum1', 'rsum1'}`` with ``method in {'svd', 'eig', 'eigh'}``. :type renorm: {None, bool, or int}, optional :param ltags: Add these new tags to the left tensor. :type ltags: sequence of str, optional :param rtags: Add these new tags to the right tensor. :type rtags: sequence of str, optional :param stags: Add these new tags to the singular value tensor. :type stags: sequence of str, optional :param bond_ind: Explicitly name the new bond, else a random one will be generated. If ``matrix_svals=True`` then this should be a tuple of two indices, one for the left and right bond respectively. :type bond_ind: str, optional :param right_inds: Explicitly give the right indices, otherwise they will be worked out. This is a minor performance feature. :type right_inds: sequence of str, optional :param matrix_svals: If ``True``, return the singular values as a diagonal 2D array or Tensor, otherwise return them as a 1D array. This is only relevant if returning the singular values in some form. :type matrix_svals: bool, optional :returns: Depending on if ``get`` is ``None``, ``'tensors'``, ``'arrays'``, or ``'values'``. In the first three cases, if ``absorb`` is set, then the returned objects correspond to ``(left, right)`` whereas if ``absorb=None`` the returned objects correspond to ``(left, singular_values, right)``. :rtype: TensorNetwork or tuple[Tensor] or tuple[array] or 1D-array
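.. rubric:: Examples

An illustrative sketch of a bond-limited split (the shapes, index names and bond size are arbitrary choices):

>>> import quimb.tensor as qtn
>>> from quimb.tensor.tensor_core import tensor_split
>>> T = qtn.rand_tensor([2, 3, 4, 5], inds=['a', 'b', 'c', 'd'])
>>> # group 'a' and 'b' to the left, keeping at most 6 singular values
>>> Tl, Tr = tensor_split(T, left_inds=['a', 'b'], max_bond=6, get='tensors')
>>> Tl.shape, Tr.shape
((2, 3, 6), (6, 4, 5))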
.. py:function:: tensor_canonize_bond(T1: Tensor, T2: Tensor, absorb='right', gauges=None, gauge_smudge=1e-06, create_bond=False, **split_opts) Inplace 'canonization' of two tensors. This gauges the bond between the two such that ``T1`` is isometric:: | | | | | | --1---2-- => -->~R-2-- => -->~~~O-- | | | | | | . ... contract :param T1: The tensor to be isometrized. :type T1: Tensor :param T2: The tensor to absorb the R-factor into. :type T2: Tensor :param absorb: Which tensor to effectively absorb the singular values into. :type absorb: {'right', 'left', 'both', None}, optional :param gauges: If supplied, a dict of bond gauges to perform the canonization with respect to. :type gauges: None or dict, optional :param gauge_smudge: If gauges are supplied, the smudge to use when gauging. :type gauge_smudge: float, optional :param create_bond: If ``True`` and there is no bond between the two tensors, create a new bond with size 1 before canonizing; else raise an error. :type create_bond: bool, optional :param split_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_split`, with modified defaults of ``method='qr'`` and ``absorb='right'``. .. py:function:: choose_local_compress_gauge_settings(canonize=True, tree_gauge_distance=None, canonize_distance=None, canonize_after_distance=None, mode='auto') Choose default gauge settings for arbitrary geometry local compression. .. py:function:: tensor_compress_bond(T1: Tensor, T2: Tensor, reduced=True, absorb='both', gauges=None, gauge_smudge=1e-06, create_bond=False, info=None, **compress_opts) Inplace compression of the bond between two tensors. By default it follows these steps to minimize the size of the SVD performed:: a)│ │ b)│ │ c)│ │ ━━●━━━●━━ -> ━━>━━○━━○━━<━━ -> ━━>━━━M━━━<━━ │ │ │ .... │ │ │ <*> <*> contract <*> QR LQ -><- SVD d)│ │ e)│ │ -> ━━>━━━ML──MR━━━<━━ -> ━━●───●━━ │.... ....│ │ │ contract contract ^compressed bond -><- -><- :param T1: The left tensor. :type T1: Tensor :param T2: The right tensor. :type T2: Tensor :param max_bond: The maximum bond dimension. :type max_bond: int or None, optional :param cutoff: The singular value cutoff to use. :type cutoff: float, optional :param reduced: Whether to perform the QR reduction as above or not. If False, contract both tensors together and perform a single SVD. If 'left' or 'right' then just perform the svd on the left or right tensor respectively. This can still be optimal if the other tensor is already isometric, i.e. the pair are right or left canonical respectively. :type reduced: {True, False, "left", "right"}, optional :param absorb: Where to absorb the singular values after decomposition. :type absorb: {'both', 'left', 'right', None}, optional :param gauges: If supplied, a dict of bond gauges to perform the compression with respect to. :type gauges: None or dict, optional :param gauge_smudge: If gauges are supplied, the smudge to use when gauging. :type gauge_smudge: float, optional :param create_bond: If ``True`` and there is no bond between the two tensors, create a new bond with size 1 before compressing; else raise an error. :type create_bond: bool, optional :param info: A dict for returning extra information such as the singular values. :type info: None or dict, optional :param compress_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_split`.
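.. rubric:: Examples

A short sketch of truncating a shared bond (the dimensions and index names are illustrative only):

>>> import quimb.tensor as qtn
>>> from quimb.tensor.tensor_core import tensor_compress_bond
>>> T1 = qtn.rand_tensor([2, 8], inds=['a', 'bond'])
>>> T2 = qtn.rand_tensor([8, 2], inds=['bond', 'b'])
>>> tensor_compress_bond(T1, T2, max_bond=4)  # modifies T1 and T2 inplace
>>> T1.shared_bond_size(T2)
4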
.. py:function:: tensor_balance_bond(t1: Tensor, t2: Tensor, smudge=1e-06) Gauge the bond between two tensors such that the norm of the 'columns' of the tensors on each side is the same for each index of the bond. :param t1: The first tensor, should share a single index with ``t2``. :type t1: Tensor :param t2: The second tensor, should share a single index with ``t1``. :type t2: Tensor :param smudge: Avoid numerical issues by 'smudging' the correctional factor by this much - the gauging introduced is still exact. :type smudge: float, optional .. py:function:: tensor_multifuse(ts, inds, gauges=None) For tensors ``ts`` which should all have indices ``inds``, fuse those bonds together, optionally updating ``gauges`` if present. Inplace operation. .. py:function:: tensor_make_single_bond(t1: Tensor, t2: Tensor, gauges=None, create_bond=False) If two tensors share multibonds, fuse them together and return the left indices, the bond if it exists, and the right indices. Handles simple ``gauges``. Inplace operation. :param t1: The first tensor. :type t1: Tensor :param t2: The second tensor. :type t2: Tensor :param gauges: A dictionary of gauge tensors, which will be updated in place. :type gauges: dict, optional :param create_bond: If ``True``, create a new bond if none exists. :type create_bond: bool, optional :returns: * **left** (*list of str*) -- Indices appearing only on the left tensor. * **bond** (*str or None*) -- The bond index of the tensors, or None if they don't share one and ``create_bond=False``. * **right** (*list of str*) -- Indices appearing only on the right tensor. .. py:function:: tensor_fuse_squeeze(t1: Tensor, t2: Tensor, squeeze=True, gauges=None) If ``t1`` and ``t2`` share more than one bond, fuse them, and if the size of the shared dimension(s) is 1, squeeze it. Inplace operation. .. py:function:: new_bond(T1: Tensor, T2: Tensor, size=1, name=None, axis1=0, axis2=0) Inplace addition of a new bond between tensors ``T1`` and ``T2``. The size of the new bond can be specified, in which case the new array parts will be filled with zeros. :param T1: First tensor to modify. :type T1: Tensor :param T2: Second tensor to modify. :type T2: Tensor :param size: Size of the new dimension. :type size: int, optional :param name: Name for the new index. :type name: str, optional :param axis1: Position on the first tensor for the new dimension. :type axis1: int, optional :param axis2: Position on the second tensor for the new dimension. :type axis2: int, optional .. py:function:: rand_padder(vector, pad_width, iaxis, kwargs) Helper function for padding tensor with random entries. .. py:function:: array_direct_product(X, Y, sum_axes=()) Direct product of two arrays. :param X: First tensor. :type X: numpy.ndarray :param Y: Second tensor, same shape as ``X``. :type Y: numpy.ndarray :param sum_axes: Axes to sum over rather than direct product, e.g. physical indices when adding tensor networks. :type sum_axes: sequence of int :returns: **Z** -- Same shape as ``X`` and ``Y``, but with every dimension the sum of the two respective dimensions, unless it is included in ``sum_axes``. :rtype: numpy.ndarray .. py:function:: tensor_direct_product(T1: Tensor, T2: Tensor, sum_inds=(), inplace=False) Direct product of two Tensors. Any axes included in ``sum_inds`` must be the same size and will be summed over rather than concatenated. Summing over contractions of TensorNetworks equates to contracting a TensorNetwork made of direct products of each set of tensors. I.e. (a1 @ b1) + (a2 @ b2) == (a1 (+) a2) @ (b1 (+) b2). :param T1: The first tensor. :type T1: Tensor :param T2: The second tensor, with matching indices and dimensions to ``T1``. :type T2: Tensor :param sum_inds: Axes to sum over rather than combine, e.g. physical indices when adding tensor networks. :type sum_inds: sequence of str, optional :param inplace: Whether to modify ``T1`` inplace. :type inplace: bool, optional :returns: Like ``T1``, but with each dimension doubled in size if not in ``sum_inds``. :rtype: Tensor .. py:function:: tensor_network_sum(tnA: TensorNetwork, tnB: TensorNetwork, inplace=False) Sum of two tensor networks, whose indices should match exactly, using direct products. :param tnA: The first tensor network. :type tnA: TensorNetwork :param tnB: The second tensor network. :type tnB: TensorNetwork :returns: The sum of ``tnA`` and ``tnB``, with increased bond dimensions. :rtype: TensorNetwork .. py:function:: bonds(t1: Tensor | TensorNetwork, t2: Tensor | TensorNetwork) Get any indices shared between the Tensor(s) or TensorNetwork(s) ``t1`` and ``t2``. :param t1: The first tensor or tensor network. :type t1: Tensor or TensorNetwork :param t2: The second tensor or tensor network. :type t2: Tensor or TensorNetwork :returns: **bonds** -- The indices shared between ``t1`` and ``t2``. :rtype: oset[str]
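.. rubric:: Examples

A small sketch (index names arbitrary):

>>> import quimb.tensor as qtn
>>> from quimb.tensor.tensor_core import bonds
>>> X = qtn.rand_tensor([2, 3], inds=['a', 'b'])
>>> Y = qtn.rand_tensor([3, 4], inds=['b', 'c'])
>>> list(bonds(X, Y))
['b']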
.. py:function:: bonds_size(t1: Tensor, t2: Tensor) Get the size of the bonds linking tensors or tensor networks ``t1`` and ``t2``. .. py:function:: group_inds(t1: Tensor, t2: Tensor) Group bonds into left only, shared, and right only. If ``t1`` or ``t2`` are ``TensorNetwork`` objects, then only outer indices are considered. :param t1: The first tensor or tensor network. :type t1: Tensor or TensorNetwork :param t2: The second tensor or tensor network. :type t2: Tensor or TensorNetwork :returns: * **left_inds** (*list[str]*) -- Indices only in ``t1``. * **shared_inds** (*list[str]*) -- Indices in both ``t1`` and ``t2``. * **right_inds** (*list[str]*) -- Indices only in ``t2``. .. py:function:: connect(t1: Tensor, t2: Tensor, ax1, ax2) Connect two tensors by setting a shared index for the specified dimensions. This is an inplace operation that will also affect any tensor networks viewing these tensors. :param t1: The first tensor. :type t1: Tensor :param t2: The second tensor. :type t2: Tensor :param ax1: The dimension (axis) to connect on the first tensor. :type ax1: int :param ax2: The dimension (axis) to connect on the second tensor. :type ax2: int .. rubric:: Examples >>> X = rand_tensor([2, 3], inds=['a', 'b']) >>> Y = rand_tensor([3, 4], inds=['c', 'd']) >>> tn = (X | Y) # is *view* of tensors (``&`` would copy them) >>> print(tn) TensorNetwork([ Tensor(shape=(2, 3), inds=('a', 'b'), tags=()), Tensor(shape=(3, 4), inds=('c', 'd'), tags=()), ]) >>> connect(X, Y, 1, 0) # modifies tensors *and* viewing TN >>> print(tn) TensorNetwork([ Tensor(shape=(2, 3), inds=('a', '_e9021e0000002'), tags=()), Tensor(shape=(3, 4), inds=('_e9021e0000002', 'd'), tags=()), ]) >>> tn ^ all Tensor(shape=(2, 4), inds=('a', 'd'), tags=()) .. py:function:: get_tags(ts) Return all the tags found in ``ts``. :param ts: The objects to combine tags from. :type ts: Tensor, TensorNetwork or sequence of either .. py:function:: maybe_unwrap(t, preserve_tensor_network=False, preserve_tensor=False, strip_exponent=False, equalize_norms=False, output_inds=None) Maybe unwrap a ``TensorNetwork`` or ``Tensor`` into a ``Tensor`` or scalar, depending on how many tensors and indices it has, optionally handling accrued exponent normalization and output index ordering (if a tensor). :param t: The tensor or tensor network to unwrap. :type t: Tensor or TensorNetwork :param preserve_tensor_network: If ``True``, then don't unwrap a ``TensorNetwork`` to a ``Tensor`` even if it has only one tensor. :type preserve_tensor_network: bool, optional :param preserve_tensor: If ``True``, then don't unwrap a ``Tensor`` to a scalar even if it has no indices. :type preserve_tensor: bool, optional :param strip_exponent: If ``True``, then return the overall exponent of the contraction, in log10, as well as the 'mantissa' tensor or scalar. :type strip_exponent: bool, optional :param equalize_norms: If ``True``, then equalize the norms of all tensors in the tensor network before unwrapping. :type equalize_norms: bool, optional :param output_inds: If unwrapping a tensor, then transpose it to the specified indices. :type output_inds: sequence of str, optional :rtype: TensorNetwork, Tensor or scalar .. py:class:: Tensor(data=1.0, inds=(), tags=None, left_inds=None) A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations.
The tags are used to identify the tensor within networks, and are combined when tensors are contracted together. :param data: The n-dimensional data. :type data: numpy.ndarray :param inds: The index labels for each dimension. Must match the number of dimensions of ``data``. :type inds: sequence of str :param tags: Tags with which to identify and group this tensor. These will be converted into an ``oset``. :type tags: sequence of str, optional :param left_inds: Which, if any, indices to group as 'left' indices of an effective matrix. This can be useful, for example, when automatically applying unitary constraints to impose a certain flow on a tensor network but at the atomistic (Tensor) level. :type left_inds: sequence of str, optional .. rubric:: Examples Basic construction: >>> from quimb import randn >>> from quimb.tensor import Tensor >>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'}) >>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'}) Indices are automatically aligned, and tags combined, when contracting: >>> X @ Y Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'}) .. py:attribute:: __slots__ :value: ('_data', '_inds', '_tags', '_left_inds', '_owners') .. py:attribute:: _owners .. py:method:: _set_data(data) .. py:method:: _set_inds(inds) .. py:method:: _set_tags(tags) .. py:method:: _set_left_inds(left_inds) .. py:method:: get_params() A simple function that returns the 'parameters' of the underlying data array. This is mainly for providing an interface for 'structured' arrays, e.g. those with block sparsity, to interact with optimization. .. py:method:: set_params(params) A simple function that sets the 'parameters' of the underlying data array. This is mainly for providing an interface for 'structured' arrays, e.g. those with block sparsity, to interact with optimization. .. py:method:: copy(deep=False, virtual=False) Copy this tensor. .. note:: By default (``deep=False``), the underlying array will *not* be copied. :param deep: Whether to copy the underlying data as well. :type deep: bool, optional :param virtual: To conveniently mimic the behaviour of taking a virtual copy of a tensor network, this simply returns ``self``. :type virtual: bool, optional .. py:attribute:: __copy__ .. py:property:: data .. py:property:: inds .. py:property:: tags .. py:property:: left_inds .. py:method:: check() Do some basic diagnostics on this tensor, raising errors if something is wrong. .. py:property:: owners .. py:method:: add_owner(tn, tid) Add ``tn`` as owner of this Tensor - its tag and ind maps will be updated whenever this tensor is retagged or reindexed. .. py:method:: remove_owner(tn) Remove TensorNetwork ``tn`` as an owner of this Tensor. .. py:method:: check_owners() Check if this tensor is 'owned' by any alive TensorNetworks. Also trim any weakrefs to dead TensorNetworks. .. py:method:: _apply_function(fn) .. py:method:: modify(**kwargs) Overwrite the data of this tensor in place. :param data: New data. :type data: array, optional :param apply: A function to apply to the current data. If ``data`` is also given this is applied subsequently. :type apply: callable, optional :param inds: New tuple of indices. :type inds: sequence of str, optional :param tags: New tags. :type tags: sequence of str, optional :param left_inds: New grouping of indices to be 'on the left'. :type left_inds: sequence of str, optional .. py:method:: apply_to_arrays(fn) Apply the function ``fn`` to the underlying data array(s). This is meant for changing how the raw arrays are backed (e.g.
converting between dtypes or libraries) but not their 'numerical meaning'. .. py:method:: isel(selectors, inplace=False) Select specific values for some dimensions/indices of this tensor, thereby removing them. Analogous to ``X[:, :, 3, :, :]`` with arrays. The indices to select from can be specified either by integer, in which case the corresponding index is removed, or by a slice. :param selectors: Mapping of index(es) to which value to take. The values can be: - int: select a specific value for that index. - slice: select a range of values for that index. - "r": contract a random vector in. The mapping can contain indices that don't appear on this tensor, in which case they are ignored. :type selectors: dict[str, int or slice or "r"] :param inplace: Whether to select inplace or not. :type inplace: bool, optional :rtype: Tensor .. rubric:: Examples >>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c')) >>> T.isel({'b': -1}) Tensor(shape=(2, 4), inds=('a', 'c'), tags=()) .. seealso:: :py:obj:`TensorNetwork.isel`, :py:obj:`Tensor.rand_reduce` .. py:attribute:: isel_ .. py:method:: add_tag(tag) Add a tag or multiple tags to this tensor. Unlike naively calling `self.tags.add` this also updates the tag maps of all `TensorNetwork` objects viewing this `Tensor`. Inplace operation. :param tag: The tag(s) to add to this tensor. :type tag: str or sequence of str .. py:method:: expand_ind(ind, size, mode=None, rand_strength=None, rand_dist='normal') Inplace increase the size of the dimension of ``ind``; the new array entries will be filled with zeros by default. :param ind: Name of the index to expand. :type ind: str :param size: Size of the expanded index. :type size: int, optional :param mode: How to fill any new array entries. If ``'zeros'`` then fill with zeros, if ``'repeat'`` then repeatedly tile the existing entries. If ``'random'`` then fill with random entries drawn from ``rand_dist``, multiplied by ``rand_strength``. If ``None`` then select from zeros or random depending on non-zero ``rand_strength``. :type mode: {None, 'zeros', 'repeat', 'random'}, optional :param rand_strength: If ``mode='random'``, a multiplicative scale for the random entries, defaulting to 1.0. If ``mode is None`` then supplying a non-zero value here triggers ``mode='random'``. :type rand_strength: float, optional :param rand_dist: If ``mode='random'``, the distribution to draw the random entries from. :type rand_dist: {'normal', 'uniform', 'exp'}, optional .. py:method:: new_ind(name, size=1, axis=0, mode=None, rand_strength=None, rand_dist='normal') Inplace add a new index - a named dimension. If ``size`` is specified to be greater than one then the new array entries will be filled with zeros. :param name: Name of the new index. :type name: str :param size: Size of the new index. :type size: int, optional :param axis: Position of the new index. :type axis: int, optional :param mode: How to fill any new array entries. If ``'zeros'`` then fill with zeros, if ``'repeat'`` then repeatedly tile the existing entries. If ``'random'`` then fill with random entries drawn from ``rand_dist``, multiplied by ``rand_strength``. If ``None`` then select from zeros or random depending on non-zero ``rand_strength``. :type mode: {None, 'zeros', 'repeat', 'random'}, optional :param rand_strength: If ``mode='random'``, a multiplicative scale for the random entries, defaulting to 1.0. If ``mode is None`` then supplying a non-zero value here triggers ``mode='random'``. :type rand_strength: float, optional :param rand_dist: If ``mode='random'``, the distribution to draw the random entries from. :type rand_dist: {'normal', 'uniform', 'exp'}, optional .. seealso:: :py:obj:`Tensor.expand_ind`, :py:obj:`new_bond` .. py:attribute:: new_bond
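.. rubric:: Examples

A brief sketch of adding a new sized dimension with ``new_ind`` (names and sizes illustrative):

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor([2, 3], inds=['a', 'b'])
>>> t.new_ind('c', size=4, axis=0, mode='repeat')  # inplace
>>> t.shape
(4, 2, 3)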
.. py:method:: new_ind_with_identity(name, left_inds, right_inds, axis=0) Inplace add a new index, where the newly stacked array entries form the identity from ``left_inds`` to ``right_inds``. Selecting 0 or 1 for the new index ``name`` is thus like 'turning off' this tensor if viewed as an operator. :param name: Name of the new index. :type name: str :param left_inds: Names of the indices forming the left hand side of the operator. :type left_inds: tuple[str] :param right_inds: Names of the indices forming the right hand side of the operator. The dimensions of these must match those of ``left_inds``. :type right_inds: tuple[str] :param axis: Position of the new index. :type axis: int, optional .. py:method:: new_ind_pair_with_identity(new_left_ind, new_right_ind, d, inplace=False) Expand this tensor with two new indices of size ``d``, by taking an (outer) tensor product with the identity operator. The two new indices are added as axes at the start of the tensor. :param new_left_ind: Name of the new left index. :type new_left_ind: str :param new_right_ind: Name of the new right index. :type new_right_ind: str :param d: Size of the new indices. :type d: int :param inplace: Whether to perform the expansion inplace. :type inplace: bool, optional :rtype: Tensor .. py:attribute:: new_ind_pair_with_identity_ .. py:method:: new_ind_pair_diag(ind, new_left_ind, new_right_ind, inplace=False) Expand an existing index ``ind`` of this tensor into a new pair of indices ``(new_left_ind, new_right_ind)`` each of matching size, such that the old tensor is the diagonal of the new tensor. The new indices are inserted at the position of ``ind``. :param ind: Name of the index to expand. :type ind: str :param new_left_ind: Name of the new left index. :type new_left_ind: str :param new_right_ind: Name of the new right index. :type new_right_ind: str :param inplace: Whether to perform the expansion inplace. :type inplace: bool, optional :rtype: Tensor .. rubric:: Examples Expand the middle dimension of a 3-dimensional tensor:: t = qtn.rand_tensor((2, 3, 4), ('a', 'b', 'c')) t.new_ind_pair_diag_('b', 'x', 'y') # Tensor(shape=(2, 3, 3, 4), inds=('a', 'x', 'y', 'c'), tags=oset([])) .. py:attribute:: new_ind_pair_diag_ .. py:method:: conj(inplace=False) Conjugate this tensor's data (does nothing to indices). .. py:attribute:: conj_ .. py:property:: H Conjugate this tensor's data (does nothing to indices). .. py:property:: shape The size of each dimension. .. py:property:: ndim The number of dimensions. .. py:property:: size The total number of array elements. .. py:property:: dtype The data type of the array elements. .. py:property:: dtype_name The name of the data type of the array elements. .. py:property:: backend The backend inferred from the data. .. py:method:: iscomplex() .. py:method:: astype(dtype, inplace=False) Change the type of this tensor to ``dtype``. .. py:attribute:: astype_ .. py:method:: max_dim() Return the maximum size of any dimension, or 1 if scalar. .. py:method:: ind_size(ind) Return the size of dimension corresponding to ``ind``. .. py:method:: inds_size(inds) Return the total size of dimensions corresponding to ``inds``. .. py:method:: shared_bond_size(other) Get the total size of the shared index(es) with ``other``. ..
py:method:: inner_inds() Get all indices that appear on two or more tensors. .. py:method:: transpose(*output_inds, inplace=False) Transpose this tensor - permuting the order of both the data *and* the indices. This operation is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn't matter. Note that to compute the traditional 'transpose' of an operator within a contraction, for example, you would just use reindexing, not this. :param output_inds: The desired output sequence of indices. :type output_inds: sequence of str :param inplace: Perform the transposition inplace. :type inplace: bool, optional :returns: **tt** -- The transposed tensor. :rtype: Tensor .. seealso:: :py:obj:`transpose_like`, :py:obj:`reindex` .. py:attribute:: transpose_ .. py:method:: transpose_like(other, inplace=False) Transpose this tensor to match the indices of ``other``, allowing for one index to be different. E.g. if ``self.inds = ('a', 'b', 'c', 'x')`` and ``other.inds = ('b', 'a', 'd', 'c')`` then 'x' will be aligned with 'd' and the output inds will be ``('b', 'a', 'x', 'c')``. :param other: The tensor to match. :type other: Tensor :param inplace: Perform the transposition inplace. :type inplace: bool, optional :returns: **tt** -- The transposed tensor. :rtype: Tensor .. seealso:: :py:obj:`transpose` .. py:attribute:: transpose_like_ .. py:method:: moveindex(ind, axis, inplace=False) Move the index ``ind`` to position ``axis``. Like ``transpose``, this permutes the order of both the data *and* the indices and is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn't matter. :param ind: The index to move. :type ind: str :param axis: The new position to move ``ind`` to. Can be negative. :type axis: int :param inplace: Whether to perform the move inplace or not. :type inplace: bool, optional :rtype: Tensor .. py:attribute:: moveindex_ .. py:method:: item() Return the scalar value of this tensor, if it has a single element. .. py:method:: trace(left_inds, right_inds, preserve_tensor=False, inplace=False) Trace index or indices ``left_inds`` with ``right_inds``, removing them. :param left_inds: The left indices to trace, order matching ``right_inds``. :type left_inds: str or sequence of str :param right_inds: The right indices to trace, order matching ``left_inds``. :type right_inds: str or sequence of str :param preserve_tensor: If ``True``, a tensor will be returned even if no indices remain. :type preserve_tensor: bool, optional :param inplace: Perform the trace inplace. :type inplace: bool, optional :returns: **z** :rtype: Tensor or scalar .. py:method:: sum_reduce(ind, inplace=False) Sum over index ``ind``, removing it from this tensor. :param ind: The index to sum over. :type ind: str :param inplace: Whether to perform the reduction inplace. :type inplace: bool, optional :rtype: Tensor .. py:attribute:: sum_reduce_ .. py:method:: vector_reduce(ind, v, inplace=False) Contract the vector ``v`` with the index ``ind`` of this tensor, removing it. :param ind: The index to contract. :type ind: str :param v: The vector to contract with. :type v: array_like :param inplace: Whether to perform the reduction inplace. :type inplace: bool, optional :rtype: Tensor .. py:attribute:: vector_reduce_ .. py:method:: rand_reduce(ind, dtype=None, inplace=False, **kwargs) Contract the index ``ind`` of this tensor with a random vector, removing it. .. py:attribute:: rand_reduce_
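.. rubric:: Examples

An illustrative sketch of reducing over one index with a random vector (names arbitrary):

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor([2, 3, 4], inds=['a', 'b', 'c'])
>>> t.rand_reduce('b')
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())

..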
py:method:: collapse_repeated(inplace=False) Take the diagonals of any repeated indices, such that each index only appears once. .. py:attribute:: collapse_repeated_ .. py:method:: contract(*others, output_inds=None, **opts) .. py:method:: direct_product(other, sum_inds=(), inplace=False) .. py:attribute:: direct_product_ .. py:method:: split(*args, **kwargs) .. py:method:: compute_reduced_factor(side, left_inds, right_inds, **split_opts) .. py:method:: distance(other, **contract_opts) .. py:attribute:: distance_normalized .. py:method:: gate(G, ind, preserve_inds=True, transposed=False, inplace=False) Gate this tensor - contract a matrix into one of its indices without changing its indices. Unlike ``contract``, ``G`` is a raw array and the tensor remains with the same set of indices. This is like applying: .. math:: x \leftarrow G x or if ``transposed=True``: .. math:: x \leftarrow x G :param G: The matrix to gate the tensor index with. :type G: 2D array_like :param ind: Which index to apply the gate to. :type ind: str :param preserve_inds: If ``True``, the order of the indices is preserved, otherwise the gated index will be left at the first axis, avoiding a transpose. :type preserve_inds: bool, optional :param transposed: If ``True``, the gate is effectively transposed and applied, or equivalently, contracted to its left rather than right. :type transposed: bool, optional :rtype: Tensor .. rubric:: Examples Create a random tensor of 4 qubits: >>> t = qtn.rand_tensor( ... shape=[2, 2, 2, 2], ... inds=['k0', 'k1', 'k2', 'k3'], ... ) Create another tensor with an X gate applied to qubit 2: >>> Gt = t.gate(qu.pauli('X'), 'k2') The contraction of these two tensors is now the expectation of that operator: >>> t.H @ Gt -4.108910576149794 .. py:attribute:: gate_ .. py:method:: singular_values(left_inds, method='svd') Return the singular values associated with splitting this tensor according to ``left_inds``. :param left_inds: A subset of this tensor's indices that defines 'left'. :type left_inds: sequence of str :param method: Whether to use the SVD or eigenvalue decomposition to get the singular values. :type method: {'svd', 'eig'} :returns: The singular values. :rtype: 1d-array .. py:method:: entropy(left_inds, method='svd') Return the entropy associated with splitting this tensor according to ``left_inds``. :param left_inds: A subset of this tensor's indices that defines 'left'. :type left_inds: sequence of str :param method: Whether to use the SVD or eigenvalue decomposition to get the singular values. :type method: {'svd', 'eig'} :rtype: float .. py:method:: retag(retag_map, inplace=False) Rename the tags of this tensor, optionally in-place. :param retag_map: Mapping of pairs ``{old_tag: new_tag, ...}``. :type retag_map: dict-like :param inplace: If ``False`` (the default), a copy of this tensor with the changed tags will be returned. :type inplace: bool, optional .. py:attribute:: retag_ .. py:method:: reindex(index_map, inplace=False) Rename the indices of this tensor, optionally in-place. :param index_map: Mapping of pairs ``{old_ind: new_ind, ...}``. :type index_map: dict-like :param inplace: If ``False`` (the default), a copy of this tensor with the changed inds will be returned. :type inplace: bool, optional .. py:attribute:: reindex_
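.. rubric:: Examples

A small sketch of renaming an index (names arbitrary):

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor([2, 3], inds=['a', 'b'])
>>> t.reindex({'b': 'k'})
Tensor(shape=(2, 3), inds=('a', 'k'), tags=())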
.. py:method:: fuse(fuse_map, inplace=False) Combine groups of indices into single indices. :param fuse_map: Mapping like: ``{new_ind: sequence of existing inds, ...}`` or an ordered mapping like ``[(new_ind_1, old_inds_1), ...]`` in which case the output tensor's fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused. :type fuse_map: dict_like or sequence of tuples. :returns: The transposed, reshaped and re-labeled tensor. :rtype: Tensor .. py:attribute:: fuse_ .. py:method:: unfuse(unfuse_map, shape_map, inplace=False) Reshape single indices into groups of multiple indices. :param unfuse_map: Mapping like: ``{existing_ind: sequence of new inds, ...}`` or an ordered mapping like ``[(old_ind_1, new_inds_1), ...]`` in which case the output tensor's new inds will be ordered. In both cases the new indices are created at the old index's position of the tensor's shape. :type unfuse_map: dict_like or sequence of tuples. :param shape_map: Mapping like: ``{old_ind: new_ind_sizes, ...}`` or an ordered mapping like ``[(old_ind_1, new_ind_sizes_1), ...]``. :type shape_map: dict_like or sequence of tuples :returns: The transposed, reshaped and re-labeled tensor. :rtype: Tensor .. py:attribute:: unfuse_ .. py:method:: to_dense(*inds_seq, to_qarray=False) Convert this Tensor into a dense array, with a single dimension for each group of inds in ``inds_seq``. E.g. to convert several sites into a density matrix: ``T.to_dense(('k0', 'k1'), ('b0', 'b1'))``. .. py:attribute:: to_qarray .. py:method:: squeeze(include=None, exclude=None, inplace=False) Drop any singlet dimensions from this tensor. :param include: Only squeeze dimensions with indices in this list. :type include: sequence of str, optional :param exclude: Squeeze all dimensions except those with indices in this list. :type exclude: sequence of str, optional :param inplace: Whether to perform the squeeze inplace or not. :type inplace: bool, optional :rtype: Tensor .. py:attribute:: squeeze_ .. py:method:: largest_element() Return the largest element, in terms of absolute magnitude, of this tensor. .. py:method:: idxmin(f=None) Get the index configuration of the minimum element of this tensor, optionally applying ``f`` first. :param f: If a callable, apply this function to the tensor data before finding the minimum element. If a string, apply ``autoray.do(f, data)``. :type f: callable or str, optional :returns: Mapping of index names to their values at the minimum element. :rtype: dict[str, int] .. py:method:: idxmax(f=None) Get the index configuration of the maximum element of this tensor, optionally applying ``f`` first. :param f: If a callable, apply this function to the tensor data before finding the maximum element. If a string, apply ``autoray.do(f, data)``. :type f: callable or str, optional :returns: Mapping of index names to their values at the maximum element. :rtype: dict[str, int] .. py:method:: norm(squared=False, **contract_opts) Frobenius norm of this tensor: .. math:: \|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)} where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition. .. py:method:: overlap(other, **contract_opts) Overlap of this tensor with another tensor: .. math:: \langle o | t \rangle = \mathrm{Tr} \left(o^{\dagger} t\right) where the trace is taken over all indices. :param other: The other tensor or network to overlap with. This tensor will be conjugated. :type other: Tensor or TensorNetwork :rtype: scalar
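.. rubric:: Examples

An illustrative consistency check, using the fact that the overlap of a tensor with itself should equal its squared norm:

>>> import quimb.tensor as qtn
>>> t = qtn.rand_tensor([2, 3], inds=['a', 'b'])
>>> abs(t.overlap(t) - t.norm()**2) < 1e-10
True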
.. py:method:: normalize(inplace=False) .. py:attribute:: normalize_ .. py:method:: symmetrize(ind1, ind2, inplace=False) Hermitian symmetrize this tensor for indices ``ind1`` and ``ind2``. I.e. ``T = (T + T.conj().T) / 2``, where the transpose is taken only over the specified indices. .. py:attribute:: symmetrize_ .. py:method:: isometrize(left_inds=None, method='qr', inplace=False) Make this tensor unitary (or isometric) with respect to ``left_inds``. The underlying method is set by ``method``. :param left_inds: The indices to group together and treat as the left hand side of a matrix. :type left_inds: sequence of str :param method: The method used to generate the isometry. The options are: - "qr": use the Q factor of the QR decomposition of ``x`` with the constraint that the diagonal of ``R`` is positive. - "svd": uses ``U @ VH`` of the SVD decomposition of ``x``. This is useful for finding the 'closest' isometric matrix to ``x``, such as when it has been expanded with noise etc. But is less stable for differentiation / optimization. - "exp": use the matrix exponential of ``x - dag(x)``, first completing ``x`` with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square ``x``. - "cayley": use the Cayley transform of ``x - dag(x)``, first completing ``x`` with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with e.g. `HIPS/autograd`), but more expensive for non-square ``x``. - "householder": use the Householder reflection method directly. This requires that the backend implements "linalg.householder_product". - "torch_householder": use the Householder reflection method directly, using the ``torch_householder`` package. This requires that the package is installed and that the backend is ``"torch"``. This is generally the best parametrizing method for "torch" if available. - "mgs": use a python implementation of the modified Gram Schmidt method directly. This is slow if not compiled but a useful reference. Not all backends support all methods or differentiating through all methods. :type method: str, optional :param inplace: Whether to perform the unitization inplace. :type inplace: bool, optional :rtype: Tensor .. py:attribute:: isometrize_ .. py:attribute:: unitize .. py:attribute:: unitize_ .. py:method:: randomize(dtype=None, inplace=False, **randn_opts) Randomize the entries of this tensor. :param dtype: The data type of the random entries. If left as the default ``None``, then the data type of the current array will be used. :type dtype: {None, str}, optional :param inplace: Whether to perform the randomization inplace, by default ``False``. :type inplace: bool, optional :param randn_opts: Supplied to :func:`~quimb.gen.rand.randn`. :rtype: Tensor .. py:attribute:: randomize_ .. py:method:: flip(ind, inplace=False) Reverse the axis on this tensor corresponding to ``ind``. Like performing e.g. ``X[:, :, ::-1, :]``. .. py:attribute:: flip_ .. py:method:: multiply_index_diagonal(ind, x, inplace=False) Multiply this tensor by 1D array ``x`` as if it were a diagonal tensor being contracted into index ``ind``. .. py:attribute:: multiply_index_diagonal_ .. py:method:: almost_equals(other, **kwargs) Check if this tensor is almost the same as another. .. py:method:: drop_tags(tags=None) Drop certain tags, defaulting to all, from this tensor. ..
py:method:: bonds(other) Return a tuple of the shared indices between this tensor and ``other``. .. py:method:: bonds_size(other) Return the size of the shared indices between this tensor and ``other``. .. py:method:: filter_bonds(other) Sort this tensor's indices into a list of those that it shares and doesn't share with another tensor. :param other: The other tensor. :type other: Tensor :returns: **shared, unshared** -- The shared and unshared indices. :rtype: (tuple[str], tuple[str]) .. py:method:: __imul__(other) .. py:method:: __itruediv__(other) .. py:method:: __and__(other) Combine with another ``Tensor`` or ``TensorNetwork`` into a new ``TensorNetwork``. .. py:method:: __or__(other) Combine virtually (no copies made) with another ``Tensor`` or ``TensorNetwork`` into a new ``TensorNetwork``. .. py:method:: __matmul__(other) Explicitly contract with another tensor. Avoids some slight overhead of calling the full :func:`~quimb.tensor.tensor_core.tensor_contract`. .. py:method:: negate(inplace=False) Negate this tensor. .. py:attribute:: negate_ .. py:method:: __neg__() Negate this tensor. .. py:method:: as_network(virtual=True) Return a ``TensorNetwork`` with only this tensor. .. py:method:: draw(*args, **kwargs) Plot a graph of this tensor and its indices. .. py:attribute:: graph .. py:attribute:: visualize .. py:method:: __getstate__() .. py:method:: __setstate__(state) .. py:method:: _repr_info() General info to show in various reprs. Subclasses can add more relevant info to this dict. .. py:method:: _repr_info_extra() General detailed info to show in various reprs. Subclasses can add more relevant info to this dict. .. py:method:: _repr_info_str(normal=True, extra=False) Render the general info as a string. .. py:method:: _repr_html_() Render this Tensor as HTML, for Jupyter notebooks. .. py:method:: __str__() .. py:method:: __repr__() .. py:function:: _make_copy_ndarray(d, ndim, dtype=float) .. py:function:: COPY_tensor(d, inds, tags=None, dtype=float) Get the tensor representing the COPY operation with dimension size ``d`` and number of dimensions ``len(inds)``, with exterior indices ``inds``. :param d: The size of each dimension. :type d: int :param inds: The exterior index names for each dimension. :type inds: sequence of str :param tags: Tag the tensor with these. :type tags: None or sequence of str, optional :param dtype: Data type to create the underlying numpy array with. :type dtype: str, optional :returns: The tensor describing the COPY operation, of size ``d**len(inds)``. :rtype: Tensor .. py:function:: COPY_mps_tensors(d, inds, tags=None, dtype=float) Get the set of MPS tensors representing the COPY tensor with dimension size ``d`` and number of dimensions ``len(inds)``, with exterior indices ``inds``. :param d: The size of each dimension. :type d: int :param inds: The exterior index names for each dimension. :type inds: sequence of str :param tags: Tag the tensors with these. :type tags: None or sequence of str, optional :param dtype: Data type to create the underlying numpy array with. :type dtype: str, optional :returns: The ``len(inds)`` tensors describing the MPS, with physical legs ordered as supplied in ``inds``. :rtype: List[Tensor] .. py:function:: COPY_tree_tensors(d, inds, tags=None, dtype=float, ssa_path=None) Get the set of tree tensors representing the COPY tensor with dimension size ``d`` and number of dimensions ``len(inds)``, with exterior indices ``inds``. The tree is generated by cycling through pairs. :param d: The size of each dimension.
:type d: int :param inds: The exterior index names for each dimension. :type inds: sequence of str :param tags: Tag the tensors with these. :type tags: None or sequence of str, optional :param dtype: Data type to create the underlying numpy array with. :type dtype: str, optional :returns: The ``len(inds) - 2`` tensors describing the TTN, with physical legs ordered as supplied in ``inds``. :rtype: List[Tensor] .. py:function:: _make_promote_array_func(op, meth_name) .. py:function:: _make_rhand_array_promote_func(op, meth_name) .. py:function:: _tensor_network_gate_inds_basic(tn: TensorNetwork, G, inds, ng, tags, contract, isparam, info, **compress_opts) .. py:function:: _tensor_network_gate_inds_lazy_split(tn: TensorNetwork, G, inds, ng, tags, contract, **compress_opts) .. py:data:: _BASIC_GATE_CONTRACT .. py:data:: _SPLIT_GATE_CONTRACT .. py:data:: _VALID_GATE_CONTRACT .. py:function:: tensor_network_gate_inds(self: TensorNetwork, G, inds, contract=False, tags=None, info=None, inplace=False, **compress_opts) Apply the 'gate' ``G`` to indices ``inds``, propagating them to the outside, as if applying ``G @ x``. :param G: The gate array to apply, should match or be factorable into the shape ``(*phys_dims, *phys_dims)``. :type G: array_like :param inds: The index or indices to apply the gate to. :type inds: str or sequence of str :param contract: How to apply the gate: - ``False``: gate is added to network lazily and nothing is contracted, tensor network structure is thus not maintained. - ``True``: gate is contracted eagerly with all tensors involved, tensor network structure is thus only maintained if gate acts on a single site only. - ``'split'``: contract all involved tensors then split the result back into two. - ``'reduce-split'``: factor the two physical indices into 'R-factors' using QR decompositions on the original site tensors, then contract the gate, split it and reabsorb each side. Cheaper than ``'split'`` when the tensors on either side have at least 3 bonds. - ``'split-gate'``: lazily add the gate as with ``False``, but split the gate tensor spatially. - ``'swap-split-gate'``: lazily add the gate as with ``False``, but split the gate as if an extra SWAP has been applied. - ``'auto-split-gate'``: lazily add the gate as with ``False``, but maybe apply one of the above options depending on whether they result in a rank reduction. The named methods are relevant for two site gates only, for single site gates they use the ``contract=True`` option which also maintains the structure of the TN. See below for a pictorial description of each method. :type contract: {False, True, 'split', 'reduce-split', 'split-gate', 'swap-split-gate', 'auto-split-gate'}, optional :param tags: Tags to add to the new gate tensor. :type tags: str or sequence of str, optional :param info: Used to store extra optional information such as the singular values if not absorbed. :type info: None or dict, optional :param inplace: Whether to perform the gate operation inplace on the tensor network or not. :type inplace: bool, optional :param compress_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_split` for any ``contract`` methods that involve splitting. Ignored otherwise. :returns: **G_tn** :rtype: TensorNetwork .. rubric:: Notes The ``contract`` options look like the following (for two site gates). ``contract=False``:: . . 
<- inds │ │ GGGGG │╱ │╱ ──●───●── ╱ ╱ ``contract=True``:: │╱ │╱ ──GGGGG── ╱ ╱ ``contract='split'``:: │╱ │╱ │╱ │╱ ──GGGGG── ==> ──G┄┄┄G── ╱ ╱ ╱ ╱ ``contract='reduce-split'``:: │ │ │ │ GGGGG GGG │ │ │╱ │╱ ==> ╱│ │ ╱ ==> ╱│ │ ╱ │╱ │╱ ──●───●── ──>─●─●─<── ──>─GGG─<── ==> ──G┄┄┄G── ╱ ╱ ╱ ╱ ╱ ╱ ╱ ╱ For one site gates when one of the above 'split' methods is supplied ``contract=True`` is assumed. ``contract='split-gate'``:: │ │ G~~~G │╱ │╱ ──●───●── ╱ ╱ ``contract='swap-split-gate'``:: ╲ ╱ ╳ ╱ ╲ G~~~G │╱ │╱ ──●───●── ╱ ╱ ``contract='auto-split-gate'`` chooses between the above two and ``False``, depending on whether either results in a lower rank. .. py:class:: TensorNetwork(ts=(), *, virtual=False, check_collisions=True) Bases: :py:obj:`object` A collection of (as yet uncontracted) Tensors. :param ts: The objects to combine. The new network will copy these (but not the underlying data) by default. For a *view* set ``virtual=True``. :type ts: sequence of Tensor or TensorNetwork :param virtual: Whether the TensorNetwork should be a *view* onto the tensors it is given, or a copy of them. E.g. if a virtual TN is constructed, any changes to a Tensor's indices or tags will propagate to all TNs viewing that Tensor. :type virtual: bool, optional :param check_collisions: If True, the default, then ``TensorNetwork`` instances with double indices which match another ``TensorNetwork`` instance's double indices will have those indices' names mangled. Can be explicitly turned off when it is known that no collisions will take place -- i.e. when not adding any new tensors. :type check_collisions: bool, optional .. attribute:: tensor_map Mapping of unique ids to tensors, like ``{tensor_id: tensor, ...}``. I.e. this is where the tensors are 'stored' by the network. :type: dict .. attribute:: tag_map Mapping of tags to a set of tensor ids which have those tags. I.e. ``{tag: {tensor_id_1, tensor_id_2, ...}}``. Thus to select those tensors could do: ``map(tensor_map.__getitem__, tag_map[tag])``. :type: dict .. attribute:: ind_map Like ``tag_map`` but for indices. So ``ind_map[ind]`` returns the tensor ids of those tensors with ``ind``. :type: dict .. attribute:: exponent A scalar prefactor for the tensor network, stored in base 10 like ``10**exponent``. This is mostly for conditioning purposes and will be ``0.0`` unless you use ``equalize_norms(value)`` or ``tn.strip_exponent(tid_or_tensor)``. :type: float .. py:attribute:: _EXTRA_PROPS :value: () .. py:attribute:: _CONTRACT_STRUCTURED :value: False .. py:attribute:: _tid_counter :value: 0 .. py:attribute:: tensor_map .. py:attribute:: tag_map .. py:attribute:: ind_map .. py:attribute:: _inner_inds .. py:attribute:: _outer_inds .. py:attribute:: exponent :value: 0.0 .. py:method:: combine(other, *, virtual=False, check_collisions=True) Combine this tensor network with another, returning a new tensor network. This can be overridden by subclasses to check for a compatible structured type. :param other: The other tensor network to combine with. :type other: TensorNetwork :param virtual: Whether the new tensor network should copy all the incoming tensors (``False``, the default), or view them as virtual (``True``). :type virtual: bool, optional :param check_collisions: Whether to check for index collisions between the two tensor networks before combining them. If ``True`` (the default), any inner indices that clash will be mangled. :type check_collisions: bool, optional :rtype: TensorNetwork
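.. rubric:: Examples

A small sketch of building a network from two tensors (names arbitrary):

>>> import quimb.tensor as qtn
>>> X = qtn.rand_tensor([2, 3], inds=['a', 'b'])
>>> Y = qtn.rand_tensor([3, 2], inds=['b', 'a'])
>>> tn = qtn.TensorNetwork([X, Y])
>>> tn.num_tensors
2

..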
py:method:: __and__(other) Combine this tensor network with more tensors, without contracting. Copies the tensors. .. py:method:: __or__(other) Combine this tensor network with more tensors, without contracting. Views the constituent tensors. .. py:method:: _update_properties(cls, like=None, current=None, **kwargs) .. py:method:: new(like=None, **kwargs) :classmethod: Create a new tensor network, without any tensors, of type ``cls``, with all the requisite properties specified by ``kwargs`` or inherited from ``like``. .. py:method:: from_TN(tn, like=None, inplace=False, **kwargs) :classmethod: Construct a specific tensor network subclass (i.e. one with some promise about structure/geometry and tags/inds such as an MPS) from a generic tensor network which should have that structure already. :param cls: The TensorNetwork subclass to convert ``tn`` to. :type cls: class :param tn: The TensorNetwork to convert. :type tn: TensorNetwork :param like: If specified, try and retrieve the necessary attribute values from this tensor network. :type like: TensorNetwork, optional :param inplace: Whether to perform the conversion inplace or not. :type inplace: bool, optional :param kwargs: Extra properties of the TN subclass that should be specified. .. py:method:: view_as(cls, inplace=False, **kwargs) View this tensor network as subclass ``cls``. .. py:attribute:: view_as_ .. py:method:: view_like(like, inplace=False, **kwargs) View this tensor network as the same subclass ``cls`` as ``like``, inheriting its extra properties as well. .. py:attribute:: view_like_ .. py:method:: copy(virtual=False, deep=False) Copy this ``TensorNetwork``. If ``deep=False`` (the default), then everything but the actual numeric data will be copied. .. py:attribute:: __copy__ .. py:method:: get_params() Get a pytree of the 'parameters', i.e. all underlying data arrays. .. py:method:: set_params(params) Take a pytree of the 'parameters', i.e. all underlying data arrays, as returned by ``get_params`` and set them. .. py:method:: _link_tags(tags, tid) Link ``tid`` to each of ``tags``. .. py:method:: _unlink_tags(tags, tid) Unlink ``tid`` from each of ``tags``. .. py:method:: _link_inds(inds, tid) Link ``tid`` to each of ``inds``. .. py:method:: _unlink_inds(inds, tid) Unlink ``tid`` from each of ``inds``. .. py:method:: _reset_inner_outer(inds) .. py:method:: _next_tid() .. py:method:: add_tensor(tensor, tid=None, virtual=False) Add a single tensor to this network - mangle its tid if necessary. .. py:method:: add_tensor_network(tn, virtual=False, check_collisions=True) .. py:method:: add(t, virtual=False, check_collisions=True) Add Tensor, TensorNetwork or sequence thereof to self. .. py:method:: make_tids_consecutive(tid0=0) Reset the `tids` - node identifiers - to be consecutive integers. .. py:method:: __iand__(tensor) Inplace, but non-virtual, addition of a Tensor or TensorNetwork to this network. It should not have any conflicting indices. .. py:method:: __ior__(tensor) Inplace, virtual, addition of a Tensor or TensorNetwork to this network. It should not have any conflicting indices. .. py:method:: _modify_tensor_tags(old, new, tid) .. py:method:: _modify_tensor_inds(old, new, tid) .. py:property:: num_tensors The total number of tensors in the tensor network. .. py:property:: num_indices The total number of indices in the tensor network. .. py:method:: pop_tensor(tid_or_tags, which='all') -> Tensor Remove a tensor from this network, and return it. :param tid_or_tags: The tensor id or tag(s) to match.
:type tid_or_tags: int or str or sequence of str :param which: If supplying tags, whether to match all or any of the tags. Default is 'all'. :type which: {'all', 'any'}, optional :returns: The tensor that was removed. :rtype: Tensor .. py:method:: remove_all_tensors() Remove all tensors from this network. .. py:attribute:: _pop_tensor .. py:method:: delete(tags, which='all') Delete any tensors which match all or any of ``tags``. :param tags: The tags to match. :type tags: str or sequence of str :param which: Whether to match all or any of the tags. :type which: {'all', 'any'}, optional .. py:method:: check() Check some basic diagnostics of the tensor network. .. py:method:: add_tag(tag, where=None, which='all', record=None) Add tag(s) to every tensor in this network, or if ``where`` is specified, the tensors matching those tags -- i.e. adds the tag to all tensors in ``self.select_tensors(where, which=which)``. Inplace operation. :param tag: The tag or tags to add. :type tag: str or sequence of str :param where: The existing tags to match for selection. :type where: str or sequence of str, optional :param which: How to match the ``where`` tags. Default is 'all', meaning a tensor must have *all* of the specified tags to be selected. :type which: {'all', 'any', '!all', '!any'}, optional :param record: A dictionary to record the tags added to each tensor. Useful for untagging later at the Tensor level. The keys will be the tensors themselves, and the values will be sets of tags that were added. If ``None`` (the default), no record is kept. :type record: None or dict, optional .. py:method:: drop_tags(tags=None) Remove a tag or tags from this tensor network, defaulting to all. This is an inplace operation. :param tags: The tag or tags to drop. If ``None``, drop all tags. :type tags: str or sequence of str or None, optional .. py:method:: retag(tag_map, inplace=False) Rename tags for all tensors in this network, optionally in-place. :param tag_map: Mapping of pairs ``{old_tag: new_tag, ...}``. :type tag_map: dict-like :param inplace: Perform operation inplace or return copy (default). :type inplace: bool, optional .. py:attribute:: retag_ .. py:method:: reindex(index_map, inplace=False) Rename indices for all tensors in this network, optionally in-place. :param index_map: Mapping of pairs ``{old_ind: new_ind, ...}``. :type index_map: dict-like .. py:attribute:: reindex_ .. py:method:: mangle_inner_(append=None, which=None) Generate new index names for internal bonds, meaning that when this tensor network is combined with another, there should be no collisions. :param append: Whether and what to append to the indices to perform the mangling. If ``None`` a whole new random UUID will be generated. :type append: None or str, optional :param which: Which indices to rename, if ``None`` (the default), all inner indices. :type which: sequence of str, optional .. py:method:: conj(mangle_inner=False, output_inds=None, inplace=False) Conjugate all the tensors in this network (leave all outer indices). :param mangle_inner: Whether to mangle the inner indices of the network. If a string is given, it will be appended to the index names. :type mangle_inner: {bool, str, None}, optional :param output_inds: If given, the indices to mangle will be restricted to those not in this list. This is only needed for (hyper) tensor networks where output indices are not given simply by those that appear once. :type output_inds: sequence of str, optional :param inplace: Whether to perform the conjugation inplace or not. 
:type inplace: bool, optional :rtype: TensorNetwork .. py:attribute:: conj_ .. py:property:: H Conjugate all the tensors in this network (leaves all indices). .. py:method:: item() Return the scalar value of this tensor network, if it is a scalar. .. py:method:: largest_element() Return the 'largest element', in terms of absolute magnitude, of this tensor network. This is defined as the product of the largest elements of each tensor in the network, which would be the largest single term occurring if the TN was summed explicitly. .. py:method:: make_norm(mangle_append='*', layer_tags=('KET', 'BRA'), output_inds=None, return_all=False) Make the norm (squared) tensor network of this tensor network ``tn.H & tn``. This deterministically mangles the inner indices of the bra to avoid clashes with the ket, and also adds tags to the top and bottom layers. If the tensor network has hyper outer indices, you may need to specify the output indices. This allows 'hyper' norms. :param mangle_append: How to mangle the inner indices of the bra. :type mangle_append: {str, False or None}, optional :param layer_tags: The tags to identify the top and bottom. :type layer_tags: (str, str), optional :param output_inds: If given, the indices to mangle will be restricted to those not in this list. This is only needed for (hyper) tensor networks where output indices are not given simply by those that appear once. :type output_inds: sequence of str, optional :param return_all: Return the norm, the ket and the bra. These are virtual, i.e. are views of the same tensors. :type return_all: bool, optional :returns: **tn_norm** :rtype: TensorNetwork .. py:method:: norm(output_inds=None, squared=False, **contract_opts) Frobenius norm of this tensor network. Computed by exactly contracting the TN with its conjugate: .. math:: \|T\|_F = \sqrt{\mathrm{Tr} \left(T^{\dagger} T\right)} where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition. .. py:method:: make_overlap(other, layer_tags=('KET', 'BRA'), output_inds=None, return_all=False) Make the overlap tensor network of this tensor network with another tensor network `other.H & self`. This deterministically mangles the inner indices of the bra to avoid clashes with the ket, and also adds tags to the top and bottom layers. If the tensor network has hyper outer indices, you may need to specify the output indices. This allows 'hyper' overlaps. :param other: The other tensor network to overlap with; it should have the same outer indices as this tensor network. All other indices will be explicitly mangled in the copy taken, allowing 'hyper' overlaps. This tensor network will be conjugated in the overlap. :type other: TensorNetwork :param layer_tags: The tags to identify the top and bottom. :type layer_tags: (str, str), optional :param output_inds: If given, the indices to mangle will be restricted to those not in this list. This is only needed for (hyper) tensor networks where output indices are not given simply by those that appear once. :type output_inds: sequence of str, optional :param return_all: Return the overlap, the ket and the bra. These are virtual, i.e. are views of the same tensors. :type return_all: bool, optional :returns: **tn_overlap** :rtype: TensorNetwork .. py:method:: overlap(other, output_inds=None, **contract_opts) Overlap of this tensor network with another tensor network. Computed by exactly contracting the TN with the conjugate of the other TN: ..
math:: \langle O, T \rangle = \mathrm{Tr} \left(O^{\dagger} T\right) where the trace is taken over all indices. This supports 'hyper' tensor networks, where the output indices are not simply those that appear once. :param other: The other tensor network to overlap with; it should have the same outer indices as this tensor network. All other indices will be explicitly mangled in the copy taken, allowing 'hyper' overlaps. This tensor network will be conjugated in the overlap. :type other: TensorNetwork :param output_inds: If given, the indices to mangle will be restricted to those not in this list. This is only needed for (hyper) tensor networks where output indices are not given simply by those that appear once. :type output_inds: sequence of str, optional :param contract_opts: Supplied to :meth:`~quimb.tensor.tensor_contract` for the contraction. :rtype: scalar .. py:method:: multiply(x, inplace=False, spread_over=8) Scalar multiplication of this tensor network with ``x``. :param x: The number to multiply this tensor network by. :type x: scalar :param inplace: Whether to perform the multiplication inplace. :type inplace: bool, optional :param spread_over: How many tensors to try and spread the multiplication over, in order that the effect of multiplying by a very large or small scalar is not concentrated. :type spread_over: int, optional .. py:attribute:: multiply_ .. py:method:: multiply_each(x, inplace=False) Scalar multiplication of each tensor in this tensor network with ``x``. If trying to spread a multiplicative factor ``fac`` uniformly over all tensors in the network and the number of tensors is large, then calling ``multiply(fac)`` can be inaccurate due to precision loss. If one has a routine that can precisely compute the ``x`` to be applied to each tensor, then this function avoids the potential inaccuracies in ``multiply()``. :param x: The number that multiplies each tensor in the network. :type x: scalar :param inplace: Whether to perform the multiplication inplace. :type inplace: bool, optional .. py:attribute:: multiply_each_ .. py:method:: negate(inplace=False) Negate this tensor network. .. py:attribute:: negate_ .. py:method:: __mul__(other) Scalar multiplication. .. py:method:: __rmul__(other) Right side scalar multiplication. .. py:method:: __imul__(other) Inplace scalar multiplication. .. py:method:: __truediv__(other) Scalar division. .. py:method:: __itruediv__(other) Inplace scalar division. .. py:method:: __neg__() Negate this tensor network. .. py:method:: __iter__() .. py:property:: tensors Get the tuple of tensors in this tensor network. .. py:property:: arrays Get the tuple of raw arrays containing all the tensor network data. .. py:method:: get_symbol_map() Get the mapping of the current indices to ``einsum`` style single unicode characters. The symbols are generated in the order they appear on the tensors. .. seealso:: :py:obj:`get_equation`, :py:obj:`get_inputs_output_size_dict` .. py:method:: get_equation(output_inds=None) Get the 'equation' describing this tensor network, in ``einsum`` style with a single unicode letter per index. The symbols are generated in the order they appear on the tensors. :param output_inds: Manually specify which are the output indices. :type output_inds: None or sequence of str, optional :returns: **eq** :rtype: str .. rubric:: Examples >>> tn = qtn.TN_rand_reg(10, 3, 2) >>> tn.get_equation() 'abc,dec,fgb,hia,jke,lfk,mnj,ing,omd,ohl->' .. seealso:: :py:obj:`get_symbol_map`, :py:obj:`get_inputs_output_size_dict` ..
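rubric:: Examples

A rough sketch of ``multiply`` spreading a scalar factor over several tensors, so no single tensor absorbs a very large value (assuming the ``MPS_rand_state`` constructor from ``quimb.tensor``)::

    import quimb.tensor as qtn

    tn = qtn.MPS_rand_state(6, bond_dim=4)

    # each of 4 tensors is scaled by 1000 ** (1 / 4)
    tn1000 = tn.multiply(1000, spread_over=4)

    # the norm, being linear in an overall scalar, grows by 1000
    assert abs(tn1000.norm() - 1000 * tn.norm()) < 1e-6

..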
py:method:: get_inputs_output_size_dict(output_inds=None) Get a tuple of ``inputs``, ``output`` and ``size_dict`` suitable for e.g. passing to path optimizers. The symbols are generated in the order they appear on the tensors. :param output_inds: Manually specify which are the output indices. :type output_inds: None or sequence of str, optional :returns: * **inputs** (*tuple[str]*) * **output** (*str*) * **size_dict** (*dict[str, ix]*) .. seealso:: :py:obj:`get_symbol_map`, :py:obj:`get_equation` .. py:method:: geometry_hash(output_inds=None, strict_index_order=False) A hash of this tensor network's shapes & geometry. A useful check for determinism. Moreover, if this matches for two tensor networks then they can be contracted using the same tree for the same cost. Order of tensors matters for this - two isomorphic tensor networks with shuffled tensor order will not have the same hash value. Permuting the indices of individual tensors or the output does not matter unless you set ``strict_index_order=True``. :param output_inds: Manually specify which indices are output indices and their order, otherwise assumed to be all indices that appear once. :type output_inds: None or sequence of str, optional :param strict_index_order: If ``False``, then the permutation of the indices of each tensor and the output does not matter. :type strict_index_order: bool, optional :rtype: str .. rubric:: Examples If we transpose some indices, then only the strict hash changes: >>> tn = qtn.TN_rand_reg(100, 3, 2, seed=0) >>> tn.geometry_hash() '18c702b2d026dccb1a69d640b79d22f3e706b6ad' >>> tn.geometry_hash(strict_index_order=True) 'c109fdb43c5c788c0aef7b8df7bb83853cf67ca1' >>> t = tn['I0'] >>> t.transpose_(t.inds[2], t.inds[1], t.inds[0]) >>> tn.geometry_hash() '18c702b2d026dccb1a69d640b79d22f3e706b6ad' >>> tn.geometry_hash(strict_index_order=True) '52c32c1d4f349373f02d512f536b1651dfe25893' .. py:method:: tensors_sorted() Return a tuple of tensors sorted by their respective tags, such that the tensors of two networks with the same tag structure can be iterated over pairwise. .. py:method:: apply_to_arrays(fn) Modify every tensor's array inplace by applying ``fn`` to it. This is meant for changing how the raw arrays are backed (e.g. converting between dtypes or libraries) but not their 'numerical meaning'. .. py:method:: _get_tids_from(xmap, xs, which) .. py:method:: _get_tids_from_tags(tags, which='all') Return the set of tensor ids that match ``tags``. :param tags: Tag specifier(s). :type tags: seq or str, str, None, ..., int, slice :param which: How to select based on the tags, if: - 'all': get ids of tensors matching all tags - 'any': get ids of tensors matching any tags - '!all': get ids of tensors *not* matching all tags - '!any': get ids of tensors *not* matching any tags :type which: {'all', 'any', '!all', '!any'} :rtype: set[str] .. py:method:: _get_tids_from_inds(inds, which='all') Like ``_get_tids_from_tags`` but specify inds instead. .. py:method:: _tids_get(*tids) Convenience function that generates unique tensors from tids. .. py:method:: _inds_get(*inds) Convenience function that generates unique tensors from inds. .. py:method:: _tags_get(*tags) Convenience function that generates unique tensors from tags. .. py:method:: select_tensors(tags, which='all') Return the sequence of tensors that match ``tags``. If ``which='all'``, each tensor must contain every tag. If ``which='any'``, each tensor can contain any of the tags. :param tags: The tag or tag sequence.
:type tags: str or sequence of str :param which: Whether to require matching all or any of the tags. :type which: {'all', 'any'} :returns: **tagged_tensors** -- The tagged tensors. :rtype: tuple of Tensor .. seealso:: :py:obj:`select`, :py:obj:`select_neighbors`, :py:obj:`partition`, :py:obj:`partition_tensors` .. py:method:: _select_tids(tids, virtual=True) Get a copy or a virtual copy (doesn't copy the tensors) of this ``TensorNetwork``, only with the tensors corresponding to ``tids``. .. py:method:: _select_without_tids(tids, virtual=True) Get a copy or a virtual copy (doesn't copy the tensors) of this ``TensorNetwork``, without the tensors corresponding to ``tids``. .. py:method:: select(tags, which='all', virtual=True) Get a TensorNetwork comprising tensors that match all or any of ``tags``, inheriting the network properties/structure from ``self``. This returns a view of the tensors, not a copy. :param tags: The tag or tag sequence. :type tags: str or sequence of str :param which: Whether to require matching all or any of the tags. :type which: {'all', 'any'} :param virtual: Whether the returned tensor network views the same tensors (the default) or takes copies (``virtual=False``) from ``self``. :type virtual: bool, optional :returns: **tagged_tn** -- A tensor network containing the tagged tensors. :rtype: TensorNetwork .. seealso:: :py:obj:`select_tensors`, :py:obj:`select_neighbors`, :py:obj:`partition`, :py:obj:`partition_tensors` .. py:attribute:: select_any .. py:attribute:: select_all .. py:method:: select_neighbors(tags, which='any') Select any neighbouring tensors to those specified by ``tags``. :param tags: Tags specifying tensors. :type tags: sequence of str, int :param which: How to select tensors based on ``tags``. :type which: {'any', 'all'}, optional :returns: The neighbouring tensors. :rtype: tuple[Tensor] .. seealso:: :py:obj:`select_tensors`, :py:obj:`partition_tensors` .. py:method:: _select_local_tids(tids, max_distance=1, mode='graphdistance', fillin=False, grow_from='all', reduce_outer=None, virtual=True, include=None, exclude=None) Select a local region of tensors, based on graph distance or union of loops, from an initial set of tensor ids. :param tids: The initial tensor ids. :type tids: sequence of str :param max_distance: The maximum distance to the initial tagged region, or if using 'loopunion' mode, the maximum size of any loop. :type max_distance: int, optional :param mode: How to select the local tensors, either by graph distance or by selecting the union of all loopy regions containing ``tids``. :type mode: {'graphdistance', 'loopunion'}, optional :param fillin: Whether to fill in the local patch with additional tensors, or not. `fillin` tensors are those connected by two or more bonds to the original local patch; the process is repeated ``int(fillin)`` times. :type fillin: bool or int, optional :param grow_from: If mode is 'loopunion', whether each loop should contain *all* of the initial tids, or just *any* of them (generating a larger region). :type grow_from: {"all", "any"}, optional :param reduce_outer: Whether and how to reduce any outer indices of the selected region. :type reduce_outer: {'sum', 'svd', 'svd-sum', 'reflect'}, optional :param virtual: Whether the returned tensor network should be a view of the tensors or a copy. :type virtual: bool, optional :param include: If given, only include tensors from this set of tids. :type include: None or sequence of int, optional :param exclude: If given, always exclude tensors from this set of tids.
:type exclude: None or sequence of int, optional :rtype: TensorNetwork .. py:method:: select_local(tags, which='all', max_distance=1, mode='graphdistance', fillin=False, grow_from='all', reduce_outer=None, virtual=True, include=None, exclude=None) Select a local region of tensors, based on graph distance ``max_distance`` to any tagged tensors. :param tags: The tag or tag sequence defining the initial region. :type tags: str or sequence of str :param which: Whether to require matching all or any of the tags. :type which: {'all', 'any', '!all', '!any'}, optional :param max_distance: The maximum distance to the initial tagged region, or if using 'loopunion' mode, the maximum size of any loop. :type max_distance: int, optional :param mode: How to select the local tensors, either by graph distance or by selecting the union of all loopy regions containing ``where``, of size up to ``max_distance``, ensuring no dangling tensors. :type mode: {'graphdistance', 'loopunion'}, optional :param fillin: Once the local region has been selected based on graph distance, whether and how many times to 'fill-in' corners by adding tensors connected multiple times. For example, if ``R`` is an initially tagged tensor and ``x`` are locally selected tensors:: fillin=0 fillin=1 fillin=2 | | | | | | | | | | | | | | | -o-o-X-o-o- -o-X-X-X-o- -X-X-X-X-X- | | | | | | | | | | | | | | | -o-X-X-X-o- -X-X-X-X-X- -X-X-X-X-X- | | | | | | | | | | | | | | | -X-X-R-X-X- -X-X-R-X-X- -X-X-R-X-X- :type fillin: bool or int, optional :param grow_from: If mode is 'loopunion', whether each loop should contain *all* of the initial tagged tensors, or just *any* of them (generating a larger region). :type grow_from: {"all", "any"}, optional :param reduce_outer: Whether and how to reduce any outer indices of the selected region. :type reduce_outer: {'sum', 'svd', 'svd-sum', 'reflect'}, optional :param virtual: Whether the returned tensor network should be a view of the tensors or a copy (``virtual=False``). :type virtual: bool, optional :param include: Only include tensors with these ``tids``. :type include: sequence of int, optional :param exclude: Only include tensors without these ``tids``. :type exclude: sequence of int, optional :rtype: TensorNetwork .. py:method:: select_path(loop, gauges=None) Select a sub tensor network corresponding to a single (possibly closed, AKA loop-like) path. Indices that are not part of the loop but do connect tids within it are cut, making this different to other select methods. :param loop: A collection of tids and inds to select. :type loop: NetworkPath or sequence of str or int :param gauges: A dictionary of gauge tensors to insert at dangling (including cut) indices. :type gauges: dict[str, array_like], optional :rtype: TensorNetwork .. py:method:: __getitem__(tags) Get the tensor(s) associated with ``tags``. :param tags: The tags used to select the tensor(s). :type tags: str or sequence of str :rtype: Tensor or sequence of Tensors .. py:method:: __setitem__(tags, tensor) Set the single tensor uniquely associated with ``tags``. .. py:method:: __delitem__(tags) Delete any tensors which have all of ``tags``. .. py:method:: partition_tensors(tags, inplace=False, which='any') Split this TN into a list of tensors containing any or all of ``tags`` and a ``TensorNetwork`` of the rest. :param tags: The list of tags to filter the tensors by. Use ``...`` (``Ellipsis``) to filter all.
:type tags: sequence of str :param inplace: If True, remove tagged tensors from self, else create a new network with the tensors removed. :type inplace: bool, optional :param which: Whether to require matching all or any of the tags. :type which: {'all', 'any'} :returns: **(u_tn, t_ts)** -- The untagged tensor network, and the sequence of tagged Tensors. :rtype: (TensorNetwork, tuple of Tensors) .. seealso:: :py:obj:`partition`, :py:obj:`select`, :py:obj:`select_tensors` .. py:method:: partition(tags, which='any', inplace=False) Split this TN into two, based on which tensors have any or all of ``tags``. Unlike ``partition_tensors``, both results are TNs which inherit the structure of the initial TN. :param tags: The tags to split the network with. :type tags: sequence of str :param which: Whether to split based on matching any or all of the tags. :type which: {'any', 'all'} :param inplace: If True, actually remove the tagged tensors from self. :type inplace: bool :returns: **untagged_tn, tagged_tn** -- The untagged and tagged tensor networks. :rtype: (TensorNetwork, TensorNetwork) .. seealso:: :py:obj:`partition_tensors`, :py:obj:`select`, :py:obj:`select_tensors` .. py:method:: _split_tensor_tid(tid, left_inds, **split_opts) .. py:method:: split_tensor(tags, left_inds, **split_opts) Split the single tensor uniquely identified by ``tags``, adding the resulting tensors from the decomposition back into the network. Inplace operation. .. py:method:: replace_with_identity(where, which='any', inplace=False) Replace all tensors marked by ``where`` with an identity. E.g. if ``X`` denote ``where`` tensors:: ---1 X--X--2--- ---1---2--- | | | | ==> | X--X--X | | :param where: Tags specifying the tensors to replace. :type where: tag or seq of tags :param which: Whether to replace tensors matching any or all the tags ``where``. :type which: {'any', 'all'} :param inplace: Perform operation in place. :type inplace: bool :returns: The TN, with section replaced with identity. :rtype: TensorNetwork .. seealso:: :py:obj:`replace_with_svd` .. py:method:: replace_with_svd(where, left_inds, eps, *, which='any', right_inds=None, method='isvd', max_bond=None, absorb='both', cutoff_mode='rel', renorm=None, ltags=None, rtags=None, keep_tags=True, start=None, stop=None, inplace=False) Replace all tensors marked by ``where`` with an iteratively constructed SVD. E.g. if ``X`` denote ``where`` tensors:: :__ ___: ---X X--X X--- : \ / : | | | | ==> : U~s~VH---: ---X--X--X--X--- :__/ \ : | +--- : \__: X left_inds : right_inds :param where: Tags specifying the tensors to replace. :type where: tag or seq of tags :param left_inds: The indices defining the left hand side of the SVD. :type left_inds: ind or sequence of inds :param eps: The tolerance to perform the SVD with, affects the number of singular values kept. See :func:`quimb.linalg.rand_linalg.estimate_rank`. :type eps: float :param which: Whether to replace tensors matching any or all the tags ``where``, prefix with '!' to invert the selection. :type which: {'any', 'all', '!any', '!all'}, optional :param right_inds: The indices defining the right hand side of the SVD, these can be automatically worked out, but for hermitian decompositions the order is important and thus can be given here explicitly. :type right_inds: ind or sequence of inds, optional :param method: How to perform the decomposition, if not an iterative method the subnetwork dense tensor will be formed first, see :func:`~quimb.tensor.tensor_core.tensor_split` for options.
:type method: str, optional :param max_bond: The maximum bond to keep, defaults to no maximum (-1). :type max_bond: int, optional :param ltags: Tags to add to the left tensor. :type ltags: sequence of str, optional :param rtags: Tags to add to the right tensor. :type rtags: sequence of str, optional :param keep_tags: Whether to propagate tags found in the subnetwork to both new tensors or drop them, defaults to ``True``. :type keep_tags: bool, optional :param start: If given, assume can use ``TNLinearOperator1D``. :type start: int, optional :param stop: If given, assume can use ``TNLinearOperator1D``. :type stop: int, optional :param inplace: Perform operation in place. :type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`replace_with_identity` .. py:attribute:: replace_with_svd_ .. py:method:: replace_section_with_svd(start, stop, eps, **replace_with_svd_opts) Take a 1D tensor network, and replace a section with an SVD. See :meth:`~quimb.tensor.tensor_core.TensorNetwork.replace_with_svd`. :param start: Section start index. :type start: int :param stop: Section stop index, not included itself. :type stop: int :param eps: Precision of SVD. :type eps: float :param replace_with_svd_opts: Supplied to :meth:`~quimb.tensor.tensor_core.TensorNetwork.replace_with_svd`. :rtype: TensorNetwork .. py:method:: convert_to_zero() Inplace conversion of this network to an all zero tensor network. .. py:method:: _contract_between_tids(tid1, tid2, equalize_norms=False, gauges=None, output_inds=None, **contract_opts) .. py:method:: contract_between(tags1, tags2, **contract_opts) Contract the two tensors specified by ``tags1`` and ``tags2`` respectively. This is an inplace operation. No-op if the tensor specified by ``tags1`` and ``tags2`` is the same tensor. :param tags1: Tags uniquely identifying the first tensor. :param tags2: Tags uniquely identifying the second tensor. :type tags2: str or sequence of str :param contract_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_contract`. .. py:method:: contract_ind(ind, output_inds=None, **contract_opts) Contract tensors connected by ``ind``. This is an inplace operation. :param ind: The index to contract over. All tensors connected by this index will be contracted into a single tensor. Note that if `ind` is in `output_inds` then it will still be retained on this tensor. :type ind: str :param output_inds: The output indices for the local contraction. If not given, they will be calculated from the default outer indices of the full tensor network. :type output_inds: str or sequence of str, optional :param contract_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_contract`. .. py:attribute:: gate_inds .. py:attribute:: gate_inds_ .. py:method:: gate_inds_with_tn(inds, gate, gate_inds_inner, gate_inds_outer, inplace=False) Gate some indices of this tensor network with another tensor network. That is, rewire and then combine them such that the new tensor network has the same outer indices as before, but now includes ``gate``:: gate_inds_outer : : gate_inds_inner : : : : inds inds : ┌────┐ : : ┌────┬─── : ┌───────┬─── ───┤ ├── a──┤ │ a──┤ │ │ │ │ ├─── │ ├─── ───┤gate├── b──┤self│ --> b──┤ new │ │ │ │ ├─── │ ├─── ───┤ ├── c──┤ │ c──┤ │ └────┘ └────┴─── └───────┴─── Where there can be arbitrary structure of tensors within both ``self`` and ``gate``.
The case where some of the target ``inds`` are not present is handled like so (here 'c' is missing so 'x' and 'y' are kept):: gate_inds_outer : : gate_inds_inner : : : : inds inds : ┌────┐ : : ┌────┬─── : ┌───────┬─── ───┤ ├── a──┤ │ a──┤ │ │ │ │ ├─── │ ├─── ───┤gate├── b──┤self│ --> b──┤ new │ │ │ │ ├─── │ ├─── x───┤ ├──y └────┘ x──┤ ┌──┘ └────┘ └────┴───y Which enables convenient construction of various tensor networks, for example propagators, from scratch. :param inds: The current indices to gate. If an index is not present on the target tensor network, it is ignored and instead the resulting tensor network will have both the corresponding inner and outer index of the gate tensor network. :type inds: str or sequence of str :param gate: The tensor network to gate with. :type gate: Tensor or TensorNetwork :param gate_inds_inner: The indices of ``gate`` to join to the old ``inds``, must be the same length as ``inds``. :type gate_inds_inner: sequence of str :param gate_inds_outer: The indices of ``gate`` to make the new outer ``inds``, must be the same length as ``inds``. :type gate_inds_outer: sequence of str :returns: **tn_gated** :rtype: TensorNetwork .. seealso:: :py:obj:`TensorNetwork.gate_inds` .. py:attribute:: gate_inds_with_tn_ .. py:method:: _compute_tree_gauges(tree, outputs) Given a ``tree`` of connected tensors, absorb the gauges from outside inwards, finally outputting the gauges associated with the ``outputs``. :param tree: The tree of connected tensors, see :meth:`get_tree_span`. :type tree: sequence of (tid_outer, tid_inner, distance) :param outputs: Each output is specified by a tensor id and an index, such that having absorbed all gauges in the tree, the effective reduced factor of the tensor with respect to the index is returned. :type outputs: sequence of (tid, ind) :returns: **Gouts** -- The effective reduced factors of the tensor index pairs specified in ``outputs``, each a matrix. :rtype: sequence of array .. py:method:: _compress_between_virtual_tree_tids(tidl, tidr, max_bond, cutoff, r, absorb='both', include=None, exclude=None, span_opts=None, **compress_opts) .. py:method:: _compute_bond_env(tid1, tid2, select_local_distance=None, select_local_opts=None, max_bond=None, cutoff=None, method='contract_around', contract_around_opts=None, contract_compressed_opts=None, optimize='auto-hq', include=None, exclude=None) Compute the local tensor environment of the bond(s), if cut, between two tensors. .. py:method:: _compress_between_full_bond_tids(tid1, tid2, max_bond, cutoff=0.0, absorb='both', renorm=False, method='eigh', select_local_distance=None, select_local_opts=None, env_max_bond='max_bond', env_cutoff='cutoff', env_method='contract_around', contract_around_opts=None, contract_compressed_opts=None, env_optimize='auto-hq', include=None, exclude=None) .. py:method:: _compress_between_local_fit(tid1, tid2, max_bond, cutoff=0.0, absorb='both', method='als', select_local_distance=1, select_local_opts=None, include=None, exclude=None, **fit_opts) .. py:method:: _compress_between_tids(tid1, tid2, max_bond=None, cutoff=1e-10, absorb='both', canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, mode='basic', equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback=None, **compress_opts) ..
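rubric:: Examples

A hedged sketch of the public ``compress_between`` method documented next, assuming the ``MPS_rand_state`` constructor and its conventional ``'I{site}'`` site tags::

    import quimb.tensor as qtn
    from quimb.tensor.tensor_core import bonds_size

    mps = qtn.MPS_rand_state(8, bond_dim=16)

    # compress the central bond back down, locally canonizing first
    mps.compress_between('I3', 'I4', max_bond=8, canonize_distance=3)

    assert bonds_size(mps['I3'], mps['I4']) == 8

..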
py:method:: compress_between(tags1, tags2, max_bond=None, cutoff=1e-10, absorb='both', canonize_distance=0, canonize_opts=None, equalize_norms=False, **compress_opts) Compress the bond between the two single tensors in this network specified by ``tags1`` and ``tags2`` using :func:`~quimb.tensor.tensor_core.tensor_compress_bond`:: | | | | | | | | ==●====●====●====●== ==●====●====●====●== /| /| /| /| /| /| /| /| | | | | | | | | ==●====1====2====●== ==> ==●====L----R====●== /| /| /| /| /| /| /| /| | | | | | | | | ==●====●====●====●== ==●====●====●====●== /| /| /| /| /| /| /| /| This is an inplace operation. The compression is unlikely to be optimal with respect to the Frobenius norm, unless the TN is already canonicalized at the two tensors. The ``absorb`` kwarg can be specified to yield an isometry on either the left or right resulting tensors. :param tags1: Tags uniquely identifying the first ('left') tensor. :param tags2: Tags uniquely identifying the second ('right') tensor. :type tags2: str or sequence of str :param max_bond: The maximum bond dimension. :type max_bond: int or None, optional :param cutoff: The singular value cutoff to use. :type cutoff: float, optional :param canonize_distance: How far to locally canonize around the target tensors first. :type canonize_distance: int, optional :param canonize_opts: Other options for the local canonization. :type canonize_opts: None or dict, optional :param equalize_norms: If set, rescale the norms of all tensors modified to this value, stripping the rescaling factor into the ``exponent`` attribute. :type equalize_norms: bool or float, optional :param compress_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_compress_bond`. .. seealso:: :py:obj:`canonize_between` .. py:method:: compress_all(max_bond=None, cutoff=1e-10, canonize=True, tree_gauge_distance=None, canonize_distance=None, canonize_after_distance=None, mode='auto', inplace=False, **compress_opts) Compress all bonds one by one in this network. :param max_bond: The maximum bond dimension to compress to. :type max_bond: int or None, optional :param cutoff: The singular value cutoff to use. :type cutoff: float, optional :param tree_gauge_distance: How far to include local tree gauge information when compressing. If the local geometry is a tree, then each compression will be locally optimal up to this distance. :type tree_gauge_distance: int, optional :param canonize_distance: How far to locally canonize around the target tensors first, this is set automatically by ``tree_gauge_distance`` if not specified. :type canonize_distance: int, optional :param canonize_after_distance: How far to locally canonize around the target tensors after, this is set automatically by ``tree_gauge_distance``, depending on ``mode`` if not specified. :type canonize_after_distance: int, optional :param mode: The mode to use for compressing the bonds. If 'auto', will use 'basic' if ``tree_gauge_distance == 0`` else 'virtual-tree'. :type mode: {'auto', 'basic', 'virtual-tree'}, optional :param inplace: Whether to perform the compression inplace. :type inplace: bool, optional :param compress_opts: Supplied to :func:`~quimb.tensor.tensor_core.TensorNetwork.compress_between`. :rtype: TensorNetwork .. seealso:: :py:obj:`compress_between`, :py:obj:`canonize_all` .. py:attribute:: compress_all_ .. py:method:: compress_all_tree(inplace=False, **compress_opts) Canonically compress this tensor network, assuming it to be a tree.
This generates a tree spanning out from the most central tensor, then compresses all bonds inwards in a depth-first manner, using an infinite ``canonize_distance`` to shift the orthogonality center. .. py:attribute:: compress_all_tree_ .. py:method:: compress_all_1d(max_bond=None, cutoff=1e-10, canonize=True, inplace=False, **compress_opts) Compress a tensor network that you know has a 1D topology, this proceeds by generating a spanning 'tree' from around the least central tensor, then optionally canonicalizing all bonds outwards and compressing inwards. :param max_bond: The maximum bond dimension to compress to. :type max_bond: int, optional :param cutoff: The singular value cutoff to use. :type cutoff: float, optional :param canonize: Whether to canonize all bonds outwards first. :type canonize: bool, optional :param inplace: Whether to perform the compression inplace. :type inplace: bool, optional :param compress_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_compress_bond`. :rtype: TensorNetwork .. py:attribute:: compress_all_1d_ .. py:method:: compress_all_simple(max_bond=None, cutoff=1e-10, gauges=None, max_iterations=5, tol=0.0, smudge=1e-12, power=1.0, inplace=False, **gauge_simple_opts) .. py:attribute:: compress_all_simple_ .. py:method:: _canonize_between_tids(tid1, tid2, absorb='right', gauges=None, gauge_smudge=1e-06, equalize_norms=False, **canonize_opts) .. py:method:: canonize_between(tags1, tags2, absorb='right', **canonize_opts) 'Canonize' the bond between the two single tensors in this network specified by ``tags1`` and ``tags2`` using ``tensor_canonize_bond``:: | | | | | | | | --●----●----●----●-- --●----●----●----●-- /| /| /| /| /| /| /| /| | | | | | | | | --●----1----2----●-- ==> --●---->~~~~R----●-- /| /| /| /| /| /| /| /| | | | | | | | | --●----●----●----●-- --●----●----●----●-- /| /| /| /| /| /| /| /| This is an inplace operation. This can only be used to put a TN into truly canonical form if the geometry is a tree, such as an MPS. :param tags1: Tags uniquely identifying the first ('left') tensor, which will become an isometry. :param tags2: Tags uniquely identifying the second ('right') tensor. :type tags2: str or sequence of str :param absorb: Which side of the bond to absorb the non-isometric operator. :type absorb: {'left', 'both', 'right'}, optional :param canonize_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_canonize_bond`. .. seealso:: :py:obj:`compress_between` .. py:method:: reduce_inds_onto_bond(inda, indb, tags=None, drop_tags=False, combine=True, ndim_cutoff=3) Use QR factorization to 'pull' the indices ``inda`` and ``indb`` off of their respective tensors and onto the bond between them. This is an inplace operation. .. py:method:: _get_neighbor_tids(tids, exclude_inds=()) Get the tids of tensors connected to the tensor(s) at ``tids``. :param tids: The tensor identifier(s) to get the neighbors of. :type tids: int or sequence of int :param exclude_inds: Exclude these indices from being considered as connections. :type exclude_inds: sequence of str, optional :rtype: oset[int] .. py:method:: get_tid_neighbor_map() Get a mapping of each tensor id to the tensor ids of its neighbors. .. py:method:: _get_neighbor_inds(inds) Get the indices connected to the index(es) at ``inds``. :param inds: The index(es) to get the neighbors of. :type inds: str or sequence of str :rtype: oset[str] .. py:method:: _get_subgraph_tids(tids) Get the tids of tensors connected, by any distance, to the tensor or region of tensors ``tids``. .. 
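rubric:: Examples

An illustrative sketch of ``canonize_between`` above (again assuming an MPS built with ``MPS_rand_state`` and tagged ``'I{site}'``): after absorbing to the right, the left tensor becomes an isometry::

    import quimb.tensor as qtn
    from quimb.tensor.tensor_core import bonds_size

    mps = qtn.MPS_rand_state(4, bond_dim=3)
    mps.canonize_between('I0', 'I1', absorb='right')

    # the site-0 tensor is now an isometry, so contracting it with its
    # conjugate over all indices gives the (possibly reduced) bond size
    t0 = mps['I0']
    assert abs(t0.H @ t0 - bonds_size(t0, mps['I1'])) < 1e-10

..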
py:method:: _ind_to_subgraph_tids(ind) Get the tids of tensors connected, by any distance, to the index ``ind``. .. py:attribute:: compute_centralities .. py:attribute:: compute_hierarchical_grouping .. py:attribute:: compute_hierarchical_linkage .. py:attribute:: compute_hierarchical_ordering .. py:attribute:: compute_hierarchical_ssa_path .. py:attribute:: compute_shortest_distances .. py:attribute:: gen_all_paths_between_tids .. py:attribute:: gen_inds_connected .. py:attribute:: gen_loops .. py:attribute:: gen_patches .. py:attribute:: gen_paths_loops .. py:attribute:: gen_sloops .. py:attribute:: gen_gloops .. py:attribute:: get_local_patch .. py:attribute:: get_path_between_tids .. py:attribute:: get_loop_union .. py:attribute:: get_tree_span .. py:attribute:: isconnected .. py:attribute:: istree .. py:attribute:: least_central_tid .. py:attribute:: most_central_tid .. py:attribute:: subgraphs .. py:attribute:: tids_are_connected .. py:method:: _draw_tree_span_tids(tids, span=None, min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', sorter=None, weight_bonds=True, color='order', colormap='Spectral', **draw_opts) .. py:method:: draw_tree_span(tags, which='all', min_distance=0, max_distance=None, include=None, exclude=None, ndim_sort='max', distance_sort='min', weight_bonds=True, color='order', colormap='Spectral', **draw_opts) Visualize a generated tree span out of the tensors tagged by ``tags``. :param tags: Tags specifying a region of tensors to span out of. :type tags: str or sequence of str :param which: How to select tensors based on the tags. :type which: {'all', 'any', '!all', '!any'}, optional :param min_distance: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type min_distance: int, optional :param max_distance: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type max_distance: None or int, optional :param include: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type include: sequence of str, optional :param exclude: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type exclude: sequence of str, optional :param distance_sort: See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type distance_sort: {'min', 'max'}, optional :param color: Whether to color nodes based on the order of the contraction or the graph distance from the specified region. :type color: {'order', 'distance'}, optional :param colormap: The name of a ``matplotlib`` colormap to use. :type colormap: str .. seealso:: :py:obj:`get_tree_span` .. py:attribute:: graph_tree_span .. py:method:: _canonize_around_tids(tids, min_distance=0, max_distance=None, include=None, exclude=None, span_opts=None, absorb='right', gauge_links=False, link_absorb='both', inwards=True, gauges=None, gauge_smudge=1e-06, **canonize_opts) .. py:method:: canonize_around(tags, which='all', min_distance=0, max_distance=None, include=None, exclude=None, span_opts=None, absorb='right', gauge_links=False, link_absorb='both', equalize_norms=False, inplace=False, **canonize_opts) Expand a locally canonical region around ``tags``:: --●---●-- | | | | --●---v---v---●-- | | | | | | --●--->---v---v---<---●-- | | | | | | | | ●--->--->---O---O---<---<---● | | | | | | | | --●--->---^---^---^---●-- | | | | | | --●---^---^---●-- | | | | --●---●-- <=====> max_distance = 2 e.g. Shown on a grid here but applicable to arbitrary geometry.
This is a way of gauging a tensor network that results in a canonical form if the geometry is described by a tree (e.g. an MPS or TTN). The canonizations proceed inwards via QR decompositions. The sequence is generated by round-robin expanding the boundary of the originally specified tensors - it will only be unique for trees. :param tags: Tags defining which set of tensors to locally canonize around. :type tags: str or sequence of str :param which: How to select the tensors based on tags. :type which: {'all', 'any', '!all', '!any'}, optional :param min_distance: How close, in terms of graph distance, to canonize tensors away. See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type min_distance: int, optional :param max_distance: How far, in terms of graph distance, to canonize tensors away. See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type max_distance: None or int, optional :param include: How to build the spanning tree to canonize along. See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type include: sequence of str, optional :param exclude: How to build the spanning tree to canonize along. See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type exclude: sequence of str, optional :param distance_sort: How to build the spanning tree to canonize along. See :meth:`~quimb.tensor.tensor_core.TensorNetwork.get_tree_span`. :type distance_sort: {'min', 'max'}, optional :param absorb: As we canonize inwards from tensor A to tensor B, which side to absorb the singular values into. :type absorb: {'right', 'left', 'both'}, optional :param gauge_links: Whether to gauge the links *between* branches of the spanning tree generated (in a Simple Update like fashion). :type gauge_links: bool, optional :param link_absorb: If performing the link gauging, how to absorb the singular values. :type link_absorb: {'both', 'right', 'left'}, optional :param equalize_norms: Scale the norms of tensors acted on to this value, accumulating the log10 scaled factors in ``self.exponent``. :type equalize_norms: bool or float, optional :param inplace: Whether to perform the canonization inplace. :type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`get_tree_span` .. py:attribute:: canonize_around_ .. py:method:: gauge_all_canonize(max_iterations=5, absorb='both', gauges=None, gauge_smudge=1e-06, equalize_norms=False, inplace=False, **canonize_opts) Iteratively gauge all the bonds in this tensor network with a basic 'canonization' strategy. .. py:attribute:: gauge_all_canonize_ .. py:method:: gauge_all_simple(max_iterations=5, tol=0.0, smudge=1e-12, power=1.0, damping=0.0, gauges=None, equalize_norms=False, touched_tids=None, info=None, progbar=False, inplace=False) Iteratively gauge all the bonds in this tensor network with a 'simple update' like strategy. If gauges are not supplied they are initialized and then reabsorbed at the end, in which case this method acts as a kind of conditioning. More usefully, if you supply `gauges` then they will be updated inplace and *not* absorbed back into the tensor network, with the assumption that you are using/tracking them externally. :param max_iterations: The maximum number of iterations to perform.
:type max_iterations: int, optional :param tol: The convergence tolerance for the singular values. :type tol: float, optional :param smudge: A small value to add to the singular values when gauging. :type smudge: float, optional :param power: A power to raise the singular values to when gauging. :type power: float, optional :param damping: The damping factor to apply to the gauging updates. :type damping: float, optional :param gauges: Supply the initial gauges to use. :type gauges: dict, optional :param equalize_norms: Whether to equalize the norms of the tensors after each update. :type equalize_norms: bool, optional :param touched_tids: The tensor identifiers to start the gauge sweep from. :type touched_tids: sequence of int, optional :param info: Store extra information about the gauging process in this dict. If supplied, the following keys are filled: - 'iterations': the number of iterations performed. - 'max_sdiff': the maximum singular value difference. :type info: dict, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :param inplace: Whether to perform the gauging inplace. :type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`gauge_simple_insert`, :py:obj:`gauge_simple_remove`, :py:obj:`gauge_simple_temp`, :py:obj:`gauge_all_canonize` .. py:attribute:: gauge_all_simple_ .. py:method:: gauge_all_random(max_iterations=1, unitary=True, seed=None, inplace=False) Gauge all the bonds in this network randomly. This is largely for testing purposes. .. py:attribute:: gauge_all_random_ .. py:method:: gauge_all(method='canonize', **gauge_opts) Gauge all bonds in this network using one of several strategies. :param method: The method to use for gauging. One of "canonize", "simple", or "random". Default is "canonize". :type method: str, optional :param gauge_opts: Additional keyword arguments to pass to the chosen method. :type gauge_opts: dict, optional .. seealso:: :py:obj:`gauge_all_canonize`, :py:obj:`gauge_all_simple`, :py:obj:`gauge_all_random` .. py:attribute:: gauge_all_ .. py:method:: _gauge_local_tids(tids, max_distance=1, mode='graphdistance', max_iterations='max_distance', method='canonize', include=None, exclude=None, **gauge_local_opts) Iteratively gauge all bonds in the local tensor network defined by ``tids`` according to one of several strategies. .. py:method:: gauge_local(tags, which='all', max_distance=1, max_iterations='max_distance', method='canonize', inplace=False, **gauge_local_opts) Iteratively gauge all bonds in the tagged sub tensor network according to one of several strategies. .. py:attribute:: gauge_local_ .. py:method:: gauge_simple_insert(gauges, remove=False, smudge=0.0, power=1.0) Insert the simple update style bond gauges found in ``gauges`` if they are present in this tensor network. The gauges inserted are also returned so that they can be removed later. :param gauges: The store of bond gauges, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be gauged. :type gauges: dict[str, array_like] :param remove: Whether to remove the gauges from the store after inserting them. :type remove: bool, optional :param smudge: A small value to add to the gauge vectors to avoid singularities when inserting. :type smudge: float, optional :param power: A power to raise the gauge vectors to when inserting. 
:type power: float, optional :returns: * **outer** (*list[(Tensor, str, array_like)]*) -- The sequence of gauges applied to outer indices, each a tuple of the tensor, the index and the gauge vector. * **inner** (*list[((Tensor, Tensor), str, array_like)]*) -- The sequence of gauges applied to inner indices, each a tuple of the two inner tensors, the inner bond and the gauge vector applied. .. py:method:: gauge_simple_remove(outer=None, inner=None) :staticmethod: Remove the simple update style bond gauges inserted by ``gauge_simple_insert``. .. py:method:: gauge_simple_temp(gauges, smudge=1e-12, power=1.0, ungauge_outer=True, ungauge_inner=True) Context manager that temporarily inserts simple update style bond gauges into this tensor network, before optionally ungauging them. :param self: The TensorNetwork to be gauge-bonded. :type self: TensorNetwork :param gauges: The store of bond gauges, the keys being indices and the values being the vectors. Only bonds present in this dictionary will be gauged. :type gauges: dict[str, array_like] :param smudge: A small value to add to the gauge vectors to avoid singularities. :type smudge: float, optional :param power: A power to raise the gauge vectors to when inserting. :type power: float, optional :param ungauge_outer: Whether to ungauge the outer bonds. :type ungauge_outer: bool, optional :param ungauge_inner: Whether to ungauge the inner bonds. :type ungauge_inner: bool, optional :Yields: * **outer** (*list[(Tensor, int, array_like)]*) -- The tensors, indices and gauges that were performed on outer indices. * **inner** (*list[((Tensor, Tensor), int, array_like)]*) -- The tensors, indices and gauges that were performed on inner bonds. .. rubric:: Examples >>> tn = TN_rand_reg(10, 4, 3) >>> tn ^ all -51371.66630218866 >>> gauges = {} >>> tn.gauge_all_simple_(gauges=gauges) >>> len(gauges) 20 >>> tn ^ all 28702551.673767876 >>> with tn.gauge_simple_temp(gauges): ... # temporarily insert gauges ... print(tn ^ all) -51371.66630218887 >>> tn ^ all 28702551.67376789 .. py:method:: _contract_compressed_tid_sequence(seq, *, output_inds=None, max_bond=None, cutoff=1e-10, tree_gauge_distance=1, canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, gauge_boundary_only=True, compress_late=True, compress_mode='auto', compress_min_size=None, compress_span=False, compress_matrices=True, compress_exclude=None, compress_opts=None, strip_exponent=False, equalize_norms='auto', gauges=None, gauge_smudge=1e-06, callback_pre_contract=None, callback_post_contract=None, callback_pre_compress=None, callback_post_compress=None, callback=None, preserve_tensor=False, progbar=False, inplace=False) Core routine for performing compressed contraction. .. py:method:: _contract_around_tids(tids, seq=None, min_distance=0, max_distance=None, span_opts=None, max_bond=None, cutoff=1e-10, canonize_opts=None, **kwargs) Contract around ``tids``, by following a greedily generated spanning tree, and compressing whenever two tensors in the outer 'boundary' share more than one index. .. py:method:: contract_around_center(**opts) .. py:method:: contract_around_corner(**opts) ..
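rubric:: Examples

A hedged sketch of tracking 'simple update' gauges externally with ``gauge_all_simple``, as described above (assuming the ``PEPS.rand`` constructor from ``quimb.tensor``): the supplied ``gauges`` dict is updated inplace and *not* reabsorbed::

    import quimb.tensor as qtn

    peps = qtn.PEPS.rand(3, 3, bond_dim=3, seed=42)

    # run simple update style gauging, tracking the gauges externally
    gauges = {}
    peps.gauge_all_simple_(max_iterations=50, tol=1e-6, gauges=gauges)

    # one singular-value vector per inner bond index
    assert set(gauges) == set(peps.inner_inds())

..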
py:method:: contract_around(tags, which='all', min_distance=0, max_distance=None, span_opts=None, max_bond=None, cutoff=1e-10, tree_gauge_distance=1, canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, gauge_boundary_only=True, compress_late=True, compress_min_size=None, compress_opts=None, compress_span=False, compress_matrices=True, equalize_norms=False, gauges=None, gauge_smudge=1e-06, callback_pre_contract=None, callback_post_contract=None, callback_pre_compress=None, callback_post_compress=None, callback=None, inplace=False, **kwargs) Perform a compressed contraction inwards towards the tensors identified by ``tags``. .. py:attribute:: contract_around_ .. py:method:: contract_compressed(optimize, *, output_inds=None, max_bond='auto', cutoff=1e-10, tree_gauge_distance=1, canonize_distance=None, canonize_opts=None, canonize_after_distance=None, canonize_after_opts=None, gauge_boundary_only=True, compress_late=None, compress_mode='auto', compress_min_size=None, compress_span=True, compress_matrices=True, compress_exclude=None, compress_opts=None, strip_exponent=False, equalize_norms='auto', gauges=None, gauge_smudge=1e-06, callback_pre_contract=None, callback_post_contract=None, callback_pre_compress=None, callback_post_compress=None, callback=None, preserve_tensor=False, progbar=False, inplace=False, **kwargs) Contract this tensor network using the hyperoptimized approximate contraction method introduced in https://arxiv.org/abs/2206.07044. Only supports non-hyper tensor networks. :param optimize: The contraction strategy to use. The options are: - str: use the preset strategy with the given name, - path_like: use this exact path, - ``cotengra.HyperCompressedOptimizer``: find the contraction using this optimizer - ``cotengra.ContractionTreeCompressed``: use this exact tree Note that the strategy should be one that specifically targets compressed contraction; paths for exact contraction will likely perform badly. See the cotengra documentation for more details. Values for ``max_bond`` and ``compress_late`` are inherited from the optimizer if possible (and not specified). :type optimize: str, sequence, HyperCompressedOptimizer, ContractionTreeCompressed :param output_inds: Output indices. Note that hyper indices are not supported and this is just for specifying the output order. :type output_inds: sequence of str, optional :param max_bond: The maximum bond dimension to allow during compression. - ``"auto"``: try and inherit value from the optimizer, or use the current maximum bond dimension squared if not available. - int: a specific maximum bond dimension to use. - ``None``: no maximum bond dimension (compression still possible via cutoff) - not recommended. :type max_bond: "auto", int or None, optional :param cutoff: The singular value cutoff to use during compression. :type cutoff: float, optional :param tree_gauge_distance: The distance to 'tree gauge' around a pair of tensors before compressing. Depending on whether `compress_mode="basic"`, this sets `canonize_distance` and `canonize_after_distance`. :type tree_gauge_distance: int, optional :param canonize_distance: The distance to canonize around a pair of tensors before compressing. :type canonize_distance: int, optional :param canonize_opts: Additional keyword arguments to pass to the canonize routine. :type canonize_opts: dict, optional :param canonize_after_distance: The distance to canonize around a pair of tensors after compressing.
:type canonize_after_distance: int, optional :param canonize_after_opts: Additional keyword arguments to pass to the canonize routine after compressing. :type canonize_after_opts: dict, optional :param gauge_boundary_only: Whether to only gauge the 'boundary' tensors, that is, intermediate tensors. :type gauge_boundary_only: bool, optional :param compress_late: Whether to compress just before contracting the tensors involved or immediately after. Early compression is cheaper and a better default especially for contractions beyond planar. Late compression leaves more information in the tensors for possibly better quality gauging and compression. Whilst the largest tensor ('contraction width') is typically unchanged, the total memory and cost can be quite a lot higher. By default, this is `None`, which will try and inherit the value from the optimizer, else default to False. :type compress_late: None or bool, optional :param compress_mode: How to compress a pair of tensors. If 'auto', then 'basic' is used if `tree_gauge_distance=0` or `gauges` are supplied, otherwise 'virtual-tree' is used. See `_compress_between_tids` for other valid options. :type compress_mode: {'auto', 'basic', 'virtual-tree', ...}, optional :param compress_min_size: Skip compressing a pair of tensors if their contraction would yield a tensor smaller than this size. :type compress_min_size: int, optional :param compress_opts: Additional keyword arguments to pass to the core pairwise compression routine. :type compress_opts: dict, optional :param compress_span: Whether to compress between tensors that are going to be contracted. If an `int`, this specifies that if two tensors will be contracted in the next `compress_span` contractions, then their bonds should be compressed. :type compress_span: bool or int, optional :param compress_matrices: Whether to compress pairs of tensors that are effectively matrices. :type compress_matrices: bool, optional :param compress_exclude: An explicit set of tensor ids to exclude from compression. :type compress_exclude: set[int], optional :param strip_exponent: Whether to strip an overall exponent, log10, from the *final* contraction. If a TensorNetwork is returned, this exponent is accumulated in the `exponent` attribute. If a Tensor or scalar is returned, the exponent is returned separately. :type strip_exponent: bool, optional :param equalize_norms: Whether to equalize the norms of the tensors *during* the contraction. By default ("auto") this follows `strip_exponent`. The overall scaling is accumulated, log10, into `tn.exponent`. If `True`, at the end this exponent is redistributed. If a float, this is the target norm to equalize tensors to, e.g. `1.0`, and the exponent is *not* redistributed, which is useful in the case that the non-log value is beyond standard precision. :type equalize_norms: bool or "auto", optional :param gauges: If supplied, use simple update style gauges during the contraction. The keys should be indices and the values singular value vectors. Only bonds present in this dictionary will be gauged. :type gauges: dict[str, array_like], optional :param gauge_smudge: If using simple update style gauging, add a small value to the singular values to avoid singularities. :type gauge_smudge: float, optional :param callback_pre_contract: A function to call before contracting a pair of tensors. It should have signature `fn(tn, (tid1, tid2))`. 
:type callback_pre_contract: callable, optional :param callback_post_contract: A function to call after contracting a pair of tensors. It should have signature `fn(tn, tid)`. :type callback_post_contract: callable, optional :param callback_pre_compress: A function to call before compressing a pair of tensors. It should have signature `fn(tn, (tid1, tid2))`. :type callback_pre_compress: callable, optional :param callback_post_compress: A function to call after compressing a pair of tensors. It should have signature `fn(tn, (tid1, tid2))`. :type callback_post_compress: callable, optional :param callback: A function to call after each full step of contraction and compressions. It should have signature `fn(tn, tid)`. :type callback: callable, optional :param preserve_tensor: If `True`, return a Tensor object even if it represents a scalar. Ignored if `inplace=True`, in which case a TensorNetwork is always returned. :type preserve_tensor: bool, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :param inplace: Whether to perform the contraction inplace. :type inplace: bool, optional :param kwargs: Additional keyword arguments passed to `_contract_compressed_tid_sequence`. :type kwargs: dict, optional .. py:attribute:: contract_compressed_ .. py:method:: new_bond(tags1, tags2, **opts) Inplace addition of a dummy (size 1) bond between the single tensors specified by ``tags1`` and ``tags2``. :param tags1: Tags identifying the first tensor. :type tags1: sequence of str :param tags2: Tags identifying the second tensor. :type tags2: sequence of str :param opts: Supplied to :func:`~quimb.tensor.tensor_core.new_bond`. .. seealso:: :py:obj:`new_bond` .. py:method:: _cut_between_tids(tid1, tid2, left_ind, right_ind) .. py:method:: cut_between(left_tags, right_tags, left_ind, right_ind) Cut the bond between the tensors specified by ``left_tags`` and ``right_tags``, giving them the new inds ``left_ind`` and ``right_ind`` respectively. .. py:method:: cut_bond(bond, new_left_ind=None, new_right_ind=None) Cut the bond index specified by ``bond`` between the tensors it connects. Use ``cut_between`` for control over which tensor gets which new index ``new_left_ind`` or ``new_right_ind``. The index must connect exactly two tensors. :param bond: The index to cut. :type bond: str :param new_left_ind: The new index to give to the left tensor (lowest ``tid`` value). :type new_left_ind: str, optional :param new_right_ind: The new index to give to the right tensor (highest ``tid`` value). :type new_right_ind: str, optional .. py:method:: drape_bond_between(tagsa, tagsb, tags_target, left_ind=None, right_ind=None, inplace=False) Take the bond(s) connecting the tensors tagged at ``tagsa`` and ``tagsb``, and 'drape' it through the tensor tagged at ``tags_target``, effectively adding an identity tensor between the two and contracting it with the third:: ┌─┐ ┌─┐ ┌─┐ ┌─┐ ─┤A├─Id─┤B├─ ─┤A├─┐ ┌─┤B├─ └─┘ └─┘ └─┘ │ │ └─┘ left_ind│ │right_ind ┌─┐ --> ├─┤ ─┤C├─ ─┤D├─ └┬┘ └┬┘ where D = C ⊗ Id │ │ This increases the size of the target tensor by a factor of ``d**2``, and disconnects the tensors at ``tagsa`` and ``tagsb``. :param tagsa: The tag(s) identifying the first tensor. :type tagsa: str or sequence of str :param tagsb: The tag(s) identifying the second tensor. :type tagsb: str or sequence of str :param tags_target: The tag(s) identifying the target tensor. :type tags_target: str or sequence of str :param left_ind: The new index to give to the left tensor. 
:type left_ind: str, optional :param right_ind: The new index to give to the right tensor. :type right_ind: str, optional :param inplace: Whether to perform the draping inplace. :type inplace: bool, optional :rtype: TensorNetwork .. py:attribute:: drape_bond_between_ .. py:method:: isel(selectors, inplace=False) Select specific values for some dimensions/indices of this tensor network, thereby removing them. :param selectors: Mapping of index(es) to which value to take. The values can be: - int: select a specific value for that index. - slice: select a range of values for that index. - "r": contract a random vector in. :type selectors: dict[str, int or slice or "r"] :param inplace: Whether to select inplace or not. :type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`Tensor.isel` .. py:attribute:: isel_ .. py:method:: sum_reduce(ind, inplace=False) Sum over the index ``ind`` of this tensor network, removing it. This is like contracting a vector of ones in, or marginalizing a classical probability distribution. :param ind: The index to sum over. :type ind: str :param inplace: Whether to perform the reduction inplace. :type inplace: bool, optional .. py:attribute:: sum_reduce_ .. py:method:: vector_reduce(ind, v, inplace=False) Contract the vector ``v`` with the index ``ind`` of this tensor network, removing it. :param ind: The index to contract. :type ind: str :param v: The vector to contract with. :type v: array_like :param inplace: Whether to perform the reduction inplace. :type inplace: bool, optional :rtype: TensorNetwork .. py:attribute:: vector_reduce_ .. py:method:: cut_iter(*inds) Cut and iterate over one or more indices in this tensor network. Each network yielded will have that index removed, and the sum of all networks will equal the original network. This works by iterating over the product of all combinations of each bond supplied to ``isel``. As such, the number of networks produced is exponential in the number of bonds cut. :param inds: The bonds to cut. :type inds: sequence of str :Yields: *TensorNetwork* .. rubric:: Examples Here we'll cut the two extra bonds of a cyclic MPS and sum the contraction of the resulting 49 OBC MPS norms: >>> psi = MPS_rand_state(10, bond_dim=7, cyclic=True) >>> norm = psi.H & psi >>> bnds = bonds(norm[0], norm[-1]) >>> sum(tn ^ all for tn in norm.cut_iter(*bnds)) 1.0 .. seealso:: :py:obj:`TensorNetwork.isel`, :py:obj:`TensorNetwork.cut_between` .. py:method:: insert_operator(A, where1, where2, tags=None, inplace=False) Insert an operator on the bond between the specified tensors, e.g.:: | | | | --1---2-- -> --1-A-2-- | | :param A: The operator to insert. :type A: array :param where1: The tags defining the 'left' tensor. :type where1: str, sequence of str, or int :param where2: The tags defining the 'right' tensor. :type where2: str, sequence of str, or int :param tags: Tags to add to the new operator's tensor. :type tags: str or sequence of str :param inplace: Whether to perform the insertion inplace. :type inplace: bool, optional .. py:attribute:: insert_operator_ .. py:method:: _insert_gauge_tids(U, tid1, tid2, Uinv=None, tol=1e-10, bond=None) .. py:method:: insert_gauge(U, where1, where2, Uinv=None, tol=1e-10) Insert the gauge transformation ``U^-1 @ U`` into the bond between the tensors, ``T1`` and ``T2``, defined by ``where1`` and ``where2``. The resulting tensors at those locations will be ``T1 @ U^-1`` and ``U @ T2``. :param U: The gauge to insert. 
:type U: array :param where1: Tags defining the location of the 'left' tensor. :type where1: str, sequence of str, or int :param where2: Tags defining the location of the 'right' tensor. :type where2: str, sequence of str, or int :param Uinv: The inverse gauge, ``U @ Uinv == Uinv @ U == eye``, to insert. If not given, it will be calculated using :func:`numpy.linalg.inv`. :type Uinv: array .. py:method:: contract_tags(tags, which='any', output_inds=None, optimize=None, get=None, backend=None, strip_exponent=False, equalize_norms='auto', preserve_tensor=False, inplace=False, **contract_opts) Contract the tensors that match any or all of ``tags``. :param tags: The list of tags to filter the tensors by. Use ``all`` or ``...`` (``Ellipsis``) to contract all tensors. :type tags: sequence of str :param which: Whether to require matching all or any of the tags. :type which: {'all', 'any'} :param output_inds: The indices to specify as outputs of the contraction. If not given, and the tensor network has no hyper-indices, these are computed automatically as every index appearing once. :type output_inds: sequence of str, optional :param optimize: The contraction path optimization strategy to use. - ``None``: use the default strategy, - ``str``: use the preset strategy with the given name, - ``cotengra.HyperOptimizer``: find the contraction using this optimizer, supports slicing, - ``opt_einsum.PathOptimizer``: find the path using this optimizer. - ``cotengra.ContractionTree``: use this exact tree, supports slicing, - ``path_like``: use this exact path. Contraction with ``cotengra`` might be a bit more efficient but the main reason would be to handle sliced contraction automatically. :type optimize: str, PathOptimizer, ContractionTree or path_like, optional :param get: What to return. If: * ``None`` (the default) - return the resulting scalar or Tensor. * ``'expression'`` - return a callable expression that performs the contraction and operates on the raw arrays. * ``'tree'`` - return the ``cotengra.ContractionTree`` describing the contraction in detail. * ``'path'`` - return the raw 'path' as a list of tuples. * ``'symbol-map'`` - return the dict mapping indices to 'symbols' (single unicode letters) used internally by ``cotengra``. * ``'path-info'`` - return the ``opt_einsum.PathInfo`` path object with detailed information such as flop cost. The symbol-map is also added to the ``quimb_symbol_map`` attribute. :type get: str, optional :param backend: Which backend to use to perform the contraction. Supplied to `cotengra`. :type backend: {'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional :param strip_exponent: Whether to strip an overall exponent, log10, from the *final* contraction. If a TensorNetwork is returned, this exponent is accumulated in the `exponent` attribute. If a Tensor or scalar is returned, the exponent is returned separately. :type strip_exponent: bool, optional :param equalize_norms: Whether to equalize the norms of the tensors *during* the contraction. By default ("auto") this follows `strip_exponent`. :type equalize_norms: bool or "auto", optional :param preserve_tensor: Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not. :type preserve_tensor: bool, optional :param inplace: Whether to perform the contraction inplace. :type inplace: bool, optional :param contract_opts: Passed to :func:`~quimb.tensor.tensor_core.tensor_contract`. 
:returns: The result of the contraction, still a ``TensorNetwork`` if the contraction was only partial. :rtype: TensorNetwork, Tensor or scalar .. seealso:: :py:obj:`contract`, :py:obj:`contract_cumulative` .. py:attribute:: contract_tags_ .. py:method:: contract(tags=..., output_inds=None, optimize=None, get=None, max_bond=None, strip_exponent=False, preserve_tensor=False, backend=None, inplace=False, **kwargs) Contract some, or all, of the tensors in this network. This method dispatches to ``contract_tags``, ``contract_structured``, or ``contract_compressed`` based on the various arguments. :param tags: Any tensors with any of these tags will be contracted. Use ``all`` or ``...`` (``Ellipsis``) to contract all tensors. ``...`` will try and use a 'structured' contract method if possible. :type tags: sequence of str, all, or Ellipsis, optional :param output_inds: The indices to specify as outputs of the contraction. If not given, and the tensor network has no hyper-indices, these are computed automatically as every index appearing once. :type output_inds: sequence of str, optional :param optimize: The contraction path optimization strategy to use. - ``None``: use the default strategy, - ``str``: use the preset strategy with the given name, - ``cotengra.HyperOptimizer``: find the contraction using this optimizer, supports slicing, - ``opt_einsum.PathOptimizer``: find the path using this optimizer. - ``cotengra.ContractionTree``: use this exact tree, supports slicing, - ``path_like``: use this exact path. Contraction with ``cotengra`` might be a bit more efficient but the main reason would be to handle sliced contraction automatically. :type optimize: str, PathOptimizer, ContractionTree or path_like, optional :param get: What to return. If: - ``None`` (the default) - return the resulting scalar or Tensor. - ``'expression'`` - return a callable expression that performs the contraction and operates on the raw arrays. - ``'tree'`` - return the ``cotengra.ContractionTree`` describing the contraction in detail. - ``'path'`` - return the raw 'path' as a list of tuples. - ``'symbol-map'`` - return the dict mapping indices to 'symbols' (single unicode letters) used internally by ``cotengra``. - ``'path-info'`` - return the ``opt_einsum.PathInfo`` path object with detailed information such as flop cost. The symbol-map is also added to the ``quimb_symbol_map`` attribute. :type get: str, optional :param strip_exponent: Whether to strip an overall exponent, log10, from the *final* contraction. If a TensorNetwork is returned, this exponent is accumulated in the `exponent` attribute. If a Tensor or scalar is returned, the exponent is returned separately. :type strip_exponent: bool, optional :param preserve_tensor: Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not. :type preserve_tensor: bool, optional :param backend: Which backend to use to perform the contraction. Supplied to `cotengra`. :type backend: {'auto', 'numpy', 'jax', 'cupy', 'tensorflow', ...}, optional :param inplace: Whether to perform the contraction inplace. This is only valid if not all tensors are contracted (a full contraction does not produce a TensorNetwork). :type inplace: bool, optional :param kwargs: Passed to :func:`~quimb.tensor.tensor_core.tensor_contract`, :meth:`~quimb.tensor.tensor_core.TensorNetwork.contract_compressed`. :returns: The result of the contraction, still a ``TensorNetwork`` if the contraction was only partial. :rtype: TensorNetwork, Tensor or scalar .. 
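rubric:: Examples

A minimal usage sketch (the random MPS constructor and the ``'I0'``, ``'I1'`` site tags here are assumptions used for illustration, not part of this method's API):

>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(6, bond_dim=4)
>>> norm = psi.H & psi
>>> x = norm.contract(all, optimize='auto-hq')  # full contraction, same as norm ^ all
>>> tn_partial = norm.contract(['I0', 'I1'])  # partial contraction, still a TensorNetwork

..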
seealso:: :py:obj:`contract_tags`, :py:obj:`contract_cumulative` .. py:attribute:: contract_ .. py:method:: contract_cumulative(tags_seq, output_inds=None, strip_exponent=False, equalize_norms='auto', preserve_tensor=False, inplace=False, **contract_opts) Cumulative contraction of the tensor network. Contract the first set of tags, then that set with the next set, then both of those with the next and so forth. Could also be described as a manually ordered contraction of all tags in ``tags_seq``. :param tags_seq: The list of tag-groups to cumulatively contract. :type tags_seq: sequence of sequence of str :param output_inds: The indices to specify as outputs of the contraction. If not given, and the tensor network has no hyper-indices, these are computed automatically as every index appearing once. :type output_inds: sequence of str, optional :param strip_exponent: Whether to strip an overall exponent, log10, from the *final* contraction. If a TensorNetwork is returned, this exponent is accumulated in the `exponent` attribute. If a Tensor or scalar is returned, the exponent is returned separately. :type strip_exponent: bool, optional :param equalize_norms: Whether to equalize the norms of the tensors *during* the contraction. By default ("auto") this follows `strip_exponent`. :type equalize_norms: bool or "auto", optional :param preserve_tensor: Whether to return a tensor regardless of whether the output object is a scalar (has no indices) or not. :type preserve_tensor: bool, optional :param inplace: Whether to perform the contraction inplace. :type inplace: bool, optional :param contract_opts: Passed to :func:`~quimb.tensor.tensor_core.tensor_contract`. :returns: The result of the contraction, still a ``TensorNetwork`` if the contraction was only partial. :rtype: TensorNetwork, Tensor or scalar .. seealso:: :py:obj:`contract`, :py:obj:`contract_tags` .. py:method:: contraction_path(optimize=None, output_inds=None, **kwargs) Compute the contraction path, a sequence of (int, int), for the contraction of this entire tensor network using strategy ``optimize``. :param optimize: The contraction path optimization strategy to use. - ``None``: use the default strategy, - ``str``: use the preset strategy with the given name, - ``cotengra.HyperOptimizer``: find the contraction using this optimizer, supports slicing, - ``opt_einsum.PathOptimizer``: find the path using this optimizer. - ``cotengra.ContractionTree``: use this exact tree, supports slicing, - ``path_like``: use this exact path. :type optimize: str, PathOptimizer, ContractionTree or path_like, optional :param output_inds: The indices to specify as outputs of the contraction. If not given, and the tensor network has no hyper-indices, these are computed automatically as every index appearing once. :type output_inds: sequence of str, optional :param kwargs: Passed to :func:`cotengra.array_contract_path`. :type kwargs: dict, optional :rtype: list[tuple[int, int]] .. py:method:: contraction_info(optimize=None, output_inds=None, **kwargs) Compute the ``opt_einsum.PathInfo`` object describing the contraction of this entire tensor network using strategy ``optimize``. Note that any sliced indices will be ignored. :param optimize: The contraction path optimization strategy to use. - ``None``: use the default strategy, - ``str``: use the preset strategy with the given name, - ``cotengra.HyperOptimizer``: find the contraction using this optimizer, supports slicing, - ``opt_einsum.PathOptimizer``: find the path using this optimizer. 
- ``cotengra.ContractionTree``: use this exact tree, supports slicing, - ``path_like``: use this exact path. :type optimize: str, PathOptimizer, ContractionTree or path_like, optional :param output_inds: The indices to specify as outputs of the contraction. If not given, and the tensor network has no hyper-indices, these are computed automatically as every index appearing once. :type output_inds: sequence of str, optional :param kwargs: Passed to :func:`cotengra.array_contract_tree`. :type kwargs: dict, optional :rtype: opt_einsum.PathInfo .. py:method:: contraction_tree(optimize=None, output_inds=None, **kwargs) Return the :class:`cotengra.ContractionTree` corresponding to contracting this entire tensor network with strategy ``optimize``. :param optimize: The contraction path optimization strategy to use. - ``None``: use the default strategy, - ``str``: use the preset strategy with the given name, - ``cotengra.HyperOptimizer``: find the contraction using this optimizer, supports slicing, - ``opt_einsum.PathOptimizer``: find the path using this optimizer. - ``cotengra.ContractionTree``: use this exact tree, supports slicing, - ``path_like``: use this exact path. :type optimize: str, PathOptimizer, ContractionTree or path_like, optional :param output_inds: The indices to specify as outputs of the contraction. If not given, and the tensor network has no hyper-indices, these are computed automatically as every index appearing once. :type output_inds: sequence of str, optional :param kwargs: Passed to :func:`cotengra.array_contract_tree`. :type kwargs: dict, optional :rtype: cotengra.ContractionTree .. py:method:: contraction_width(optimize=None, **contract_opts) Compute the 'contraction width' of this tensor network. This is defined as log2 of the maximum tensor size produced during the contraction sequence. If every index in the network has dimension 2, this corresponds to the maximum rank tensor produced. .. py:method:: contraction_cost(optimize=None, **contract_opts) Compute the 'contraction cost' of this tensor network. This is defined as log10 of the total number of scalar operations during the contraction sequence. .. py:method:: __rshift__(tags_seq) Overload of '>>' for TensorNetwork.contract_cumulative. .. py:method:: __irshift__(tags_seq) Overload of '>>=' for inplace TensorNetwork.contract_cumulative. .. py:method:: __xor__(tags) Overload of '^' for TensorNetwork.contract. .. py:method:: __ixor__(tags) Overload of '^=' for inplace TensorNetwork.contract. .. py:method:: __matmul__(other) Overload "@" to mean full contraction with another network. .. py:method:: as_network(virtual=True) Matching method (for ensuring an object is a tensor network) to :meth:`~quimb.tensor.tensor_core.Tensor.as_network`, which simply returns ``self`` if ``virtual=True``. .. py:method:: aslinearoperator(left_inds, right_inds, ldims=None, rdims=None, backend=None, optimize=None) View this ``TensorNetwork`` as a :class:`~quimb.tensor.tensor_core.TNLinearOperator`. .. py:method:: split(left_inds, right_inds=None, **split_opts) Decompose this tensor network across a bipartition of outer indices. This method matches ``Tensor.split`` by converting to a ``TNLinearOperator`` first. Note that unless an iterative method is passed to ``method``, the full dense tensor will be contracted. .. py:method:: trace(left_inds, right_inds, **contract_opts) Trace over ``left_inds`` joined with ``right_inds``. .. 
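rubric:: Example

For instance, a sketch of tracing an operator-like network (the Heisenberg MPO constructor and the ``'k{i}'`` / ``'b{i}'`` index naming are assumptions for illustration):

>>> import quimb.tensor as qtn
>>> H = qtn.MPO_ham_heis(4)
>>> tr = H.trace([f'k{i}' for i in range(4)], [f'b{i}' for i in range(4)])

..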
py:method:: to_dense(*inds_seq, to_qarray=False, **contract_opts) Convert this network into a dense array, with a single dimension for each group of inds in ``inds_seq``. E.g. to convert several sites into a density matrix: ``TN.to_dense(('k0', 'k1'), ('b0', 'b1'))``. .. py:attribute:: to_qarray .. py:method:: compute_reduced_factor(side, left_inds, right_inds, optimize='auto-hq', **contract_opts) Compute either the left or right 'reduced factor' of this tensor network. I.e., view as an operator, ``X``, mapping ``left_inds`` to ``right_inds`` and compute ``L`` or ``R`` such that ``X = U_R @ R`` or ``X = L @ U_L``, with ``U_R`` and ``U_L`` unitary operators that are not computed. Only ``dag(X) @ X`` or ``X @ dag(X)`` is contracted, which is generally cheaper than contracting ``X`` itself. :param self: The tensor network to compute the reduced factor of. :type self: TensorNetwork :param side: Whether to compute the left or right reduced factor. If 'right' then ``dag(X) @ X`` is contracted, otherwise ``X @ dag(X)``. :type side: {'left', 'right'} :param left_inds: The indices forming the left side of the operator. :type left_inds: sequence of str :param right_inds: The indices forming the right side of the operator. :type right_inds: sequence of str :param contract_opts: Options to pass to :meth:`~quimb.tensor.tensor_core.TensorNetwork.to_dense`. :type contract_opts: dict, optional :rtype: array_like .. py:method:: insert_compressor_between_regions(ltags, rtags, max_bond=None, cutoff=1e-10, select_which='any', insert_into=None, new_tags=None, new_ltags=None, new_rtags=None, bond_ind=None, gauges=None, gauge_smudge=0.0, gauge_power=1.0, optimize='auto-hq', inplace=False, **compress_opts) Compute and insert a pair of 'oblique' projection tensors (see for example https://arxiv.org/abs/1905.02351) that effectively compresses between two regions of the tensor network. Useful for various approximate contraction methods such as HOTRG and CTMRG. :param ltags: The tags of the tensors in the left region. :type ltags: sequence of str :param rtags: The tags of the tensors in the right region. :type rtags: sequence of str :param max_bond: The maximum bond dimension to use for the compression (i.e. shared by the two projection tensors). If ``None`` then the maximum is controlled by ``cutoff``. :type max_bond: int or None, optional :param cutoff: The cutoff to use for the compression. :type cutoff: float, optional :param select_which: How to select the regions based on the tags, see :meth:`~quimb.tensor.tensor_core.TensorNetwork.select`. :type select_which: {'any', 'all', 'none'}, optional :param insert_into: If given, insert the new tensors into this tensor network, assumed to have the same relevant indices as ``self``. :type insert_into: TensorNetwork, optional :param new_tags: The tag(s) to add to both the new tensors. :type new_tags: str or sequence of str, optional :param new_ltags: The tag(s) to add to the new left projection tensor. :type new_ltags: str or sequence of str, optional :param new_rtags: The tag(s) to add to the new right projection tensor. :type new_rtags: str or sequence of str, optional :param optimize: How to optimize the contraction of the projection tensors. :type optimize: str or PathOptimizer, optional :param inplace: Whether to perform the insertion in-place. If ``insert_into`` is supplied then this doesn't matter, and that tensor network will be modified and returned. :type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`compute_reduced_factor`, :py:obj:`select` .. 
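rubric:: Example

A sketch of ``to_dense`` from above on a small state (the random MPS constructor and index names are assumptions for illustration):

>>> import quimb.tensor as qtn
>>> psi = qtn.MPS_rand_state(3, bond_dim=2)
>>> v = psi.to_dense(['k0', 'k1', 'k2'])  # one fused dimension of size 2**3

..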
py:attribute:: insert_compressor_between_regions_ .. py:method:: distance(*args, **kwargs) .. py:attribute:: distance_normalized .. py:method:: fit(tn_target, method='als', tol=1e-09, inplace=False, progbar=False, **fitting_opts) Optimize the entries of this tensor network with respect to a least squares fit of ``tn_target``, which should have the same outer indices. Depending on ``method``, this calls :func:`~quimb.tensor.tensor_core.tensor_network_fit_als`, :func:`~quimb.tensor.tensor_core.tensor_network_fit_autodiff`, or :func:`~quimb.tensor.tensor_core.tensor_network_fit_tree`. The quantity minimized is: .. math:: D(A, B) = | A - B |_{\mathrm{fro}} = \mathrm{Tr} [(A - B)^{\dagger}(A - B)]^{1/2} = ( \langle A | A \rangle - 2 \mathrm{Re} \langle A | B \rangle + \langle B | B \rangle )^{1/2} :param tn_target: The target tensor network to try and fit the current one to. :type tn_target: TensorNetwork :param method: How to perform the fitting. The options are: - 'als': alternating least squares (ALS) optimization, - 'autodiff': automatic differentiation optimization, - 'tree': ALS where the fitted tensor network has a tree structure and thus a canonical form can be utilized for much greater efficiency and stability. Generally ALS is better for simple geometries, autodiff better for complex ones. 'tree' is best if the tensor network has a tree structure. :type method: {'als', 'autodiff', 'tree'}, optional :param tol: The target norm distance. :type tol: float, optional :param inplace: Update the current tensor network in place. :type inplace: bool, optional :param progbar: Show a live progress bar of the fitting process. :type progbar: bool, optional :param fitting_opts: Supplied to either :func:`~quimb.tensor.tensor_core.tensor_network_fit_als`, :func:`~quimb.tensor.tensor_core.tensor_network_fit_autodiff`, or :func:`~quimb.tensor.tensor_core.tensor_network_fit_tree`. :returns: **tn_opt** -- The optimized tensor network. :rtype: TensorNetwork .. seealso:: :py:obj:`tensor_network_fit_als`, :py:obj:`tensor_network_fit_autodiff`, :py:obj:`tensor_network_fit_tree`, :py:obj:`tensor_network_distance`, :py:obj:`tensor_network_1d_compress` .. py:attribute:: fit_ .. py:property:: tags .. py:method:: all_inds() Return a tuple of all indices in this network. .. py:method:: ind_size(ind) Find the size of ``ind``. .. py:method:: inds_size(inds) Return the total size of dimensions corresponding to ``inds``. .. py:method:: ind_sizes() Get dict of each index mapped to its size. .. py:method:: inner_inds() Tuple of interior indices, assumed to be any indices that appear twice or more (this only holds generally for non-hyper tensor networks). .. py:method:: outer_inds() Tuple of exterior indices, assumed to be any lone indices (this only holds generally for non-hyper tensor networks). .. py:method:: outer_dims_inds() Get the 'outer' pairs of dimension and indices, i.e. as if this tensor network was fully contracted. .. py:method:: outer_size() Get the total size of the 'outer' indices, i.e. as if this tensor network was fully contracted. .. py:method:: get_multibonds(include=None, exclude=None) Get a dict of 'multibonds' in this tensor network, i.e. groups of two or more indices that appear on exactly the same tensors and thus could be fused, for example. :param include: Only consider these indices, by default all indices. :type include: sequence of str, optional :param exclude: Ignore these indices, by default the outer indices of this TN. 
:type exclude: sequence of str, optional :returns: A dict mapping the tuple of indices that could be fused to the tuple of tensor ids they appear on. :rtype: dict[tuple[str], tuple[int]] .. py:method:: get_hyperinds(output_inds=None) Get a tuple of all 'hyperinds', defined as those indices which don't appear exactly twice on either the tensors *or* in the 'outer' (i.e. output) indices. Note that the default set of 'outer' indices is calculated as only those indices that appear once on the tensors, so these likely need to be manually specified, otherwise, for example, an index that appears on two tensors *and* the output will incorrectly be identified as non-hyper. :param output_inds: The outer or output index or indices. If not specified then taken as every index that appears only once on the tensors (and thus non-hyper). :type output_inds: None, str or sequence of str, optional :returns: The tensor network hyperinds. :rtype: tuple[str] .. py:method:: compute_contracted_inds(*tids, output_inds=None) Get the indices describing the tensor contraction of tensors corresponding to ``tids``. .. py:method:: squeeze(fuse=False, include=None, exclude=None, inplace=False) Drop singlet bonds and dimensions from this tensor network. If ``fuse=True`` also fuse all multibonds between tensors. :param fuse: Whether to fuse multibonds between tensors as well as squeezing. :type fuse: bool, optional :param include: Only squeeze these indices, by default all indices. :type include: sequence of str, optional :param exclude: Ignore these indices, by default the outer indices of this TN. :type exclude: sequence of str, optional :param inplace: Whether to perform the squeeze and optional fuse inplace. :type inplace: bool, optional :rtype: TensorNetwork .. py:attribute:: squeeze_ .. py:method:: isometrize(method='qr', allow_no_left_inds=False, inplace=False) Project every tensor in this network into an isometric form, assuming they have ``left_inds`` marked. :param method: The method used to generate the isometry. The options are: - "qr": use the Q factor of the QR decomposition of ``x`` with the constraint that the diagonal of ``R`` is positive. - "svd": use ``U @ VH`` of the SVD decomposition of ``x``. This is useful for finding the 'closest' isometric matrix to ``x``, such as when it has been expanded with noise etc. But it is less stable for differentiation / optimization. - "exp": use the matrix exponential of ``x - dag(x)``, first completing ``x`` with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square ``x``. - "cayley": use the Cayley transform of ``x - dag(x)``, first completing ``x`` with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with `HIPS/autograd`, for example), but more expensive for non-square ``x``. - "householder": use the Householder reflection method directly. This requires that the backend implements "linalg.householder_product". - "torch_householder": use the Householder reflection method directly, using the ``torch_householder`` package. This requires that the package is installed and that the backend is ``"torch"``. This is generally the best parametrizing method for "torch" if available. - "mgs": use a python implementation of the modified Gram-Schmidt method directly. This is slow if not compiled but a useful reference. Not all backends support all methods or differentiating through all methods. 
:type method: str, optional :param allow_no_left_inds: If ``True`` then allow tensors with no ``left_inds`` to be left alone, rather than raising an error. :type allow_no_left_inds: bool, optional :param inplace: If ``True`` then perform the operation in-place. :type inplace: bool, optional :rtype: TensorNetwork .. py:attribute:: isometrize_ .. py:attribute:: unitize .. py:attribute:: unitize_ .. py:method:: randomize(dtype=None, seed=None, inplace=False, **randn_opts) Randomize every tensor in this TN - see :meth:`quimb.tensor.tensor_core.Tensor.randomize`. :param dtype: The data type of the random entries. If left as the default ``None``, then the data type of the current array will be used. :type dtype: {None, str}, optional :param seed: Seed for the random number generator. :type seed: None or int, optional :param inplace: Whether to perform the randomization inplace, by default ``False``. :type inplace: bool, optional :param randn_opts: Supplied to :func:`~quimb.gen.rand.randn`. :rtype: TensorNetwork .. py:attribute:: randomize_ .. py:method:: strip_exponent(tid_or_tensor, value=None, check_zero=False) Scale the elements of the tensor corresponding to ``tid_or_tensor`` so that the norm of the array is some value, which defaults to ``1``. The log of the scaling factor, base 10, is then accumulated in the ``exponent`` attribute. :param tid_or_tensor: The tensor identifier or actual tensor. :type tid_or_tensor: str or Tensor :param value: The value to scale the norm of the tensor to. :type value: None or float, optional :param check_zero: Whether to check if the tensor has zero norm and in that case do nothing, since the `exponent` would be -inf. Off by default to avoid data dependent computational graphs when tracing and computing gradients etc. :type check_zero: bool, optional .. py:method:: distribute_exponent() Distribute the exponent ``p`` of this tensor network (i.e. corresponding to ``tn * 10**p``) equally among all tensors. .. py:method:: equalize_norms(value=None, check_zero=False, inplace=False) Make the Frobenius norm of every tensor in this TN equal without changing the overall value if ``value=None``, or set the norm of every tensor to ``value`` by scalar multiplication only. :param value: Set the norm of each tensor to this value specifically. If supplied, the change in overall scaling will be accumulated in ``tn.exponent`` in the form of a base 10 power. :type value: None or float, optional :param check_zero: Whether, if and when equalizing norms, to check if tensors have zero norm and in that case do nothing, since the `exponent` would be -inf. Off by default to avoid data dependent computational graphs when tracing and computing gradients etc. :type check_zero: bool, optional :param inplace: Whether to perform the norm equalization inplace or not. :type inplace: bool, optional :rtype: TensorNetwork .. py:attribute:: equalize_norms_ .. py:method:: balance_bonds(inplace=False) Apply :func:`~quimb.tensor.tensor_core.tensor_balance_bond` to all bonds in this tensor network. :param inplace: Whether to perform the bond balancing inplace or not. :type inplace: bool, optional :rtype: TensorNetwork .. py:attribute:: balance_bonds_ .. py:method:: fuse_multibonds(gauges=None, include=None, exclude=None, inplace=False) Fuse any multi-bonds (more than one index shared by the same pair of tensors) into a single bond. :param gauges: If supplied, also fuse the gauges contained in this dict. :type gauges: None or dict[str, array_like], optional :param include: Only consider these indices, by default all indices. 
:type include: sequence of str, optional :param exclude: Ignore these indices, by default the outer indices of this TN. :type exclude: sequence of str, optional .. py:attribute:: fuse_multibonds_ .. py:method:: expand_bond_dimension(new_bond_dim, mode=None, rand_strength=None, rand_dist='normal', inds_to_expand=None, inplace=False) Increase the dimension of all or some of the bonds in this tensor network to at least ``new_bond_dim``, optionally adding some random noise to the new entries. :param new_bond_dim: The minimum bond dimension to expand to; if the bond dimension is already larger than this it will be left unchanged. :type new_bond_dim: int :param rand_strength: The strength of random noise to add to the new array entries, if any. The noise is drawn from a normal distribution with standard deviation ``rand_strength``. :type rand_strength: float, optional :param inds_to_expand: The indices to expand, if not all. :type inds_to_expand: sequence of str, optional :param inplace: Whether to expand this tensor network in place, or return a new one. :type inplace: bool, optional :rtype: TensorNetwork .. py:attribute:: expand_bond_dimension_ .. py:method:: flip(inds, inplace=False) Flip the dimension corresponding to indices ``inds`` on all tensors that share it. .. py:attribute:: flip_ .. py:method:: rank_simplify(output_inds=None, equalize_norms=False, cache=None, max_combinations=500, check_zero=False, inplace=False) Simplify this tensor network by performing contractions that don't increase the rank of any tensors. :param output_inds: Explicitly set which indices of the tensor network are output indices and thus should not be modified. :type output_inds: sequence of str, optional :param equalize_norms: Actively renormalize the tensors during the simplification process. Useful for very large TNs. The scaling factor will be stored as an exponent in ``tn.exponent``. :type equalize_norms: bool or float :param cache: Persistent cache used to mark already checked tensors. :type cache: None or set :param check_zero: Whether, if and when equalizing norms, to check if tensors have zero norm and in that case do nothing, since the `exponent` would be -inf. Off by default to avoid data dependent computational graphs when tracing and computing gradients etc. :type check_zero: bool, optional :param inplace: Whether to perform the rank simplification inplace. :type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`full_simplify`, :py:obj:`column_reduce`, :py:obj:`diagonal_reduce` .. py:attribute:: rank_simplify_ .. py:method:: diagonal_reduce(output_inds=None, atol=1e-12, cache=None, inplace=False) Find tensors with diagonal structure and collapse those axes. This will create a tensor 'hyper' network with indices repeated 2+ times; as such, output indices should be explicitly supplied when contracting, as they can no longer be automatically inferred. For example: >>> tn_diag = tn.diagonal_reduce() >>> tn_diag.contract(all, output_inds=[]) :param output_inds: Which indices to explicitly consider as outer legs of the tensor network and thus not replace. If not given, these will be taken as all the indices that appear once. :type output_inds: sequence of str, optional :param atol: When identifying diagonal tensors, the absolute tolerance with which to compare to zero. :type atol: float, optional :param cache: Persistent cache used to mark already checked tensors. :type cache: None or set :param inplace: Whether to perform the diagonal reduction inplace. 
:type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`full_simplify`, :py:obj:`rank_simplify`, :py:obj:`antidiag_gauge`, :py:obj:`column_reduce` .. py:attribute:: diagonal_reduce_ .. py:method:: antidiag_gauge(output_inds=None, atol=1e-12, cache=None, inplace=False) Flip the order of any bonds connected to antidiagonal tensors. Whilst this is just a gauge fixing (with the gauge being the flipped identity) it then allows ``diagonal_reduce`` to simplify those indices. :param output_inds: Which indices to explicitly consider as outer legs of the tensor network and thus not flip. If not given, these will be taken as all the indices that appear once. :type output_inds: sequence of str, optional :param atol: When identifying antidiagonal tensors, the absolute tolerance with which to compare to zero. :type atol: float, optional :param cache: Persistent cache used to mark already checked tensors. :type cache: None or set :param inplace: Whether to perform the antidiagonal gauging inplace. :type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`full_simplify`, :py:obj:`rank_simplify`, :py:obj:`diagonal_reduce`, :py:obj:`column_reduce` .. py:attribute:: antidiag_gauge_ .. py:method:: column_reduce(output_inds=None, atol=1e-12, cache=None, inplace=False) Find bonds on this tensor network which have tensors where all but one column (of the respective index) is zero, allowing the 'cutting' of that bond. :param output_inds: Which indices to explicitly consider as outer legs of the tensor network and thus not slice. If not given, these will be taken as all the indices that appear once. :type output_inds: sequence of str, optional :param atol: When identifying singlet column tensors, the absolute tolerance with which to compare to zero. :type atol: float, optional :param cache: Persistent cache used to mark already checked tensors. :type cache: None or set :param inplace: Whether to perform the column reductions inplace. :type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`full_simplify`, :py:obj:`rank_simplify`, :py:obj:`diagonal_reduce`, :py:obj:`antidiag_gauge` .. py:attribute:: column_reduce_ .. py:method:: split_simplify(atol=1e-12, equalize_norms=False, cache=None, check_zero=False, inplace=False, **split_opts) Find tensors which have low rank SVD decompositions across any combination of bonds and perform them. :param atol: Cutoff used when attempting low rank decompositions. :type atol: float, optional :param equalize_norms: Actively renormalize the tensors during the simplification process. Useful for very large TNs. The scaling factor will be stored as an exponent in ``tn.exponent``. :type equalize_norms: bool or float :param cache: Persistent cache used to mark already checked tensors. :type cache: None or set :param check_zero: Whether, if and when equalizing norms, to check if tensors have zero norm and in that case do nothing, since the `exponent` would be -inf. Off by default to avoid data dependent computational graphs when tracing and computing gradients etc. :type check_zero: bool, optional :param inplace: Whether to perform the split simplification inplace. 
:type inplace: bool, optional .. py:attribute:: split_simplify_ .. py:method:: pair_simplify(cutoff=1e-12, output_inds=None, max_inds=10, cache=None, equalize_norms=False, max_combinations=500, check_zero=False, inplace=False, **split_opts) .. py:attribute:: pair_simplify_ .. py:method:: loop_simplify(output_inds=None, max_loop_length=None, max_inds=10, cutoff=1e-12, loops=None, cache=None, equalize_norms=False, check_zero=False, inplace=False, **split_opts) Try and simplify this tensor network by identifying loops and checking for low-rank decompositions across groupings of the loops' outer indices. :param max_loop_length: Largest length of loop to search for; if not set, the size will be set to the length of the first (and shortest) loop found. :type max_loop_length: None or int, optional :param cutoff: Cutoff to use for the operator decomposition. :type cutoff: float, optional :param loops: Loops to check, or a function that generates them. :type loops: None, sequence or callable :param cache: For performance reasons, a cache of already checked loops can be supplied. :type cache: set, optional :param check_zero: Whether, if and when equalizing norms, to check if tensors have zero norm and in that case do nothing, since the `exponent` would be -inf. Off by default to avoid data dependent computational graphs when tracing and computing gradients etc. :type check_zero: bool, optional :param inplace: Whether to replace the loops inplace. :type inplace: bool, optional :param split_opts: Supplied to :func:`~quimb.tensor.tensor_core.tensor_split`. :rtype: TensorNetwork .. py:attribute:: loop_simplify_ .. py:method:: full_simplify(seq='ADCR', output_inds=None, atol=1e-12, equalize_norms=False, cache=None, rank_simplify_opts=None, loop_simplify_opts=None, split_simplify_opts=None, custom_methods=(), split_method='svd', check_zero='auto', inplace=False, progbar=False) Perform a series of tensor network 'simplifications' in a loop until there is no more reduction in the number of tensors or indices. Note that apart from rank-reduction, the simplification methods make use of the non-zero structure of the tensors, and thus changes to this will potentially produce different simplifications. :param seq: Which simplifications and which order to perform them in. * ``'A'`` : stands for ``antidiag_gauge`` * ``'D'`` : stands for ``diagonal_reduce`` * ``'C'`` : stands for ``column_reduce`` * ``'R'`` : stands for ``rank_simplify`` * ``'S'`` : stands for ``split_simplify`` * ``'L'`` : stands for ``loop_simplify`` If you want to keep the tensor network 'simple', i.e. with no hyperedges, then don't use ``'D'`` (moreover ``'A'`` is redundant). :type seq: str, optional :param output_inds: Explicitly set which indices of the tensor network are output indices and thus should not be modified. If not specified, the tensor network is assumed to be a 'standard' one where indices that only appear once are the output indices. :type output_inds: sequence of str, optional :param atol: The absolute tolerance when identifying zero entries of tensors and performing low-rank decompositions. :type atol: float, optional :param equalize_norms: Actively renormalize the tensors during the simplification process. Useful for very large TNs. If `True`, the norms, in the form of stripped exponents, will be redistributed at the end. 
If an actual number, the final tensors will all have this norm, and the scaling factor will be stored as a base-10 exponent in ``tn.exponent``. :type equalize_norms: bool or float :param cache: A persistent cache for each simplification process to mark already processed tensors. :type cache: None or set :param check_zero: Whether to check if tensors have zero norm and in that case do nothing if and when equalizing norms, rather than generating a NaN. If 'auto', this will only be turned on if other methods that explicitly check data entries ("A", "D", "C", "S", "L") are being used (the default). :type check_zero: bool, optional :param progbar: Show a live progress bar of the simplification process. :type progbar: bool, optional :param inplace: Whether to perform the simplification inplace. :type inplace: bool, optional :rtype: TensorNetwork .. seealso:: :py:obj:`diagonal_reduce`, :py:obj:`rank_simplify`, :py:obj:`antidiag_gauge`, :py:obj:`column_reduce`, :py:obj:`split_simplify`, :py:obj:`loop_simplify` .. py:attribute:: full_simplify_ .. py:method:: hyperinds_resolve(mode='dense', sorter=None, output_inds=None, inplace=False) Convert this into a regular tensor network, where all indices appear at most twice, by inserting COPY tensors or tensor networks for each hyper index. :param mode: What type of COPY tensor(s) to insert. :type mode: {'dense', 'mps', 'tree'}, optional :param sorter: If given, a function to sort the indices that a single hyperindex will be turned into. The function is called like ``tids.sort(key=sorter)``. "centrality" will sort by the centrality of the tensors, "clustering" will sort using a hierarchical clustering. :type sorter: None, callable, "centrality", or "clustering", optional :param inplace: Whether to insert the COPY tensors inplace. :type inplace: bool, optional :rtype: TensorNetwork .. py:attribute:: hyperinds_resolve_ .. py:method:: compress_simplify(output_inds=None, atol=1e-06, simplify_sequence_a='ADCRS', simplify_sequence_b='RPL', hyperind_resolve_mode='tree', hyperind_resolve_sort='clustering', final_resolve=False, split_method='svd', max_simplification_iterations=100, converged_tol=0.01, equalize_norms=True, check_zero=True, progbar=False, inplace=False, **full_simplify_opts) .. py:attribute:: compress_simplify_ .. py:method:: max_bond() Return the size of the largest bond (i.e. index connecting 2+ tensors) in this network. .. py:property:: shape Effective, i.e. outer, shape of this TensorNetwork. .. py:property:: dtype The dtype of this TensorNetwork - the minimal common type of all the tensors' data. .. py:property:: dtype_name The name of the data type of the array elements. .. py:property:: backend Get the backend of any tensor in this network, assuming it to be the same for all tensors. .. py:method:: iscomplex() .. py:method:: astype(dtype, inplace=False) Convert the type of all tensors in this network to ``dtype``. .. py:attribute:: astype_ .. py:method:: __getstate__() .. py:method:: __setstate__(state) .. py:method:: _repr_info() General info to show in various reprs. Subclasses can add more relevant info to this dict. .. py:method:: _repr_info_str() Render the general info as a string. .. py:method:: _repr_html_() Render this TensorNetwork as HTML, for Jupyter notebooks. .. py:method:: __str__() .. py:method:: __repr__() .. py:attribute:: draw .. py:attribute:: draw_3d .. py:attribute:: draw_interactive .. py:attribute:: draw_3d_interactive .. py:attribute:: graph .. py:attribute:: visualize_tensors .. py:data:: TNLO_HANDLED_FUNCTIONS .. 
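rubric:: Examples

An end-to-end sketch of the simplification methods documented above, such as ``full_simplify`` (``TN_rand_reg`` appears in earlier examples here; the exact intermediate network sizes are not guaranteed):

>>> import quimb.tensor as qtn
>>> tn = qtn.TN_rand_reg(20, 3, 2)  # a closed random regular network
>>> tn_s = tn.full_simplify('ADCR', output_inds=[])  # may introduce hyper indices
>>> x = tn_s.contract(all, output_inds=[])

..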
py:class:: TNLinearOperator(tns, left_inds, right_inds, ldims=None, rdims=None, optimize=None, backend=None, is_conj=False) Bases: :py:obj:`scipy.sparse.linalg.LinearOperator` Get a linear operator - something that replicates the matrix-vector operation - for an arbitrary uncontracted TensorNetwork, e.g.:: : --O--O--+ +-- : --+ : | | | : | : --O--O--O-O-- : acting on --V : | | : | : --+ +---- : --+ left_inds^ ^right_inds This can then be supplied to scipy's sparse linear algebra routines. The ``left_inds`` / ``right_inds`` convention is that the linear operator will have shape matching ``(*left_inds, *right_inds)``, so that the ``right_inds`` are those that will be contracted in a normal matvec / matmat operation:: _matvec = --0--v , _rmatvec = v--0-- :param tns: A representation of the Hamiltonian or other operator. :type tns: sequence of Tensors or TensorNetwork :param left_inds: The 'left' inds of the effective Hamiltonian network. :type left_inds: sequence of str :param right_inds: The 'right' inds of the effective Hamiltonian network. These should be ordered the same way as ``left_inds``. :type right_inds: sequence of str :param ldims: The dimensions corresponding to left_inds. Will be figured out if ``None``. :type ldims: tuple of int, or None :param rdims: The dimensions corresponding to right_inds. Will be figured out if ``None``. :type rdims: tuple of int, or None :param optimize: The path optimizer to use for the 'matrix-vector' contraction. :type optimize: str, optional :param backend: The array backend to use for the 'matrix-vector' contraction. :type backend: str, optional :param is_conj: Whether this object should represent the *adjoint* operator. :type is_conj: bool, optional .. seealso:: :py:obj:`TNLinearOperator1D` .. py:attribute:: optimize :value: None .. py:attribute:: tags .. py:attribute:: _kws .. py:attribute:: _ins :value: () .. py:attribute:: is_conj :value: False .. py:attribute:: _conj_linop :value: None .. py:attribute:: _adjoint_linop :value: None .. py:attribute:: _transpose_linop :value: None .. py:attribute:: _contractors .. py:method:: _matvec(vec) Default matrix-vector multiplication handler. If self is a linear operator of shape (M, N), then this method will be called on a shape (N,) or (N, 1) ndarray, and should return a shape (M,) or (M, 1) ndarray. This default implementation falls back on _matmat, so defining that will define matrix-vector multiplication as well. .. py:method:: _matmat(mat) Default matrix-matrix multiplication handler. Falls back on the user-defined _matvec method, so defining that will define matrix multiplication (though in a very suboptimal way). .. py:method:: trace() .. py:method:: copy(conj=False, transpose=False) .. py:method:: conj() .. py:method:: _transpose() Default implementation of _transpose; defers to rmatvec + conj .. py:method:: _adjoint() Hermitian conjugate of this TNLO. .. py:method:: to_dense(*inds_seq, to_qarray=False, **contract_opts) Convert this TNLinearOperator into a dense array, defaulting to grouping the left and right indices respectively. .. py:attribute:: toarray .. py:attribute:: to_qarray .. py:method:: split(**split_opts) .. py:property:: A .. py:method:: astype(dtype) Convert this ``TNLinearOperator`` to type ``dtype``. .. py:method:: __array_function__(func, types, args, kwargs) .. py:function:: tnlo_implements(np_function) Register an __array_function__ implementation for TNLinearOperator objects. .. py:function:: _tnlo_trace(x) .. 
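rubric:: Example

A usage sketch for ``TNLinearOperator``, here constructed via the ``aslinearoperator`` method documented earlier and passed to scipy (the Heisenberg MPO constructor and the ``'k{i}'`` / ``'b{i}'`` index naming are assumptions):

>>> import quimb.tensor as qtn
>>> import scipy.sparse.linalg as spla
>>> H = qtn.MPO_ham_heis(6)
>>> lo = H.aslinearoperator([f'k{i}' for i in range(6)], [f'b{i}' for i in range(6)])
>>> w = spla.eigsh(lo, k=1, which='SA')[0]  # estimate the lowest eigenvalue

..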
py:class:: PTensor(fn, params, inds=(), tags=None, left_inds=None) Bases: :py:obj:`Tensor` A tensor whose data array is lazily generated from a set of parameters and a function. :param fn: The function that generates the tensor data from ``params``. :type fn: callable :param params: The initial parameters supplied to the generating function like ``fn(params)``. :type params: sequence of numbers :param inds: Should match the shape of ``fn(params)``, see :class:`~quimb.tensor.tensor_core.Tensor`. :type inds: optional :param tags: See :class:`~quimb.tensor.tensor_core.Tensor`. :type tags: optional :param left_inds: See :class:`~quimb.tensor.tensor_core.Tensor`. :type left_inds: optional .. seealso:: :py:obj:`PTensor` .. py:attribute:: __slots__ :value: ('_data', '_inds', '_tags', '_left_inds', '_owners') .. py:method:: from_parray(parray, inds=(), tags=None, left_inds=None) :classmethod: .. py:method:: copy() Copy this parametrized tensor. .. py:method:: _set_data(x) .. py:property:: data .. py:property:: fn .. py:method:: get_params() Get the parameters of this ``PTensor``. .. py:method:: set_params(params) Set the parameters of this ``PTensor``. .. py:property:: params .. py:property:: shape The size of each dimension. .. py:property:: backend The backend inferred from the data. .. py:method:: _apply_function(fn) Apply ``fn`` to the data array of this ``PTensor`` (lazily), by composing it with the current parametrized array function. .. py:method:: conj(inplace=False) Conjugate this parametrized tensor - done lazily whenever the ``.data`` attribute is accessed. .. py:attribute:: conj_ .. py:method:: unparametrize() Turn this PTensor into a normal Tensor. .. py:method:: __getstate__() .. py:method:: __setstate__(state) .. py:class:: IsoTensor(data=1.0, inds=(), tags=None, left_inds=None) Bases: :py:obj:`Tensor` A ``Tensor`` subclass which keeps its ``left_inds`` by default even when its data is changed. .. py:attribute:: __slots__ :value: ('_data', '_inds', '_tags', '_left_inds', '_owners') .. py:method:: modify(**kwargs) Overwrite the data of this tensor in place. :param data: New data. :type data: array, optional :param apply: A function to apply to the current data. If `data` is also given this is applied subsequently. :type apply: callable, optional :param inds: New tuple of indices. :type inds: sequence of str, optional :param tags: New tags. :type tags: sequence of str, optional :param left_inds: New grouping of indices to be 'on the left'. :type left_inds: sequence of str, optional .. py:method:: fuse(*args, inplace=False, **kwargs) Combine groups of indices into single indices. :param fuse_map: Mapping like: ``{new_ind: sequence of existing inds, ...}`` or an ordered mapping like ``[(new_ind_1, old_inds_1), ...]`` in which case the output tensor's fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused. :type fuse_map: dict_like or sequence of tuples. :returns: The transposed, reshaped and re-labeled tensor. :rtype: Tensor
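.. rubric:: Example

To illustrate the lazy parametrization of ``PTensor``, a minimal sketch (the generating function and shapes here are arbitrary assumptions):

>>> import numpy as np
>>> import quimb.tensor as qtn
>>> p = qtn.PTensor(np.sin, np.random.uniform(size=(2, 3)), inds=('a', 'b'))
>>> p.data.shape  # the data is generated lazily as fn(params)
(2, 3)
>>> p.set_params(2.0 * p.get_params())  # the data will be regenerated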