quimb.tensor.fitting

Tools for computing distances between and fitting tensor networks.

Functions

tensor_network_distance(tnA, tnB[, xAA, xAB, xBB, ...])

Compute the Frobenius norm distance between two tensor networks:

tensor_network_fit_autodiff(tn, tn_target[, steps, ...])

Optimize the fit of tn with respect to tn_target using automatic differentiation.

vdot_broadcast(x, y)

conjugate_gradient(A, b[, x0, tol, maxiter])

Conjugate Gradient solver for complex matrices/linear operators.

_tn_fit_als_core(var_tags, tnAA, tnAB, xBB, tol, steps)

tensor_network_fit_als(tn, tn_target[, tags, steps, ...])

Optimize the fit of tn with respect to tn_target using alternating least squares (ALS).

tensor_network_fit_tree(tn, tn_target[, tags, steps, ...])

Fit tn to tn_target, assuming that tn has tree structure (i.e. a single path between any two sites).

Module Contents

quimb.tensor.fitting.tensor_network_distance(tnA, tnB, xAA=None, xAB=None, xBB=None, method='auto', normalized=False, output_inds=None, **contract_opts)

Compute the Frobenius norm distance between two tensor networks:

\[D(A, B) = | A - B |_{\mathrm{fro}} = \mathrm{Tr} [(A - B)^{\dagger}(A - B)]^{1/2} = ( \langle A | A \rangle - 2 \mathrm{Re} \langle A | B \rangle + \langle B | B \rangle ) ^{1/2}\]

which should have matching outer indices. Note the default approach to computing the norm is precision limited to about eps**0.5 where eps is the precision of the data type, e.g. 1e-8 for float64. This is due to the subtraction in the above expression.

Parameters:
  • tnA (TensorNetwork or Tensor) – The first tensor network operator.

  • tnB (TensorNetwork or Tensor) – The second tensor network operator.

  • xAA (None or scalar) – The value of A.H @ A if you already know it (or it doesn’t matter).

  • xAB (None or scalar) – The value of A.H @ B if you already know it (or it doesn’t matter).

  • xBB (None or scalar) – The value of B.H @ B if you already know it (or it doesn’t matter).

  • method ({'auto', 'overlap', 'dense'}, optional) – How to compute the distance. If 'overlap', the default, the distance will be computed as the sum of overlaps, without explicitly forming the dense operators. If 'dense', the operators will be directly formed and the norm computed, which can be quicker when the exterior dimensions are small. If 'auto', the dense method will be used if the total operator (outer) size is <= 2**16.

  • normalized (bool or str, optional) – If True, then normalize the distance by the norm of the two operators, i.e. D(A, B) * 2 / (|A| + |B|). The resulting distance lies between 0 and 2 and is more useful for assessing convergence. If 'infidelity', compute the normalized infidelity 1 - |<A|B>|^2 / (|A| |B|), which can be faster to optimize, for example, but does not take normalization into account.

  • output_inds (sequence of str, optional) – Specify the output indices of tnA and tnB to contract over. This can be necessary if either network has hyper indices.

  • contract_opts – Supplied to contract().

Returns:

D

Return type:

float
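The equivalence of the 'dense' and 'overlap' methods above can be checked on plain arrays. The following is a standalone numpy sketch (it does not call quimb itself; the dense arrays stand in for two already-contracted tensor networks with matching outer indices):

```python
import numpy as np

# Dense arrays standing in for two contracted tensor networks
# with matching outer indices.
rng = np.random.default_rng(42)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# 'dense' method: form the difference explicitly, take the Frobenius norm.
d_dense = np.linalg.norm(A - B)

# 'overlap' method: combine the three overlaps <A|A>, <A|B>, <B|B>
# without ever forming A - B, as in the formula above.
xAA = np.vdot(A, A).real
xAB = np.vdot(A, B)
xBB = np.vdot(B, B).real
d_overlap = np.sqrt(xAA - 2 * xAB.real + xBB)

print(np.allclose(d_dense, d_overlap))  # the two methods agree
```

The subtraction inside the square root in the overlap form is what limits the default precision to roughly `eps**0.5`, as noted above.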

quimb.tensor.fitting.tensor_network_fit_autodiff(tn, tn_target, steps=1000, tol=1e-09, autodiff_backend='autograd', contract_optimize='auto-hq', distance_method='auto', normalized='squared', output_inds=None, xBB=None, inplace=False, progbar=False, **kwargs)

Optimize the fit of tn with respect to tn_target using automatic differentiation. This minimizes the norm of the difference between the two tensor networks, which must have matching outer indices, using overlaps.

Parameters:
  • tn (TensorNetwork) – The tensor network to fit.

  • tn_target (TensorNetwork) – The target tensor network to fit tn to.

  • steps (int, optional) – The maximum number of autodiff steps.

  • tol (float, optional) – The target norm distance.

  • autodiff_backend (str, optional) – Which backend library to use to perform the gradient computation.

  • contract_optimize (str, optional) – The contraction path optimizer used to contract the overlaps.

  • distance_method ({'auto', 'dense', 'overlap'}, optional) – Supplied to tensor_network_distance(), controls how the distance is computed.

  • normalized (bool or str, optional) – If True, then normalize the distance by the norm of the two operators, i.e. D(A, B) * 2 / (|A| + |B|). The resulting distance lies between 0 and 2 and is more useful for assessing convergence. If 'infidelity', compute the normalized infidelity 1 - |<A|B>|^2 / (|A| |B|), which can be faster to optimize, for example, but does not take normalization into account.

  • output_inds (sequence of str, optional) – Specify the output indices of tnA and tnB to contract over. This can be necessary if either network has hyper indices.

  • xBB (float, optional) – If you already know or have computed tn_target.H @ tn_target, or don’t care about the overall scale of the norm distance, you can supply the value here.

  • inplace (bool, optional) – Update tn in place.

  • progbar (bool, optional) – Show a live progress bar of the fitting process.

  • kwargs – Passed to TNOptimizer.
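The objective being minimized here can be illustrated on plain vectors. This is a minimal numpy sketch of gradient descent on the squared norm distance D(x, t)^2 = <x|x> - 2 Re <x|t> + <t|t>, with a hand-written gradient standing in for the autodiff backend (it does not use quimb or TNOptimizer):

```python
import numpy as np

# Gradient descent on the squared distance objective, on plain
# complex vectors rather than tensor networks.
rng = np.random.default_rng(0)
t = rng.normal(size=8) + 1j * rng.normal(size=8)   # the 'target'
x = rng.normal(size=8) + 1j * rng.normal(size=8)   # the object being fitted

lr = 0.1
for _ in range(500):
    grad = x - t        # Wirtinger gradient of D^2 with respect to conj(x)
    x = x - lr * grad

# squared distance via the overlap form
d2 = (np.vdot(x, x) - 2 * np.vdot(x, t).real + np.vdot(t, t)).real
print(d2 < 1e-9)  # True: the fit has converged to the target
```

In the real function, the gradient of the overlap-based distance is computed by the chosen autodiff backend through the tensor network contractions, rather than by hand.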

quimb.tensor.fitting.vdot_broadcast(x, y)
quimb.tensor.fitting.conjugate_gradient(A, b, x0=None, tol=1e-05, maxiter=1000)

Conjugate Gradient solver for complex matrices/linear operators.

Parameters:
  • A (operator_like) – The matrix or linear operator.

  • b (array_like) – The right-hand side vector.

  • x0 (array_like, optional) – Initial guess for the solution.

  • tol (float, optional) – Tolerance for convergence.

  • maxiter (int, optional) – Maximum number of iterations.

Returns:

x – The solution vector.

Return type:

array_like
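A standalone sketch of such a solver, mirroring the signature documented above (this is an illustrative implementation for a Hermitian positive-definite complex system, not quimb's internal code):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-5, maxiter=1000):
    """Conjugate gradient sketch for a Hermitian positive-definite
    complex matrix or linear operator."""
    x = np.zeros_like(b) if x0 is None else x0.astype(b.dtype)
    r = b - A @ x
    p = r.copy()
    rs = np.vdot(r, r)              # squared residual norm, real-valued
    for _ in range(maxiter):
        if np.sqrt(abs(rs)) < tol:
            break
        Ap = A @ p
        alpha = rs / np.vdot(p, Ap)  # step length along search direction
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        p = r + (rs_new / rs) * p    # conjugate search direction update
        rs = rs_new
    return x

# usage: a random Hermitian positive-definite complex system
rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
A = M.conj().T @ M + 6 * np.eye(6)   # Hermitian positive definite
b = rng.normal(size=6) + 1j * rng.normal(size=6)
x = conjugate_gradient(A, b, tol=1e-10)
print(np.allclose(A @ x, b))  # True
```

Using np.vdot (which conjugates its first argument) rather than np.dot is what makes the recurrences correct for complex systems.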

quimb.tensor.fitting._tn_fit_als_core(var_tags, tnAA, tnAB, xBB, tol, steps, dense_solve='auto', solver='auto', solver_maxiter=4, solver_dense='auto', enforce_pos=False, pos_smudge=1e-15, progbar=False)
quimb.tensor.fitting.tensor_network_fit_als(tn, tn_target, tags=None, steps=100, tol=1e-09, dense_solve='auto', solver='auto', solver_maxiter=4, solver_dense='auto', enforce_pos=False, pos_smudge=None, tnAA=None, tnAB=None, xBB=None, output_inds=None, contract_optimize='auto-hq', inplace=False, progbar=False, **kwargs)

Optimize the fit of tn with respect to tn_target using alternating least squares (ALS). This minimizes the norm of the difference between the two tensor networks, which must have matching outer indices, by solving for one tensor at a time using overlaps.

Parameters:
  • tn (TensorNetwork) – The tensor network to fit.

  • tn_target (TensorNetwork) – The target tensor network to fit tn to.

  • tags (sequence of str, optional) – If supplied, only optimize tensors matching any of given tags.

  • steps (int, optional) – The maximum number of ALS steps.

  • tol (float, optional) – The target norm distance.

  • dense_solve ({'auto', True, False}, optional) – Whether to solve the local minimization problem in dense form. If 'auto', will only use dense form for small tensors.

  • solver ({"auto", None, "cg", ...}, optional) – What solver to use for the iterative (but not dense) local minimization. If None will use a built in conjugate gradient solver. If a string, will use the corresponding solver from scipy.sparse.linalg.

  • solver_maxiter (int, optional) – The maximum number of iterations for the iterative solver.

  • solver_dense ({"auto", None, 'solve', 'eigh', 'lstsq', ...}, optional) – The underlying driver function used to solve the local minimization, e.g. numpy.linalg.solve for 'solve' with numpy backend, if solving the local problem in dense form.

  • enforce_pos (bool, optional) – Whether to enforce positivity of the locally formed environments, which can be more stable, only for dense solves. This sets solver_dense='eigh'.

  • pos_smudge (float, optional) – If enforcing positivity, the level below which to clip eigenvalues to make the local environment positive definite, only for dense solves.

  • tnAA (TensorNetwork, optional) – If you have already formed the overlap tn.H & tn, maybe approximately, you can supply it here. The unconjugated layer should have tag '__KET__' and the conjugated layer '__BRA__'. Each tensor being optimized should have tag '__VAR{i}__'.

  • tnAB (TensorNetwork, optional) – If you have already formed the overlap tn_target.H & tn, maybe approximately, you can supply it here. Each tensor being optimized should have tag '__VAR{i}__'.

  • xBB (float, optional) – If you already know or have computed tn_target.H @ tn_target, or it doesn’t matter, you can supply the value here.

  • contract_optimize (str, optional) – The contraction path optimizer used to contract the local environments.

  • inplace (bool, optional) – Update tn in place.

  • progbar (bool, optional) – Show a live progress bar of the fitting process.

Return type:

TensorNetwork
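The alternating least squares idea can be shown on the simplest possible "network": fitting a rank-1 outer product u v^H to a target matrix, optimizing one factor at a time with an exact local least-squares solve. This is a toy numpy sketch, not quimb's implementation; the variable names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(7)
u_true = rng.normal(size=5) + 1j * rng.normal(size=5)
v_true = rng.normal(size=4) + 1j * rng.normal(size=4)
T = np.outer(u_true, v_true.conj())   # a target that is exactly rank 1

# random initial factors
u = rng.normal(size=5) + 1j * rng.normal(size=5)
v = rng.normal(size=4) + 1j * rng.normal(size=4)

for _ in range(20):
    # fix v, solve min_u ||u v^H - T||_F exactly:
    # the local normal equations reduce to a scalar divide here
    u = (T @ v) / np.vdot(v, v)
    # fix u, solve min_v ||u v^H - T||_F exactly
    v = (T.conj().T @ u) / np.vdot(u, u)

print(np.allclose(np.outer(u, v.conj()), T))  # the fit is exact
```

In the real function each local problem is a linear system in one tensor, formed from the environments in tnAA and tnAB, and solved either densely or iteratively depending on dense_solve and solver.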

quimb.tensor.fitting.tensor_network_fit_tree(tn, tn_target, tags=None, steps=100, tol=1e-09, ordering=None, xBB=None, istree=True, contract_optimize='auto-hq', inplace=False, progbar=False)

Fit tn to tn_target, assuming that tn has tree structure (i.e. a single path between any two sites) and matching outer structure to tn_target. The tree structure allows a canonical form that greatly simplifies the normalization and least squares minimization. Note that no structure is assumed about tn_target, and so, for example, no partial contractions are reused.

Parameters:
  • tn (TensorNetwork) – The tensor network to fit, it should have a tree structure and outer indices matching tn_target.

  • tn_target (TensorNetwork) – The target tensor network to fit tn to.

  • tags (sequence of str, optional) – If supplied, only optimize tensors matching any of given tags.

  • steps (int, optional) – The maximum number of ALS steps.

  • tol (float, optional) – The target norm distance.

  • ordering (sequence of int, optional) – The order in which to optimize the tensors, if None will be computed automatically using a hierarchical clustering.

  • xBB (float, optional) – If you already know or have computed tn_target.H @ tn_target, or it doesn’t matter, you can supply the value here. It matters only for the overall scale of the norm distance.

  • contract_optimize (str, optional) – A contraction path strategy or optimizer for contracting the local environments.

  • inplace (bool, optional) – Fit tn in place.

  • progbar (bool, optional) – Show a live progress bar of the fitting process.

Return type:

TensorNetwork