quimb.tensor.fitting

Tools for computing distances between and fitting tensor networks.

Functions

tensor_network_distance(tnA, tnB[, xAA, xAB, xBB, ...])

Compute the Frobenius norm distance between two tensor networks.

tensor_network_fit_autodiff(tn, tn_target[, steps, ...])

Optimize the fit of tn with respect to tn_target using automatic differentiation.

_tn_fit_als_core(var_tags, tnAA, tnAB, xBB, tol, ...)

tensor_network_fit_als(tn, tn_target[, tags, steps, ...])

Optimize the fit of tn with respect to tn_target using alternating least squares (ALS).

Module Contents

quimb.tensor.fitting.tensor_network_distance(tnA, tnB, xAA=None, xAB=None, xBB=None, method='auto', normalized=False, **contract_opts)[source]

Compute the Frobenius norm distance between two tensor networks:

\[D(A, B) = | A - B |_{\mathrm{fro}} = \mathrm{Tr} [(A - B)^{\dagger}(A - B)]^{1/2} = ( \langle A | A \rangle - 2 \mathrm{Re} \langle A | B \rangle + \langle B | B \rangle ) ^{1/2}\]

which should have matching outer indices. Note that the default approach to computing the norm is limited in precision to about eps**0.5, where eps is the machine precision of the data type (e.g. 1e-8 for float64). This is due to the subtraction in the above expression.

Parameters:
  • tnA (TensorNetwork or Tensor) – The first tensor network operator.

  • tnB (TensorNetwork or Tensor) – The second tensor network operator.

  • xAA (None or scalar) – The value of A.H @ A if you already know it (or it doesn’t matter).

  • xAB (None or scalar) – The value of A.H @ B if you already know it (or it doesn’t matter).

  • xBB (None or scalar) – The value of B.H @ B if you already know it (or it doesn’t matter).

  • method ({'auto', 'overlap', 'dense'}, optional) – How to compute the distance. If 'overlap', the distance will be computed as the sum of overlaps, without explicitly forming the dense operators. If 'dense', the operators will be directly formed and the norm computed, which can be quicker when the exterior dimensions are small. If 'auto', the default, the dense method will be used if the total operator (outer) size is <= 2**16.

  • normalized (bool, optional) – If True, then normalize the distance by the norm of the two operators, i.e. 2 * D(A, B) / (|A| + |B|). The resulting distance lies between 0 and 2 and is more useful for assessing convergence.

  • contract_opts – Supplied to contract().

Returns:

D

Return type:

float
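
Example

A minimal usage sketch, not taken from the quimb documentation: it assumes quimb is installed and uses two small random MPS purely for illustration.

    import quimb.tensor as qtn
    from quimb.tensor.fitting import tensor_network_distance

    # two random MPS with matching outer (physical) indices
    psi_a = qtn.MPS_rand_state(6, 4)  # 6 sites, bond dimension 4
    psi_b = qtn.MPS_rand_state(6, 4)

    # Frobenius norm distance, computed via overlaps (method='auto' default)
    d = tensor_network_distance(psi_a, psi_b)

    # normalized distance, which lies between 0 and 2
    d_norm = tensor_network_distance(psi_a, psi_b, normalized=True)
    print(d, d_norm)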

quimb.tensor.fitting.tensor_network_fit_autodiff(tn, tn_target, steps=1000, tol=1e-09, autodiff_backend='autograd', contract_optimize='auto-hq', distance_method='auto', inplace=False, progbar=False, **kwargs)[source]

Optimize the fit of tn with respect to tn_target using automatic differentiation. This minimizes the norm of the difference between the two tensor networks, which must have matching outer indices, using overlaps.

Parameters:
  • tn (TensorNetwork) – The tensor network to fit.

  • tn_target (TensorNetwork) – The target tensor network to fit tn to.

  • steps (int, optional) – The maximum number of autodiff steps.

  • tol (float, optional) – The target norm distance.

  • autodiff_backend (str, optional) – Which backend library to use to perform the gradient computation.

  • contract_optimize (str, optional) – The contraction path optimizer used to contract the overlaps.

  • distance_method ({'auto', 'dense', 'overlap'}, optional) – Supplied to tensor_network_distance(), controls how the distance is computed.

  • inplace (bool, optional) – Update tn in place.

  • progbar (bool, optional) – Show a live progress bar of the fitting process.

  • kwargs – Passed to TNOptimizer.
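
Example

A minimal sketch, not taken from the quimb documentation: it assumes quimb and the default 'autograd' backend are installed, and fits a bond dimension 4 MPS to a random bond dimension 8 target.

    import quimb.tensor as qtn
    from quimb.tensor.fitting import (
        tensor_network_distance,
        tensor_network_fit_autodiff,
    )

    target = qtn.MPS_rand_state(6, 8)  # 6 sites, bond dimension 8
    ansatz = qtn.MPS_rand_state(6, 4)  # lower bond dimension ansatz

    # gradient based fit; returns the optimized copy since inplace=False
    fitted = tensor_network_fit_autodiff(ansatz, target, steps=200, tol=1e-6)

    # assess convergence with the normalized distance
    print(tensor_network_distance(fitted, target, normalized=True))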

quimb.tensor.fitting._tn_fit_als_core(var_tags, tnAA, tnAB, xBB, tol, contract_optimize, steps, enforce_pos, pos_smudge, solver='solve', progbar=False)[source]

quimb.tensor.fitting.tensor_network_fit_als(tn, tn_target, tags=None, steps=100, tol=1e-09, solver='solve', enforce_pos=False, pos_smudge=None, tnAA=None, tnAB=None, xBB=None, contract_optimize='greedy', inplace=False, progbar=False)[source]

Optimize the fit of tn with respect to tn_target using alternating least squares (ALS). This minimizes the norm of the difference between the two tensor networks, which must have matching outer indices, using overlaps.

Parameters:
  • tn (TensorNetwork) – The tensor network to fit.

  • tn_target (TensorNetwork) – The target tensor network to fit tn to.

  • tags (sequence of str, optional) – If supplied, only optimize tensors matching any of given tags.

  • steps (int, optional) – The maximum number of ALS steps.

  • tol (float, optional) – The target norm distance.

  • solver ({'solve', 'lstsq', ...}, optional) – The underlying driver function used to solve the local minimization, e.g. numpy.linalg.solve for 'solve' with the numpy backend.

  • enforce_pos (bool, optional) – Whether to enforce positivity of the locally formed environments, which can be more stable.

  • pos_smudge (float, optional) – If enforcing positivity, the level below which to clip eigenvalues to make the local environment positive definite.

  • tnAA (TensorNetwork, optional) – If you have already formed the overlap tn.H & tn, maybe approximately, you can supply it here. The unconjugated layer should have tag '__KET__' and the conjugated layer '__BRA__'. Each tensor being optimized should have tag '__VAR{i}__'.

  • tnAB (TensorNetwork, optional) – If you have already formed the overlap tn_target.H & tn, maybe approximately, you can supply it here. Each tensor being optimized should have tag '__VAR{i}__'.

  • xBB (float, optional) – If you already know the value of tn_target.H @ tn_target, have computed it, or it doesn’t matter, you can supply the value here.

  • contract_optimize (str, optional) – The contraction path optimizer used to contract the local environments. Note 'greedy' is the default in order to maximize shared work.

  • inplace (bool, optional) – Update tn in place.

  • progbar (bool, optional) – Show a live progress bar of the fitting process.

Return type:

TensorNetwork
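
Example

A minimal sketch, not taken from the quimb documentation, assuming quimb is installed: it performs the same compression as the autodiff example above, but with alternating least squares rather than gradient descent.

    import quimb.tensor as qtn
    from quimb.tensor.fitting import (
        tensor_network_distance,
        tensor_network_fit_als,
    )

    target = qtn.MPS_rand_state(6, 8)  # 6 sites, bond dimension 8
    ansatz = qtn.MPS_rand_state(6, 4)  # lower bond dimension ansatz

    # alternating least squares fit, solving for one tensor at a time
    fitted = tensor_network_fit_als(ansatz, target, steps=100, tol=1e-9)

    # assess convergence with the normalized distance
    print(tensor_network_distance(fitted, target, normalized=True))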