quimb.tensor.decomp

Functions for decomposing and projecting matrices.

Attributes

Functions

map_cutoff_mode(cutoff_mode)

Map mode to an integer for compatibility with numba.

rdmul(x, d)

Right-multiplication of a matrix by a vector representing a diagonal.

rddiv(x, d)

Right-multiplication of a matrix by a vector representing an inverse diagonal.

ldmul(d, x)

Left-multiplication of a matrix by a vector representing a diagonal.

lddiv(d, x)

Left-multiplication of a matrix by a vector representing an inverse diagonal.

dag_numba(x)

rdmul_numba(x, d)

rddiv_numba(x, d)

ldmul_numba(d, x)

lddiv_numba(d, x)

sgn(x)

Get the 'sign' of x, such that x / sgn(x) is real and non-negative.

sgn_numba(x)

sgn_tf(x)

_trim_and_renorm_svd_result(U, s, VH, cutoff, ...)

Given the full SVD decomposition result U, s, VH, optionally trim, renormalize, and absorb the singular values.

svd_truncated(x[, cutoff, cutoff_mode, max_bond, ...])

Truncated SVD of raw array x.

_compute_number_svals_to_keep_numba(s, cutoff, cutoff_mode)

Find the number of singular values of s to keep, given cutoff and cutoff_mode.

_compute_svals_renorm_factor_numba(s, n_chi, renorm)

Find the normalization constant for s such that the new sum squared of the n_chi largest values equals the sum squared of all the old ones.

_trim_and_renorm_svd_result_numba(U, s, VH, cutoff, ...)

Accelerated version of _trim_and_renorm_svd_result.

svd_truncated_numba(x[, cutoff, cutoff_mode, ...])

Accelerated version of svd_truncated for numpy arrays.

svd_truncated_numpy(x[, cutoff, cutoff_mode, ...])

Numpy version of svd_truncated, trying the accelerated version first, then falling back to the more stable scipy version.

svd_truncated_lazy(x[, cutoff, cutoff_mode, max_bond, ...])

lu_truncated(x[, cutoff, cutoff_mode, max_bond, ...])

svdvals(x)

SVD-decomposition, but return singular values only.

_svd_via_eig_truncated_numba(x[, cutoff, cutoff_mode, ...])

SVD-split via eigen-decomposition.

svd_via_eig_truncated(x[, cutoff, cutoff_mode, ...])

svdvals_eig(x)

SVD-decomposition via eigen, but return singular values only.

eigh_truncated(x[, cutoff, cutoff_mode, max_bond, ...])

eigh_truncated_numba(x[, cutoff, cutoff_mode, ...])

SVD-decomposition using hermitian eigen-decomposition; only works if x is hermitian.

_choose_k(x, cutoff, max_bond)

Choose the number of singular values to target.

svds(x[, cutoff, cutoff_mode, max_bond, absorb, renorm])

SVD-decomposition using iterative methods. Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient.

isvd(x[, cutoff, cutoff_mode, max_bond, absorb, renorm])

SVD-decomposition using randomized interpolative matrix methods. Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient.

_rsvd_numpy(x[, cutoff, cutoff_mode, max_bond, ...])

rsvd(x[, cutoff, cutoff_mode, max_bond, absorb, renorm])

SVD-decomposition using randomized methods (due to Halko). Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient.

eigsh(x[, cutoff, cutoff_mode, max_bond, absorb, renorm])

SVD-decomposition using iterative hermitian eigen-decomposition, thus assuming that x is hermitian.

qr_stabilized(x[, backend])

QR-decomposition, with stabilized R factor.

qr_stabilized_numba(x)

qr_stabilized_lazy(x)

lq_stabilized(x[, backend])

lq_stabilized_numba(x)

_cholesky_numba(x[, cutoff, cutoff_mode, max_bond, absorb])

SVD-decomposition using Cholesky decomposition; only works if x is positive definite.

cholesky(x[, cutoff, cutoff_mode, max_bond, absorb])

polar_right(x)

Polar decomposition of x.

polar_right_numba(x)

polar_left(x)

Polar decomposition of x.

polar_left_numba(x)

_similarity_compress_eig(X, max_bond, renorm)

_similarity_compress_eig_numba(X, max_bond, renorm)

_similarity_compress_eigh(X, max_bond, renorm)

_similarity_compress_eigh_numba(X, max_bond, renorm)

_similarity_compress_svd(X, max_bond, renorm, asymm)

_similarity_compress_svd_numba(X, max_bond, renorm, asymm)

_similarity_compress_biorthog(X, max_bond, renorm)

_similarity_compress_biorthog_numba(X, max_bond, renorm)

similarity_compress(X, max_bond[, renorm, method])

isometrize_qr(x[, backend])

Perform isometrization using the QR decomposition.

isometrize_svd(x[, backend])

Perform isometrization using the SVD decomposition.

isometrize_exp(x, backend)

Perform isometrization using anti-symmetric matrix exponentiation.

isometrize_cayley(x, backend)

Perform isometrization using an anti-symmetric Cayley transform.

isometrize_modified_gram_schmidt(A[, backend])

Perform isometrization explicitly using the modified Gram Schmidt procedure (this is slow but a useful reference).

isometrize_householder(X[, backend])

isometrize_torch_householder(x)

Isometrize x using the Householder reflection method, as implemented by the torch_householder package.

isometrize(x[, method])

Generate an isometric (or unitary if square) / orthogonal matrix from array x.

squared_op_to_reduced_factor(x2, dl, dr[, right])

Given the square, x2, of an operator x, compute either the left or right reduced factor matrix of the unsquared operator x with original shape (dl, dr).

squared_op_to_reduced_factor_numba(x2, dl, dr[, right])

compute_oblique_projectors(Rl, Rr, max_bond, cutoff[, ...])

Compute the oblique projectors for two reduced factor matrices that describe a gauge on a bond.

Module Contents

quimb.tensor.decomp._CUTOFF_MODE_MAP
quimb.tensor.decomp.map_cutoff_mode(cutoff_mode)[source]

Map mode to an integer for compatibility with numba.

quimb.tensor.decomp.rdmul(x, d)[source]

Right-multiplication of a matrix by a vector representing a diagonal.

quimb.tensor.decomp.rddiv(x, d)[source]

Right-multiplication of a matrix by a vector representing an inverse diagonal.

quimb.tensor.decomp.ldmul(d, x)[source]

Left-multiplication of a matrix by a vector representing a diagonal.

quimb.tensor.decomp.lddiv(d, x)[source]

Left-multiplication of a matrix by a vector representing an inverse diagonal.
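
These four helpers are equivalent to multiplying by the corresponding dense diagonal matrix, without ever constructing it. A minimal numpy sketch of the intended behaviour (illustrative only, not the library implementation):

    import numpy as np

    x = np.random.randn(3, 4)
    d = np.random.rand(4) + 1.0   # diagonal entries (kept positive so division is safe)

    # rdmul(x, d) scales the columns of x: equivalent to x @ np.diag(d)
    assert np.allclose(x * d.reshape(1, -1), x @ np.diag(d))

    # rddiv(x, d) divides the columns: equivalent to x @ np.diag(1 / d)
    assert np.allclose(x / d.reshape(1, -1), x @ np.diag(1 / d))

    dl = np.random.rand(3) + 1.0

    # ldmul(d, x) scales the rows of x: equivalent to np.diag(d) @ x
    assert np.allclose(dl.reshape(-1, 1) * x, np.diag(dl) @ x)

    # lddiv(d, x) divides the rows: equivalent to np.diag(1 / d) @ x
    assert np.allclose(x / dl.reshape(-1, 1), np.diag(1 / dl) @ x)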

quimb.tensor.decomp.dag_numba(x)[source]
quimb.tensor.decomp.rdmul_numba(x, d)[source]
quimb.tensor.decomp.rddiv_numba(x, d)[source]
quimb.tensor.decomp.ldmul_numba(d, x)[source]
quimb.tensor.decomp.lddiv_numba(d, x)[source]
quimb.tensor.decomp.sgn(x)[source]

Get the ‘sign’ of x, such that x / sgn(x) is real and non-negative.
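
A minimal numpy sketch of what such a 'sign' amounts to elementwise (mapping zeros to 1 here is an assumption made for illustration):

    import numpy as np

    def sgn_sketch(x):
        # elementwise phase x / |x|, with zeros mapped to 1 so division is always safe
        a = np.abs(x)
        safe = np.where(a == 0, 1.0, a)
        return np.where(a == 0, 1.0, x / safe)

    z = np.array([3.0, -2.0, 1.0 + 1.0j, 0.0])
    # dividing by the 'sign' leaves a real, non-negative magnitude
    assert np.allclose(z / sgn_sketch(z), np.abs(z))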

quimb.tensor.decomp.sgn_numba(x)[source]
quimb.tensor.decomp.sgn_tf(x)[source]
quimb.tensor.decomp._trim_and_renorm_svd_result(U, s, VH, cutoff, cutoff_mode, max_bond, absorb, renorm)[source]

Given the full SVD decomposition result U, s, VH, optionally trim, renormalize, and absorb the singular values. See svd_truncated for details.

quimb.tensor.decomp.svd_truncated(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0, backend=None)[source]

Truncated SVD of raw array x.

Parameters:
  • cutoff (float, optional) – Singular value cutoff threshold, if cutoff <= 0.0, then only max_bond is used.

  • cutoff_mode ({1, 2, 3, 4, 5, 6}, optional) –

    How to perform the trim (illustrated in the sketch after this parameter list):

    • 1: [‘abs’], trim values below cutoff

    • 2: [‘rel’], trim values below s[0] * cutoff

    • 3: [‘sum2’], trim s.t. sum(s_trim**2) < cutoff.

    • 4: [‘rsum2’], trim s.t. sum(s_trim**2) < sum(s**2) * cutoff.

    • 5: [‘sum1’], trim s.t. sum(s_trim**1) < cutoff.

    • 6: [‘rsum1’], trim s.t. sum(s_trim**1) < sum(s**1) * cutoff.

  • max_bond (int, optional) – An explicit maximum bond dimension, use -1 for none.

  • absorb ({-1, 0, 1, None}, optional) – How to absorb the singular values: -1 means into the left factor, 0 into both, 1 into the right factor, and None means don’t absorb them (return them separately).

  • renorm ({0, 1}, optional) – Whether to renormalize the singular values (depends on cutoff_mode).
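
As an illustration of the cutoff_mode trim rules above, a small numpy sketch that mirrors the documented modes (assuming s is sorted in descending order, as returned by standard SVD routines; this is not the library's internal code):

    import numpy as np

    def n_to_keep(s, cutoff, cutoff_mode):
        # s assumed sorted descending; returns how many singular values survive
        if cutoff_mode == 1:    # 'abs': trim values below cutoff
            return int(np.sum(s >= cutoff))
        if cutoff_mode == 2:    # 'rel': trim values below s[0] * cutoff
            return int(np.sum(s >= s[0] * cutoff))
        # modes 3-6: trim the smallest values while the accumulated power of the
        # trimmed tail stays below the (possibly relative) cutoff
        p = 2 if cutoff_mode in (3, 4) else 1
        target = cutoff * (np.sum(s**p) if cutoff_mode in (4, 6) else 1.0)
        tail_sums = np.cumsum(s[::-1]**p)   # cumulative power of the trimmed tail
        return len(s) - int(np.searchsorted(tail_sums, target))

    s = np.array([0.9, 0.3, 0.05, 0.01])
    print(n_to_keep(s, 0.1, 1))   # 'abs': keep values >= 0.1 -> 2
    print(n_to_keep(s, 1e-3, 4))  # 'rsum2' style truncation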

quimb.tensor.decomp._compute_number_svals_to_keep_numba(s, cutoff, cutoff_mode)[source]

Find the number of singular values of s to keep, given cutoff and cutoff_mode.

quimb.tensor.decomp._compute_svals_renorm_factor_numba(s, n_chi, renorm)[source]

Find the normalization constant for s such that the new sum squared of the n_chi largest values equals the sum squared of all the old ones.
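
Concretely, for the squared case this factor is the square root of the ratio between the old total sum of squares and the sum of squares of the kept values, for example:

    import numpy as np

    s = np.array([0.8, 0.5, 0.2, 0.05])
    n_chi = 2

    factor = np.sqrt(np.sum(s**2) / np.sum(s[:n_chi]**2))
    s_kept = s[:n_chi] * factor

    # the kept, rescaled values carry the same total weight as the originals
    assert np.isclose(np.sum(s_kept**2), np.sum(s**2))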

quimb.tensor.decomp._trim_and_renorm_svd_result_numba(U, s, VH, cutoff, cutoff_mode, max_bond, absorb, renorm)[source]

Accelerated version of _trim_and_renorm_svd_result.

quimb.tensor.decomp.svd_truncated_numba(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]

Accelerated version of svd_truncated for numpy arrays.

quimb.tensor.decomp.svd_truncated_numpy(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]

Numpy version of svd_truncated, trying the accelerated version first, then falling back to the more stable scipy version.

quimb.tensor.decomp.svd_truncated_lazy(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]
quimb.tensor.decomp.lu_truncated(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0, backend=None)[source]
quimb.tensor.decomp.svdvals(x)[source]

SVD-decomposition, but return singular values only.

quimb.tensor.decomp._svd_via_eig_truncated_numba(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]

SVD-split via eigen-decomposition.

quimb.tensor.decomp.svd_via_eig_truncated(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]
quimb.tensor.decomp.svdvals_eig(x)[source]

SVD-decomposition via eigen, but return singular values only.

quimb.tensor.decomp.eigh_truncated(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0, backend=None)[source]
quimb.tensor.decomp.eigh_truncated_numba(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]

SVD-decomposition using hermitian eigen-decomposition; only works if x is hermitian.
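
For a hermitian matrix the singular values are the absolute eigenvalues, so an SVD-like form follows directly from the eigendecomposition. A hedged numpy sketch of that idea (not necessarily the exact internal construction):

    import numpy as np

    # random hermitian matrix
    a = np.random.randn(5, 5) + 1j * np.random.randn(5, 5)
    x = a + a.conj().T

    el, ev = np.linalg.eigh(x)              # x = ev @ diag(el) @ ev^H
    order = np.argsort(-np.abs(el))         # sort by singular value = |eigenvalue|
    el, ev = el[order], ev[:, order]

    s = np.abs(el)                          # singular values
    U = ev * np.sign(el).reshape(1, -1)     # absorb eigenvalue signs into U
    VH = ev.conj().T

    assert np.allclose(U @ np.diag(s) @ VH, x)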

quimb.tensor.decomp._choose_k(x, cutoff, max_bond)[source]

Choose the number of singular values to target.

quimb.tensor.decomp.svds(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]

SVD-decomposition using iterative methods. Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient. Can also supply scipy.sparse.linalg.LinearOperator.

quimb.tensor.decomp.isvd(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]

SVD-decomposition using randomized interpolative matrix methods. Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient. Can also supply scipy.sparse.linalg.LinearOperator.

quimb.tensor.decomp._rsvd_numpy(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]
quimb.tensor.decomp.rsvd(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]

SVD-decomposition using randomized methods (due to Halko). Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient. Can also supply scipy.sparse.linalg.LinearOperator.

quimb.tensor.decomp.eigsh(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]

SVD-decomposition using iterative hermitian eigen-decomposition, thus assuming that x is hermitian. Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient. Can also supply scipy.sparse.linalg.LinearOperator.

quimb.tensor.decomp.qr_stabilized(x, backend=None)[source]

QR-decomposition, with stabilized R factor.
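
Here 'stabilized' refers to fixing the gauge freedom of the QR decomposition so that the diagonal of R is real and non-negative, which makes the factorization deterministic. A minimal numpy sketch of that convention (illustrative, not the library code):

    import numpy as np

    x = np.random.randn(6, 4) + 1j * np.random.randn(6, 4)
    Q, R = np.linalg.qr(x)

    # phases of the diagonal of R (zeros mapped to 1)
    d = np.diag(R).copy()
    d[d == 0] = 1.0
    phases = d / np.abs(d)

    Q = Q * phases.reshape(1, -1)           # absorb phases into the columns of Q
    R = R * phases.conj().reshape(-1, 1)    # so the diagonal of R becomes non-negative

    assert np.allclose(Q @ R, x)
    assert np.all(np.diag(R).real >= 0) and np.allclose(np.diag(R).imag, 0)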

quimb.tensor.decomp.qr_stabilized_numba(x)[source]
quimb.tensor.decomp.qr_stabilized_lazy(x)[source]
quimb.tensor.decomp.lq_stabilized(x, backend=None)[source]
quimb.tensor.decomp.lq_stabilized_numba(x)[source]
quimb.tensor.decomp._cholesky_numba(x, cutoff=-1, cutoff_mode=4, max_bond=-1, absorb=0)[source]

SVD-decomposition using Cholesky decomposition; only works if x is positive definite.

quimb.tensor.decomp.cholesky(x, cutoff=-1, cutoff_mode=4, max_bond=-1, absorb=0)[source]
quimb.tensor.decomp.polar_right(x)[source]

Polar decomposition of x.

quimb.tensor.decomp.polar_right_numba(x)[source]
quimb.tensor.decomp.polar_left(x)[source]

Polar decomposition of x.

quimb.tensor.decomp.polar_left_numba(x)[source]
quimb.tensor.decomp._similarity_compress_eig(X, max_bond, renorm)[source]
quimb.tensor.decomp._similarity_compress_eig_numba(X, max_bond, renorm)[source]
quimb.tensor.decomp._similarity_compress_eigh(X, max_bond, renorm)[source]
quimb.tensor.decomp._similarity_compress_eigh_numba(X, max_bond, renorm)[source]
quimb.tensor.decomp._similarity_compress_svd(X, max_bond, renorm, asymm)[source]
quimb.tensor.decomp._similarity_compress_svd_numba(X, max_bond, renorm, asymm)[source]
quimb.tensor.decomp._similarity_compress_biorthog(X, max_bond, renorm)[source]
quimb.tensor.decomp._similarity_compress_biorthog_numba(X, max_bond, renorm)[source]
quimb.tensor.decomp._similarity_compress_fns
quimb.tensor.decomp.similarity_compress(X, max_bond, renorm=False, method='eigh')[source]
quimb.tensor.decomp.isometrize_qr(x, backend=None)[source]

Perform isometrization using the QR decomposition.

quimb.tensor.decomp.isometrize_svd(x, backend=None)[source]

Perform isometrization using the SVD decomposition.

quimb.tensor.decomp.isometrize_exp(x, backend)[source]

Perform isometrization using anti-symmetric matrix exponentiation.

\[U_A = \exp \left( X - X^\dagger \right)\]

If x is rectangular it is completed with zeros first.

quimb.tensor.decomp.isometrize_cayley(x, backend)[source]

Perform isometrization using an anti-symmetric Cayley transform.

\[U_A = (I + \dfrac{A}{2})(I - \dfrac{A}{2})^{-1}\]

where \(A = X - X^\dagger\). If x is rectangular it is completed with zeros first.
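
A minimal numpy sketch of the square case, following the formula above (the completion of rectangular x with zeros is omitted for brevity):

    import numpy as np

    x = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
    A = x - x.conj().T                        # anti-hermitian generator

    I = np.eye(4)
    U = (I + A / 2) @ np.linalg.inv(I - A / 2)

    # the Cayley transform of an anti-hermitian matrix is unitary
    assert np.allclose(U.conj().T @ U, I)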

quimb.tensor.decomp.isometrize_modified_gram_schmidt(A, backend=None)[source]

Perform isometrization explicitly using the modified Gram Schmidt procedure (this is slow but a useful reference).

quimb.tensor.decomp.isometrize_householder(X, backend=None)[source]
quimb.tensor.decomp.isometrize_torch_householder(x)[source]

Isometrize x using the Householder reflection method, as implemented by the torch_householder package.

quimb.tensor.decomp._ISOMETRIZE_METHODS
quimb.tensor.decomp.isometrize(x, method='qr')[source]

Generate an isometric (or unitary if square) / orthogonal matrix from array x.

Parameters:
  • x (array) – The matrix to project into isometric form.

  • method (str, optional) –

    The method used to generate the isometry. The options are:

    • "qr": use the Q factor of the QR decomposition of x with the constraint that the diagonal of R is positive.

    • "svd": uses U @ VH of the SVD decomposition of x. This is useful for finding the ‘closest’ isometric matrix to x, such as when it has been expanded with noise etc., but it is less stable for differentiation / optimization.

    • "exp": use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.

    • "cayley": use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with e.g. HIPS/autograd), but more expensive for non-square x.

    • "householder": use the Householder reflection method directly. This requires that the backend implements "linalg.householder_product".

    • "torch_householder": use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is "torch". This is generally the best parametrizing method for "torch" if available.

    • "mgs": use a python implementation of the modified Gram Schmidt method directly. This is slow if not compiled but a useful reference.

    Not all backends support all methods or differentiating through all methods.

Returns:

Q – The isometrization / orthogonalization of x.

Return type:

array
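
For example, a hedged usage sketch (assuming, as is conventional for a tall input, that the returned Q keeps the shape of x and has orthonormal columns):

    import numpy as np
    from quimb.tensor.decomp import isometrize

    x = np.random.randn(6, 3)
    Q = isometrize(x, method="qr")

    # Q has orthonormal columns: Q^H Q = identity
    print(np.allclose(Q.conj().T @ Q, np.eye(3)))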

quimb.tensor.decomp.squared_op_to_reduced_factor(x2, dl, dr, right=True)[source]

Given the square, x2, of an operator x, compute either the left or right reduced factor matrix of the unsquared operator x with original shape (dl, dr).
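
Conceptually, for right=True, the returned factor R reproduces the square via dag(R) @ R == x2 with x2 = dag(x) @ x. A hedged numpy sketch of that idea via a hermitian eigendecomposition (illustrative; the library routine and its exact conventions may differ):

    import numpy as np

    dl, dr = 6, 4
    x = np.random.randn(dl, dr)
    x2 = x.conj().T @ x                          # 'square' of x for the right factor

    el, ev = np.linalg.eigh(x2)
    el = np.clip(el, 0.0, None)                  # guard tiny negative rounding errors
    R = np.sqrt(el).reshape(-1, 1) * ev.conj().T # so that dag(R) @ R == x2

    assert np.allclose(R.conj().T @ R, x2)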

quimb.tensor.decomp.squared_op_to_reduced_factor_numba(x2, dl, dr, right=True)[source]
quimb.tensor.decomp.compute_oblique_projectors(Rl, Rr, max_bond, cutoff, absorb='both', cutoff_mode=4, **compress_opts)[source]

Compute the oblique projectors for two reduced factor matrices that describe a gauge on a bond. Concretely, assuming that Rl and Rr are the reduced factor matrices for local operator A, such that:

\[A = Q_L R_L R_R Q_R\]

with Q_L and Q_R isometric matrices, then the optimal inner truncation is given by:

\[A' = Q_L P_L P_R' Q_R\]

Parameters:
  • Rl (array) – The left reduced factor matrix.

  • Rr (array) – The right reduced factor matrix.

Returns:

  • Pl (array) – The left oblique projector.

  • Pr (array) – The right oblique projector.
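
A hedged usage sketch (assumptions: Rl and Rr are reduced factors sharing the bond as Rl's column index and Rr's row index, and the returned Pl, Pr contract onto that bond from the left and right tensors respectively):

    import numpy as np
    from quimb.tensor.decomp import compute_oblique_projectors

    # two local matrices sharing a bond of dimension 8
    Al = np.random.randn(5, 8)
    Ar = np.random.randn(8, 5)

    # reduced factors carrying the gauge on that bond (plain numpy QR / LQ here)
    _, Rl = np.linalg.qr(Al)   # Al = Ql @ Rl, bond is the column index of Rl
    q, r = np.linalg.qr(Ar.T)
    Rr = r.T                   # Ar = Rr @ Qr, bond is the row index of Rr

    Pl, Pr = compute_oblique_projectors(Rl, Rr, max_bond=4, cutoff=0.0)

    # the projectors are inserted on the bond to truncate it (orientation assumed here)
    print((Al @ Pl).shape, (Pr @ Ar).shape)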