quimb.tensor.decomp¶
Functions for decomposing and projecting matrices.
Module Contents¶
- quimb.tensor.decomp._CUTOFF_MODE_MAP¶
- quimb.tensor.decomp.map_cutoff_mode(cutoff_mode)[source]¶
Map mode to an integer for compatibility with numba.
- quimb.tensor.decomp.rdmul(x, d)[source]¶
Right-multiplication of a matrix by a vector representing a diagonal.
- quimb.tensor.decomp.rddiv(x, d)[source]¶
Right-multiplication of a matrix by a vector representing an inverse diagonal.
- quimb.tensor.decomp.ldmul(d, x)[source]¶
Left-multiplication of a matrix by a vector representing a diagonal.
- quimb.tensor.decomp.lddiv(d, x)[source]¶
Left-multiplication of a matrix by a vector representing an inverse diagonal.
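These four helpers amount to broadcasted elementwise products, avoiding ever forming the dense diagonal matrix. A minimal numpy sketch (hypothetical re-implementations, not the library code):

```python
import numpy as np

# hedged sketch: the four diagonal helpers as broadcasted products
def rdmul(x, d):
    # x @ diag(d): scale the columns of x
    return x * d.reshape(1, -1)

def rddiv(x, d):
    # x @ diag(1 / d): divide the columns of x
    return x / d.reshape(1, -1)

def ldmul(d, x):
    # diag(d) @ x: scale the rows of x
    return x * d.reshape(-1, 1)

def lddiv(d, x):
    # diag(1 / d) @ x: divide the rows of x
    return x / d.reshape(-1, 1)
```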
- quimb.tensor.decomp.sgn(x)[source]¶
Get the ‘sign’ of x, such that x / sgn(x) is real and non-negative.
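For complex entries this is the unit phase; a hedged numpy sketch (the name sgn_sketch and the zero-maps-to-one convention are assumptions for illustration):

```python
import numpy as np

# hedged sketch of such a sign function: zeros map to 1 so that
# dividing by sgn(x) is always safe
def sgn_sketch(x):
    x = np.asarray(x)
    a = np.abs(x)
    safe = np.where(a == 0, 1.0, a)
    return np.where(a == 0, 1.0, x / safe)
```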
- quimb.tensor.decomp._trim_and_renorm_svd_result(U, s, VH, cutoff, cutoff_mode, max_bond, absorb, renorm)[source]¶
Given the full SVD decomposition result U, s, VH, optionally trim, renormalize, and absorb the singular values. See svd_truncated for details.
- quimb.tensor.decomp.svd_truncated(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0, backend=None)[source]¶
Truncated SVD of a raw array x.
- Parameters:
cutoff (float, optional) – Singular value cutoff threshold; if cutoff <= 0.0, then only max_bond is used.
cutoff_mode ({1, 2, 3, 4, 5, 6}, optional) – How to perform the trim:
1: [‘abs’], trim values below cutoff
2: [‘rel’], trim values below s[0] * cutoff
3: [‘sum2’], trim s.t. sum(s_trim**2) < cutoff.
4: [‘rsum2’], trim s.t. sum(s_trim**2) < sum(s**2) * cutoff.
5: [‘sum1’], trim s.t. sum(s_trim**1) < cutoff.
6: [‘rsum1’], trim s.t. sum(s_trim**1) < sum(s**1) * cutoff.
max_bond (int, optional) – An explicit maximum bond dimension; use -1 for none.
absorb ({-1, 0, 1, None}, optional) – How to absorb the singular values: -1: left, 0: both, 1: right, and None: don’t absorb (return them).
renorm ({0, 1}, optional) – Whether to renormalize the singular values (depends on cutoff_mode).
- quimb.tensor.decomp._compute_number_svals_to_keep_numba(s, cutoff, cutoff_mode)[source]¶
Find the number of singular values to keep of s given cutoff and cutoff_mode.
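As an illustration of one of the trimming rules above, a hedged numpy sketch of the ‘rsum2’ rule (cutoff_mode=4); the helper name n_keep_rsum2 is hypothetical:

```python
import numpy as np

# hedged sketch of the 'rsum2' rule (cutoff_mode=4): drop the smallest
# singular values while their summed square stays below
# cutoff * sum(s**2)
def n_keep_rsum2(s, cutoff):
    s = np.sort(np.asarray(s))[::-1]        # descending
    tail = np.cumsum(s[::-1] ** 2)          # squared weight of the tail
    droppable = tail < cutoff * np.sum(s**2)
    return s.size - np.count_nonzero(droppable)
```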
- quimb.tensor.decomp._compute_svals_renorm_factor_numba(s, n_chi, renorm)[source]¶
Find the normalization constant for s such that the new sum squared of the n_chi largest values equals the sum squared of all the old ones.
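This condition pins down the constant directly; a hedged sketch (the name renorm_factor is an assumption):

```python
import numpy as np

# hedged sketch: scale the kept n_chi values so their total squared
# weight matches that of the full spectrum
def renorm_factor(s, n_chi):
    s = np.asarray(s)
    return (np.sum(s**2) / np.sum(s[:n_chi] ** 2)) ** 0.5
```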
- quimb.tensor.decomp._trim_and_renorm_svd_result_numba(U, s, VH, cutoff, cutoff_mode, max_bond, absorb, renorm)[source]¶
Accelerated version of _trim_and_renorm_svd_result.
- quimb.tensor.decomp.svd_truncated_numba(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
Accelerated version of svd_truncated for numpy arrays.
- quimb.tensor.decomp.svd_truncated_numpy(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
Numpy version of svd_truncated, trying the accelerated version first, then falling back to the more stable scipy version.
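The try/fallback pattern described can be sketched as follows (a hedged illustration, not the library code; it uses scipy's two LAPACK SVD drivers):

```python
import numpy as np
import scipy.linalg as sla

# hedged sketch: try the fast gesdd driver first, fall back to the
# slower but more robust gesvd driver if it fails to converge
def robust_svd(x):
    try:
        return sla.svd(x, full_matrices=False, lapack_driver="gesdd")
    except sla.LinAlgError:
        return sla.svd(x, full_matrices=False, lapack_driver="gesvd")
```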
- quimb.tensor.decomp.svd_truncated_lazy(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
- quimb.tensor.decomp.lu_truncated(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0, backend=None)[source]¶
- quimb.tensor.decomp._svd_via_eig_truncated_numba(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
SVD-split via eigen-decomposition.
- quimb.tensor.decomp.svd_via_eig_truncated(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
- quimb.tensor.decomp.svdvals_eig(x)[source]¶
SVD-decomposition via eigen, but return singular values only.
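The eigen route to singular values can be sketched as follows (hedged illustration; the name svdvals_via_eig is an assumption):

```python
import numpy as np

# hedged sketch: singular values of x from the hermitian eigenvalues
# of the smaller of x^H x and x x^H
def svdvals_via_eig(x):
    if x.shape[1] <= x.shape[0]:
        g = x.conj().T @ x
    else:
        g = x @ x.conj().T
    el = np.linalg.eigvalsh(g)
    return np.sqrt(np.clip(el, 0.0, None))[::-1]   # descending
```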
- quimb.tensor.decomp.eigh_truncated(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0, backend=None)[source]¶
- quimb.tensor.decomp.eigh_truncated_numba(x, cutoff=-1.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
SVD-decomposition using hermitian eigen-decomposition; only works if x is hermitian.
- quimb.tensor.decomp._choose_k(x, cutoff, max_bond)[source]¶
Choose the number of singular values to target.
- quimb.tensor.decomp.svds(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
SVD-decomposition using iterative methods. Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient. Can also supply scipy.sparse.linalg.LinearOperator.
- quimb.tensor.decomp.isvd(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
SVD-decomposition using interpolative matrix random methods. Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient. Can also supply scipy.sparse.linalg.LinearOperator.
- quimb.tensor.decomp._rsvd_numpy(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
- quimb.tensor.decomp.rsvd(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
SVD-decomposition using randomized methods (due to Halko). Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient. Can also supply scipy.sparse.linalg.LinearOperator.
- quimb.tensor.decomp.eigsh(x, cutoff=0.0, cutoff_mode=4, max_bond=-1, absorb=0, renorm=0)[source]¶
SVD-decomposition using iterative hermitian eigen-decomposition, thus assuming that x is hermitian. Allows the computation of only a certain number of singular values, e.g. max_bond, from the get-go, and is thus more efficient. Can also supply scipy.sparse.linalg.LinearOperator.
- quimb.tensor.decomp.qr_stabilized(x, backend=None)[source]¶
QR-decomposition, with stabilized R factor.
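The stabilization fixes the gauge freedom of QR by making the diagonal of R non-negative; a hedged sketch (qr_stabilized_sketch is a hypothetical name):

```python
import numpy as np

# hedged sketch: absorb the phases of diag(R) into Q so that the
# diagonal of R becomes real and non-negative, leaving Q @ R unchanged
def qr_stabilized_sketch(x):
    Q, R = np.linalg.qr(x)
    d = np.diagonal(R)
    a = np.abs(d)
    s = np.where(a == 0, 1.0, d / np.where(a == 0, 1.0, a))  # unit phases
    return Q * s.reshape(1, -1), s.conj().reshape(-1, 1) * R
```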
- quimb.tensor.decomp._cholesky_numba(x, cutoff=-1, cutoff_mode=4, max_bond=-1, absorb=0)[source]¶
SVD-decomposition using cholesky decomposition; only works if x is positive definite.
- quimb.tensor.decomp._similarity_compress_fns¶
- quimb.tensor.decomp.isometrize_qr(x, backend=None)[source]¶
Perform isometrization using the QR decomposition.
- quimb.tensor.decomp.isometrize_svd(x, backend=None)[source]¶
Perform isometrization using the SVD decomposition.
- quimb.tensor.decomp.isometrize_exp(x, backend)[source]¶
Perform isometrization using anti-symmetric matrix exponentiation.
\[U_A = \exp \left( X - X^\dagger \right)\]
If x is rectangular it is completed with zeros first.
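For square x the formula can be sketched without a dedicated expm routine, since the anti-hermitian part diagonalizes under a hermitian eigen-decomposition (a hedged illustration; the name is hypothetical):

```python
import numpy as np

# hedged sketch for square x: exponentiate A = x - x^H by noting
# that 1j * A is hermitian, so A = -1j V diag(w) V^H with real w,
# giving exp(A) = V diag(exp(-1j w)) V^H, which is exactly unitary
def isometrize_exp_sketch(x):
    A = x - x.conj().T
    w, V = np.linalg.eigh(1j * A)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T
```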
- quimb.tensor.decomp.isometrize_cayley(x, backend)[source]¶
Perform isometrization using an anti-symmetric Cayley transform.
\[U_A = (I + \dfrac{A}{2})(I - \dfrac{A}{2})^{-1}\]
where \(A = X - X^\dagger\). If x is rectangular it is completed with zeros first.
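For square x the transform above is exactly unitary; a hedged sketch (the name is hypothetical):

```python
import numpy as np

# hedged sketch for square x: Cayley transform of the anti-hermitian
# part A = x - x^H, which maps A to a unitary matrix
def isometrize_cayley_sketch(x):
    A = x - x.conj().T
    Id = np.eye(x.shape[0])
    return (Id + A / 2) @ np.linalg.inv(Id - A / 2)
```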
- quimb.tensor.decomp.isometrize_modified_gram_schmidt(A, backend=None)[source]¶
Perform isometrization explicitly using the modified Gram Schmidt procedure (this is slow but a useful reference).
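A minimal reference sketch of the modified Gram Schmidt procedure on columns (hedged; assumes linearly independent columns, and the name is hypothetical):

```python
import numpy as np

# hedged sketch of modified Gram-Schmidt: normalize each column,
# then immediately remove its component from all later columns
def isometrize_mgs_sketch(A):
    Q = np.array(A, dtype=complex, copy=True)
    for j in range(Q.shape[1]):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        for k in range(j + 1, Q.shape[1]):
            Q[:, k] -= (Q[:, j].conj() @ Q[:, k]) * Q[:, j]
    return Q
```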
- quimb.tensor.decomp.isometrize_torch_householder(x)[source]¶
Isometrize x using the Householder reflection method, as implemented by the torch_householder package.
- quimb.tensor.decomp._ISOMETRIZE_METHODS¶
- quimb.tensor.decomp.isometrize(x, method='qr')[source]¶
Generate an isometric (or unitary if square) / orthogonal matrix from array x.
- Parameters:
x (array) – The matrix to project into isometric form.
method (str, optional) – The method used to generate the isometry. The options are:
“qr”: use the Q factor of the QR decomposition of x, with the constraint that the diagonal of R is positive.
“svd”: use U @ VH of the SVD decomposition of x. This is useful for finding the ‘closest’ isometric matrix to x, such as when it has been expanded with noise etc., but is less stable for differentiation / optimization.
“exp”: use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.
“cayley”: use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with HIPS/autograd e.g.), but more expensive for non-square x.
“householder”: use the Householder reflection method directly. This requires that the backend implements “linalg.householder_product”.
“torch_householder”: use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is “torch”. This is generally the best parametrizing method for “torch” if available.
“mgs”: use a python implementation of the modified Gram Schmidt method directly. This is slow if not compiled, but a useful reference.
Not all backends support all methods or differentiating through all methods.
- Returns:
Q – The isometrization / orthogonalization of x.
- Return type:
array
- quimb.tensor.decomp.squared_op_to_reduced_factor(x2, dl, dr, right=True)[source]¶
Given the square, x2, of an operator x, compute either the left or right reduced factor matrix of the unsquared operator x with original shape (dl, dr).
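For the right case this amounts to factoring x2 = x^H x as R^H R; a hedged sketch via the hermitian eigen-decomposition (the name right_reduced_factor is an assumption):

```python
import numpy as np

# hedged sketch: a right reduced factor R of x from its square
# x2 = x^H x, so that R^H R = x2 and x = (x R^+) R with the
# prefactor isometric
def right_reduced_factor(x2):
    el, ev = np.linalg.eigh(x2)
    el = np.clip(el, 0.0, None)     # guard against tiny negative values
    return np.sqrt(el)[:, None] * ev.conj().T
```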
- quimb.tensor.decomp.compute_oblique_projectors(Rl, Rr, max_bond, cutoff, absorb='both', cutoff_mode=4, **compress_opts)[source]¶
Compute the oblique projectors for two reduced factor matrices that describe a gauge on a bond. Concretely, assuming that Rl and Rr are the reduced factor matrices for local operator A, such that:
\[A = Q_L R_L R_R Q_R\]
with Q_L and Q_R isometric matrices, then the optimal inner truncation is given by:
\[A' = Q_L P_L P_R' Q_R\]
- Parameters:
Rl (array) – The left reduced factor matrix.
Rr (array) – The right reduced factor matrix.
- Returns:
Pl (array) – The left oblique projector.
Pr (array) – The right oblique projector.
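One standard construction of such projectors, sketched here under the assumption that they come from the truncated SVD of the bond matrix Rl @ Rr with the inverse singular values split evenly between the two sides (the function name is hypothetical):

```python
import numpy as np

# hedged sketch: oblique projectors from Rl @ Rr = U s VH, so that
# inserting Pl @ Pr on the bond reproduces the rank-max_bond
# truncation of Rl @ Rr
def oblique_projectors_sketch(Rl, Rr, max_bond):
    U, s, VH = np.linalg.svd(Rl @ Rr, full_matrices=False)
    U, s, VH = U[:, :max_bond], s[:max_bond], VH[:max_bond]
    isq = 1.0 / np.sqrt(s)
    Pl = Rr @ VH.conj().T * isq.reshape(1, -1)
    Pr = isq.reshape(-1, 1) * (U.conj().T @ Rl)
    return Pl, Pr
```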