# Changelog

Release notes for `quimb`.
## v1.8.5 (unreleased)

Enhancements:

- expose `qtn.edge_coloring` as a top level function and allow layers to be returned grouped.
- add docstring for `tn.contract_compressed`
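For illustration, the grouped-layers idea behind edge coloring can be sketched with a simple greedy proper coloring in plain Python. This is a hypothetical helper, not `qtn.edge_coloring`'s actual implementation:

```python
def greedy_edge_coloring(edges):
    """Assign each edge the smallest color not already used by an edge
    sharing one of its vertices, then return the edges grouped by color
    ('layers'). A greedy sketch of proper edge coloring only."""
    colors_at = {}  # vertex -> set of colors already incident on it
    layers = {}     # color -> list of edges with that color
    for u, v in edges:
        used = colors_at.setdefault(u, set()) | colors_at.setdefault(v, set())
        c = 0
        while c in used:
            c += 1
        colors_at[u].add(c)
        colors_at[v].add(c)
        layers.setdefault(c, []).append((u, v))
    return [layers[c] for c in sorted(layers)]
```

Within each returned layer no two edges share a vertex, so e.g. gates on those edges could be applied simultaneously.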
## v1.8.4 (2024-07-20)

Bug fixes:
## v1.8.3 (2024-07-10)

Enhancements:

- support for numpy v2.0 and scipy v1.14
- add MPS sampling: `MatrixProductState.sample_configuration` and `MatrixProductState.sample` (generating multiple samples), and use these for `CircuitMPS.sample` and `CircuitPermMPS.sample`
- add basic `.plot()` method for SimpleUpdate classes
- add `edges_1d_chain` for generating 1D chain edges
- operatorbuilder: better coefficient placement for long range MPO building
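The sequential ("perfect") sampling idea behind MPS sampling can be sketched in plain numpy: sweep left to right, drawing each site's outcome from the conditional distribution, then condition the left environment on it. A sketch only, assuming a right-canonical MPS, and not quimb's API:

```python
import numpy as np

def sample_mps(tensors, rng=None):
    """Draw one configuration from a right-canonical MPS, given as a
    list of (left, phys, right)-shaped arrays."""
    rng = np.random.default_rng(rng)
    env = np.ones(1)  # left boundary environment
    config = []
    for A in tensors:
        vecs = np.einsum('l,lsr->sr', env, A)  # conditional vectors per outcome
        probs = np.einsum('sr,sr->s', vecs, vecs.conj()).real
        probs /= probs.sum()
        s = int(rng.choice(len(probs), p=probs))
        config.append(s)
        v = vecs[s]
        env = v / np.linalg.norm(v)  # condition on the drawn outcome
    return config

# e.g. the GHZ state (|00> + |11>) / sqrt(2) as a right-canonical MPS:
A1 = np.zeros((1, 2, 2)); A1[0, 0, 0] = A1[0, 1, 1] = 2 ** -0.5
A2 = np.zeros((2, 2, 1)); A2[0, 0, 0] = A2[1, 1, 0] = 1.0
# samples are always perfectly correlated: [0, 0] or [1, 1]
```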
## v1.8.2 (2024-06-12)

Enhancements:

- `TNOptimizer` can now accept an arbitrary pytree (nested combination of dicts, lists, tuples, etc. with `TensorNetwork`, `Tensor` or raw `array_like` objects as the leaves) as the target object to optimize.
- `TNOptimizer` can now directly optimize `Circuit` objects, returning a new optimized circuit with updated parameters.
- `Circuit`: add `.copy()`, `.get_params()` and `.set_params()` interface methods.
- update generic TN optimizer docs.
- add `tn.gen_inds_loops` for generating all loops of indices in a TN.
- add `tn.gen_inds_connected` for generating all connected sets of indices in a TN.
- make SVD fallback error catching more generic (PR #238)
- fix some windows + numba CI issues.
- `approx_spectral_function`: add plotting and tracking
- add dispatching to various tensor primitives to allow overriding
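A "pytree" here just means an arbitrarily nested container whose leaves are the actual objects of interest. The traversal idea can be sketched in a few lines of plain Python (illustrative only, not `TNOptimizer`'s internals):

```python
def tree_map(fn, tree):
    """Apply fn to every leaf of a pytree (nested dicts / lists /
    tuples), rebuilding the same structure around the results."""
    if isinstance(tree, dict):
        return {k: tree_map(fn, v) for k, v in tree.items()}
    if isinstance(tree, (list, tuple)):
        return type(tree)(tree_map(fn, v) for v in tree)
    # a leaf: e.g. a TensorNetwork, Tensor or raw array
    return fn(tree)
```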
## v1.8.1 (2024-05-06)

Enhancements:

- `CircuitMPS` now supports multi qubit gates, including arbitrary multi-controls (which are treated in a low-rank manner), and faster simulation via better orthogonality center tracking.
- add `CircuitPermMPS`
- add `MatrixProductState.gate_nonlocal` for applying a gate, supplied as a raw matrix, to a non-local and arbitrary number of sites. The kwarg `contract="nonlocal"` can be used to force this method, or the new option `"auto-mps"` will select this method if the gate is non-local (GH 230)
- add `MatrixProductState.gate_with_mpo` for applying an MPO to an MPS, and immediately compressing back to MPS form using `tensor_network_1d_compress`
- add `MatrixProductState.gate_with_submpo` for applying an MPO acting only on a subset of sites to an MPS
- add `MatrixProductOperator.from_dense` for constructing MPOs from dense matrices, including on only a subset of sites
- add `MatrixProductOperator.fill_empty_sites` for 'completing' an MPO which only has tensors on a subset of sites with (by default) identities
- `MatrixProductState` and `MatrixProductOperator` now support the `sites` kwarg in common constructors, enabling the TN to act on a subset of the full `L` sites.
- add `TensorNetwork.drape_bond_between` for 'draping' an existing bond between two tensors through a third
- TN2D, TN3D and arbitrary geometry classical partition function builders (`TN_classical_partition_function_from_edges`) now all support an `outputs=` kwarg specifying non-marginalized variables
- add simple dense 1-norm belief propagation algorithm `D1BP`
- add `qtn.enforce_1d_like` for checking whether a tensor network is 1D-like, including automatically adding strings of identities between non-local bonds, expanding the applicability of `tensor_network_1d_compress`
- add `MatrixProductState.canonicalize` as a (by default non-inplace) version of `canonize`, to follow the pattern of other tensor network methods. `canonize` is now an alias for `canonicalize_` [note the trailing underscore].
- add `MatrixProductState.left_canonicalize` as a (by default non-inplace) version of `left_canonize`, to follow the pattern of other tensor network methods. `left_canonize` is now an alias for `left_canonicalize_` [note the trailing underscore].
- add `MatrixProductState.right_canonicalize` as a (by default non-inplace) version of `right_canonize`, to follow the pattern of other tensor network methods. `right_canonize` is now an alias for `right_canonicalize_` [note the trailing underscore].
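Applying an MPO to an MPS site by site, before any recompression, just contracts the physical legs and fuses the virtual bonds, so bond dimensions multiply. A plain numpy sketch of that uncompressed first step (not quimb's implementation, which immediately recompresses):

```python
import numpy as np

def apply_mpo_to_mps(mpo, mps):
    """Contract an MPO (tensors shaped (Bl, s_out, s_in, Br)) into an
    MPS (tensors shaped (Dl, s, Dr)) site by site, fusing virtual bonds."""
    out = []
    for W, A in zip(mpo, mps):
        T = np.einsum('aSsb,lsr->alSbr', W, A)
        Bl, Dl, S, Br, Dr = T.shape
        # fuse the two left bonds and the two right bonds
        out.append(T.reshape(Bl * Dl, S, Br * Dr))
    return out

def mps_to_dense(tensors):
    """Contract an open-boundary MPS into its dense amplitude tensor."""
    theta = tensors[0]
    for A in tensors[1:]:
        theta = np.tensordot(theta, A, axes=(-1, 0))
    return theta.reshape(theta.shape[1:-1])
```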
Bug fixes:

- `Circuit.apply_gate_raw`: fix kwarg bug (PR 226)
- fix for retrieving `opt_einsum.PathInfo` for single scalar contraction (GH 231)
## v1.8.0 (2024-04-10)

Breaking Changes

- all singular value renormalization is turned off by default
- `TensorNetwork.compress_all` now defaults to using some local gauging

Enhancements:

- add `quimb.tensor.tensor_1d_compress.py` with functions for compressing generic 1D tensor networks (with arbitrary local structure) using various methods:
  - the 'direct' method: `tensor_network_1d_compress_direct`
  - the 'dm' (density matrix) method: `tensor_network_1d_compress_dm`
  - the 'zipup' method: `tensor_network_1d_compress_zipup`
  - the 'zipup-first' method: `tensor_network_1d_compress_zipup_first`
  - the 1- and 2-site 'fit' or sweeping method: `tensor_network_1d_compress_fit`
  - ... and some more niche methods for debugging and testing.

  These can all be accessed via the unified function `tensor_network_1d_compress`. Boundary contraction in 2D can now utilize any of these methods.
- add `quimb.tensor.tensor_arbgeom_compress.py` with functions for compressing arbitrary geometry tensor networks using various methods:
  - the 'local-early' method: `tensor_network_ag_compress_local_early`
  - the 'local-late' method: `tensor_network_ag_compress_local_late`
  - the 'projector' method: `tensor_network_ag_compress_projector`
  - the 'superorthogonal' method: `tensor_network_ag_compress_superorthogonal`
  - the 'l2bp' method: `tensor_network_ag_compress_l2bp`

  These can all be accessed via the unified function `tensor_network_ag_compress`. 1D compression can also fall back to these methods.
- support PBC in `tn2d.contract_hotrg`, `tn2d.contract_ctmrg`, `tn3d.contract_hotrg` and the new function `tn3d.contract_ctmrg`.
- support PBC in `gen_2d_bonds` and `gen_3d_bonds`, with the `cyclic` kwarg.
- support PBC in `TN2D_rand_hidden_loop` and `TN3D_rand_hidden_loop`, with the `cyclic` kwarg.
- support PBC in the various base PEPS and PEPO construction methods.
- add `tensor_network_apply_op_op` for applying 'operator' TNs to 'operator' TNs.
- tweak `tensor_network_apply_op_vec` for applying 'operator' TNs to 'vector' or 'state' TNs.
- add `tnvec.gate_with_op_lazy` method for applying 'operator' TNs to 'vector' or 'state' TNs like \(x \rightarrow A x\).
- add `tnop.gate_upper_with_op_lazy` method for applying 'operator' TNs to the upper indices of 'operator' TNs like \(B \rightarrow A B\).
- add `tnop.gate_lower_with_op_lazy` method for applying 'operator' TNs to the lower indices of 'operator' TNs like \(B \rightarrow B A\).
- add `tnop.gate_sandwich_with_op_lazy` method for applying 'operator' TNs to the upper and lower indices of 'operator' TNs like \(B \rightarrow A B A^\dagger\).
- unify all TN summing routines into `tensor_network_ag_sum`, which allows summing any two tensor networks with matching site tags and outer indices, replacing the specific MPS, MPO, PEPS, PEPO, etc. summing routines.
- add `rand_symmetric_array`, `rand_tensor_symmetric` and `TN2D_rand_symmetric` for generating random symmetric arrays, tensors and 2D tensor networks.
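The core move shared by these 1D compression methods is a sweep of truncated SVDs along the chain. A minimal numpy sketch in the spirit of the 'direct' method (for a truly optimal truncation the MPS should first be brought to canonical form, omitted here for brevity; this is not `tensor_network_1d_compress_direct` itself):

```python
import numpy as np

def compress_mps(tensors, max_bond):
    """Compress an open-boundary MPS (tensors shaped (Dl, s, Dr)) with
    a single left-to-right sweep of truncated SVDs."""
    out = []
    carry = np.ones((1, 1))
    for A in tensors:
        # absorb the factor carried from the previous site
        A = np.einsum('xl,lsr->xsr', carry, A)
        Dl, d, Dr = A.shape
        U, S, Vh = np.linalg.svd(A.reshape(Dl * d, Dr), full_matrices=False)
        k = min(max_bond, len(S))
        out.append(U[:, :k].reshape(Dl, d, k))
        carry = S[:k, None] * Vh[:k]  # pushed into the next site
    # absorb the leftover (k, 1) factor into the last tensor
    out[-1] = np.einsum('lsk,kx->lsx', out[-1], carry)
    return out
```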
Bug fixes:
## v1.7.3 (2024-02-08)

Enhancements:

- `qu.randn`: support `dist="rademacher"`.
- support `dist` and other `randn` options in various TN builders.
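A Rademacher distribution is just uniform ±1 entries. A minimal sketch of the idea (hypothetical helper, not `qu.randn`'s full signature):

```python
import numpy as np

def randn(shape, dist="normal", seed=None):
    """Random array with a dist="rademacher" option (uniform +/-1
    entries) alongside the usual Gaussian."""
    rng = np.random.default_rng(seed)
    if dist == "rademacher":
        return rng.choice([-1.0, 1.0], size=shape)
    return rng.normal(size=shape)
```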
Bug fixes:

- restore fallback (to `scipy.linalg.svd` with `driver='gesvd'`) behavior for truncated SVD with numpy backend.
## v1.7.2 (2024-01-30)

Enhancements:

- add `normalized=True` option to `tensor_network_distance` for computing the normalized distance between tensor networks: \(2 |A - B| / (|A| + |B|)\), which is useful for convergence checks. `Tensor.distance_normalized` and `TensorNetwork.distance_normalized` added as aliases.
- add `TensorNetwork.cut_bond` for cutting a bond index
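For dense arrays the normalized distance reduces to a two-line computation (quimb computes the same quantity between tensor networks without densifying them):

```python
import numpy as np

def distance_normalized(a, b):
    """Normalized distance 2|A - B| / (|A| + |B|) between two arrays.
    Ranges from 0 (equal) to 2 (e.g. A = -B)."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 2 * np.linalg.norm(a - b) / (na + nb)
```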
Bug fixes:

- removed import of deprecated `numba.generated_jit` decorator.
## v1.7.1 (2024-01-30)

Enhancements:

- add `TensorNetwork.visualize_tensors` for visualizing the actual data entries of an entire tensor network.
- add `ham.build_mpo_propagator_trotterized` for building a trotterized propagator from a local 1D hamiltonian. This also includes updates for creating 'empty' tensor networks using `TensorNetwork.new`, and building up gates from empty tensor networks using `TensorNetwork.gate_inds_with_tn`.
- add more options to `Tensor.expand_ind` and `Tensor.new_ind`: repeat tiling mode and random padding mode.
- tensor decomposition: make `eigh_truncated` backend agnostic.
- `tensor_compress_bond`: add `reduced="left"` and `reduced="right"` modes for when the pair of tensors is already in a canonical form.
- add `qtn.TN2D_embedded_classical_ising_partition_function` for constructing 2D (triangular) tensor networks representing all-to-all classical ising partition functions.
Bug fixes:

- fix bug in `kraus_op` when operator spanned multiple subsystems (GH 214)
- fix bug in `qr_stabilized` when the diagonal of `R` has significant imaginary parts.
- fix bug in quantum discord computation when the state was diagonal (GH 217)
## v1.7.0 (2023-12-08)

Breaking Changes

- `Circuit`: remove `target_size` in preparation for all contraction specifications to be encapsulated at the contract level (e.g. with `cotengra`)
- some TN drawing options (mainly arrow options) have changed due to the backend change detailed below.

Enhancements:

- `TensorNetwork.draw`: use `quimb.schematic` for main `backend="matplotlib"` drawing, enabling:
  - multi tag coloring for single tensors
  - arrows and labels on multi-edges
  - better sizing of tensors using absolute units
  - neater single tensor drawing, in 2D and 3D
- add `quimb.schematic.Drawing` from experimental submodule, add example docs at schematic - manual drawing. Add methods `text_between`, `wedge`, `line_offset` and other tweaks for future use by main TN drawing.
- upgrade all contraction to use `cotengra` as the backend
- `Circuit`: allow any gate to be controlled by any number of qubits.
- `Circuit`: support for parsing `openqasm2` specifications, now with custom and nested gate definitions etc.
- add `is_cyclic_x`, `is_cyclic_y` and `is_cyclic_z` to `TensorNetwork2D` and `TensorNetwork3D`.
- add `TensorNetwork.compress_all_1d` for compressing generic tensor networks that you promise have a 1D topology, without casting as a `TensorNetwork1D`.
- add `MatrixProductState.from_fill_fn` for constructing MPS from a function that fills the tensors.
- add `Tensor.idxmin` and `Tensor.idxmax` for finding the index of the minimum/maximum element.
- 2D and 3D classical partition function TN builders: allow output indices.
- `quimb.experimental.belief_propagation`: add various 1-norm/2-norm dense/lazy BP algorithms.
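The idxmin/idxmax idea, locating an extremal entry and reporting it per named index, can be sketched with `np.unravel_index` (illustrative, not `Tensor.idxmax`'s exact return type):

```python
import numpy as np

def idxmax(data, inds):
    """Return the position of a tensor's largest entry as a mapping
    from index names to coordinates."""
    flat = np.argmax(data)  # position in the flattened array
    return dict(zip(inds, np.unravel_index(flat, data.shape)))
```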
Bug fixes:

- fixed bug where an output index could be removed by squeezing when performing tensor network simplifications.
## v1.6.0 (2023-09-10)

Breaking Changes

- Quantum circuit RZZ definition corrected (angle changed by -1/2 to match qiskit).

Enhancements:

- add OpenQASM 2.0 parsing support: `Circuit.from_openqasm2_file`
- `Circuit`: add RXX, RYY, CRX, CRY, CRZ, toffoli, fredkin, givens gates
- truncate TN pretty html representation to 100 tensors for performance
- `contract_compressed`: default to 'virtual-tree' gauge
- add `TN_rand_tree`
- `experimental.operatorbuilder`: fix parallel and heisenberg builder
- make parametrized gate generation even more robust (ensure matching types so e.g. tensorflow can be used)
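The corrected qiskit convention is \(RZZ(\theta) = \exp(-i \theta/2\, Z \otimes Z)\), which is diagonal in the computational basis. A minimal numpy sketch of that definition (not quimb's gate implementation):

```python
import numpy as np

def rzz(theta):
    """RZZ(theta) = exp(-i theta/2 Z (x) Z), diagonal with phase
    -theta/2 on |00>, |11> and +theta/2 on |01>, |10>."""
    zz = np.array([1, -1, -1, 1])  # eigenvalues of Z (x) Z
    return np.diag(np.exp(-0.5j * theta * zz))
```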
Bug fixes:

- fix gauge size check for some backends
## v1.5.1 (2023-07-28)

Enhancements:

- add `MPS_COPY()`.
- add 'density matrix' and 'zip-up' MPO-MPS algorithms.
- add `drop_tags` option to `tensor_contract()`
- `compress_all_simple()`: allow cutoff.
- add structure checking debug methods: `Tensor.check()` and `TensorNetwork.check()`.
- add several direct contraction utility functions: `get_symbol()`, `inds_to_eq()` and `array_contract()`.
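The translation from named indices to an einsum equation can be sketched in a few lines (a simplified version of the `inds_to_eq` idea, assuming fewer than 52 distinct indices; not quimb's implementation):

```python
import string

import numpy as np

def inds_to_eq(inputs, output):
    """Translate tuples of index names into an einsum equation,
    assigning each distinct index the next unused letter."""
    symbols = {}

    def sym(ix):
        if ix not in symbols:
            symbols[ix] = string.ascii_letters[len(symbols)]
        return symbols[ix]

    lhs = ",".join("".join(map(sym, term)) for term in inputs)
    rhs = "".join(map(sym, output))
    return f"{lhs}->{rhs}"
```

For example `inds_to_eq([("i", "j"), ("j", "k")], ("i", "k"))` gives an equation suitable for `np.einsum` matrix multiplication.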
Bug fixes:

- `Circuit`: use stack for more robust parametrized gate generation
- fix for `gate_with_auto_swap()` for `i > j`.
- fix bug where calling `tn.norm()` would mangle indices.
## v1.5.0 (2023-05-03)

Enhancements

- refactor 'isometrize' methods including new "cayley", "householder" and "torch_householder" methods. See `quimb.tensor.decomp.isometrize()`.
- add `compute_reduced_factor()` and `insert_compressor_between_regions()` methods, for some RG style algorithms.
- add the `mode="projector"` option for 2D tensor network contractions
- add HOTRG style coarse graining and contraction in 2D and 3D. See `coarse_grain_hotrg()` and `contract_hotrg()` (in both 2D and 3D).
- add CTMRG style contraction for 2D tensor networks: `contract_ctmrg()`
- add 2D tensor network 'corner double line' (CDL) builders: `TN2D_corner_double_line()`
- update the docs to use the furo theme, myst_nb for notebooks, and several other `sphinx` extensions.
- add the `'adabelief'` optimizer to `TNOptimizer` as well as a quick plotter: `plot()`
- add initial 3D plotting methods for tensor networks (`TensorNetwork.draw(dim=3, backend='matplotlib3d')` or `TensorNetwork.draw(dim=3, backend='plotly')`). The new `backend='plotly'` can also be used for 2D interactive plots.
- update `HTN_from_cnf()` to handle more weighted model counting formats.
- add `cnf_file_parse()`
- add `convert_to_2d()`
- add `convert_to_3d()`
- various optimizations for minimizing computational graph size and construction time.
- add `'lu'`, `'polar_left'` and `'polar_right'` methods to `tensor_split()`.
- add experimental arbitrary hamiltonian MPO building
- `TensorNetwork`: allow empty constructor (i.e. no tensors, representing simply the scalar 1)
- `drop_tags()`: allow all tags to be dropped
- tweaks to compressed contraction and gauging
- add jax, flax and optax example
- add 3D and interactive plotting of tensor networks via plotly.
- add pygraphviz layout options
- add `combine()` for unified handling of combining tensor networks, potentially with structure
- add HTML colored pretty printing of tensor networks for notebooks
- add `quimb.experimental.cluster_update.py`
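"Isometrizing" means mapping an unconstrained matrix to an isometry \(Q\) with \(Q^\dagger Q = I\). One simple method, alongside the "cayley" and "householder" parametrizations mentioned above, is projection via the SVD (a sketch, not quimb's exact implementation):

```python
import numpy as np

def isometrize_svd(x):
    """Project an (m >= n) matrix to the nearest isometry by
    discarding its singular values: x = U S V^H  ->  Q = U V^H."""
    u, _, vh = np.linalg.svd(x, full_matrices=False)
    return u @ vh
```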
Bug fixes:

- fix `qr_stabilized()` bug for strictly upper triangular R factors.
## v1.4.2 (2022-11-28)

Enhancements

- move from versioneer to setuptools_scm for versioning
## v1.4.1 (2022-11-28)

Enhancements

- unify much functionality from 1D, 2D and 3D into the general arbitrary geometry class `quimb.tensor.tensor_arbgeom.TensorNetworkGen`
- refactor contraction, allowing using cotengra directly
- add `visualize()` for visualizing the actual data entries of an arbitrarily high dimensional tensor
- add `Gate` class for more robust tracking and manipulation of gates in quantum `Circuit` simulation
- tweak TN drawing style and layout
- tweak default gauging options of compressed contraction
- add `as_network()`
- add `inds_size()`
- add `get_hyperinds()`
- add `outer_size()`
- improve `group_inds()`
- refactor tensor decomposition and 'isometrization' methods
- begin supporting pytree specifications in `TNOptimizer`, e.g. for constants
- add `experimental` submodule for new sharing features
- register tensor and tensor network objects with `jax` pytree interface (PR 150)
- update CI infrastructure

Bug fixes:
## v1.4.0 (2022-06-14)

Enhancements

- Add 2D tensor network support and algorithms
- Add 3D tensor network infrastructure
- Add arbitrary geometry quantum state infrastructure
- Many changes to `TNOptimizer`
- Many changes to TN drawing
- Many changes to `Circuit` simulation
- Many improvements to TN simplification
- Make all tag and index operations deterministic
- Add `tensor_network_sum()`, `tensor_network_distance()` and `fit()`
- Various memory and performance improvements
- Various graph generators and TN builders
## v1.3.0 (2020-02-18)

Enhancements

- Added time dependent evolutions to `Evolution` when integrating a pure state - see Time-Dependent Evolutions - as well as supporting `LinearOperator` defined hamiltonians (PR 40).
- Allow the `Evolution` callback `compute=` to optionally access the Hamiltonian (PR 49).
- Added `quimb.tensor.tensor_core.Tensor.randomize()` and `quimb.tensor.tensor_core.TensorNetwork.randomize()` to randomize tensor and tensor network entries.
- Automatically squeeze tensor networks when rank-simplifying.
- Add `compress_site()` for compressing around single sites of MPS etc.
- Add `MPS_ghz_state()` and `MPS_w_state()` for building bond dimension 2 open boundary MPS representations of those states.
- Various changes in conjunction with autoray to improve the agnostic-ness of tensor network operations with respect to the backend array type.
- Add `new_bond()` on top of `quimb.tensor.tensor_core.Tensor.new_ind()` and `quimb.tensor.tensor_core.Tensor.expand_ind()` for more graph orientated construction of tensor networks, see Graph Orientated Tensor Network Creation.
- Add the `fsim()` gate.
- Make the parallel number generation functions use new `numpy 1.17+` functionality rather than `randomgen` (which can still be used as the underlying bit generator) (PR 50)
- TN: rename `contraction_complexity` to `contraction_width()`.
- TN: update `quimb.tensor.tensor_core.TensorNetwork.rank_simplify()`, to handle hyper-edges.
- TN: add `quimb.tensor.tensor_core.TensorNetwork.diagonal_reduce()`, to automatically collapse all diagonal tensor axes in a tensor network, introducing hyper edges.
- TN: add `quimb.tensor.tensor_core.TensorNetwork.antidiag_gauge()`, to automatically flip all anti-diagonal tensor axes in a tensor network, allowing subsequent diagonal reduction.
- TN: add `quimb.tensor.tensor_core.TensorNetwork.column_reduce()`, to automatically identify tensor axes with a single non-zero column, allowing the corresponding index to be cut.
- TN: add `quimb.tensor.tensor_core.TensorNetwork.full_simplify()`, to iteratively perform all the above simplifications in a specified order until nothing is left to be done.
- TN: add `num_tensors` and `num_indices` attributes, show `num_indices` in `__repr__`.
- TN: various improvements to the pytorch optimizer (PR 34)
- TN: add some built-in 1D quantum circuit ansatzes: `circ_ansatz_1D_zigzag()`, `circ_ansatz_1D_brickwork()`, and `circ_ansatz_1D_rand()`.
- TN: add parametrized tensors `PTensor` and so trainable, TN based quantum circuits - see Tensor Network Training of Quantum Circuits.

Bug fixes:

- Fix consistency of `fidelity()` by making the unsquared version the default for the case when either state is pure, and always return a real number.
- Fix a bug in the 2D system example for when `j != 1.0`
- Add environment variable `QUIMB_NUMBA_PAR` to set whether numba should use automatic parallelization - mainly to fix travis segfaults.
- Make cache import and initialization of `petsc4py` and `slepc4py` more robust.
## v1.2.0 (2019-06-06)

Enhancements

- Added `kraus_op()` for general, noisy quantum operations
- Added `projector()` for constructing projectors from observables
- Added `measure()` for measuring and collapsing quantum states
- Added `cprint()` for pretty printing states in the computational basis
- Added `simulate_counts()` for simulating computational basis counts
- TN: Add `quimb.tensor.tensor_core.TensorNetwork.rank_simplify()`
- TN: Add `'split-gate'` gate mode
- TN: Add `TNOptimizer` for tensorflow based optimization of arbitrary, constrained tensor networks.
- TN: Add `connect()` to conveniently set a shared index for tensors
- TN: make many more tensor operations agnostic of the array backend (e.g. numpy, cupy, tensorflow, ...)
- TN: allow `align_TN_1D()` to take an MPO as the first argument
- TN: add `build_sparse()`
- TN: add `quimb.tensor.tensor_core.Tensor.unitize()` and `quimb.tensor.tensor_core.TensorNetwork.unitize()` to impose unitary/isometric constraints on tensors specified using the `left_inds` kwarg
- Many updates to tensor network quantum circuit (`quimb.tensor.circuit.Circuit`) simulation including:
  - 49-qubit depth 30 circuit simulation example Quantum Circuits
  - Add `from quimb.gates import *` as shortcut to import `X, Z, CNOT, ...`.
  - Add `U_gate()` for parametrized arbitrary single qubit unitary
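A general quantum operation maps \(\rho \rightarrow \sum_i K_i \rho K_i^\dagger\) with \(\sum_i K_i^\dagger K_i = I\). A dense numpy sketch of that mapping (quimb's `kraus_op()` also handles operators acting on subsystems), using the standard single-qubit depolarizing channel as an example:

```python
import numpy as np

def apply_kraus(rho, kraus_ops):
    """Apply a quantum operation: rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def depolarizing_kraus(p):
    """Kraus operators for the single-qubit depolarizing channel of
    strength p (p = 1 maps any state to the maximally mixed state)."""
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0])
    return [np.sqrt(1 - 3 * p / 4) * I,
            np.sqrt(p / 4) * X,
            np.sqrt(p / 4) * Y,
            np.sqrt(p / 4) * Z]
```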
Bug fixes: