quimb.tensor.tensor_2d_tebd¶
Tools for performing TEBD like algorithms on a 2D lattice.
Classes¶
TNOptimizer – Globally optimize tensors within a tensor network with respect to any loss function via automatic differentiation.
LocalHamGen – Representation of a local hamiltonian defined on a general graph.
TEBDGen – Generic class for performing time evolving block decimation on an arbitrary graph.
Tensor – A labelled, tagged n-dimensional array.
LocalHam2D – A 2D Hamiltonian represented as local terms.
TEBD2D – Generic class for performing two dimensional time evolving block decimation.
SimpleUpdate – A simple subclass of TEBD2D that keeps 'diagonal gauges' living on the bonds of a PEPS.
FullUpdate – Implements the 'Full Update' version of 2D imaginary time evolution.
Functions¶
default_to_neutral_style – Wrap a function or method to use the neutral style by default.
pairwise – Iterate over each pair of neighbours in an iterable.
contract_strategy – A context manager to temporarily set the default contraction strategy.
get_colors – Generate a sequence of rgbs for tag(s).
calc_plaquette_map – Generate a dictionary of all the coordinate pairs in plaquettes mapped to the smallest rectangular plaquette containing them.
calc_plaquette_sizes – Find a sequence of plaquette blocksizes that will cover all the terms.
gen_2d_bonds – Convenience function for tiling pairs of bond coordinates on a 2D lattice.
gen_long_range_path – Generate a string of coordinates, in order, from ij_a to ij_b.
gen_long_range_swap_path – Generate the coordinates of a series of swaps that would bring ij_a and ij_b together.
plaquette_to_sites – Turn a plaquette ((i0, j0), (di, dj)) into the sites it contains.
swap_path_to_long_range_path – Generate the ordered long-range path from a swap path.
gate_full_update_als
gate_full_update_autodiff_fidelity
get_default_full_update_fit_opts – The default options for the full update gate fitting procedure.
parse_specific_gate_opts – Parse the options from fit_opts which are relevant for strategy.
Module Contents¶
 quimb.tensor.tensor_2d_tebd.default_to_neutral_style(fn)[source]¶
Wrap a function or method to use the neutral style by default.
 quimb.tensor.tensor_2d_tebd.pairwise(iterable)[source]¶
Iterate over each pair of neighbours in iterable.
 quimb.tensor.tensor_2d_tebd.contract_strategy(strategy, set_globally=False)[source]¶
A context manager to temporarily set the default contraction strategy supplied as optimize to cotengra. By default, this only sets the contract strategy for the current thread.
 Parameters:
set_globally (bool, optional) – Whether to set the strategy just for this thread, or for all threads. If you are entering the context, then using multithreading, you might want True.
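As a usage illustration, here is a minimal sketch, assuming quimb is installed; the tensors and the 'greedy' strategy are our own example choices:

import quimb.tensor as qtn
from quimb.tensor.tensor_2d_tebd import contract_strategy

t1 = qtn.rand_tensor((2, 3), inds=('a', 'b'))
t2 = qtn.rand_tensor((3, 4), inds=('b', 'c'))

# within the block, contractions default to the 'greedy' path optimizer
with contract_strategy('greedy'):
    t12 = t1 @ t2  # contraction over the shared index 'b'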
 quimb.tensor.tensor_2d_tebd.get_colors(color, custom_colors=None, alpha=None)[source]¶
Generate a sequence of rgbs for tag(s) color.
 class quimb.tensor.tensor_2d_tebd.TNOptimizer(tn, loss_fn, norm_fn=None, loss_constants=None, loss_kwargs=None, tags=None, shared_tags=None, constant_tags=None, loss_target=None, optimizer='L-BFGS-B', progbar=True, bounds=None, autodiff_backend='AUTO', executor=None, callback=None, **backend_opts)[source]¶
Globally optimize tensors within a tensor network with respect to any loss function via automatic differentiation. If parametrized tensors are used, optimize the parameters rather than the raw arrays.
 Parameters:
tn (TensorNetwork) – The core tensor network structure within which to optimize tensors.
loss_fn (callable or sequence of callable) – The function that takes tn (as well as loss_constants and loss_kwargs) and returns a single real 'loss' to be minimized. For Hamiltonians which can be represented as a sum over terms, an iterable collection of terms (e.g. list) can be given instead. In that case each term is evaluated independently and the sum taken as loss_fn. This can reduce the total memory requirements or allow for parallelization (see executor).
norm_fn (callable, optional) – A function to call before loss_fn that prepares or 'normalizes' the raw tensor network in some way.
loss_constants (dict, optional) – Extra tensor networks, tensors, dicts/list/tuples of arrays, or arrays which will be supplied to loss_fn but also converted to the correct backend array type.
loss_kwargs (dict, optional) – Extra options to supply to loss_fn (unlike loss_constants these are assumed to be simple options that don't need conversion).
tags (str, or sequence of str, optional) – If supplied, only optimize tensors with any of these tags.
shared_tags (str, or sequence of str, optional) – If supplied, each tag in shared_tags corresponds to a group of tensors to be optimized together.
constant_tags (str, or sequence of str, optional) – If supplied, skip optimizing tensors with any of these tags. This 'opt-out' mode is overridden if either tags or shared_tags is supplied.
loss_target (float, optional) – Stop optimizing once this loss value is reached.
optimizer (str, optional) – Which scipy.optimize.minimize optimizer to use (the 'method' kwarg of that function). In addition, quimb implements a few custom optimizers compatible with this interface that you can reference by name: {'adam', 'nadam', 'rmsprop', 'sgd'}.
executor (None or Executor, optional) – To be used with term-by-term Hamiltonians. If supplied, this executor is used to parallelize the evaluation. Otherwise each term is evaluated in sequence. It should implement the basic concurrent.futures (PEP 3148) interface.
progbar (bool, optional) – Whether to show live progress.
bounds (None or (float, float), optional) – Constrain the optimized tensor entries within this range (if the scipy optimizer supports it).
autodiff_backend ({'jax', 'autograd', 'tensorflow', 'torch'}, optional) – Which backend library to use to perform the automatic differentiation (and computation).
callback (callable, optional) – A function to call after each optimization step. It should take the current TNOptimizer instance as its only argument. Information such as the current loss and number of evaluations can then be accessed:

def callback(tnopt):
    print(tnopt.nevals, tnopt.loss)

backend_opts – Supplied to the backend function compiler and array handler. For example jit_fn=True or device='cpu'.
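For illustration, a minimal sketch of maximizing the overlap of one MPS with another; the norm_fn and loss_fn shown are our own illustrative choices, and an autodiff backend such as autograd is assumed to be installed:

import quimb.tensor as qtn
from quimb.tensor.tensor_2d_tebd import TNOptimizer

psi0 = qtn.MPS_rand_state(6, bond_dim=4)    # state to optimize
target = qtn.MPS_rand_state(6, bond_dim=4)  # fixed target

def norm_fn(psi):
    # keep the state normalized before evaluating the loss
    return psi / (psi.H @ psi) ** 0.5

def loss_fn(psi, target):
    # negative overlap squared: minimizing this maximizes fidelity
    return -abs(psi.H @ target) ** 2

tnopt = TNOptimizer(
    psi0,
    loss_fn=loss_fn,
    norm_fn=norm_fn,
    loss_constants={'target': target},
    autodiff_backend='autograd',
)
psi_opt = tnopt.optimize(100)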
 reset(tn=None, clear_info=True, loss_target=None)[source]¶
Reset this optimizer without losing the compiled loss and gradient functions.
 Parameters:
tn (TensorNetwork, optional) – Set this tensor network as the current state of the optimizer; it must exactly match the original tensor network.
clear_info (bool, optional) – Clear the tracked losses and iterations.
 property d¶
 property nevals¶
 The number of gradient evaluations.
 property optimizer¶
 The underlying optimizer that works with the vectorized functions.
 property bounds¶
 get_tn_opt()[source]¶
Extract the optimized tensor network. This is a three part process:
1. inject the current optimized vector into the target tensor network,
2. run it through norm_fn,
3. drop any tags used to identify variables.
 Returns:
tn_opt
 Return type:
TensorNetwork
 optimize(n, tol=None, jac=True, hessp=False, optlib='scipy', **options)[source]¶
Run the optimizer for n function evaluations, using by default scipy.optimize.minimize() as the driver for the vectorized computation. Supplying the gradient and hessian vector product is controlled by the jac and hessp options respectively.
 Parameters:
n (int) – Notionally the maximum number of iterations for the optimizer, note that depending on the optimizer being used, this may correspond to number of function evaluations rather than just iterations.
tol (None or float, optional) – Tolerance for convergence, note that various more specific tolerances can usually be supplied to options, depending on the optimizer being used.
jac (bool, optional) – Whether to supply the jacobian, i.e. gradient, of the loss function.
hessp (bool, optional) – Whether to supply the hessian vector product of the loss function.
optlib ({'scipy', 'nlopt'}, optional) – Which optimization library to use.
options – Supplied to scipy.optimize.minimize() or whichever optimizer is being used.
 Returns:
tn_opt
 Return type:
TensorNetwork
 optimize_scipy(n, tol=None, jac=True, hessp=False, **options)[source]¶
Scipy based optimization, see optimize() for details.
 optimize_basinhopping(n, nhop, temperature=1.0, jac=True, hessp=False, **options)[source]¶
Run the optimizer using scipy.optimize.basinhopping() as the driver for the vectorized computation. This performs nhop local optimizations, each with n iterations.
 Parameters:
n (int) – Number of iterations per local optimization.
nhop (int) – Number of local optimizations to hop between.
temperature (float, optional)
options – Supplied to the inner scipy.optimize.minimize() call.
 Returns:
tn_opt
 Return type:
TensorNetwork
 optimize_nlopt(n, tol=None, jac=True, hessp=False, ftol_rel=None, ftol_abs=None, xtol_rel=None, xtol_abs=None)[source]¶
Run the optimizer for n function evaluations, using nlopt as the backend library to run the optimization. Whether the gradient is computed depends on which optimizer is selected, see valid options at https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/. The following scipy optimizer options are automatically translated to the corresponding nlopt algorithms: {"lbfgsb", "slsqp", "tnc", "cobyla"}.
 Parameters:
n (int) – The maximum number of iterations for the optimizer.
tol (None or float, optional) – Tolerance for convergence, here this is taken to be the relative tolerance for the loss (ftol_rel below overrides this).
jac (bool, optional) – Whether to supply the jacobian, i.e. gradient, of the loss function.
hessp (bool, optional) – Whether to supply the hessian vector product of the loss function.
ftol_rel (float, optional) – Set relative tolerance on function value.
ftol_abs (float, optional) – Set absolute tolerance on function value.
xtol_rel (float, optional) – Set relative tolerance on optimization parameters.
xtol_abs (float, optional) – Set absolute tolerances on optimization parameters.
 Returns:
tn_opt
 Return type:
TensorNetwork
 optimize_ipopt(n, tol=None, **options)[source]¶
Run the optimizer for n function evaluations, using ipopt as the backend library to run the optimization via the python package cyipopt.
 Parameters:
n (int) – The maximum number of iterations for the optimizer.
 Returns:
tn_opt
 Return type:
TensorNetwork
 optimize_nevergrad(n)[source]¶
Run the optimizer for n function evaluations, using nevergrad as the backend library to run the optimization. As the name suggests, the gradient is not required for this method.
 Parameters:
n (int) – The maximum number of iterations for the optimizer.
 Returns:
tn_opt
 Return type:
TensorNetwork
 plot(xscale='symlog', xscale_linthresh=20, zoom='auto', hlines=())[source]¶
Plot the loss function as a function of the number of iterations.
 Parameters:
xscale (str, optional) – The scale of the x-axis. Default is "symlog", i.e. linear for the first part of the plot, and logarithmic for the rest, changing at xscale_linthresh.
xscale_linthresh (int, optional) – The threshold for the change from linear to logarithmic scale, if xscale is "symlog". Default is 20.
zoom (None or int, optional) – If not None, show an inset plot of the last zoom iterations.
hlines (dict, optional) – A dictionary of horizontal lines to plot. The keys are the labels of the lines, and the values are the y-values of the lines.
 Returns:
fig (matplotlib.figure.Figure) – The figure object.
ax (matplotlib.axes.Axes) – The axes object.
 quimb.tensor.tensor_2d_tebd.calc_plaquette_map(plaquettes)[source]¶
Generate a dictionary of all the coordinate pairs in plaquettes mapped to the 'best' (smallest) rectangular plaquette that contains them.
Examples
Consider 4 sites, with one 2x2 plaquette and two vertical (2x1) and horizontal (1x2) plaquettes each:
>>> plaquettes = [
...     # 2x2 plaquette covering all sites
...     ((0, 0), (2, 2)),
...     # horizontal plaquettes
...     ((0, 0), (1, 2)),
...     ((1, 0), (1, 2)),
...     # vertical plaquettes
...     ((0, 0), (2, 1)),
...     ((0, 1), (2, 1)),
... ]
>>> calc_plaquette_map(plaquettes)
{((0, 0), (0, 1)): ((0, 0), (1, 2)),
 ((0, 0), (1, 0)): ((0, 0), (2, 1)),
 ((0, 0), (1, 1)): ((0, 0), (2, 2)),
 ((0, 1), (1, 0)): ((0, 0), (2, 2)),
 ((0, 1), (1, 1)): ((0, 1), (2, 1)),
 ((1, 0), (1, 1)): ((1, 0), (1, 2))}
Now each of the six coordinate pairs is mapped to one of the plaquettes, but to the smallest one that contains it. So the 2x2 plaquette (specified by ((0, 0), (2, 2))) would only be used for diagonal terms here.
 quimb.tensor.tensor_2d_tebd.calc_plaquette_sizes(coo_groups, autogroup=True)[source]¶
Find a sequence of plaquette blocksizes that will cover all the terms (coordinate pairs) in coo_groups.
 Parameters:
coo_groups (sequence of tuple[tuple[int]] or tuple[int]) – The sequence of 2D coordinates pairs describing terms. Each should either be a single 2D coordinate or a sequence of 2D coordinates.
autogroup (bool, optional) – Whether to return the minimal sequence of blocksizes that will cover all terms or merge them into a single ((x_bsz, y_bsz),).
 Returns:
bszs – Pairs of blocksizes.
 Return type:
Examples
Some nearest neighbour interactions:
>>> H2 = {None: qu.ham_heis(2)}
>>> ham = qtn.LocalHam2D(10, 10, H2)
>>> calc_plaquette_sizes(ham.terms.keys())
((1, 2), (2, 1))
>>> calc_plaquette_sizes(ham.terms.keys(), autogroup=False)
((2, 2),)
If we add any next nearest neighbour interaction then we are going to need the (2, 2) blocksize in any case:
>>> H2[(1, 1), (2, 2)] = 0.5 * qu.ham_heis(2)
>>> ham = qtn.LocalHam2D(10, 10, H2)
>>> calc_plaquette_sizes(ham.terms.keys())
((2, 2),)
If we add longer range interactions (non-diagonal next nearest) we again can benefit from multiple plaquette blocksizes:
>>> H2[(1, 1), (1, 3)] = 0.25 * qu.ham_heis(2)
>>> H2[(1, 1), (3, 1)] = 0.25 * qu.ham_heis(2)
>>> ham = qtn.LocalHam2D(10, 10, H2)
>>> calc_plaquette_sizes(ham.terms.keys())
((1, 3), (2, 2), (3, 1))
Or choose the plaquette blocksize that covers all terms:
>>> calc_plaquette_sizes(ham.terms.keys(), autogroup=False)
((3, 3),)
 quimb.tensor.tensor_2d_tebd.gen_2d_bonds(Lx, Ly, steppers=None, coo_filter=None, cyclic=False)[source]¶
Convenience function for tiling pairs of bond coordinates on a 2D lattice given a function like lambda i, j: (i + 1, j + 1).
 Parameters:
Lx (int) – The number of rows.
Ly (int) – The number of columns.
steppers (callable or sequence of callable, optional) – Function(s) that take args (i, j) and generate another coordinate, thus defining a bond. Only valid steps are taken. If not given, defaults to nearest neighbor bonds.
coo_filter (callable) – Function that takes args (i, j) and only returns True if this is to be a valid starting coordinate.
 Yields:
bond (tuple[tuple[int, int], tuple[int, int]]) – A pair of coordinates.
Examples
Generate nearest neighbor bonds:
>>> for bond in gen_2d_bonds(2, 2, [lambda i, j: (i, j + 1),
...                                  lambda i, j: (i + 1, j)]):
...     print(bond)
((0, 0), (0, 1))
((0, 0), (1, 0))
((0, 1), (1, 1))
((1, 0), (1, 1))
Generate next nearest neighbor diagonal bonds:
>>> for bond in gen_2d_bonds(2, 2, [lambda i, j: (i + 1, j + 1),
...                                  lambda i, j: (i + 1, j - 1)]):
...     print(bond)
((0, 0), (1, 1))
((0, 1), (1, 0))
 quimb.tensor.tensor_2d_tebd.gen_long_range_path(ij_a, ij_b, sequence=None)[source]¶
Generate a string of coordinates, in order, from ij_a to ij_b.
 Parameters:
sequence (None, iterable of {'v', 'h'}, or 'random', optional) – What order to cycle through and try and perform moves in, 'v', 'h' standing for move vertically and horizontally respectively. The default is ('v', 'h').
 Returns:
The path, each element is a single coordinate.
 Return type:
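As a quick illustration, a minimal sketch of consuming this generator (the coordinates chosen here are our own example):

from quimb.tensor.tensor_2d_tebd import gen_long_range_path

# a chain of neighbouring coordinates from (0, 0) to (2, 2)
path = tuple(gen_long_range_path((0, 0), (2, 2)))
print(path)  # starts at (0, 0), ends at (2, 2), stepping one site at a time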
 quimb.tensor.tensor_2d_tebd.gen_long_range_swap_path(ij_a, ij_b, sequence=None)[source]¶
Generate the coordinates of a series of swaps that would bring ij_a and ij_b together.
 Parameters:
sequence (None, iterable of {'av', 'bv', 'ah', 'bh'}, or 'random', optional) – What order to cycle through and try and perform moves in, 'av', 'bv', 'ah', 'bh' standing for move 'a' vertically, 'b' vertically, 'a' horizontally, and 'b' horizontally respectively. The default is ('av', 'bv', 'ah', 'bh').
 Returns:
The path, each element is two coordinates to swap.
 Return type:
 quimb.tensor.tensor_2d_tebd.plaquette_to_sites(p)[source]¶
Turn a plaquette ((i0, j0), (di, dj)) into the sites it contains.
Examples
>>> plaquette_to_sites([(3, 4), (2, 2)])
((3, 4), (3, 5), (4, 4), (4, 5))
 quimb.tensor.tensor_2d_tebd.swap_path_to_long_range_path(swap_path, ij_a)[source]¶
Generates the ordered long-range path (a sequence of coordinates) from a (long-range) swap path (a sequence of coordinate pairs).
 class quimb.tensor.tensor_2d_tebd.LocalHamGen(H2, H1=None)[source]¶
Representation of a local hamiltonian defined on a general graph. This combines all two site and one site terms into a single interaction per lattice pair, and caches operations on the terms such as getting their exponential. The sites (nodes) should be hashable and comparable.
 Parameters:
H2 (dict[tuple[node], array_like]) – The interaction terms, with each key being a tuple of nodes defining an edge and each value the local hamiltonian term for those two nodes.
H1 (array_like or dict[node, array_like], optional) – The one site term(s). If a single array is given, assume to be the default on-site term for all terms. If a dict is supplied, the keys should represent specific coordinates like (i, j) with the values the array representing the local term for that site. A default term for all remaining sites can still be supplied with the key None.
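For illustration, a minimal construction sketch on a hypothetical triangle graph with integer nodes; the specific terms are our own example choices:

import quimb as qu
from quimb.tensor.tensor_2d_tebd import LocalHamGen

# Heisenberg coupling on each edge of a triangle, plus a default
# on-site field term that gets absorbed into the two site terms
H2 = {
    (0, 1): qu.ham_heis(2),
    (1, 2): qu.ham_heis(2),
    (0, 2): qu.ham_heis(2),
}
ham = LocalHamGen(H2, H1=0.5 * qu.spin_operator('Z'))
print(ham.nsites)  # 3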
 terms¶
The total effective local term for each interaction (with single site terms appropriately absorbed). Each key is a pair of coordinates site_a, site_b with site_a < site_b.
 property nsites¶
 The number of sites in the system.
 items()[source]¶
Iterate over all terms in the hamiltonian. This is mostly for convenient compatibility with compute_local_expectation.
 get_gate_expm(where, x)[source]¶
Get the local term for pair where, matrix exponentiated by x, and cached.
 _nx_color_ordering(strategy='smallest_first', interchange=True)[source]¶
Generate a term ordering based on a coloring on the line graph.
 get_auto_ordering(order='sort', **kwargs)[source]¶
Get an ordering of the terms to use with TEBD, for example. The default is to sort the coordinates then greedily group them into commuting sets.
 Parameters:
order ({'sort', None, 'random', str}) – How to order the terms before greedily grouping them into commuting (non-coordinate overlapping) sets:
'sort' will sort the coordinate pairs first.
None will use the current order of terms which should match the order they were supplied to this LocalHam2D instance.
'random' will randomly shuffle the coordinate pairs before grouping them, which is not the same as returning a completely random order.
'random-ungrouped' will randomly shuffle the coordinate pairs but not group them at all with respect to commutation.
Any other option will be passed as a strategy to networkx.coloring.greedy_color to generate the ordering.
 Returns:
Sequence of coordinate pairs.
 Return type:
 draw(ordering='sort', show_norm=True, figsize=None, fontsize=8, legend=True, ax=None, **kwargs)[source]¶
Plot this Hamiltonian as a network.
 Parameters:
ordering ({'sort', None, 'random'}, optional) – An ordering of the terms, or an argument to be supplied to quimb.tensor.tensor_arbgeom_tebd.LocalHamGen.get_auto_ordering() to generate this automatically.
show_norm (bool, optional) – Show the norm of each term as edge labels.
figsize (None or tuple[int], optional) – Size of the figure, defaults to size of Hamiltonian.
fontsize (int, optional) – Font size for norm labels.
legend (bool, optional) – Whether to show the legend of which terms are in which group.
ax (None or matplotlib.Axes, optional) – Add to an existing set of axes.
 class quimb.tensor.tensor_2d_tebd.TEBDGen(psi0, ham, tau=0.01, D=None, imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]¶
Generic class for performing time evolving block decimation on an arbitrary graph, i.e. applying the exponential of a Hamiltonian using a product formula that involves applying local exponentiated gates only.
 sweep(tau)[source]¶
Perform a full sweep of gates at every pair.
\[\psi \rightarrow \prod_{\{ij\}} \exp(-\tau H_{ij}) \psi\]
 evolve(steps, tau=None, progbar=None)[source]¶
Evolve the state with the local Hamiltonian for steps steps with time step tau.
 property state¶
 Return a copy of the current state.
 property n¶
 The number of sweeps performed.
 property D¶
 The maximum bond dimension.
 property energy¶
 Return the energy of the current state, computing it only if necessary.
 get_state()[source]¶
The default method for retrieving the current state: simply a copy. Subclasses can override this to perform additional transformations.
 set_state(psi)[source]¶
The default method for setting the current state: simply a copy. Subclasses can override this to perform additional transformations.
 presweep(i)[source]¶
Perform any computations required before the sweep (and energy computation). For the basic TEBD this is nothing.
 gate(U, where)[source]¶
Perform single gate U at coordinate pair where. This is the most common method to override.
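Since gate() is the usual extension point, here is a hedged sketch of a hypothetical subclass that simply logs each gate before delegating to the default application:

from quimb.tensor.tensor_2d_tebd import TEBDGen

class VerboseTEBD(TEBDGen):
    """Hypothetical subclass that reports each gate application."""

    def gate(self, U, where):
        print(f"applying exponentiated gate at pair {where}")
        # fall back to the standard application of the gate
        super().gate(U, where)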
 class quimb.tensor.tensor_2d_tebd.Tensor(data=1.0, inds=(), tags=None, left_inds=None)[source]¶
A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations. The tags are used to identify the tensor within networks, and are combined when tensors are contracted together.
 Parameters:
data (numpy.ndarray) – The n-dimensional data.
inds (sequence of str) – The index labels for each dimension. Must match the number of dimensions of data.
tags (sequence of str, optional) – Tags with which to identify and group this tensor. These will be converted into a oset.
left_inds (sequence of str, optional) – Which, if any, indices to group as 'left' indices of an effective matrix. This can be useful, for example, when automatically applying unitary constraints to impose a certain flow on a tensor network but at the atomistic (Tensor) level.
Examples
Basic construction:
>>> from quimb import randn
>>> from quimb.tensor import Tensor
>>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'})
>>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'})
Indices are automatically aligned, and tags combined, when contracting:
>>> X @ Y
Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'})
 __slots__ = ('_data', '_inds', '_tags', '_left_inds', '_owners')¶
 get_params()[source]¶
A simple function that returns the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.
 set_params(params)[source]¶
A simple function that sets the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.
 copy(deep=False, virtual=False)[source]¶
Copy this tensor.
Note
By default (deep=False), the underlying array will not be copied.
 property data¶
 property inds¶
 property tags¶
 property left_inds¶
 property owners¶
 add_owner(tn, tid)[source]¶
Add tn as owner of this Tensor: its tag and ind maps will be updated whenever this tensor is retagged or reindexed.
 check_owners()[source]¶
Check if this tensor is ‘owned’ by any alive TensorNetworks. Also trim any weakrefs to dead TensorNetworks.
 modify(**kwargs)[source]¶
Overwrite the data of this tensor in place.
 Parameters:
data (array, optional) – New data.
apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.
inds (sequence of str, optional) – New tuple of indices.
tags (sequence of str, optional) – New tags.
left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.
 apply_to_arrays(fn)[source]¶
Apply the function fn to the underlying data array(s). This is meant for changing how the raw arrays are backed (e.g. converting between dtypes or libraries) but not their 'numerical meaning'.
 isel(selectors, inplace=False)[source]¶
Select specific values for some dimensions/indices of this tensor, thereby removing them. Analogous to X[:, :, 3, :, :] with arrays. The indices to select from can be specified either by integer, in which case the corresponding index is removed, or by a slice.
 Parameters:
 Return type:
Examples
>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.isel({'b': 1})
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())
See also
TensorNetwork.isel
 add_tag(tag)[source]¶
Add a tag or multiple tags to this tensor. Unlike self.tags.add this also updates any TensorNetwork objects viewing this Tensor.
 expand_ind(ind, size, mode=None, rand_strength=None, rand_dist='normal')[source]¶
Inplace increase the size of the dimension of ind, the new array entries will be filled with zeros by default.
 Parameters:
name (str) – Name of the index to expand.
size (int, optional) – Size of the expanded index.
mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select from zeros or random depending on non-zero rand_strength.
rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.
rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.
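A small sketch of the default zero-padding behaviour, using quimb's rand_tensor helper:

import quimb.tensor as qtn

T = qtn.rand_tensor((2, 3), inds=('a', 'b'))
T.expand_ind('b', 5)  # inplace: pad 'b' from size 3 to 5 with zeros
print(T.shape)        # (2, 5)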
 new_ind(name, size=1, axis=0, mode=None, rand_strength=None, rand_dist='normal')[source]¶
Inplace add a new index, a named dimension. If size is specified to be greater than one then the new array entries will be filled with zeros.
 Parameters:
name (str) – Name of the new index.
size (int, optional) – Size of the new index.
axis (int, optional) – Position of the new index.
mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select from zeros or random depending on non-zero rand_strength.
rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.
rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.
See also
 new_ind_with_identity(name, left_inds, right_inds, axis=0)[source]¶
Inplace add a new index, where the newly stacked array entries form the identity from left_inds to right_inds. Selecting 0 or 1 for the new index name thus is like 'turning off' this tensor if viewed as an operator.
 Parameters:
name (str) – Name of the new index.
left_inds (tuple[str]) – Names of the indices forming the left hand side of the operator.
right_inds (tuple[str]) – Names of the indices forming the right hand side of the operator. The dimensions of these must match those of left_inds.
axis (int, optional) – Position of the new index.
 new_ind_pair_with_identity(new_left_ind, new_right_ind, d, inplace=False)[source]¶
Expand this tensor with two new indices of size d, by taking an (outer) tensor product with the identity operator. The two new indices are added as axes at the start of the tensor.
 property H¶
 Conjugate this tensors data (does nothing to indices).
 property shape¶
 The size of each dimension.
 property ndim¶
 The number of dimensions.
 property size¶
 The total number of array elements.
 property dtype¶
 The data type of the array elements.
 property backend¶
 The backend inferred from the data.
Get the total size of the shared index(es) with other.
 transpose(*output_inds, inplace=False)[source]¶
Transpose this tensor  permuting the order of both the data and the indices. This operation is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.
Note: to compute the traditional 'transpose' of an operator within a contraction for example, you would just use reindexing not this.
 Parameters:
 Returns:
tt – The transposed tensor.
 Return type:
See also
 transpose_like(other, inplace=False)[source]¶
Transpose this tensor to match the indices of other, allowing for one index to be different. E.g. if self.inds = ('a', 'b', 'c', 'x') and other.inds = ('b', 'a', 'd', 'c') then 'x' will be aligned with 'd' and the output inds will be ('b', 'a', 'x', 'c').
 Parameters:
 Returns:
tt – The transposed tensor.
 Return type:
See also
 moveindex(ind, axis, inplace=False)[source]¶
Move the index ind to position axis. Like transpose, this permutes the order of both the data and the indices and is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn't matter.
 trace(left_inds, right_inds, preserve_tensor=False, inplace=False)[source]¶
Trace index or indices left_inds with right_inds, removing them.
 Parameters:
left_inds (str or sequence of str) – The left indices to trace, order matching right_inds.
right_inds (str or sequence of str) – The right indices to trace, order matching left_inds.
preserve_tensor (bool, optional) – If True, a tensor will be returned even if no indices remain.
inplace (bool, optional) – Perform the trace inplace.
 Returns:
z
 Return type:
Tensor or scalar
 vector_reduce(ind, v, inplace=False)[source]¶
Contract the vector v with the index ind of this tensor, removing it.
 collapse_repeated(inplace=False)[source]¶
Take the diagonals of any repeated indices, such that each index only appears once.
 gate(G, ind, preserve_inds=True, inplace=False)[source]¶
Gate this tensor: contract a matrix into one of its indices without changing its indices. Unlike contract, G is a raw array and the tensor remains with the same set of indices.
 Parameters:
G (2D array_like) – The matrix to gate the tensor index with.
ind (str) – Which index to apply the gate to.
 Return type:
Examples
Create a random tensor of 4 qubits:
>>> t = qtn.rand_tensor(
...     shape=[2, 2, 2, 2],
...     inds=['k0', 'k1', 'k2', 'k3'],
... )
Create another tensor with an X gate applied to qubit 2:
>>> Gt = t.gate(qu.pauli('X'), 'k2')
The contraction of these two tensors is now the expectation of that operator:
>>> t.H @ Gt
4.108910576149794
 singular_values(left_inds, method='svd')[source]¶
Return the singular values associated with splitting this tensor according to left_inds.
 Parameters:
left_inds (sequence of str) – A subset of this tensors indices that defines ‘left’.
method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.
 Returns:
The singular values.
 Return type:
1d-array
 entropy(left_inds, method='svd')[source]¶
Return the entropy associated with splitting this tensor according to left_inds.
 retag(retag_map, inplace=False)[source]¶
Rename the tags of this tensor, optionally, inplace.
 Parameters:
retag_map (dict-like) – Mapping of pairs {old_tag: new_tag, ...}.
inplace (bool, optional) – If False (the default), a copy of this tensor with the changed tags will be returned.
 reindex(index_map, inplace=False)[source]¶
Rename the indices of this tensor, optionally inplace.
 Parameters:
index_map (dict-like) – Mapping of pairs {old_ind: new_ind, ...}.
inplace (bool, optional) – If False (the default), a copy of this tensor with the changed inds will be returned.
 fuse(fuse_map, inplace=False)[source]¶
Combine groups of indices into single indices.
 Parameters:
fuse_map (dict_like or sequence of tuples) – Mapping like: {new_ind: sequence of existing inds, ...} or an ordered mapping like [(new_ind_1, old_inds_1), ...] in which case the output tensor's fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused.
 Returns:
The transposed, reshaped and relabeled tensor.
 Return type:
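For example, a minimal sketch of fusing two indices into one:

import quimb.tensor as qtn

T = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
T2 = T.fuse({'ab': ('a', 'b')})
print(T2.inds, T2.shape)  # ('ab', 'c') (6, 4)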
 unfuse(unfuse_map, shape_map, inplace=False)[source]¶
Reshape single indices into groups of multiple indices
 Parameters:
unfuse_map (dict_like or sequence of tuples) – Mapping like: {existing_ind: sequence of new inds, ...} or an ordered mapping like [(old_ind_1, new_inds_1), ...] in which case the output tensor's new inds will be ordered. In both cases the new indices are created at the old index's position of the tensor's shape.
shape_map (dict_like or sequence of tuples) – Mapping like: {old_ind: new_ind_sizes, ...} or an ordered mapping like [(old_ind_1, new_ind_sizes_1), ...].
 Returns:
The transposed, reshaped and relabeled tensor.
 Return type:
 to_dense(*inds_seq, to_qarray=False)[source]¶
Convert this Tensor into a dense array, with a single dimension for each group of inds in inds_seq. E.g. to convert several sites into a density matrix: T.to_dense(('k0', 'k1'), ('b0', 'b1')).
 squeeze(include=None, exclude=None, inplace=False)[source]¶
Drop any singlet dimensions from this tensor.
 Parameters:
include (sequence of str, optional) – Only squeeze dimensions with indices in this list.
exclude (sequence of str, optional) – Squeeze all dimensions except those with indices in this list.
inplace (bool, optional) – Whether to perform the squeeze inplace or return a new tensor.
 Return type:
 largest_element()[source]¶
Return the largest element, in terms of absolute magnitude, of this tensor.
 idxmin(f=None)[source]¶
Get the index configuration of the minimum element of this tensor, optionally applying f first.
 idxmax(f=None)[source]¶
Get the index configuration of the maximum element of this tensor, optionally applying f first.
 norm()[source]¶
Frobenius norm of this tensor:
\[\|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)}\]
where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.
 symmetrize(ind1, ind2, inplace=False)[source]¶
Hermitian symmetrize this tensor for indices ind1 and ind2. I.e. T = (T + T.conj().T) / 2, where the transpose is taken only over the specified indices.
 isometrize(left_inds=None, method='qr', inplace=False)[source]¶
Make this tensor unitary (or isometric) with respect to left_inds. The underlying method is set by method.
 Parameters:
left_inds (sequence of str) – The indices to group together and treat as the left hand side of a matrix.
method (str, optional) – The method used to generate the isometry. The options are:
"qr": use the Q factor of the QR decomposition of x with the constraint that the diagonal of R is positive.
"svd": uses U @ VH of the SVD decomposition of x. This is useful for finding the 'closest' isometric matrix to x, such as when it has been expanded with noise etc. But is less stable for differentiation / optimization.
"exp": use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.
"cayley": use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with HIPS/autograd e.g.), but more expensive for non-square x.
"householder": use the Householder reflection method directly. This requires that the backend implements "linalg.householder_product".
"torch_householder": use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is "torch". This is generally the best parametrizing method for "torch" if available.
"mgs": use a python implementation of the modified Gram Schmidt method directly. This is slow if not compiled but a useful reference.
Not all backends support all methods or differentiating through all methods.
inplace (bool, optional) – Whether to perform the unitization inplace.
 Return type:
 unitize_¶
 randomize(dtype=None, inplace=False, **randn_opts)[source]¶
Randomize the entries of this tensor.
 Parameters:
 Return type:
 flip(ind, inplace=False)[source]¶
Reverse the axis on this tensor corresponding to ind. Like performing e.g. X[:, :, ::-1, :].
 multiply_index_diagonal(ind, x, inplace=False)[source]¶
Multiply this tensor by 1D array x as if it were a diagonal tensor being contracted into index ind.
 filter_bonds(other)[source]¶
Sort this tensor’s indices into a list of those that it shares and doesn’t share with another tensor.
 __or__(other)[source]¶
Combine virtually (no copies made) with another Tensor or TensorNetwork into a new TensorNetwork.
 __matmul__(other)[source]¶
Explicitly contract with another tensor. Avoids some slight overhead of calling the full tensor_contract().
 _repr_info()[source]¶
General info to show in various reprs. Subclasses can add more relevant info to this dict.
 class quimb.tensor.tensor_2d_tebd.LocalHam2D(Lx, Ly, H2, H1=None, cyclic=False)[source]¶
Bases:
quimb.tensor.tensor_arbgeom_tebd.LocalHamGen
A 2D Hamiltonian represented as local terms. This combines all two site and one site terms into a single interaction per lattice pair, and caches operations on the terms such as getting their exponential.
 Parameters:
Lx (int) – The number of rows.
Ly (int) – The number of columns.
H2 (array_like or dict[tuple[tuple[int]], array_like]) – The two site term(s). If a single array is given, assume to be the default interaction for all nearest neighbours. If a dict is supplied, the keys should represent specific pairs of coordinates like ((ia, ja), (ib, jb)) with the values the array representing the interaction for that pair. A default term for all remaining nearest neighbour interactions can still be supplied with the key None.
H1 (array_like or dict[tuple[int], array_like], optional) – The one site term(s). If a single array is given, assume to be the default on-site term for all terms. If a dict is supplied, the keys should represent specific coordinates like (i, j) with the values the array representing the local term for that site. A default term for all remaining sites can still be supplied with the key None.
 terms¶
The total effective local term for each interaction (with single site terms appropriately absorbed). Each key is a pair of coordinates ija, ijb with ija < ijb.
 property nsites¶
 The number of sites in the system.
 draw(ordering='sort', show_norm=True, figsize=None, fontsize=8, legend=True, ax=None, **kwargs)[source]¶
Plot this Hamiltonian as a network.
 Parameters:
ordering ({'sort', None, 'random'}, optional) – An ordering of the terms, or an argument to be supplied to quimb.tensor.tensor_2d_tebd.LocalHam2D.get_auto_ordering() to generate this automatically.
show_norm (bool, optional) – Show the norm of each term as edge labels.
figsize (None or tuple[int], optional) – Size of the figure, defaults to size of Hamiltonian.
fontsize (int, optional) – Font size for norm labels.
legend (bool, optional) – Whether to show the legend of which terms are in which group.
ax (None or matplotlib.Axes, optional) – Add to an existing set of axes.
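For illustration, a minimal construction sketch; the terms here are chosen purely as an example:

import quimb as qu
import quimb.tensor as qtn

# default nearest-neighbour Heisenberg coupling plus a default on-site field
ham = qtn.LocalHam2D(4, 4, H2=qu.ham_heis(2), H1=0.1 * qu.spin_operator('X'))
print(ham.nsites)      # 16
print(len(ham.terms))  # 24 nearest-neighbour bonds on an open 4x4 lattice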
 class quimb.tensor.tensor_2d_tebd.TEBD2D(psi0, ham, tau=0.01, D=None, chi=None, imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]¶
Bases:
quimb.tensor.tensor_arbgeom_tebd.TEBDGen
Generic class for performing two dimensional time evolving block decimation, i.e. applying the exponential of a Hamiltonian using a product formula that involves applying local exponentiated gates only.
 Parameters:
psi0 (TensorNetwork2DVector) – The initial state.
ham (LocalHam2D) – The Hamiltonian consisting of local terms.
tau (float, optional) – The default local exponent; if considered as time, real values here imply imaginary time evolution.
max_bond ({'psi0', int, None}, optional) – The maximum bond dimension to keep when applying each gate.
gate_opts (dict, optional) – Supplied to quimb.tensor.tensor_2d.TensorNetwork2DVector.gate(), in addition to max_bond. By default contract is set to 'reduce-split' and cutoff is set to 0.0.
ordering (str, tuple[tuple[int]], callable, optional) – How to order the terms, if a string is given then use this as the strategy given to get_auto_ordering(). An explicit list of coordinate pairs can also be given. The default is to greedily form an 'edge coloring' based on the sorted list of Hamiltonian pair coordinates. If a callable is supplied it will be used to generate the ordering before each sweep.
second_order_reflect (bool, optional) – If True, then apply each layer of gates in ordering forward with half the time step, then the same with reverse order.
compute_energy_every (None or int, optional) – How often to compute and record the energy. If a positive integer 'n', the energy is computed before every nth sweep (i.e. including before the zeroth).
compute_energy_final (bool, optional) – Whether to compute and record the energy at the end of the sweeps regardless of the value of compute_energy_every. If you start sweeping again then this final energy is the same as the zeroth of the next set of sweeps and won't be recomputed.
compute_energy_opts (dict, optional) – Supplied to compute_local_expectation(). By default max_bond is set to max(8, D**2) where D is the maximum bond to use for applying the gate, cutoff is set to 0.0 and normalized is set to True.
compute_energy_fn (callable, optional) – Supply your own function to compute the energy, it should take the TEBD2D object as its only argument.
callback (callable, optional) – A custom callback to run after every sweep, it should take the TEBD2D object as its only argument. If it returns any value that boolean evaluates to True then the evolution is terminated.
progbar (boolean, optional) – Whether to show a live progress bar during the evolution.
kwargs – Extra options for the specific TEBD2D subclass.
 state¶
The current state.
 Type:
 ham¶
The Hamiltonian being used to evolve.
 Type:
 energy¶
The energy of the current state, this will trigger a computation if the energy at this iteration hasn't been computed yet.
 Type:
 its¶
The corresponding sequence of iteration numbers that energies have been computed at.
 taus¶
The corresponding sequence of time steps that energies have been computed at.
 best¶
If keep_best was set then the best recorded energy and the corresponding state that was computed, keys 'energy' and 'state' respectively.
 Type:
 property chi¶
 class quimb.tensor.tensor_2d_tebd.SimpleUpdate(psi0, ham, tau=0.01, D=None, chi=None, gauge_renorm=True, gauge_smudge=1e-06, condition_tensors=True, condition_balance_bonds=True, long_range_use_swaps=False, long_range_path_sequence='random', imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]¶
Bases:
TEBD2D
A simple subclass of TEBD2D that overrides two key methods in order to keep 'diagonal gauges' living on the bonds of a PEPS. The gauges are stored separately from the main PEPS in the gauges attribute. Before and after a gate is applied they are absorbed and then extracted. When accessing the state attribute they are automatically inserted, or you can call get_state(absorb_gauges=False) to lazily add them as hyperedge weights only. Reference: https://arxiv.org/abs/0806.3719.
 Parameters:
psi0 (TensorNetwork2DVector) – The initial state.
psi0 (TensorNetwork2DVector) – The initial state.
ham (LocalHam2D) – The Hamiltonian consisting of local terms.
tau (float, optional) – The default local exponent; if considered as time, real values here imply imaginary time evolution.
max_bond ({'psi0', int, None}, optional) – The maximum bond dimension to keep when applying each gate.
gate_opts (dict, optional) – Supplied to quimb.tensor.tensor_2d.TensorNetwork2DVector.gate(), in addition to max_bond. By default contract is set to 'reduce-split' and cutoff is set to 0.0.
ordering (str, tuple[tuple[int]], callable, optional) – How to order the terms, if a string is given then use this as the strategy given to get_auto_ordering(). An explicit list of coordinate pairs can also be given. The default is to greedily form an 'edge coloring' based on the sorted list of Hamiltonian pair coordinates. If a callable is supplied it will be used to generate the ordering before each sweep.
second_order_reflect (bool, optional) – If True, then apply each layer of gates in ordering forward with half the time step, then the same with reverse order.
compute_energy_every (None or int, optional) – How often to compute and record the energy. If a positive integer 'n', the energy is computed before every nth sweep (i.e. including before the zeroth).
compute_energy_final (bool, optional) – Whether to compute and record the energy at the end of the sweeps regardless of the value of compute_energy_every. If you start sweeping again then this final energy is the same as the zeroth of the next set of sweeps and won't be recomputed.
compute_energy_opts (dict, optional) – Supplied to compute_local_expectation(). By default max_bond is set to max(8, D**2) where D is the maximum bond to use for applying the gate, cutoff is set to 0.0 and normalized is set to True.
compute_energy_fn (callable, optional) – Supply your own function to compute the energy, it should take the TEBD2D object as its only argument.
callback (callable, optional) – A custom callback to run after every sweep, it should take the TEBD2D object as its only argument. If it returns any value that boolean evaluates to True then the evolution is terminated.
progbar (boolean, optional) – Whether to show a live progress bar during the evolution.
gauge_renorm (bool, optional) – Whether to actively renormalize the singular value gauges.
gauge_smudge (float, optional) – A small offset to use when applying the gauge and its inverse to avoid numerical problems.
condition_tensors (bool, optional) – Whether to actively equalize tensor norms for numerical stability.
condition_balance_bonds (bool, optional) – If and when equalizing tensor norms, whether to also balance bonds as an additional conditioning.
long_range_use_swaps (bool, optional) – If there are long range terms, whether to use swap gates to apply the terms. If False, a long range blob tensor (which won't scale well for long distances) is formed instead.
long_range_path_sequence (str or callable, optional) – If there are long range terms how to generate the path between the two coordinates. If callable, should take the two coordinates and return a sequence of coordinates that links them, else passed to gen_long_range_swap_path.
 state¶
The current state.
 Type:
 ham¶
The Hamiltonian being used to evolve.
 Type:
 energy¶
The energy of the current state, this will trigger a computation if the energy at this iteration hasn't been computed yet.
 Type:
 its¶
The corresponding sequence of iteration numbers that energies have been computed at.
 taus¶
The corresponding sequence of time steps that energies have been computed at.
 best¶
If keep_best was set then the best recorded energy and the corresponding state that was computed, keys 'energy' and 'state' respectively.
 Type:
 property gauges¶
 The dictionary of bond pair coordinates to Tensors describing the weights (``t = gauges[pair]; t.data``) and index (``t = gauges[pair]; t.inds[0]``) of all the gauges.
 property long_range_use_swaps¶
 gate(U, where)[source]¶
Like TEBD2D.gate but absorb and extract the relevant gauges before and after each gate application.
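Putting it together, a minimal imaginary time evolution sketch; the lattice size, bond dimensions and time step are chosen purely for illustration:

import quimb as qu
import quimb.tensor as qtn

Lx = Ly = 4
psi0 = qtn.PEPS.rand(Lx, Ly, bond_dim=2)
ham = qtn.LocalHam2D(Lx, Ly, H2=qu.ham_heis(2))

su = qtn.SimpleUpdate(psi0, ham, D=4, compute_energy_every=10)
su.evolve(100, tau=0.1)
psi = su.state  # current PEPS with the gauges reabsorbed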
 quimb.tensor.tensor_2d_tebd.gate_full_update_als(ket, env, bra, G, where, tags_plq, steps, tol, max_bond, optimize='auto-hq', solver='solve', dense=True, enforce_pos=False, pos_smudge=1e-06, init_simple_guess=True, condition_tensors=True, condition_maintain_norms=True, condition_balance_bonds=True)[source]¶
 quimb.tensor.tensor_2d_tebd.gate_full_update_autodiff_fidelity(ket, env, bra, G, where, tags_plq, steps, tol, max_bond, optimize='auto-hq', autodiff_backend='autograd', autodiff_optimizer='L-BFGS-B', init_simple_guess=True, condition_tensors=True, condition_maintain_norms=True, condition_balance_bonds=True, **kwargs)[source]¶
 quimb.tensor.tensor_2d_tebd.get_default_full_update_fit_opts()[source]¶
The default options for the full update gate fitting procedure.
 quimb.tensor.tensor_2d_tebd.parse_specific_gate_opts(strategy, fit_opts)[source]¶
Parse the options from fit_opts which are relevant for strategy.
 class quimb.tensor.tensor_2d_tebd.FullUpdate(psi0, ham, tau=0.01, D=None, chi=None, fit_strategy='als', fit_opts=None, compute_envs_every=1, pre_normalize=True, condition_tensors=True, condition_balance_bonds=True, contract_optimize='auto-hq', imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]¶
Bases:
TEBD2D
Implements the ‘Full Update’ version of 2D imaginary time evolution, where each application of a gate is fitted to the current tensors using a boundary contracted environment.
 Parameters:
psi0 (TensorNetwork2DVector) – The initial state.
ham (LocalHam2D) – The Hamiltonian consisting of local terms.
tau (float, optional) – The default local exponent; if considered as time, real values here imply imaginary time evolution.
max_bond ({'psi0', int, None}, optional) – The maximum bond dimension to keep when applying each gate.
gate_opts (dict, optional) – Supplied to quimb.tensor.tensor_2d.TensorNetwork2DVector.gate(), in addition to max_bond. By default contract is set to 'reduce-split' and cutoff is set to 0.0.
ordering (str, tuple[tuple[int]], callable, optional) – How to order the terms, if a string is given then use this as the strategy given to get_auto_ordering(). An explicit list of coordinate pairs can also be given. The default is to greedily form an 'edge coloring' based on the sorted list of Hamiltonian pair coordinates. If a callable is supplied it will be used to generate the ordering before each sweep.
second_order_reflect (bool, optional) – If True, then apply each layer of gates in ordering forward with half the time step, then the same with reverse order.
compute_energy_every (None or int, optional) – How often to compute and record the energy. If a positive integer 'n', the energy is computed before every nth sweep (i.e. including before the zeroth).
compute_energy_final (bool, optional) – Whether to compute and record the energy at the end of the sweeps regardless of the value of compute_energy_every. If you start sweeping again then this final energy is the same as the zeroth of the next set of sweeps and won't be recomputed.
compute_energy_opts (dict, optional) – Supplied to compute_local_expectation(). By default max_bond is set to max(8, D**2) where D is the maximum bond to use for applying the gate, cutoff is set to 0.0 and normalized is set to True.
compute_energy_fn (callable, optional) – Supply your own function to compute the energy, it should take the TEBD2D object as its only argument.
callback (callable, optional) – A custom callback to run after every sweep, it should take the TEBD2D object as its only argument. If it returns any value that boolean evaluates to True then the evolution is terminated.
progbar (boolean, optional) – Whether to show a live progress bar during the evolution.
fit_strategy ({'als', 'autodiff-fidelity'}, optional) – Core method used to fit the gate application:
'als': alternating least squares
'autodiff-fidelity': local fidelity using autodiff
fit_opts (dict, optional) – Advanced options for the gate application fitting functions. Defaults are inserted and can be accessed via the .fit_opts attribute.
compute_envs_every ({'term', 'group', 'sweep', int}, optional) – How often to recompute the environments used to fit the gate application:
'term': every gate
'group': every set of commuting gates (the default)
'sweep': every total sweep
int: every x number of total sweeps
pre_normalize (bool, optional) – Actively renormalize the state using the computed environments.
condition_tensors (bool, optional) – Whether to actively equalize tensor norms for numerical stability.
condition_balance_bonds (bool, optional) – If and when equalizing tensor norms, whether to also balance bonds as an additional conditioning.
contract_optimize (str, optional) – Contraction path optimizer to use for gate + env + sites contractions.
 state¶
The current state.
 Type:
 ham¶
The Hamiltonian being used to evolve.
 Type:
 energy¶
The energy of the current state, this will trigger a computation if the energy at this iteration hasn't been computed yet.
 Type:
 its¶
The corresponding sequence of iteration numbers that energies have been computed at.
 taus¶
The corresponding sequence of time steps that energies have been computed at.
 best¶
If keep_best was set then the best recorded energy and the corresponding state that was computed, keys 'energy' and 'state' respectively.
 Type:
 property fit_strategy¶
 set_state(psi)[source]¶
The default method for setting the current state  simply a copy. Subclasses can override this to perform additional transformations.
 property compute_envs_every¶
 _maybe_compute_plaquette_envs(force=False)[source]¶
Compute and store the plaquette environments for all local terms.
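Finally, a minimal usage sketch mirroring the SimpleUpdate example above; the boundary contraction bond dimension chi and the other values are illustrative choices:

import quimb as qu
import quimb.tensor as qtn

psi0 = qtn.PEPS.rand(3, 3, bond_dim=2)
ham = qtn.LocalHam2D(3, 3, H2=qu.ham_heis(2))

fu = qtn.FullUpdate(psi0, ham, D=3, chi=16, compute_energy_every=5)
fu.evolve(20, tau=0.1)
print(fu.energy)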