quimb.tensor.tensor_2d_tebd

Tools for performing TEBD-like algorithms on a 2D lattice.

Classes

TNOptimizer

Globally optimize tensors within a tensor network with respect to any loss function via automatic differentiation.

LocalHamGen

Representation of a local hamiltonian defined on a general graph. This combines all two site and one site terms into a single interaction per lattice pair.

TEBDGen

Generic class for performing time evolving block decimation on an arbitrary graph.

Tensor

A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions.

LocalHam2D

A 2D Hamiltonian represented as local terms. This combines all two site and one site terms into a single interaction per lattice pair.

TEBD2D

Generic class for performing two dimensional time evolving block decimation.

SimpleUpdate

A simple subclass of TEBD2D that overrides two key methods in order to keep 'diagonal gauges' living on the bonds of a PEPS.

FullUpdate

Implements the 'Full Update' version of 2D imaginary time evolution, where each gate application is fitted using a boundary contracted environment.

Functions

default_to_neutral_style(fn)

Wrap a function or method to use the neutral style by default.

pairwise(iterable)

Iterate over each pair of neighbours in iterable.

contract_strategy(strategy[, set_globally])

A context manager to temporarily set the default contraction strategy supplied as optimize to cotengra.

get_colors(color[, custom_colors, alpha])

Generate a sequence of rgbs for tag(s) color.

calc_plaquette_map(plaquettes)

Generate a dictionary of all the coordinate pairs in plaquettes mapped to the 'best' (smallest) rectangular plaquette that contains them.

calc_plaquette_sizes(coo_groups[, autogroup])

Find a sequence of plaquette blocksizes that will cover all the terms (coordinate pairs).

gen_2d_bonds(Lx, Ly[, steppers, coo_filter, cyclic])

Convenience function for tiling pairs of bond coordinates on a 2D lattice.

gen_long_range_path(ij_a, ij_b[, sequence])

Generate a string of coordinates, in order, from ij_a to ij_b.

gen_long_range_swap_path(ij_a, ij_b[, sequence])

Generate the coordinates of a series of swaps that would bring ij_a and ij_b together.

nearest_neighbors(coo)

plaquette_to_sites(p)

Turn a plaquette ((i0, j0), (di, dj)) into the sites it contains.

swap_path_to_long_range_path(swap_path, ij_a)

Generates the ordered long-range path - a sequence of coordinates - from a (long-range) swap path - a sequence of coordinate pairs.

conditioner(tn[, value, sweeps, balance_bonds])

gate_full_update_als(ket, env, bra, G, where, ...[, ...])

gate_full_update_autodiff_fidelity(ket, env, bra, G, ...)

get_default_full_update_fit_opts()

The default options for the full update gate fitting procedure.

parse_specific_gate_opts(strategy, fit_opts)

Parse the options from fit_opts which are relevant for strategy.

Module Contents

quimb.tensor.tensor_2d_tebd.default_to_neutral_style(fn)[source]

Wrap a function or method to use the neutral style by default.

quimb.tensor.tensor_2d_tebd.pairwise(iterable)[source]

Iterate over each pair of neighbours in iterable.
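
For example, a quick sketch of the expected behaviour:

>>> list(pairwise([0, 1, 2, 3]))
[(0, 1), (1, 2), (2, 3)]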

quimb.tensor.tensor_2d_tebd.contract_strategy(strategy, set_globally=False)[source]

A context manager to temporarily set the default contraction strategy supplied as optimize to cotengra. By default, this only sets the contract strategy for the current thread.

Parameters:
  • strategy (str) – The default contraction strategy to use, supplied as optimize to cotengra.

  • set_globally (bool, optional) – Whether to set the strategy just for this thread, or for all threads. If you are entering the context and then using multithreading, you might want True.
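
A minimal usage sketch (the random regular graph network from qtn.TN_rand_reg and the 'greedy' strategy are illustrative choices):

from quimb.tensor.tensor_2d_tebd import contract_strategy
import quimb.tensor as qtn

tn = qtn.TN_rand_reg(n=20, reg=3, D=2)

with contract_strategy('greedy'):
    # contractions performed in this block default to the
    # 'greedy' path optimizer
    x = tn.contract()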

quimb.tensor.tensor_2d_tebd.get_colors(color, custom_colors=None, alpha=None)[source]

Generate a sequence of rgbs for tag(s) color.

class quimb.tensor.tensor_2d_tebd.TNOptimizer(tn, loss_fn, norm_fn=None, loss_constants=None, loss_kwargs=None, tags=None, shared_tags=None, constant_tags=None, loss_target=None, optimizer='L-BFGS-B', progbar=True, bounds=None, autodiff_backend='AUTO', executor=None, callback=None, **backend_opts)[source]

Globally optimize tensors within a tensor network with respect to any loss function via automatic differentiation. If parametrized tensors are used, optimize the parameters rather than the raw arrays.

Parameters:
  • tn (TensorNetwork) – The core tensor network structure within which to optimize tensors.

  • loss_fn (callable or sequence of callable) – The function that takes tn (as well as loss_constants and loss_kwargs) and returns a single real ‘loss’ to be minimized. For Hamiltonians which can be represented as a sum over terms, an iterable collection of terms (e.g. list) can be given instead. In that case each term is evaluated independently and the sum taken as loss_fn. This can reduce the total memory requirements or allow for parallelization (see executor).

  • norm_fn (callable, optional) – A function to call before loss_fn that prepares or ‘normalizes’ the raw tensor network in some way.

  • loss_constants (dict, optional) – Extra tensor networks, tensors, dicts/list/tuples of arrays, or arrays which will be supplied to loss_fn but also converted to the correct backend array type.

  • loss_kwargs (dict, optional) – Extra options to supply to loss_fn (unlike loss_constants these are assumed to be simple options that don’t need conversion).

  • tags (str, or sequence of str, optional) – If supplied, only optimize tensors with any of these tags.

  • shared_tags (str, or sequence of str, optional) – If supplied, each tag in shared_tags corresponds to a group of tensors to be optimized together.

  • constant_tags (str, or sequence of str, optional) – If supplied, skip optimizing tensors with any of these tags. This ‘opt-out’ mode is overridden if either tags or shared_tags is supplied.

  • loss_target (float, optional) – Stop optimizing once this loss value is reached.

  • optimizer (str, optional) – Which scipy.optimize.minimize optimizer to use (the 'method' kwarg of that function). In addition, quimb implements a few custom optimizers compatible with this interface that you can reference by name - {'adam', 'nadam', 'rmsprop', 'sgd'}.

  • executor (None or Executor, optional) – To be used with term-by-term Hamiltonians. If supplied, this executor is used to parallelize the evaluation. Otherwise each term is evaluated in sequence. It should implement the basic concurrent.futures (PEP 3148) interface.

  • progbar (bool, optional) – Whether to show live progress.

  • bounds (None or (float, float), optional) – Constrain the optimized tensor entries within this range (if the scipy optimizer supports it).

  • autodiff_backend ({'jax', 'autograd', 'tensorflow', 'torch'}, optional) – Which backend library to use to perform the automatic differentation (and computation).

  • callback (callable, optional) –

    A function to call after each optimization step. It should take the current TNOptimizer instance as its only argument. Information such as the current loss and number of evaluations can then be accessed:

    def callback(tnopt):
        print(tnopt.nevals, tnopt.loss)
    

  • backend_opts – Supplied to the backend function compiler and array handler. For example jit_fn=True or device='cpu'.
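
As a minimal usage sketch (the fitting setup below is illustrative, not part of this class's API): variationally fit a low bond dimension MPS to a target state by minimizing the negative overlap:

import quimb.tensor as qtn

target = qtn.MPS_rand_state(L=8, bond_dim=4)
psi0 = qtn.MPS_rand_state(L=8, bond_dim=2)

def norm_fn(psi):
    # keep the state normalized while the raw arrays are optimized
    return psi / (psi.H @ psi) ** 0.5

def loss_fn(psi, target):
    # negative squared overlap - minimizing this maximizes fidelity
    return -abs(psi.H @ target) ** 2

tnopt = qtn.TNOptimizer(
    psi0,
    loss_fn=loss_fn,
    norm_fn=norm_fn,
    loss_constants={'target': target},  # converted to backend arrays
    autodiff_backend='autograd',
)
psi_opt = tnopt.optimize(100)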

_set_tn(tn)[source]
_reset_tracking_info(loss_target=None)[source]
reset(tn=None, clear_info=True, loss_target=None)[source]

Reset this optimizer without losing the compiled loss and gradient functions.

Parameters:
  • tn (TensorNetwork, optional) – Set this tensor network as the current state of the optimizer, it must exactly match the original tensor network.

  • clear_info (bool, optional) – Clear the tracked losses and iterations.

_maybe_init_pbar(n)[source]
_maybe_update_pbar()[source]
_maybe_close_pbar()[source]
_check_loss_target()[source]
_maybe_call_callback()[source]
vectorized_value(x)[source]

The value of the loss function at vector x.

vectorized_value_and_grad(x)[source]

The value and gradient of the loss function at vector x.

vectorized_hessp(x, p)[source]

The action of the hessian at point x on vector p.

__repr__()[source]

Return repr(self).

property d
property nevals
The number of gradient evaluations.
property optimizer
The underlying optimizer that works with the vectorized functions.
property bounds
get_tn_opt()[source]

Extract the optimized tensor network, this is a three part process:

  1. inject the current optimized vector into the target tensor network,

  2. run it through norm_fn,

  3. drop any tags used to identify variables.

Returns:

tn_opt

Return type:

TensorNetwork

optimize(n, tol=None, jac=True, hessp=False, optlib='scipy', **options)[source]

Run the optimizer for n function evaluations, using by default scipy.optimize.minimize() as the driver for the vectorized computation. Supplying the gradient and hessian vector product is controlled by the jac and hessp options respectively.

Parameters:
  • n (int) – Notionally the maximum number of iterations for the optimizer, note that depending on the optimizer being used, this may correspond to number of function evaluations rather than just iterations.

  • tol (None or float, optional) – Tolerance for convergence, note that various more specific tolerances can usually be supplied to options, depending on the optimizer being used.

  • jac (bool, optional) – Whether to supply the jacobian, i.e. gradient, of the loss function.

  • hessp (bool, optional) – Whether to supply the hessian vector product of the loss function.

  • optlib ({'scipy', 'nlopt'}, optional) – Which optimization library to use.

  • options – Supplied to scipy.optimize.minimize() or whichever optimizer is being used.

Returns:

tn_opt

Return type:

TensorNetwork

optimize_scipy(n, tol=None, jac=True, hessp=False, **options)[source]

Scipy based optimization, see optimize() for details.

optimize_basinhopping(n, nhop, temperature=1.0, jac=True, hessp=False, **options)[source]

Run the optimizer using scipy.optimize.basinhopping() as the driver for the vectorized computation. This performs nhop local optimizations, each with n iterations.

Parameters:
  • n (int) – Number of iterations per local optimization.

  • nhop (int) – Number of local optimizations to hop between.

  • temperature (float, optional) – The 'temperature' to use for the basin hopping accept or reject criterion (supplied to scipy.optimize.basinhopping() as T).

  • options – Supplied to the inner scipy.optimize.minimize() call.

Returns:

tn_opt

Return type:

TensorNetwork

optimize_nlopt(n, tol=None, jac=True, hessp=False, ftol_rel=None, ftol_abs=None, xtol_rel=None, xtol_abs=None)[source]

Run the optimizer for n function evaluations, using nlopt as the backend library to run the optimization. Whether the gradient is computed depends on which optimizer is selected, see valid options at https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/.

The following scipy optimizer options are automatically translated to the corresponding nlopt algorithms: {“l-bfgs-b”, “slsqp”, “tnc”, “cobyla”}.

Parameters:
  • n (int) – The maximum number of iterations for the optimizer.

  • tol (None or float, optional) – Tolerance for convergence, here this is taken to be the relative tolerance for the loss (ftol_rel below overrides this).

  • jac (bool, optional) – Whether to supply the jacobian, i.e. gradient, of the loss function.

  • hessp (bool, optional) – Whether to supply the hessian vector product of the loss function.

  • ftol_rel (float, optional) – Set relative tolerance on function value.

  • ftol_abs (float, optional) – Set absolute tolerance on function value.

  • xtol_rel (float, optional) – Set relative tolerance on optimization parameters.

  • xtol_abs (float, optional) – Set absolute tolerances on optimization parameters.

Returns:

tn_opt

Return type:

TensorNetwork

optimize_ipopt(n, tol=None, **options)[source]

Run the optimizer for n function evaluations, using ipopt as the backend library to run the optimization via the python package cyipopt.

Parameters:

n (int) – The maximum number of iterations for the optimizer.

Returns:

tn_opt

Return type:

TensorNetwork

optimize_nevergrad(n)[source]

Run the optimizer for n function evaluations, using nevergrad as the backend library to run the optimization. As the name suggests, the gradient is not required for this method.

Parameters:

n (int) – The maximum number of iterations for the optimizer.

Returns:

tn_opt

Return type:

TensorNetwork

plot(xscale='symlog', xscale_linthresh=20, zoom='auto', hlines=())[source]

Plot the loss function as a function of the number of iterations.

Parameters:
  • xscale (str, optional) – The scale of the x-axis. Default is "symlog", i.e. linear for the first part of the plot, and logarithmic for the rest, changing at xscale_linthresh.

  • xscale_linthresh (int, optional) – The threshold for the change from linear to logarithmic scale, if xscale is "symlog". Default is 20.

  • zoom (None or int, optional) – If not None, show an inset plot of the last zoom iterations.

  • hlines (dict, optional) – A dictionary of horizontal lines to plot. The keys are the labels of the lines, and the values are the y-values of the lines.

Returns:

  • fig (matplotlib.figure.Figure) – The figure object.

  • ax (matplotlib.axes.Axes) – The axes object.

quimb.tensor.tensor_2d_tebd.calc_plaquette_map(plaquettes)[source]

Generate a dictionary of all the coordinate pairs in plaquettes mapped to the ‘best’ (smallest) rectangular plaquette that contains them.

Examples

Consider 4 sites, with one 2x2 plaquette and two vertical (2x1) and horizontal (1x2) plaquettes each:

>>> plaquettes = [
...     # 2x2 plaquette covering all sites
...     ((0, 0), (2, 2)),
...     # horizontal plaquettes
...     ((0, 0), (1, 2)),
...     ((1, 0), (1, 2)),
...     # vertical plaquettes
...     ((0, 0), (2, 1)),
...     ((0, 1), (2, 1)),
... ]
>>> calc_plaquette_map(plaquettes)
{((0, 0), (0, 1)): ((0, 0), (1, 2)),
 ((0, 0), (1, 0)): ((0, 0), (2, 1)),
 ((0, 0), (1, 1)): ((0, 0), (2, 2)),
 ((0, 1), (1, 0)): ((0, 0), (2, 2)),
 ((0, 1), (1, 1)): ((0, 1), (2, 1)),
 ((1, 0), (1, 1)): ((1, 0), (1, 2))}

Now each of the six coordinate pairs is mapped to one of the plaquettes, but to the smallest one that contains it. So the 2x2 plaquette (specified by ((0, 0), (2, 2))) would only be used for the diagonal terms here.

quimb.tensor.tensor_2d_tebd.calc_plaquette_sizes(coo_groups, autogroup=True)[source]

Find a sequence of plaquette blocksizes that will cover all the terms (coordinate pairs) in coo_groups.

Parameters:
  • coo_groups (sequence of tuple[tuple[int]] or tuple[int]) – The sequence of 2D coordinates pairs describing terms. Each should either be a single 2D coordinate or a sequence of 2D coordinates.

  • autogroup (bool, optional) – Whether to return the minimal sequence of blocksizes that will cover all terms or merge them into a single ((x_bsz, y_bsz),).

Returns:

bszs – Pairs of blocksizes.

Return type:

tuple[tuple[int]]

Examples

Some nearest neighbour interactions:

>>> H2 = {None: qu.ham_heis(2)}
>>> ham = qtn.LocalHam2D(10, 10, H2)
>>> calc_plaquette_sizes(ham.terms.keys())
((1, 2), (2, 1))
>>> calc_plaquette_sizes(ham.terms.keys(), autogroup=False)
((2, 2),)

If we add any next nearest neighbour interaction then we are going to need the (2, 2) blocksize in any case:

>>> H2[(1, 1), (2, 2)] = 0.5 * qu.ham_heis(2)
>>> ham = qtn.LocalHam2D(10, 10, H2)
>>> calc_plaquette_sizes(ham.terms.keys())
((2, 2),)

If we add longer range interactions (non-diagonal next nearest) we again can benefit from multiple plaquette blocksizes:

>>> H2[(1, 1), (1, 3)] = 0.25 * qu.ham_heis(2)
>>> H2[(1, 1), (3, 1)] = 0.25 * qu.ham_heis(2)
>>> ham = qtn.LocalHam2D(10, 10, H2)
>>> calc_plaquette_sizes(ham.terms.keys())
((1, 3), (2, 2), (3, 1))

Or choose the plaquette blocksize that covers all terms:

>>> calc_plaquette_sizes(ham.terms.keys(), autogroup=False)
((3, 3),)
quimb.tensor.tensor_2d_tebd.gen_2d_bonds(Lx, Ly, steppers=None, coo_filter=None, cyclic=False)[source]

Convenience function for tiling pairs of bond coordinates on a 2D lattice given a function like lambda i, j: (i + 1, j + 1).

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • steppers (callable or sequence of callable, optional) – Function(s) that take args (i, j) and generate another coordinate, thus defining a bond. Only valid steps are taken. If not given, defaults to nearest neighbor bonds.

  • coo_filter (callable, optional) – Function that takes args (i, j) and only returns True if this is to be a valid starting coordinate.

  • cyclic (bool, optional) – Whether to use cyclic (periodic) boundary conditions.

Yields:

bond (tuple[tuple[int, int], tuple[int, int]]) – A pair of coordinates.

Examples

Generate nearest neighbor bonds:

>>> for bond in gen_2d_bonds(2, 2, [lambda i, j: (i, j + 1),
...                                 lambda i, j: (i + 1, j)]):
...     print(bond)
((0, 0), (0, 1))
((0, 0), (1, 0))
((0, 1), (1, 1))
((1, 0), (1, 1))

Generate next nearest neighbor diagonal bonds:

>>> for bond in gen_2d_bonds(2, 2, [lambda i, j: (i + 1, j + 1),
...                                 lambda i, j: (i + 1, j - 1)]):
...     print(bond)
((0, 0), (1, 1))
((0, 1), (1, 0))
quimb.tensor.tensor_2d_tebd.gen_long_range_path(ij_a, ij_b, sequence=None)[source]

Generate a string of coordinates, in order, from ij_a to ij_b.

Parameters:
  • ij_a ((int, int)) – Coordinate of site ‘a’.

  • ij_b ((int, int)) – Coordinate of site ‘b’.

  • sequence (None, iterable of {'v', 'h'}, or 'random', optional) – What order to cycle through and try and perform moves in, ‘v’, ‘h’ standing for move vertically and horizontally respectively. The default is ('v', 'h').

Returns:

The path, each element is a single coordinate.

Return type:

generator[tuple[int]]
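
A sketch of typical behaviour (the exact path depends on sequence):

path = tuple(gen_long_range_path((1, 1), (3, 2)))
# with the default ('v', 'h') cycling this alternates unit vertical and
# horizontal moves, e.g. ((1, 1), (2, 1), (2, 2), (3, 2))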

quimb.tensor.tensor_2d_tebd.gen_long_range_swap_path(ij_a, ij_b, sequence=None)[source]

Generate the coordinates of a series of swaps that would bring ij_a and ij_b together.

Parameters:
  • ij_a ((int, int)) – Coordinate of site ‘a’.

  • ij_b ((int, int)) – Coordinate of site ‘b’.

  • sequence (None, iterable of {'av', 'bv', 'ah', 'bh'}, or 'random', optional) – What order to cycle through and try and perform moves in, ‘av’, ‘bv’, ‘ah’, ‘bh’ standing for move ‘a’ vertically, ‘b’ vertically, ‘a’ horizontally, and ‘b’ horizontally respectively. The default is ('av', 'bv', 'ah', 'bh').

Returns:

The path, each element is two coordinates to swap.

Return type:

generator[tuple[tuple[int]]]

quimb.tensor.tensor_2d_tebd.nearest_neighbors(coo)[source]
quimb.tensor.tensor_2d_tebd.plaquette_to_sites(p)[source]

Turn a plaquette ((i0, j0), (di, dj)) into the sites it contains.

Examples

>>> plaquette_to_sites([(3, 4), (2, 2)])
((3, 4), (3, 5), (4, 4), (4, 5))
quimb.tensor.tensor_2d_tebd.swap_path_to_long_range_path(swap_path, ij_a)[source]

Generates the ordered long-range path - a sequence of coordinates - from a (long-range) swap path - a sequence of coordinate pairs.

class quimb.tensor.tensor_2d_tebd.LocalHamGen(H2, H1=None)[source]

Representation of a local hamiltonian defined on a general graph. This combines all two site and one site terms into a single interaction per lattice pair, and caches operations on the terms such as getting their exponential. The sites (nodes) should be hashable and comparable.

Parameters:
  • H2 (dict[tuple[node], array_like]) – The interaction terms, with each key being a tuple of nodes defining an edge and each value the local hamiltonian term for those two nodes.

  • H1 (array_like or dict[node, array_like], optional) – The one site term(s). If a single array is given, it is assumed to be the default onsite term for all sites. If a dict is supplied, the keys should be specific nodes with the values the array representing the local term for that site. A default term for all remaining sites can still be supplied with the key None.

terms

The total effective local term for each interaction (with single site terms appropriately absorbed). Each key is a pair of coordinates site_a, site_b with site_a < site_b.

Type:

dict[tuple, array_like]

property nsites
The number of sites in the system.
items()[source]

Iterate over all terms in the hamiltonian. This is mostly for convenient compatibility with compute_local_expectation.

_convert_from_qarray_cached(x)[source]
_flip_cached(x)[source]
_add_cached(x, y)[source]
_div_cached(x, y)[source]
_op_id_cached(x)[source]
_id_op_cached(x)[source]
_expm_cached(x, y)[source]
get_gate(where)[source]

Get the local term for pair where, cached.

get_gate_expm(where, x)[source]

Get the local term for pair where, matrix exponentiated by x, and cached.

apply_to_arrays(fn)[source]

Apply the function fn to all the arrays representing terms.

_nx_color_ordering(strategy='smallest_first', interchange=True)[source]

Generate a term ordering based on a coloring on the line graph.

get_auto_ordering(order='sort', **kwargs)[source]

Get an ordering of the terms to use with TEBD, for example. The default is to sort the coordinates then greedily group them into commuting sets.

Parameters:

order ({'sort', None, 'random', str}) –

How to order the terms before greedily grouping them into commuting (non-coordinate overlapping) sets:

  • 'sort' will sort the coordinate pairs first.

  • None will use the current order of terms which should match the order they were supplied to this LocalHam2D instance.

  • 'random' will randomly shuffle the coordinate pairs before grouping them - not the same as returning a completely random order.

  • 'random-ungrouped' will randomly shuffle the coordinate pairs but not group them at all with respect to commutation.

Any other option will be passed as a strategy to networkx.coloring.greedy_color to generate the ordering.

Returns:

Sequence of coordinate pairs.

Return type:

list[tuple[node]]

__repr__()[source]

Return repr(self).

draw(ordering='sort', show_norm=True, figsize=None, fontsize=8, legend=True, ax=None, **kwargs)[source]

Plot this Hamiltonian as a network.

Parameters:
  • ordering ({'sort', None, 'random'}, optional) – An ordering of the terms, or an argument to be supplied to quimb.tensor.tensor_arbgeom_tebd.LocalHamGen.get_auto_ordering() to generate this automatically.

  • show_norm (bool, optional) – Show the norm of each term as edge labels.

  • figsize (None or tuple[int], optional) – Size of the figure, defaults to size of Hamiltonian.

  • fontsize (int, optional) – Font size for norm labels.

  • legend (bool, optional) – Whether to show the legend of which terms are in which group.

  • ax (None or matplotlib.Axes, optional) – Add to an existing set of axes.

graph[source]
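
A minimal construction sketch (the triangle graph and the onsite field term here are purely illustrative):

import quimb as qu
from quimb.tensor.tensor_2d_tebd import LocalHamGen

# Heisenberg interactions on the edges of a small triangle graph,
# plus a uniform onsite Z term absorbed into the two site terms
edges = [(0, 1), (1, 2), (0, 2)]
H2 = {edge: qu.ham_heis(2) for edge in edges}
ham = LocalHamGen(H2, H1=qu.pauli('Z'))

print(ham.nsites)                    # 3
ordering = ham.get_auto_ordering()   # sorted, grouped into commuting sets
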
class quimb.tensor.tensor_2d_tebd.TEBDGen(psi0, ham, tau=0.01, D=None, imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]

Generic class for performing time evolving block decimation on an arbitrary graph, i.e. applying the exponential of a Hamiltonian using a product formula that involves applying local exponentiated gates only.

sweep(tau)[source]

Perform a full sweep of gates at every pair.

\[\psi \rightarrow \prod_{\{ij\}} \exp(-\tau H_{ij}) \psi\]
_update_progbar(pbar)[source]
evolve(steps, tau=None, progbar=None)[source]

Evolve the state with the local Hamiltonian for steps steps with time step tau.

property state
Return a copy of the current state.
property n
The number of sweeps performed.
property D
The maximum bond dimension.
_check_energy()[source]

Logic for maybe computing the energy if needed.

property energy
Return the energy of current state, computing it only if necessary.
get_state()[source]

The default method for retrieving the current state - simply a copy. Subclasses can override this to perform additional transformations.

set_state(psi)[source]

The default method for setting the current state - simply a copy. Subclasses can override this to perform additional transformations.

presweep(i)[source]

Perform any computations required before the sweep (and energy computation). For the basic TEBD this is nothing.

gate(U, where)[source]

Perform single gate U at coordinate pair where. This is the most common method to override.

compute_energy()[source]

Compute and return the energy of the current state. Subclasses can override this with a custom method to compute the energy.

__repr__()[source]

Return repr(self).

class quimb.tensor.tensor_2d_tebd.Tensor(data=1.0, inds=(), tags=None, left_inds=None)[source]

A labelled, tagged n-dimensional array. The index labels are used instead of axis numbers to identify dimensions, and are preserved through operations. The tags are used to identify the tensor within networks, and are combined when tensors are contracted together.

Parameters:
  • data (numpy.ndarray) – The n-dimensional data.

  • inds (sequence of str) – The index labels for each dimension. Must match the number of dimensions of data.

  • tags (sequence of str, optional) – Tags with which to identify and group this tensor. These will be converted into an oset.

  • left_inds (sequence of str, optional) – Which, if any, indices to group as ‘left’ indices of an effective matrix. This can be useful, for example, when automatically applying unitary constraints to impose a certain flow on a tensor network but at the atomistic (Tensor) level.

Examples

Basic construction:

>>> from quimb import randn
>>> from quimb.tensor import Tensor
>>> X = Tensor(randn((2, 3, 4)), inds=['a', 'b', 'c'], tags={'X'})
>>> Y = Tensor(randn((3, 4, 5)), inds=['b', 'c', 'd'], tags={'Y'})

Indices are automatically aligned, and tags combined, when contracting:

>>> X @ Y
Tensor(shape=(2, 5), inds=('a', 'd'), tags={'Y', 'X'})
__slots__ = ('_data', '_inds', '_tags', '_left_inds', '_owners')
_set_data(data)[source]
_set_inds(inds)[source]
_set_tags(tags)[source]
_set_left_inds(left_inds)[source]
get_params()[source]

A simple function that returns the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

set_params(params)[source]

A simple function that sets the ‘parameters’ of the underlying data array. This is mainly for providing an interface for ‘structured’ arrays e.g. with block sparsity to interact with optimization.

copy(deep=False, virtual=False)[source]

Copy this tensor.

Note

By default (deep=False), the underlying array will not be copied.

Parameters:
  • deep (bool, optional) – Whether to copy the underlying data as well.

  • virtual (bool, optional) – To conveniently mimic the behaviour of taking a virtual copy of a tensor network, this simply returns self.

__copy__[source]
property data
property inds
property tags
property left_inds
check()[source]

Do some basic diagnostics on this tensor, raising errors if something is wrong.

property owners
add_owner(tn, tid)[source]

Add tn as owner of this Tensor - its tag and ind maps will be updated whenever this tensor is retagged or reindexed.

remove_owner(tn)[source]

Remove TensorNetwork tn as an owner of this Tensor.

check_owners()[source]

Check if this tensor is ‘owned’ by any alive TensorNetworks. Also trim any weakrefs to dead TensorNetworks.

_apply_function(fn)[source]
modify(**kwargs)[source]

Overwrite the data of this tensor in place.

Parameters:
  • data (array, optional) – New data.

  • apply (callable, optional) – A function to apply to the current data. If data is also given this is applied subsequently.

  • inds (sequence of str, optional) – New tuple of indices.

  • tags (sequence of str, optional) – New tags.

  • left_inds (sequence of str, optional) – New grouping of indices to be ‘on the left’.

apply_to_arrays(fn)[source]

Apply the function fn to the underlying data array(s). This is meant for changing how the raw arrays are backed (e.g. converting between dtypes or libraries) but not their ‘numerical meaning’.

isel(selectors, inplace=False)[source]

Select specific values for some dimensions/indices of this tensor, thereby removing them. Analogous to X[:, :, 3, :, :] with arrays. The indices to select from can be specified either by integer, in which case the corresponding index is removed, or by a slice.

Parameters:
  • selectors (dict[str, int], dict[str, slice]) – Mapping of index(es) to which value to take.

  • inplace (bool, optional) – Whether to select inplace or not.

Return type:

Tensor

Examples

>>> T = rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
>>> T.isel({'b': -1})
Tensor(shape=(2, 4), inds=('a', 'c'), tags=())

See also

TensorNetwork.isel

isel_[source]
add_tag(tag)[source]

Add a tag or multiple tags to this tensor. Unlike self.tags.add this also updates any TensorNetwork objects viewing this Tensor.

expand_ind(ind, size, mode=None, rand_strength=None, rand_dist='normal')[source]

Inplace increase the size of the dimension of ind, the new array entries will be filled with zeros by default.

Parameters:
  • ind (str) – Name of the index to expand.

  • size (int) – Size of the expanded index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select from zeros or random depending on non-zero rand_strength.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.

new_ind(name, size=1, axis=0, mode=None, rand_strength=None, rand_dist='normal')[source]

Inplace add a new index - a named dimension. If size is specified to be greater than one then the new array entries will be filled with zeros.

Parameters:
  • name (str) – Name of the new index.

  • size (int, optional) – Size of the new index.

  • axis (int, optional) – Position of the new index.

  • mode ({None, 'zeros', 'repeat', 'random'}, optional) – How to fill any new array entries. If 'zeros' then fill with zeros, if 'repeat' then repeatedly tile the existing entries. If 'random' then fill with random entries drawn from rand_dist, multiplied by rand_strength. If None then select from zeros or random depending on non-zero rand_strength.

  • rand_strength (float, optional) – If mode='random', a multiplicative scale for the random entries, defaulting to 1.0. If mode is None then supplying a non-zero value here triggers mode='random'.

  • rand_dist ({'normal', 'uniform', 'exp'}, optional) – If mode='random', the distribution to draw the random entries from.

new_bond[source]
new_ind_with_identity(name, left_inds, right_inds, axis=0)[source]

Inplace add a new index, where the newly stacked array entries form the identity from left_inds to right_inds. Selecting 0 or 1 for the new index name thus is like ‘turning off’ this tensor if viewed as an operator.

Parameters:
  • name (str) – Name of the new index.

  • left_inds (tuple[str]) – Names of the indices forming the left hand side of the operator.

  • right_inds (tuple[str]) – Names of the indices forming the right hand side of the operator. The dimensions of these must match those of left_inds.

  • axis (int, optional) – Position of the new index.

new_ind_pair_with_identity(new_left_ind, new_right_ind, d, inplace=False)[source]

Expand this tensor with two new indices of size d, by taking an (outer) tensor product with the identity operator. The two new indices are added as axes at the start of the tensor.

Parameters:
  • new_left_ind (str) – Name of the new left index.

  • new_right_ind (str) – Name of the new right index.

  • d (int) – Size of the new indices.

  • inplace (bool, optional) – Whether to perform the expansion inplace.

Return type:

Tensor

new_ind_pair_with_identity_[source]
conj(inplace=False)[source]

Conjugate this tensor's data (does nothing to indices).

conj_[source]
property H
Conjugate this tensor's data (does nothing to indices).
property shape
The size of each dimension.
property ndim
The number of dimensions.
property size
The total number of array elements.
property dtype
The data type of the array elements.
property backend
The backend inferred from the data.
iscomplex()[source]
astype(dtype, inplace=False)[source]

Change the type of this tensor to dtype.

astype_[source]
max_dim()[source]

Return the maximum size of any dimension, or 1 if scalar.

ind_size(ind)[source]

Return the size of the dimension corresponding to ind.

inds_size(inds)[source]

Return the total size of dimensions corresponding to inds.

shared_bond_size(other)[source]

Get the total size of the shared index(es) with other.

inner_inds()[source]

Get all indices that appear on two or more tensors.

transpose(*output_inds, inplace=False)[source]

Transpose this tensor - permuting the order of both the data and the indices. This operation is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Note that to compute the traditional ‘transpose’ of an operator within a contraction, for example, you would just use reindexing, not this method.

Parameters:
  • output_inds (sequence of str) – The desired output sequence of indices.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor

transpose_[source]
transpose_like(other, inplace=False)[source]

Transpose this tensor to match the indices of other, allowing for one index to be different. E.g. if self.inds = ('a', 'b', 'c', 'x') and other.inds = ('b', 'a', 'd', 'c') then ‘x’ will be aligned with ‘d’ and the output inds will be ('b', 'a', 'x', 'c')

Parameters:
  • other (Tensor) – The tensor to match.

  • inplace (bool, optional) – Perform the transposition inplace.

Returns:

tt – The transposed tensor.

Return type:

Tensor

See also

transpose

transpose_like_[source]
moveindex(ind, axis, inplace=False)[source]

Move the index ind to position axis. Like transpose, this permutes the order of both the data and the indices and is mainly for ensuring a certain data layout since for most operations the specific order of indices doesn’t matter.

Parameters:
  • ind (str) – The index to move.

  • axis (int) – The new position to move ind to. Can be negative.

  • inplace (bool, optional) – Whether to perform the move inplace or not.

Return type:

Tensor

moveindex_[source]
item()[source]

Return the scalar value of this tensor, if it has a single element.

trace(left_inds, right_inds, preserve_tensor=False, inplace=False)[source]

Trace index or indices left_inds with right_inds, removing them.

Parameters:
  • left_inds (str or sequence of str) – The left indices to trace, order matching right_inds.

  • right_inds (str or sequence of str) – The right indices to trace, order matching left_inds.

  • preserve_tensor (bool, optional) – If True, a tensor will be returned even if no indices remain.

  • inplace (bool, optional) – Perform the trace inplace.

Returns:

z

Return type:

Tensor or scalar

sum_reduce(ind, inplace=False)[source]

Sum over index ind, removing it from this tensor.

Parameters:
  • ind (str) – The index to sum over.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

sum_reduce_[source]
vector_reduce(ind, v, inplace=False)[source]

Contract the vector v with the index ind of this tensor, removing it.

Parameters:
  • ind (str) – The index to contract.

  • v (array_like) – The vector to contract with.

  • inplace (bool, optional) – Whether to perform the reduction inplace.

Return type:

Tensor

vector_reduce_[source]
collapse_repeated(inplace=False)[source]

Take the diagonals of any repeated indices, such that each index only appears once.

collapse_repeated_[source]
contract(*others, output_inds=None, **opts)[source]
direct_product(other, sum_inds=(), inplace=False)[source]
direct_product_[source]
split(*args, **kwargs)[source]
compute_reduced_factor(side, left_inds, right_inds, **split_opts)[source]
distance(other, **contract_opts)[source]
distance_normalized[source]
gate(G, ind, preserve_inds=True, inplace=False)[source]

Gate this tensor - contract a matrix into one of its indices without changing its indices. Unlike contract, G is a raw array and the tensor remains with the same set of indices.

Parameters:
  • G (2D array_like) – The matrix to gate the tensor index with.

  • ind (str) – Which index to apply the gate to.

Return type:

Tensor

Examples

Create a random tensor of 4 qubits:

>>> t = qtn.rand_tensor(
...    shape=[2, 2, 2, 2],
...    inds=['k0', 'k1', 'k2', 'k3'],
... )

Create another tensor with an X gate applied to qubit 2:

>>> Gt = t.gate(qu.pauli('X'), 'k2')

The contraction of these two tensors is now the expectation of that operator:

>>> t.H @ Gt
-4.108910576149794
gate_[source]
singular_values(left_inds, method='svd')[source]

Return the singular values associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensors indices that defines ‘left’.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Returns:

The singular values.

Return type:

1d-array
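
A small sketch:

t = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
s = t.singular_values(['a'])
# the 2 singular values of the (2, 12) matricization 'a' vs ('b', 'c')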

entropy(left_inds, method='svd')[source]

Return the entropy associated with splitting this tensor according to left_inds.

Parameters:
  • left_inds (sequence of str) – A subset of this tensors indices that defines ‘left’.

  • method ({'svd', 'eig'}) – Whether to use the SVD or eigenvalue decomposition to get the singular values.

Return type:

float

retag(retag_map, inplace=False)[source]

Rename the tags of this tensor, optionally, in-place.

Parameters:
  • retag_map (dict-like) – Mapping of pairs {old_tag: new_tag, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed tags will be returned.

retag_[source]
reindex(index_map, inplace=False)[source]

Rename the indices of this tensor, optionally in-place.

Parameters:
  • index_map (dict-like) – Mapping of pairs {old_ind: new_ind, ...}.

  • inplace (bool, optional) – If False (the default), a copy of this tensor with the changed inds will be returned.

reindex_[source]
fuse(fuse_map, inplace=False)[source]

Combine groups of indices into single indices.

Parameters:

fuse_map (dict_like or sequence of tuples.) – Mapping like: {new_ind: sequence of existing inds, ...} or an ordered mapping like [(new_ind_1, old_inds_1), ...] in which case the output tensor’s fused inds will be ordered. In both cases the new indices are created at the minimum axis of any of the indices that will be fused.

Returns:

The transposed, reshaped and re-labeled tensor.

Return type:

Tensor

fuse_[source]
unfuse(unfuse_map, shape_map, inplace=False)[source]

Reshape single indices into groups of multiple indices.

Parameters:
  • unfuse_map (dict_like or sequence of tuples.) – Mapping like: {existing_ind: sequence of new inds, ...} or an ordered mapping like [(old_ind_1, new_inds_1), ...] in which case the output tensor’s new inds will be ordered. In both cases the new indices are created at the old index’s position of the tensor’s shape

  • shape_map (dict_like or sequence of tuples) – Mapping like: {old_ind: new_ind_sizes, ...} or an ordered mapping like [(old_ind_1, new_ind_sizes_1), ...].

Returns:

The transposed, reshaped and re-labeled tensor

Return type:

Tensor

unfuse_[source]
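
A minimal sketch of a fuse / unfuse round trip:

t = qtn.rand_tensor((2, 3, 4), inds=('a', 'b', 'c'))
tf = t.fuse({'ab': ('a', 'b')})
# tf.inds == ('ab', 'c') and tf.shape == (6, 4)
tu = tf.unfuse({'ab': ('a', 'b')}, {'ab': (2, 3)})
# tu has inds ('a', 'b', 'c') and shape (2, 3, 4) again
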
to_dense(*inds_seq, to_qarray=False)[source]

Convert this Tensor into a dense array, with a single dimension for each group of inds in inds_seq. E.g. to convert several sites into a density matrix: T.to_dense(('k0', 'k1'), ('b0', 'b1')).

to_qarray[source]
squeeze(include=None, exclude=None, inplace=False)[source]

Drop any singlet dimensions from this tensor.

Parameters:
  • include (sequence of str, optional) – Only squeeze dimensions with indices in this list.

  • exclude (sequence of str, optional) – Squeeze all dimensions except those with indices in this list.

  • inplace (bool, optional) – Whether to modify the original tensor inplace or return a new one.

Return type:

Tensor

squeeze_[source]
largest_element()[source]

Return the largest element, in terms of absolute magnitude, of this tensor.

idxmin(f=None)[source]

Get the index configuration of the minimum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the minimum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the minimum element.

Return type:

dict[str, int]

idxmax(f=None)[source]

Get the index configuration of the maximum element of this tensor, optionally applying f first.

Parameters:

f (callable or str, optional) – If a callable, apply this function to the tensor data before finding the maximum element. If a string, apply autoray.do(f, data).

Returns:

Mapping of index names to their values at the maximum element.

Return type:

dict[str, int]

norm()[source]

Frobenius norm of this tensor:

\[\|t\|_F = \sqrt{\mathrm{Tr} \left(t^{\dagger} t\right)}\]

where the trace is taken over all indices. Equivalent to the square root of the sum of squared singular values across any partition.
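
A quick numerical sketch of this equivalence:

t = qtn.rand_tensor((3, 4), inds=('a', 'b'))
assert abs(t.norm() - (t.H @ t) ** 0.5) < 1e-12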

normalize(inplace=False)[source]
normalize_[source]
symmetrize(ind1, ind2, inplace=False)[source]

Hermitian symmetrize this tensor for indices ind1 and ind2. I.e. T = (T + T.conj().T) / 2, where the transpose is taken only over the specified indices.

symmetrize_[source]
isometrize(left_inds=None, method='qr', inplace=False)[source]

Make this tensor unitary (or isometric) with respect to left_inds. The underlying method is set by method.

Parameters:
  • left_inds (sequence of str) – The indices to group together and treat as the left hand side of a matrix.

  • method (str, optional) –

    The method used to generate the isometry. The options are:

    • ”qr”: use the Q factor of the QR decomposition of x with the constraint that the diagonal of R is positive.

    • ”svd”: uses U @ VH of the SVD decomposition of x. This is useful for finding the ‘closest’ isometric matrix to x, such as when it has been expanded with noise etc., but it is less stable for differentiation / optimization.

    • ”exp”: use the matrix exponential of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization, but more expensive for non-square x.

    • ”cayley”: use the Cayley transform of x - dag(x), first completing x with zeros if it is rectangular. This is a good parametrization for optimization (one of the few compatible with e.g. HIPS/autograd), but more expensive for non-square x.

    • ”householder”: use the Householder reflection method directly. This requires that the backend implements “linalg.householder_product”.

    • ”torch_householder”: use the Householder reflection method directly, using the torch_householder package. This requires that the package is installed and that the backend is "torch". This is generally the best parametrizing method for “torch” if available.

    • ”mgs”: use a python implementation of the modified Gram Schmidt method directly. This is slow if not compiled but a useful reference.

    Not all backends support all methods or differentiating through all methods.

  • inplace (bool, optional) – Whether to perform the unitization inplace.

Return type:

Tensor

isometrize_[source]
unitize[source]
unitize_
randomize(dtype=None, inplace=False, **randn_opts)[source]

Randomize the entries of this tensor.

Parameters:
  • dtype ({None, str}, optional) – The data type of the random entries. If left as the default None, then the data type of the current array will be used.

  • inplace (bool, optional) – Whether to perform the randomization inplace, by default False.

  • randn_opts – Supplied to randn().

Return type:

Tensor

randomize_[source]
flip(ind, inplace=False)[source]

Reverse the axis on this tensor corresponding to ind. Like performing e.g. X[:, :, ::-1, :].

flip_[source]
multiply_index_diagonal(ind, x, inplace=False)[source]

Multiply this tensor by 1D array x as if it were a diagonal tensor being contracted into index ind.

multiply_index_diagonal_[source]
almost_equals(other, **kwargs)[source]

Check if this tensor is almost the same as another.

drop_tags(tags=None)[source]

Drop certain tags, defaulting to all, from this tensor.

bonds(other)[source]

Return a tuple of the shared indices between this tensor and other.

filter_bonds(other)[source]

Sort this tensor’s indices into a list of those that it shares and doesn’t share with another tensor.

Parameters:

other (Tensor) – The other tensor.

Returns:

shared, unshared – The shared and unshared indices.

Return type:

(tuple[str], tuple[str])

__imul__(other)[source]
__itruediv__(other)[source]
__and__(other)[source]

Combine with another Tensor or TensorNetwork into a new TensorNetwork.

__or__(other)[source]

Combine virtually (no copies made) with another Tensor or TensorNetwork into a new TensorNetwork.

__matmul__(other)[source]

Explicitly contract with another tensor. Avoids some slight overhead of calling the full tensor_contract().

negate(inplace=False)[source]

Negate this tensor.

negate_[source]
__neg__()[source]

Negate this tensor.

as_network(virtual=True)[source]

Return a TensorNetwork with only this tensor.

draw(*args, **kwargs)[source]

Plot a graph of this tensor and its indices.

graph[source]
visualize[source]
__getstate__()[source]

Helper for pickle.

__setstate__(state)[source]
_repr_info()[source]

General info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_extra()[source]

General detailed info to show in various reprs. Subclasses can add more relevant info to this dict.

_repr_info_str(normal=True, extra=False)[source]

Render the general info as a string.

_repr_html_()[source]

Render this Tensor as HTML, for Jupyter notebooks.

__str__()[source]

Return str(self).

__repr__()[source]

Return repr(self).

class quimb.tensor.tensor_2d_tebd.LocalHam2D(Lx, Ly, H2, H1=None, cyclic=False)[source]

Bases: quimb.tensor.tensor_arbgeom_tebd.LocalHamGen

A 2D Hamiltonian represented as local terms. This combines all two site and one site terms into a single interaction per lattice pair, and caches operations on the terms such as getting their exponential.

Parameters:
  • Lx (int) – The number of rows.

  • Ly (int) – The number of columns.

  • H2 (array_like or dict[tuple[tuple[int]], array_like]) – The two site term(s). If a single array is given, it is assumed to be the default interaction for all nearest neighbours. If a dict is supplied, the keys should represent specific pairs of coordinates like ((ia, ja), (ib, jb)) with the values the array representing the interaction for that pair. A default term for all remaining nearest neighbour interactions can still be supplied with the key None.

  • H1 (array_like or dict[tuple[int], array_like], optional) – The one site term(s). If a single array is given, it is assumed to be the default onsite term for all sites. If a dict is supplied, the keys should represent specific coordinates like (i, j) with the values the array representing the local term for that site. A default term for all remaining sites can still be supplied with the key None.

terms

The total effective local term for each interaction (with single site terms appropriately absorbed). Each key is a pair of coordinates ija, ijb with ija < ijb.

Type:

dict[tuple[tuple[int]], array_like]

property nsites
The number of sites in the system.
__repr__()[source]

Return repr(self).

draw(ordering='sort', show_norm=True, figsize=None, fontsize=8, legend=True, ax=None, **kwargs)[source]

Plot this Hamiltonian as a network.

Parameters:
  • ordering ({'sort', None, 'random'}, optional) – An ordering of the terms, or an argument to be supplied to quimb.tensor.tensor_2d_tebd.LocalHam2D.get_auto_ordering() to generate this automatically.

  • show_norm (bool, optional) – Show the norm of each term as edge labels.

  • figsize (None or tuple[int], optional) – Size of the figure, defaults to size of Hamiltonian.

  • fontsize (int, optional) – Font size for norm labels.

  • legend (bool, optional) – Whether to show the legend of which terms are in which group.

  • ax (None or matplotlib.Axes, optional) – Add to an existing set of axes.

graph[source]
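
A minimal construction sketch, echoing the earlier examples (the onsite field term is illustrative):

import quimb as qu
import quimb.tensor as qtn

# default nearest neighbour Heisenberg term plus a uniform onsite field
H2 = {None: qu.ham_heis(2)}
ham = qtn.LocalHam2D(4, 4, H2, H1=0.5 * qu.pauli('Z'))

print(ham.nsites)      # 16
print(len(ham.terms))  # 24 nearest neighbour bonds on an open 4x4 lattice
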
class quimb.tensor.tensor_2d_tebd.TEBD2D(psi0, ham, tau=0.01, D=None, chi=None, imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]

Bases: quimb.tensor.tensor_arbgeom_tebd.TEBDGen

Generic class for performing two dimensional time evolving block decimation, i.e. applying the exponential of a Hamiltonian using a product formula that involves applying local exponentiated gates only.

Parameters:
  • psi0 (TensorNetwork2DVector) – The initial state.

  • ham (LocalHam2D) – The Hamiltonian consisting of local terms.

  • tau (float, optional) – The default local exponent; if considered as a time, real values here imply imaginary time evolution.

  • max_bond ({'psi0', int, None}, optional) – The maximum bond dimension to keep when applying each gate.

  • gate_opts (dict, optional) – Supplied to quimb.tensor.tensor_2d.TensorNetwork2DVector.gate(), in addition to max_bond. By default contract is set to ‘reduce-split’ and cutoff is set to 0.0.

  • ordering (str, tuple[tuple[int]], callable, optional) – How to order the terms, if a string is given then use this as the strategy given to get_auto_ordering(). An explicit list of coordinate pairs can also be given. The default is to greedily form an ‘edge coloring’ based on the sorted list of Hamiltonian pair coordinates. If a callable is supplied it will be used to generate the ordering before each sweep.

  • second_order_reflect (bool, optional) – If True, then apply each layer of gates in ordering forward with half the time step, then the same with reverse order.

  • compute_energy_every (None or int, optional) – How often to compute and record the energy. If a positive integer ‘n’, the energy is computed before every nth sweep (i.e. including before the zeroth).

  • compute_energy_final (bool, optional) – Whether to compute and record the energy at the end of the sweeps regardless of the value of compute_energy_every. If you start sweeping again then this final energy is the same as the zeroth of the next set of sweeps and won’t be recomputed.

  • compute_energy_opts (dict, optional) – Supplied to compute_local_expectation(). By default max_bond is set to max(8, D**2) where D is the maximum bond to use for applying the gate, cutoff is set to 0.0 and normalized is set to True.

  • compute_energy_fn (callable, optional) – Supply your own function to compute the energy, it should take the TEBD2D object as its only argument.

  • callback (callable, optional) – A custom callback to run after every sweep, it should take the TEBD2D object as its only argument. If it returns any value that evaluates to True as a boolean, the evolution is terminated.

  • progbar (boolean, optional) – Whether to show a live progress bar during the evolution.

  • kwargs – Extra options for the specific TEBD2D subclass.

state

The current state.

Type:

TensorNetwork2DVector

ham

The Hamiltonian being used to evolve.

Type:

LocalHam2D

energy

The energy of the current state; accessing this will trigger a computation if the energy at this iteration hasn’t been computed yet.

Type:

float

energies

The energies that have been computed, if any.

Type:

list[float]

its

The corresponding sequence of iteration numbers that energies have been computed at.

Type:

list[int]

taus

The corresponding sequence of time steps that energies have been computed at.

Type:

list[float]

best

If keep_best was set then the best recorded energy and the corresponding state that was computed - keys 'energy' and 'state' respectively.

Type:

dict

compute_energy()[source]

Compute and return the energy of the current state.

property chi
__repr__()[source]

Return repr(self).

quimb.tensor.tensor_2d_tebd.conditioner(tn, value=None, sweeps=2, balance_bonds=True)[source]
class quimb.tensor.tensor_2d_tebd.SimpleUpdate(psi0, ham, tau=0.01, D=None, chi=None, gauge_renorm=True, gauge_smudge=1e-06, condition_tensors=True, condition_balance_bonds=True, long_range_use_swaps=False, long_range_path_sequence='random', imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]

Bases: TEBD2D

A simple subclass of TEBD2D that overrides two key methods in order to keep ‘diagonal gauges’ living on the bonds of a PEPS. The gauges are stored separately from the main PEPS in the gauges attribute. Before and after a gate is applied they are absorbed and then extracted. When accessing the state attribute they are automatically inserted or you can call get_state(absorb_gauges=False) to lazily add them as hyperedge weights only. Reference: https://arxiv.org/abs/0806.3719.

Parameters:
  • psi0 (TensorNetwork2DVector) – The initial state.

  • ham (LocalHam2D) – The Hamiltonian consisting of local terms.

  • tau (float, optional) – The default local exponent; if considered as a time, real values here imply imaginary time evolution.

  • max_bond ({'psi0', int, None}, optional) – The maximum bond dimension to keep when applying each gate.

  • gate_opts (dict, optional) – Supplied to quimb.tensor.tensor_2d.TensorNetwork2DVector.gate(), in addition to max_bond. By default contract is set to ‘reduce-split’ and cutoff is set to 0.0.

  • ordering (str, tuple[tuple[int]], callable, optional) – How to order the terms, if a string is given then use this as the strategy given to get_auto_ordering(). An explicit list of coordinate pairs can also be given. The default is to greedily form an ‘edge coloring’ based on the sorted list of Hamiltonian pair coordinates. If a callable is supplied it will be used to generate the ordering before each sweep.

  • second_order_reflect (bool, optional) – If True, then apply each layer of gates in ordering forward with half the time step, then the same with reverse order.

  • compute_energy_every (None or int, optional) – How often to compute and record the energy. If a positive integer ‘n’, the energy is computed before every nth sweep (i.e. including before the zeroth).

  • compute_energy_final (bool, optional) – Whether to compute and record the energy at the end of the sweeps regardless of the value of compute_energy_every. If you start sweeping again then this final energy is the same as the zeroth of the next set of sweeps and won’t be recomputed.

  • compute_energy_opts (dict, optional) – Supplied to compute_local_expectation(). By default max_bond is set to max(8, D**2) where D is the maximum bond to use for applying the gate, cutoff is set to 0.0 and normalized is set to True.

  • compute_energy_fn (callable, optional) – Supply your own function to compute the energy, it should take the TEBD2D object as its only argument.

  • callback (callable, optional) – A custom callback to run after every sweep, it should take the TEBD2D object as its only argument. If it returns any value that evaluates to True as a boolean, the evolution is terminated.

  • progbar (boolean, optional) – Whether to show a live progress bar during the evolution.

  • gauge_renorm (bool, optional) – Whether to actively renormalize the singular value gauges.

  • gauge_smudge (float, optional) – A small offset to use when applying the gauge and its inverse to avoid numerical problems.

  • condition_tensors (bool, optional) – Whether to actively equalize tensor norms for numerical stability.

  • condition_balance_bonds (bool, optional) – If and when equalizing tensor norms, whether to also balance bonds as an additional conditioning.

  • long_range_use_swaps (bool, optional) – If there are long range terms, whether to use swap gates to apply the terms. If False, a long range blob tensor (which won’t scale well for long distances) is formed instead.

  • long_range_path_sequence (str or callable, optional) – If there are long range terms how to generate the path between the two coordinates. If callable, should take the two coordinates and return a sequence of coordinates that links them, else passed to gen_long_range_swap_path.

state

The current state.

Type:

TensorNetwork2DVector

ham

The Hamiltonian being used to evolve.

Type:

LocalHam2D

energy

The energy of the current state; accessing this will trigger a computation if the energy at this iteration hasn’t been computed yet.

Type:

float

energies

The energies that have been computed, if any.

Type:

list[float]

its

The corresponding sequence of iteration numbers that energies have been computed at.

Type:

list[int]

taus

The corresponding sequence of time steps that energies have been computed at.

Type:

list[float]

best

If keep_best was set, the best energy recorded and the corresponding state that achieved it, under the keys 'energy' and 'state' respectively.

Type:

dict

_initialize_gauges()[source]

Create unit singular values, stored as tensors.

property gauges

The dictionary mapping bond pair coordinates to the Tensors describing the weights (``t = gauges[pair]; t.data``) and index (``t = gauges[pair]; t.inds[0]``) of all the gauges.
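
For example, the gauge on a single bond might be inspected like so (a minimal sketch continuing the su object from the sketch above; the coordinate pair shown is a hypothetical key):

    # the gauge tensor living on the bond between sites (0, 0) and (0, 1)
    t = su.gauges[(0, 0), (0, 1)]
    print(t.inds[0])  # the index the singular values are attached to
    print(t.data)     # the singular values themselves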

property long_range_use_swaps

gate(U, where)[source]

Like TEBD2D.gate, but absorb and extract the relevant gauges before and after each gate application.

get_state(absorb_gauges=True)[source]

Return the state, with the diagonal bond gauges either absorbed equally into the tensors on either side of them (absorb_gauges=True, the default), or left lazily represented in the tensor network with hyperedges (absorb_gauges=False).
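
For example (a minimal sketch, continuing the su object from above):

    # gauges symmetrically absorbed into the neighbouring tensors (default)
    psi = su.get_state()

    # or keep the gauges as lazily represented hyperedge tensors
    psi_lazy = su.get_state(absorb_gauges=False)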

set_state(psi)[source]

Set the wavefunction state; this resets the environment gauges to unity.

quimb.tensor.tensor_2d_tebd.gate_full_update_als(ket, env, bra, G, where, tags_plq, steps, tol, max_bond, optimize='auto-hq', solver='solve', dense=True, enforce_pos=False, pos_smudge=1e-06, init_simple_guess=True, condition_tensors=True, condition_maintain_norms=True, condition_balance_bonds=True)[source]
quimb.tensor.tensor_2d_tebd.gate_full_update_autodiff_fidelity(ket, env, bra, G, where, tags_plq, steps, tol, max_bond, optimize='auto-hq', autodiff_backend='autograd', autodiff_optimizer='L-BFGS-B', init_simple_guess=True, condition_tensors=True, condition_maintain_norms=True, condition_balance_bonds=True, **kwargs)[source]
quimb.tensor.tensor_2d_tebd.get_default_full_update_fit_opts()[source]

The default options for the full update gate fitting procedure.

quimb.tensor.tensor_2d_tebd.parse_specific_gate_opts(strategy, fit_opts)[source]

Parse the options from fit_opts which are relevant for strategy.
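
A minimal sketch of how these two helpers relate, assuming nothing beyond the signatures documented here:

    from quimb.tensor.tensor_2d_tebd import (
        get_default_full_update_fit_opts,
        parse_specific_gate_opts,
    )

    # the full set of default fitting options
    fit_opts = get_default_full_update_fit_opts()

    # filter down to just the options relevant for the 'als' strategy
    als_opts = parse_specific_gate_opts('als', fit_opts)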

class quimb.tensor.tensor_2d_tebd.FullUpdate(psi0, ham, tau=0.01, D=None, chi=None, fit_strategy='als', fit_opts=None, compute_envs_every=1, pre_normalize=True, condition_tensors=True, condition_balance_bonds=True, contract_optimize='auto-hq', imag=True, gate_opts=None, ordering=None, second_order_reflect=False, compute_energy_every=None, compute_energy_final=True, compute_energy_opts=None, compute_energy_fn=None, compute_energy_per_site=False, callback=None, keep_best=False, progbar=True)[source]

Bases: TEBD2D

Implements the ‘Full Update’ version of 2D imaginary time evolution, where each application of a gate is fitted to the current tensors using a boundary contracted environment.

Parameters:
  • psi0 (TensorNetwork2DVector) – The initial state.

  • ham (LocalHam2D) – The Hamiltonian consisting of local terms.

  • tau (float, optional) – The default local exponent; if interpreted as a time, real values here imply imaginary time evolution.

  • max_bond ({'psi0', int, None}, optional) – The maximum bond dimension to keep when applying each gate.

  • gate_opts (dict, optional) – Supplied to quimb.tensor.tensor_2d.TensorNetwork2DVector.gate(), in addition to max_bond. By default contract is set to ‘reduce-split’ and cutoff is set to 0.0.

  • ordering (str, tuple[tuple[int]], callable, optional) – How to order the terms. If a string is given, it is used as the strategy passed to get_auto_ordering(); an explicit list of coordinate pairs can also be given. The default is to greedily form an ‘edge coloring’ based on the sorted list of Hamiltonian pair coordinates. If a callable is supplied, it will be used to generate the ordering before each sweep.

  • second_order_reflect (bool, optional) – If True, then apply each layer of gates in ordering forward with half the time step, then the same with reverse order.

  • compute_energy_every (None or int, optional) – How often to compute and record the energy. If a positive integer ‘n’, the energy is computed before every nth sweep (i.e. including before the zeroth).

  • compute_energy_final (bool, optional) – Whether to compute and record the energy at the end of the sweeps regardless of the value of compute_energy_every. If you start sweeping again then this final energy is the same as the zeroth of the next set of sweeps and won’t be recomputed.

  • compute_energy_opts (dict, optional) – Supplied to compute_local_expectation(). By default max_bond is set to max(8, D**2) where D is the maximum bond to use for applying the gate, cutoff is set to 0.0 and normalized is set to True.

  • compute_energy_fn (callable, optional) – Supply your own function to compute the energy; it should take the TEBD2D object as its only argument.

  • callback (callable, optional) – A custom callback to run after every sweep; it should take the TEBD2D object as its only argument. If it returns a value that evaluates to True as a boolean, the evolution is terminated.

  • progbar (boolean, optional) – Whether to show a live progress bar during the evolution.

  • fit_strategy ({'als', 'autodiff-fidelity'}, optional) –

    Core method used to fit the gate application.

    • 'als': alternating least squares

    • 'autodiff-fidelity': local fidelity using autodiff

  • fit_opts (dict, optional) – Advanced options for the gate application fitting functions. Defaults are inserted and can be accessed via the .fit_opts attribute.

  • compute_envs_every ({'term', 'group', 'sweep', int}, optional) –

    How often to recompute the environments used to fit the gate application:

    • 'term': every gate

    • 'group': every set of commuting gates (the default)

    • 'sweep': every total sweep

    • int: every that many total sweeps

  • pre_normalize (bool, optional) – Actively renormalize the state using the computed environments.

  • condition_tensors (bool, optional) – Whether to actively equalize tensor norms for numerical stability.

  • condition_balance_bonds (bool, optional) – If and when equalizing tensor norms, whether to also balance bonds as an additional conditioning.

  • contract_optimize (str, optional) – Contraction path optimizer to use for gate + env + sites contractions.
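
Putting the full update specific options together, here is a minimal sketch of constructing and running the evolution (the model, sizes and values are arbitrary illustrative choices):

    import quimb.tensor as qtn

    ham = qtn.ham_2d_heis(3, 3)
    psi0 = qtn.PEPS.rand(3, 3, bond_dim=2, seed=7)

    fu = qtn.FullUpdate(
        psi0, ham,
        D=3,                         # bond dimension of the evolved state
        chi=9,                       # max bond for boundary contractions
        fit_strategy='als',          # alternating least squares fitting
        compute_envs_every='group',  # the default: per commuting gate group
        compute_energy_every=5,
    )
    fu.evolve(20, tau=0.1)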

state

The current state.

Type:

TensorNetwork2DVector

ham

The Hamiltonian being used to evolve.

Type:

LocalHam2D

energy

The energy of the current state; this will trigger a computation if the energy at this iteration hasn’t been computed yet.

Type:

float

energies

The energies that have been computed, if any.

Type:

list[float]

its

The corresponding sequence of iteration numbers that energies have been computed at.

Type:

list[int]

taus

The corresponding sequence of time steps that energies have been computed at.

Type:

list[float]

best

If keep_best was set, the best energy recorded and the corresponding state that achieved it, under the keys 'energy' and 'state' respectively.

Type:

dict

fit_opts

Detailed options for fitting the applied gate.

Type:

dict

property fit_strategy

set_state(psi)[source]

The default method for setting the current state - simply a copy. Subclasses can override this to perform additional transformations.

property compute_envs_every

_maybe_compute_plaquette_envs(force=False)[source]

Compute and store the plaquette environments for all local terms.

presweep(i)[source]

Full update presweep - compute envs and inject gate options.

compute_energy()[source]

Full update compute energy - use the (likely) already calculated plaquette environments.

gate(G, where)[source]

Apply the gate G at sites where, using a fitting method that takes into account the current environment.
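
As a final sketch (continuing the fu object from the earlier example), the quantities recorded during the evolution can be inspected afterwards via the attributes documented above:

    # sweep numbers and the energies recorded at them
    for it, en in zip(fu.its, fu.energies):
        print(f"sweep {it}: energy {en}")

    # the current evolved state
    psi = fu.state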