# 9. Tensor Network Algorithms¶

Although quimb.tensor aims to be an interactive and general base for arbitrary tensor networks, it also has fast implementations of the following:

Static:

• DMRG1
• DMRG2
• DMRGX

Time Evolving:

• TEBD

Two site DMRGX and TDVP slot into the same framework and should be easy to implement. All of these are based on 1D tensor networks, the primary representation of which is the matrix product state.

## 9.1. Matrix Product States¶

The basic constructor for an MPS is MatrixProductState. This is a subclass of TensorNetwork, with a special tagging scheme (MPS.site_tag_id) and a special index naming scheme (MPS.site_ind_id). It is also possible to instantiate an MPS directly from a dense vector using from_dense(), though this is obviously not efficient for many sites.

In the following, we just generate a random MPS, and demonstrate some basic functionality.

[1]:

%matplotlib inline
from quimb.tensor import *

[2]:

p = MPS_rand_state(n=20, bond_dim=50)
print(f"Site tags: '{p.site_tag_id}', site inds: '{p.site_ind_id}'")

Site tags: 'I{}', site inds: 'k{}'

[3]:

print(p)  # shows the full list of constituent tensors

MatrixProductState([
Tensor(shape=(50, 2), inds=('_19e8fd0000000', 'k0'), tags={'I0'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000000', '_19e8fd0000002', 'k1'), tags={'I1'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000002', '_19e8fd0000004', 'k2'), tags={'I2'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000004', '_19e8fd0000006', 'k3'), tags={'I3'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000006', '_19e8fd0000008', 'k4'), tags={'I4'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000008', '_19e8fd000000a', 'k5'), tags={'I5'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd000000a', '_19e8fd000000c', 'k6'), tags={'I6'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd000000c', '_19e8fd000000e', 'k7'), tags={'I7'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd000000e', '_19e8fd000000A', 'k8'), tags={'I8'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd000000A', '_19e8fd000000C', 'k9'), tags={'I9'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd000000C', '_19e8fd000000E', 'k10'), tags={'I10'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd000000E', '_19e8fd0000010', 'k11'), tags={'I11'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000010', '_19e8fd0000012', 'k12'), tags={'I12'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000012', '_19e8fd0000014', 'k13'), tags={'I13'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000014', '_19e8fd0000016', 'k14'), tags={'I14'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000016', '_19e8fd0000018', 'k15'), tags={'I15'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd0000018', '_19e8fd000001a', 'k16'), tags={'I16'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd000001a', '_19e8fd000001c', 'k17'), tags={'I17'}),
Tensor(shape=(50, 50, 2), inds=('_19e8fd000001c', '_19e8fd000001e', 'k18'), tags={'I18'}),
Tensor(shape=(50, 2), inds=('_19e8fd000001e', 'k19'), tags={'I19'}),
], structure='I{}', nsites=20)

[4]:

p.show()  # 1D tensor networks also have an ASCII show method

 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50
o--o--o--o--o--o--o--o--o--o--o--o--o--o--o--o--o--o--o--o
|  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |


We can then canonicalize the MPS:

[5]:

p.left_canonize()
p.show()

 2 4 8 16 32 50 50 50 50 50 50 50 50 50 50 50 50 50 50
>->->->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->--o
| | | |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |


And we can compute the inner product as:

[6]:

p.H @ p

[6]:

1.0000000000000004


This relies on the two networks sharing the same physical indices, site_ind_id, which the conjugated copy p.H naturally does.

Like any TN, we can graph the overlap for example, and make use of the site tags to color it:

[7]:

(p.H & p).graph(color=[f'I{i}' for i in range(20)], initial_layout='random')


That is, we have used the fact that 1D tensor networks are tagged with the structure "I{}" denoting their sites. See the Examples for how to fix the positions of tensors when graphing them.

We can also add MPS, and multiply/divide them by scalars:

[8]:

p2 = (p + p) / 2
p2.show()

 4 8 16 32 64 100 100 100 100 100 100 100 100 100 100 100 100 100 100
o-o-o--o--o--o===o===o===o===o===o===o===o===o===o===o===o===o===o===o
| | |  |  |  |   |   |   |   |   |   |   |   |   |   |   |   |   |   |


Which doubles the bond dimension, as expected, but should still be normalized:

[9]:

p2.H @ p2

[9]:

1.0


Because the MPS is the addition of two identical states, it should also compress right back down:

[10]:

p2.compress(form=10)
p2.show()

 2 4 8 16 32 50 50 50 50 50 50 50 50 50 32 16 8 4 2
>->->->-->-->-->-->-->-->--o--<--<--<--<--<--<-<-<-<
| | | |  |  |  |  |  |  |  |  |  |  |  |  |  | | | |


Where we have also set the orthogonality center at site 10.

When tensor networks are imbued with a structure, they can be indexed with integers and slices, which automatically get converted using TN.site_tag_id:

[11]:

p2[10]  # get the tensor(s) with tag 'I10'.

[11]:

Tensor(shape=(50, 50, 2), inds=('_19e8fd000000C', '_19e8fd000000E', 'k10'), tags={'I10'})


Note the tensor has matching physical index 'k10'.

This tensor is the orthogonality center so:

    ->->-O-<-<-        +-O-+
... | | | | | ...  =   | | |
    ->->-O-<-<-        +-O-+
         i=10           i=10


should compute the normalization of the whole state:

[12]:

p2[10].H @ p2[10]  # all indices match -> inner product

[12]:

0.9999999999999989


Or equivalently:

[13]:

p2[10].norm()

[13]:

0.9999999999999994


If two tensor networks with the same structure are combined, it is propagated. For example (p2.H & p2) can still be sliced.

Since the MPS is in canonical form, left and right pieces of the overlap should form the identity. The following forms a TN of the inner product, selects the 2 tensors corresponding to the last site (-1), contracts them, then gets the underlying data:

[14]:

((p2.H & p2).select(-1) ^ all).data  # should be close to the identity

[14]:

array([[ 1.00000000e+00, -9.66170229e-18],
[-9.66170229e-18,  1.00000000e+00]])


Various built-in quantities are available to compute too, for example entropy(), schmidt_gap(), magnetization(), correlation() and logneg_subsys().

Other non-trivial quantities such as the mutual information can be easily calculated using a combination of partial_trace_compress() and approx_spectral_function() (see Examples). Finally, many quantities can be computed using local 'gates' - see the section Gates: compute local quantities and simulate circuits.

## 9.2. Matrix Product Operators¶

The raw MPO class is MatrixProductOperator, which shares many features with MatrixProductState, but has both a MPO.upper_ind_id and a MPO.lower_ind_id.

Here we generate a random Hermitian MPO and form an 'overlap' network with our MPS:

[15]:

A = MPO_rand_herm(20, bond_dim=7, tags=['HAM'])
pH = p.H

# This inplace modifies the indices of each to form overlap
p.align_(A, pH)

(pH & A & p).graph(color='HAM', iterations=1000)


Compute the actual contraction (... means contract everything, but use the structure if possible):

[16]:

(pH & A & p) ^ ...

[16]:

-1.3820996053058177e-06


## 9.3. Building Hamiltonians¶

There are a few built-in MPO Hamiltonians, for example MPO_ham_heis. These all accept a cyclic argument to enable periodic boundary conditions (PBC), and an S argument to set the size of the spin.

For generating other spin Hamiltonians see SpinHam, or consider using the raw constructor of MatrixProductOperator.

## 9.4. Quick DMRG2 Intro¶

First we build a Hamiltonian term by term (though we could just use MPO_ham_heis):

[17]:

builder = SpinHam(S=1)
builder += 1/2, '+', '-'
builder += 1/2, '-', '+'
builder += 1, 'Z', 'Z'
H = builder.build_mpo(n=100)


Then we construct the 2-site DMRG object (DMRG2), with the Hamiltonian MPO and a default sequence of maximum bond dimensions and a bond compression cutoff:

[18]:

dmrg = DMRG2(H, bond_dims=[10, 20, 100, 100, 200], cutoffs=1e-10)


The DMRG object will automatically detect OBC/PBC. Now we can solve to a certain absolute energy tolerance, showing progress and a schematic of the final state:

[19]:

dmrg.solve(tol=1e-6, verbosity=1)

SWEEP-1, direction=R, max_bond=(10/10), cutoff:1e-10

100%|###########################################| 99/99 [00:01<00:00, 60.08it/s]

Energy: -138.72750664166819 ... not converged.
SWEEP-2, direction=R, max_bond=(10/20), cutoff:1e-10


100%|###########################################| 99/99 [00:01<00:00, 97.48it/s]

Energy: -138.93664163123717 ... not converged.
SWEEP-3, direction=R, max_bond=(20/100), cutoff:1e-10


100%|###########################################| 99/99 [00:01<00:00, 54.38it/s]

Energy: -138.94004623374016 ... not converged.
SWEEP-4, direction=R, max_bond=(58/100), cutoff:1e-10


100%|###########################################| 99/99 [00:04<00:00, 23.82it/s]

Energy: -138.9400855258671 ... not converged.
SWEEP-5, direction=R, max_bond=(89/200), cutoff:1e-10


100%|###########################################| 99/99 [00:06<00:00, 15.02it/s]

Energy: -138.94008605050593 ... converged!



[19]:

True

[20]:

dmrg.state.show(max_width=80)

     3 9 27 54 65 74 79 83 87 89 92 94 94 95 95 95 94 94 94 93 93 93 93 92
>->->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-- ...
| | |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
...
91 91 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
... >-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->--> ...
|  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
...
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 9
... -->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->- ...
|  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
...
0 91 91 91 90 90 90 91 91 94 96 97 97 97 97 96 95 94 92 90 87 83 78 73
... ->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-->-- ...
|  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
...
64 53 27 9 3
... >-->-->-->->-o
|  |  |  | | |


There are many options stored in the dict DMRG.opts - an explanation of each of these is given in get_default_opts(), and it may be necessary to tweak these to achieve the best performance/accuracy, especially for PBC (see Examples).

Note

Performance Tips

1. Make sure numpy is linked to a fast BLAS (e.g. the MKL version that comes with conda).

2. Install slepc4py to use as the iterative eigensolver - it's faster than scipy.

3. If the Hamiltonian is real, compile and use a real build of SLEPc (set the environment variable PETSC_ARCH before launch).

4. Periodic systems are in some ways easier to solve if longer, since this reduces correlations the ‘long way round’.

## 9.5. Quick TEBD Intro¶

Time Evolving Block Decimation (TEBD) requires not an MPO but the nearest neighbour interaction (NNI) term(s) of a Hamiltonian. This is encapsulated in the NNI object, which is initialized with the sum of two-site terms, H2, and one-site terms (if any), H1.

NNI objects can also be built directly from a SpinHam instance using the build_nni() method. There are also the following built-in NNI Hamiltonians:

Here we build a Heisenberg NNI using the built-in NNI_ham_heis:

[21]:

H = NNI_ham_heis(20, bz=0.1)

# check the two site term
H()

[21]:

[[ 0.2  0.   0.   0. ]
[ 0.  -0.3  0.5  0. ]
[ 0.   0.5 -0.2  0. ]
[ 0.   0.   0.   0.3]]


Then we set up an initial state and the TEBD object itself - which mimics the general API of quimb.Evolution:

[22]:

psi0 = MPS_neel_state(20)
tebd = TEBD(psi0, H)


Now we are ready to evolve. By setting a tol, the required timestep dt is computed for us:

[23]:

tebd.update_to(T=3, tol=1e-3)

t=3, max-bond=34: 100%|##########| 100/100 [00:03<00:00, 12.62%/s]


After the evolution we can see that entanglement has been generated throughout the chain:

[24]:

tebd.pt.show()

 2 4 8 16 29 34 33 34 33 34 33 34 33 34 29 16 8 4 2
>->->->-->-->-->-->-->-->-->-->-->-->-->-->-->->->-o
| | | |  |  |  |  |  |  |  |  |  |  |  |  |  | | | |


A more complete demonstration can be found in the Examples.

## 9.6. Gates: compute local quantities and simulate circuits¶

On top of the built-in methods mentioned earlier (entropy(), schmidt_gap(), magnetization(), correlation(), logneg_subsys(), etc.), many other quantities are encapsulated by the gate() method, which works on any 1D tensor network vector (MPS, MERA, etc.). This 'applies' a given operator to one or more sites, whilst maintaining the 'physical', outer indices. This not only directly allows quantum-circuit-style simulation, but also makes local (i.e. non-MPO) quantities easy to compute:

[25]:

import quimb as qu
Z = qu.pauli('Z')

# compute <psi0|Z_i|psi0> for neel state above
[
    psi0.gate(Z, i).H @ psi0
    for i in range(10)
]

[25]:

[(1+0j),
(-1+0j),
(1+0j),
(-1+0j),
(1+0j),
(-1+0j),
(1+0j),
(-1+0j),
(1+0j),
(-1+0j)]


There are four ways in which a gate can be applied:

• Lazily (contract=False) - the gate is added to the tensor network but nothing is contracted. This is the default.

• Lazily with split (contract='split-gate') - the gate is split before it is added to the network.

• Eagerly (contract=True) - the gate is contracted into the tensor network. If the gate acts on more than one site this will produce larger tensors.

• Swap and Split (contract='swap+split') - sites will be swapped until adjacent, the gate will be applied and the resulting tensor split, then the sites swapped back into their original positions. This explicitly maintains the exact structure of an MPS (at the cost of increasing bond dimension), unlike the other methods.

Here’s a quantum computation style demonstration of the lazy method:

[26]:

import quimb as qu

# some operators to apply
H = qu.hadamard()
CNOT = qu.controlled('not')

# setup an initial register of qubits
n = 10
psi0 = MPS_computational_state('0' * n, tags='PSI0')

# apply a hadamard to each site
for i in range(n):
    psi0.gate_(H, i, tags='H')

# apply CNOT to even pairs
for i in range(0, n, 2):
    psi0.gate_(CNOT, (i, i + 1), tags='CNOT')

# apply CNOT to odd pairs
for i in range(1, n - 1, 2):
    psi0.gate_(CNOT, (i, i + 1), tags='CNOT')


Note we have used the inplace gate_ (with a trailing underscore), which modifies the original psi0 object. However, psi0 has its physical site indices maintained, such that overall it looks like the same object:

[27]:

sorted(psi0.outer_inds())

[27]:

['k0', 'k1', 'k2', 'k3', 'k4', 'k5', 'k6', 'k7', 'k8', 'k9']

[28]:

(psi0.H & psi0) ^ all

[28]:

(0.9999999999999978+0j)


But the network now contains the gates as additional tensors:

[29]:

psi0.graph(color=['PSI0', 'H', 'CNOT'], show_inds=True)


With the swap and split method, MPS form is always maintained, which allows a canonical form and thus optimal trimming of singular values:

[30]:

n = 10
psi0 = MPS_computational_state('0' * n)

for i in range(n):
    # 'swap+split' is ignored for one-site gates
    psi0.gate_(H, i, contract='swap+split')

# use a Z-phase to create entanglement
Rz = qu.phase_gate(0.42)
for i in range(n):
    psi0.gate_(Rz, i, contract='swap+split')

for i in range(0, n, 2):
    psi0.gate_(CNOT, (i, i + 1), contract='swap+split')

for i in range(1, n - 1, 2):
    psi0.gate_(CNOT, (i, i + 1), contract='swap+split')

# act with one long-range CNOT
psi0.gate_(CNOT, (2, n - 2), contract='swap+split')

[30]:

<MatrixProductState(tensors=10, structure='I{}', nsites=10)>


We now still have an MPS, but with increased bond dimension:

[31]:

psi0.show()

 2 2 4 4 4 4 4 2 2
>->->->->->->-o-o-<
| | | | | | | | | |


Finally, the eager (contract=True) method works fairly simply:

[32]:

psi0_CNOT = psi0.gate(CNOT, (1, n - 2), contract=True)
psi0_CNOT.graph(color=[psi0.site_tag(i) for i in range(n)])


Where we can see that the gate, site 1, and site 8 have been combined into a new rank-6 tensor.

A much more detailed run-through of quantum circuit simulation using tensor networks and the Circuit object can be found in the example TN Quantum-Circuit Simulation.