5. 1D Algorithms

Although quimb.tensor aims to be an interactive and general base for arbitrary tensor networks, it also has fast implementations of the following:

Static:

  • DMRG1

  • DMRG2

  • DMRGX

Time Evolving:

  • TEBD

Two-site DMRGX and TDVP would slot into the same framework and should be easy to implement. All of these algorithms are based on 1D tensor networks, the primary representation of which is the matrix product state (MPS).

5.1. Matrix Product States

The basic constructor for an MPS is MatrixProductState. This is a subclass of TensorNetwork, with a special tagging scheme (MPS.site_tag_id) and a special index naming scheme (MPS.site_ind_id). It is also possible to instantiate an MPS directly from a dense vector using from_dense(), though this is obviously not efficient for many sites.
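As a quick sketch of the dense route (the sizes here are arbitrary, and a small number of sites is chosen deliberately since the conversion cost is exponential):

import quimb as qu
from quimb.tensor import MatrixProductState

# a small random dense state on 6 qubits
psi_dense = qu.rand_ket(2**6)

# exactly decompose it into an MPS with physical dimensions [2] * 6
mps = MatrixProductState.from_dense(psi_dense, dims=[2] * 6)
mps.show()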

In the following we generate a random MPS and demonstrate some basic functionality.

%config InlineBackend.figure_formats = ['svg']
from quimb.tensor import *
p = MPS_rand_state(L=20, bond_dim=50)
print(f"Site tags: '{p.site_tag_id}', site inds: '{p.site_ind_id}'")
Site tags: 'I{}', site inds: 'k{}'
# in a notebook we can explore a colorized representation:
p
MatrixProductState(tensors=20, indices=39, L=20, max_bond=50)
Tensor(shape=(50, 2), inds=[_24c68bAAAAA, k0], tags={I0}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAA, _24c68bAAAAB, k1], tags={I1}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAB, _24c68bAAAAC, k2], tags={I2}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAC, _24c68bAAAAD, k3], tags={I3}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAD, _24c68bAAAAE, k4], tags={I4}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAE, _24c68bAAAAF, k5], tags={I5}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAF, _24c68bAAAAG, k6], tags={I6}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAG, _24c68bAAAAH, k7], tags={I7}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAH, _24c68bAAAAI, k8], tags={I8}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAI, _24c68bAAAAJ, k9], tags={I9}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAJ, _24c68bAAAAK, k10], tags={I10}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAK, _24c68bAAAAL, k11], tags={I11}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAL, _24c68bAAAAM, k12], tags={I12}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAM, _24c68bAAAAN, k13], tags={I13}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAN, _24c68bAAAAO, k14], tags={I14}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAO, _24c68bAAAAP, k15], tags={I15}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAP, _24c68bAAAAQ, k16], tags={I16}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAQ, _24c68bAAAAR, k17], tags={I17}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAR, _24c68bAAAAS, k18], tags={I18}),backend=numpy, dtype=float64, data=...
Tensor(shape=(50, 2), inds=[_24c68bAAAAS, k19], tags={I19}),backend=numpy, dtype=float64, data=...
p.show()  # 1D tensor networks also have an ASCII ``show`` method
 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 
●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●
│  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │

We can then canonicalize the MPS:

p.left_canonize()
p.show()
 2 4 8 16 32 50 50 50 50 50 50 50 50 50 50 50 50 50 50 
>─>─>─>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──●
│ │ │ │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │

And we can compute the inner product as:

p.H @ p
0.9999999999999998

This relies on the two states sharing the same physical indices, site_ind_id, which the conjugated copy p.H naturally does.

Like any TN, we can draw the overlap for example, making use of the site tags to color it:

(p.H & p).draw(color=[f'I{i}' for i in range(20)])
[figure: the overlap network (p.H & p), with tensors colored by their site tags I0-I19]

That is, we used the fact that 1D tensor networks are tagged with the structure "I{}" denoting their sites. See the Examples for how to fix the positions of tensors when drawing them.

We can also add MPS, and multiply/divide them by scalars:

p2 = (p + p) / 2
p2.show()
 4 8 16 32 64 100 100 100 100 100 100 100 100 100 100 100 100 100 100 
●─●─●──●──●──●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●
│ │ │  │  │  │   │   │   │   │   │   │   │   │   │   │   │   │   │   │

This doubles the bond dimension, as expected, but the state should still be normalized:

p2.H @ p2
0.9999999999999991

Because the MPS is the addition of two identical states, it should also compress right back down:

p2.compress(form=10)
p2.show()
 2 4 8 16 32 50 50 50 50 50 50 50 50 50 32 16 8 4 2 
>─>─>─>──>──>──>──>──>──>──●──<──<──<──<──<──<─<─<─<
│ │ │ │  │  │  │  │  │  │  │  │  │  │  │  │  │ │ │ │

Here we have also set the orthogonality center at site 10.

When tensor networks are imbued with a structure, they can be indexed with integers and slices, which automatically get converted using TN.site_tag_id:

p2[10]  # get the tensor(s) with tag 'I10'.
Tensor(shape=(50, 50, 2), inds=[_24c68bAAAAJ, _24c68bAAAAK, k10], tags={I10}),backend=numpy, dtype=float64, data=...

Note the tensor has matching physical index 'k10'.

This tensor is the orthogonality center so:

   ─>─>─●─<─<─        ╭─●─╮
... │ │ │ │ │ ...  =  │ │ │
   ─>─>─●─<─<─        ╰─●─╯
       i=10            i=10

should compute the normalization of the whole state:

p2[10].H @ p2[10]  # all indices match -> inner product
1.0

Or equivalently:

p2[10].norm()
1.0

If two tensor networks with the same structure are combined, it is propagated. For example (p2.H & p2) can still be sliced.
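A minimal sketch, reusing p2 from above:

# the combined network inherits the site structure ...
overlap = (p2.H & p2)

# ... so we can still select, e.g., the sub-network of tensors tagged 'I10'
overlap.select(10)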

Since the MPS is in canonical form, left and right pieces of the overlap should form the identity. The following forms a TN of the inner product, selects the 2 tensors corresponding to the last site (-1), contracts them, then gets the underlying data:

((p2.H & p2).select(-1) ^ all).data.round(12)  # should be close to the identity
array([[ 1., -0.],
       [-0.,  1.]])

Various built-in quantities are available to compute too, for example:

  • entropy()

  • schmidt_gap()

  • magnetization()

  • correlation()

  • logneg_subsys()

Other non-trivial quantities such as the mutual information can be calculated using a combination of partial_trace_compress() and approx_spectral_function() (see Examples). Finally, many quantities can be computed using local ‘gates’; see the section Gates: compute local quantities and simulate circuits.
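As a rough sketch of how a few of these are called (assuming p2 is still the compressed, canonical state from above):

# entropy of the bipartition between sites 0-9 and 10-19
p2.entropy(10)

# schmidt gap across the same bipartition
p2.schmidt_gap(10)

# magnetization of site 5
p2.magnetization(5)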

5.2. Matrix Product Operators

The raw MPO class is MatrixProductOperator, which shares many features with MatrixProductState, but has both an MPO.upper_ind_id and an MPO.lower_ind_id.

Here we generate a random hermitian MPO and form an ‘overlap’ network with our MPS:

A = MPO_rand_herm(20, bond_dim=7, tags=['HAM'])
pH = p.H

# This inplace modifies the indices of each to form overlap
p.align_(A, pH)

(pH & A & p).draw(color='HAM', iterations=20, initial_layout='kamada_kawai')
[figure: the network (pH & A & p), with the MPO tensors highlighted via the 'HAM' tag]

Compute the actual contraction (... means contract everything, but use the structure if possible):

(pH & A & p) ^ ...
-1.0420173017863053e-06

5.3. Building Hamiltonians

There are various built-in MPO Hamiltonians:

  • MPO_ham_ising

  • MPO_ham_XY

  • MPO_ham_heis

  • MPO_ham_mbl

These all accept a cyclic argument to enable periodic boundary conditions (PBC), and an S argument to set the size of the spin.
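For instance, a short sketch using MPO_ham_heis (used again below):

# spin-1 Heisenberg MPO with periodic boundary conditions
H_pbc = MPO_ham_heis(50, S=1, cyclic=True)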

For generating other spin Hamiltonians see SpinHam1D, or consider using the raw constructor of MatrixProductOperator.

5.4. Quick DMRG2 Intro

First we build a Hamiltonian term by term (though we could just use MPO_ham_heis):

builder = SpinHam1D(S=1)
builder += 1/2, '+', '-'
builder += 1/2, '-', '+'
builder += 1, 'Z', 'Z'
H = builder.build_mpo(L=100)

Then we construct the 2-site DMRG object (DMRG2), with the Hamiltonian MPO, a default sequence of maximum bond dimensions, and a bond compression cutoff:

dmrg = DMRG2(H, bond_dims=[10, 20, 100, 100, 200], cutoffs=1e-10)

The DMRG object will automatically detect OBC/PBC. Now we can solve to a certain absolute energy tolerance, showing progress and a schematic of the final state:

dmrg.solve(tol=1e-6, verbosity=1)
SWEEP-1, direction=R, max_bond=(10/10), cutoff:1e-10
100%|###########################################| 99/99 [00:01<00:00, 93.61it/s]
Energy: -138.71354542441696 ... not converged.
SWEEP-2, direction=R, max_bond=(10/20), cutoff:1e-10
100%|##########################################| 99/99 [00:00<00:00, 250.23it/s]
Energy: -138.93662608086498 ... not converged.
SWEEP-3, direction=R, max_bond=(20/100), cutoff:1e-10
100%|###########################################| 99/99 [00:01<00:00, 74.36it/s]
Energy: -138.9400463321153 ... not converged.
SWEEP-4, direction=R, max_bond=(58/100), cutoff:1e-10
100%|###########################################| 99/99 [00:04<00:00, 21.45it/s]
Energy: -138.94008554031018 ... not converged.
SWEEP-5, direction=R, max_bond=(90/200), cutoff:1e-10
100%|###########################################| 99/99 [00:08<00:00, 11.55it/s]
Energy: -138.94008604176477 ... converged!

True
dmrg.state.show(max_width=80)
     3 9 27 53 64 75 80 83 87 89 92 93 94 94 95 96 95 94 94 93 93 93 92 92    
... >─>─>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>── ...
    │ │ │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │      
                                 ...                                  
     91 91 91 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90     
... >──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──> ...
    │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │    
                                 ...                                  
    90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 9    
... ──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>─ ...
      │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │     
                                 ...                                  
    0 90 90 91 90 90 90 90 90 92 93 94 95 95 96 97 95 95 93 90 87 83 78 73    
... ─>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>── ...
     │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │  │      
                                 ...                                  
     64 53 27 9 3 
    >──>──>──>─>─●
    │  │  │  │ │ │

There are many options stored in the dict DMRG.opts - an explanation of each of these is given in get_default_opts(), and it may be necessary to tweak these to achieve the best performance/accuracy, especially for PBC (see Examples).
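For instance, a minimal sketch of inspecting and tweaking them (the specific key shown here is hypothetical and version-dependent; check get_default_opts() for the real names):

# view all the current advanced options
print(dmrg.opts)

# hypothetical example of overriding one before further sweeps:
# dmrg.opts['local_eig_backend'] = 'SCIPY'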

Note

Performance Tips

  1. Make sure numpy is linked to a fast BLAS (e.g. MKL version that comes with conda).

  2. Install slepc4py to use as the iterative eigensolver; it’s faster than scipy.

  3. If the Hamiltonian is real, compile and use a real version of SLEPc (set the environment variable PETSC_ARCH before launching).

  4. Periodic systems are in some ways easier to solve if longer, since this reduces correlations the ‘long way round’.

5.5. Quick TEBD Intro

Time Evolving Block Decimation (TEBD) requires not an MPO but a specification of the local, interacting term(s) of a Hamiltonian. This is encapsulated in the LocalHam1D object, which is initialized with the sum of two-site terms H2 and the one-site terms H1 (if any).
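As a minimal sketch, one could construct a LocalHam1D directly from dense arrays (here using quimb's dense ham_heis and pauli functions; the keyword names follow the H2/H1 description above):

import quimb as qu
from quimb.tensor import LocalHam1D

# the same two-site Heisenberg term for every bond ...
H2 = qu.ham_heis(2)

# ... plus a single-site field term
H1 = 0.1 * qu.pauli('Z')

local_ham = LocalHam1D(L=20, H2=H2, H1=H1)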

LocalHam1D objects can also be built directly from a SpinHam1D instance using the build_local_ham() method. There are also the following built-in LocalHam1D Hamiltonians:

  • ham_1d_heis

  • ham_1d_ising

  • ham_1d_XY

Here we first build a custom SpinHam1D term by term, which can be converted to a LocalHam1D with build_local_ham(); for the evolution below we use the built-in ham_1d_heis instead:

builder = SpinHam1D(S=1 / 2)
builder.add_term(1.0, 'Z', 'Z')
builder.add_term(0.9, 'Y', 'Y')
builder.add_term(0.8, 'X', 'X')
builder.add_term(0.6, 'Z')
H_custom = builder.build_local_ham(20)

# the built-in Heisenberg LocalHam1D, used below
H = ham_1d_heis(20, bz=0.1)

# check the two site term
H.terms[0, 1]
array([[ 0.175,  0.   ,  0.   ,  0.   ],
       [ 0.   , -0.275,  0.5  ,  0.   ],
       [ 0.   ,  0.5  , -0.225,  0.   ],
       [ 0.   ,  0.   ,  0.   ,  0.325]])

Then we set up an initial state and the TEBD object itself, which mimics the general API of quimb.Evolution:

psi0 = MPS_neel_state(20)
tebd = TEBD(psi0, H)

Now we are ready to evolve. By setting a tol, the required timestep dt is computed for us:

tebd.update_to(T=3, tol=1e-3)
t=3, max-bond=34: 100%|#########################################################| 100/100 [00:02<00:00, 45.10%/s]
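We could also step through intermediate times and compute quantities on the fly. A sketch, assuming TEBD.at_times behaves like the quimb.Evolution generator of the same name, and using a fresh TEBD object since the one above has already reached T=3:

tebd2 = TEBD(MPS_neel_state(20), H)

mz_t = []
for psit in tebd2.at_times([0.5, 1.0, 1.5, 2.0], tol=1e-3):
    # record the magnetization of site 10 at each intermediate time
    mz_t.append(psit.magnetization(10))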

After the evolution we can see that entanglement has been generated throughout the chain:

tebd.pt.show()
 2 4 8 16 29 34 33 34 33 34 33 34 33 34 29 16 8 4 2 
>─>─>─>──>──>──>──>──>──>──>──>──>──>──>──>──>─>─>─●
│ │ │ │  │  │  │  │  │  │  │  │  │  │  │  │  │ │ │ │

A more complete demonstration can be found in the Examples.

5.6. Gates: compute local quantities and simulate circuits

On top of the built-in methods mentioned earlier (entropy(), schmidt_gap(), magnetization(), correlation(), logneg_subsys(), etc.), many other quantities are encapsulated by the gate() method, which works on any 1D tensor network vector (MPS, MERA, etc.). This ‘applies’ a given operator to one or more sites, whilst maintaining the ‘physical’ outer indices. This not only directly allows quantum-circuit-style simulation, but also makes local (i.e. non-MPO) quantities easy to compute:

import quimb as qu
Z = qu.pauli('Z')

# compute <psi0|Z_i|psi0> for neel state above
[
    psi0.gate(Z, i).H @ psi0
    for i in range(10)
]
[(1+0j),
 (-1+0j),
 (1+0j),
 (-1+0j),
 (1+0j),
 (-1+0j),
 (1+0j),
 (-1+0j),
 (1+0j),
 (-1+0j)]
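Two-site quantities work in just the same way. A sketch of a ⟨Z_i Z_j⟩ correlator, using quimb's & operator to form the kronecker product of the two single-site operators:

# two-site operator Z ⊗ Z
ZZ = Z & Z

# <psi0| Z_0 Z_1 |psi0> for the neel state - should be -1
psi0.gate(ZZ, (0, 1)).H @ psi0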

There are four ways in which a gate can be applied:

  • Lazily (contract=False) - the gate is added to the tensor network but nothing is contracted. This is the default.

  • Lazily with split (contract='split-gate') - the gate is split into two tensors before being added to the network (compare the sketch after this list).

  • Eagerly (contract=True) - the gate is contracted into the tensor network. If the gate acts on more than one site this will produce larger tensors.

  • Swap and Split (contract='swap+split') - sites will be swapped until adjacent, the gate will be applied and the resulting tensor split, then the sites swapped back into their original positions. This explicitly maintains the exact structure of an MPS (at the cost of increasing bond dimension), unlike the other methods.
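As a small sketch of the difference between the two lazy options, counting tensors with the num_tensors property:

import quimb as qu

psi = MPS_computational_state('00')   # 2 tensors
CNOT = qu.controlled('not')

# lazy: the rank-4 gate is added as a single extra tensor
psi.gate(CNOT, (0, 1)).num_tensors
# 3

# split-gate: the gate is first split into two rank-3 tensors
psi.gate(CNOT, (0, 1), contract='split-gate').num_tensors
# 4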

Here’s a quantum computation style demonstration of the lazy method:

import quimb as qu

# some operators to apply
H = qu.hadamard()
CNOT = qu.controlled('not')

# setup an initial register of qubits
n = 10
psi0 = MPS_computational_state('0' * n, tags='PSI0')

# apply hadamard to each site
for i in range(n):
    psi0.gate_(H, i, tags='H')

# apply CNOT to even pairs
for i in range(0, n, 2):
    psi0.gate_(CNOT, (i, i + 1), tags='CNOT')

# apply CNOT to odd pairs
for i in range(1, n - 1, 2):
    psi0.gate_(CNOT, (i, i + 1), tags='CNOT')

Note we have used the inplace gate_ (with a trailing underscore), which modifies the original psi0 object. However, psi0 keeps its physical site indices maintained, so that overall it looks like the same object:

sorted(psi0.outer_inds())
['k0', 'k1', 'k2', 'k3', 'k4', 'k5', 'k6', 'k7', 'k8', 'k9']
(psi0.H & psi0) ^ all
0.9999999999999978

But the network now contains the gates as additional tensors:

psi0.draw(color=['PSI0', 'H', 'CNOT'], show_tags=False, layout='neato')
[figure: psi0 drawn with the added H and CNOT gate tensors colored]

With the swap and split method, MPS form is always maintained, which allows a canonical form and thus optimal trimming of singular values:

n = 10
psi0 = MPS_computational_state('0' * n)

for i in range(n):
    # 'swap+split' is ignored for one-site gates
    psi0.gate_(H, i, contract='swap+split')

# use Z-phase to create entanglement
Rz = qu.phase_gate(0.42)
for i in range(n):
    psi0.gate_(Rz, i, contract='swap+split')

for i in range(0, n, 2):
    psi0.gate_(CNOT, (i, i + 1), contract='swap+split')

for i in range(1, n - 1, 2):
    psi0.gate_(CNOT, (i, i + 1), contract='swap+split')

# act with one long-range CNOT
psi0.gate_(CNOT, (2, n - 2), contract='swap+split')
MatrixProductState(tensors=10, indices=19, L=10, max_bond=4)
Tensor(shape=(2, 2), inds=[_24c68bAADBv, k0], tags={I0}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 2, 2), inds=[_24c68bAADBv, _24c68bAADBw, k1], tags={I1}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 4, 2), inds=[_24c68bAADBw, _24c68bAADBx, k2], tags={I2}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(4, 4, 2), inds=[_24c68bAADBx, _24c68bAADBy, k3], tags={I3}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(4, 4, 2), inds=[_24c68bAADBy, _24c68bAADBz, k4], tags={I4}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(4, 4, 2), inds=[_24c68bAADBz, _24c68bAADCA, k5], tags={I5}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(4, 4, 2), inds=[_24c68bAADCA, _24c68bAADCB, k6], tags={I6}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(4, 2, 2), inds=[_24c68bAADCB, _24c68bAADCC, k7], tags={I7}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 2, 2), inds=[_24c68bAADCC, _24c68bAADCD, k8], tags={I8}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 2), inds=[_24c68bAADCD, k9], tags={I9}),backend=numpy, dtype=complex128, data=...

We now still have an MPS, but with increased bond dimension:

psi0.show()
 2 2 4 4 4 4 4 2 2 
>─>─>─>─>─>─>─●─●─<
│ │ │ │ │ │ │ │ │ │
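Since canonical MPS form has been maintained throughout, quantities such as the half-chain entanglement entropy remain cheap to compute; a quick sketch:

# entropy of the bipartition between sites 0-4 and 5-9
psi0.entropy(n // 2)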

Finally, the eager (contract=True) method works fairly simply:

psi0_CNOT = psi0.gate(CNOT, (1, n - 2), contract=True)
psi0_CNOT.draw(color=[psi0.site_tag(i) for i in range(n)])
[figure: psi0_CNOT colored by site tags, showing the gate and sites 1 and 8 merged into one larger tensor]

Here we can see that the gate and the tensors of sites 1 and 8 have been combined into a new rank-6 tensor.

A much more detailed run-through of quantum circuit simulation using tensor networks and the Circuit object can be found in the example Quantum Circuits.