5. 1D Algorithms
Although quimb.tensor aims to be an interactive and general base for arbitrary tensor networks, it also has fast implementations of the following:
Static: DMRG1, DMRG2 and DMRGX
Time Evolving: TEBD
Two-site DMRGX and TDVP slot into the same framework and should be easy to implement. All of these are based on 1D tensor networks, the primary representation of which is the matrix product state.
5.1. Matrix Product States
The basic constructor for an MPS is MatrixProductState. This is a subclass of TensorNetwork, with a special tagging scheme (MPS.site_tag_id) and a special index naming scheme (MPS.site_ind_id). It is also possible to instantiate an MPS directly from a dense vector using from_dense(), though this is obviously not efficient for many sites.
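For instance, a minimal sketch of converting a small dense vector (here a hypothetical random 4-qubit ket; the dims argument gives each site's physical dimension, and exact keyword handling may vary between versions):
import quimb as qu
import quimb.tensor as qtn
# a random dense state on 4 qubits - small enough to decompose exactly
psi_dense = qu.rand_ket(2**4)
# split it into an MPS of 4 tensors, each with physical dimension 2
mps = qtn.MatrixProductState.from_dense(psi_dense, dims=[2] * 4)
mps.show()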
In the following, we just generate a random MPS, and demonstrate some basic functionality.
%config InlineBackend.figure_formats = ['svg']
import quimb.tensor as qtn
p = qtn.MPS_rand_state(L=20, bond_dim=50)
print(f"Site tags: '{p.site_tag_id}', site inds: '{p.site_ind_id}'")
Site tags: 'I{}', site inds: 'k{}'
# in a notebook we can explore a colorized representation:
p
MatrixProductState(tensors=20, indices=39, L=20, max_bond=50)
[interactive representation of the 20 tensors - shapes (50, 2) at the ends and (50, 50, 2) in the bulk, with physical inds 'k0' ... 'k19' and tags 'I0' ... 'I19'; full data arrays elided]
p.show()  # 1D tensor networks also have an ASCII ``show`` method
50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50
●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●──●
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
We can then canonicalize the MPS:
p.left_canonize()
p.show()
2 4 8 16 32 50 50 50 50 50 50 50 50 50 50 50 50 50 50
>─>─>─>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──●
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
And we can compute the inner product as:
p.H @ p
1.0000000000000009
This relies on them sharing the same physical indices, site_ind_id, which the conjugated copy p.H naturally does.
Like any TN, we can graph the overlap for example, and make use of the site tags to color it:
(p.H & p).draw(color=[f'I{i}' for i in range(20)], layout="sfdp")
I.e. we used the fact that 1D tensor networks are tagged with the structure "I{}"
denoting their sites. See the Examples for how to fix the positions of tensors when drawing them.
We can also add MPS, and multiply/divide them by scalars:
p2 = (p + p) / 2
p2.show()
4 8 16 32 64 100 100 100 100 100 100 100 100 100 100 100 100 100 100
●─●─●──●──●──●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●━━━●
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
Which doubles the bond dimension, as expected, but should still be normalized:
p2.H @ p2
1.0
Because the MPS is the addition of two identical states, it should also compress right back down:
p2.compress(form=10)
p2.show()
2 4 8 16 32 50 50 50 50 50 50 50 50 50 32 16 8 4 2
>─>─>─>──>──>──>──>──>──>──●──<──<──<──<──<──<─<─<─<
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
Where we have also set the orthogonality center at site 10.
When tensor networks are imbued with a structure, they can be indexed with integers and slices, which automatically get converted using TN.site_tag_id:
p2[10] # get the tensor(s) with tag 'I10'.
Tensor(shape=(50, 50, 2), inds=[_22232fAAAAJ, _22232fAAAAK, k10], tags={I10}),
backend=numpy, dtype=float64, data=...
Note the tensor has the matching physical index 'k10'.
This tensor is the orthogonality center so:
─>─>─●─<─<─ ╭─●─╮
... │ │ │ │ │ ... = │ │ │
─>─>─●─<─<─ ╰─●─╯
i=10 i=10
should compute the normalization of the whole state:
p2[10].H @ p2[10] # all indices match -> inner product
0.9999999999999984
Or equivalently:
p2[10].norm()
0.9999999999999992
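The orthogonality center can also be swept to a different site; a short sketch using canonize (assuming it accepts the target site, as in the 1D flat tensor network API):
# move the orthogonality center from site 10 to site 5
p2.canonize(5)
# the single-site norm again gives the norm of the whole state
p2[5].norm()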
If two tensor networks with the same structure are combined, it is propagated. For example, (p2.H & p2) can still be sliced.
Since the MPS is in canonical form, left and right pieces of the overlap should form the identity. The following forms a TN of the inner product, selects the 2 tensors corresponding to the last site (-1), contracts them, then gets the underlying data:
((p2.H & p2).select(-1) ^ all).data.round(12) # should be close to the identity
array([[ 1., -0.],
[-0., 1.]])
Various built-in quantities are available to compute too, for example entropy(), schmidt_gap(), magnetization(), correlation() and logneg_subsys(). Other non-trivial quantities such as the mutual information can be easily calculated using a combination of partial_trace_compress() and approx_spectral_function() (see Examples).
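A quick sketch of calling a few of these (the sites and bipartitions chosen here are arbitrary, and the magnetization/correlation defaults are assumed to be in the 'Z' direction):
import quimb as qu
# entanglement entropy of the left 10 sites vs the rest
p.entropy(10)
# Schmidt gap across the same bipartition
p.schmidt_gap(10)
# <Z> magnetization at site 4
p.magnetization(4)
# connected <Z_3 Z_12> correlation
p.correlation(qu.pauli('Z'), 3, 12)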
Finally, many quantities can be computed using local ‘gates’; see the section Gates: compute local quantities and simulate circuits.
5.2. Matrix Product Operators
The raw MPO class is MatrixProductOperator, which shares many features with MatrixProductState, but has both a MPO.upper_ind_id and a MPO.lower_ind_id. Here we generate a random hermitian MPO and form an ‘overlap’ network with our MPS:
A = qtn.MPO_rand_herm(20, bond_dim=7, tags=['HAM'])
pH = p.H
# This inplace modifies the indices of each to form overlap
p.align_(A, pH)
(pH & A & p).draw(color='HAM', iterations=20, layout='sfdp')
Compute the actual contraction (... means contract everything, but use the structure if possible):
(pH & A & p) ^ ...
-2.989066451210574e-07
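An MPO can also be applied directly to an MPS; a minimal sketch using apply, after which the enlarged state would usually be compressed (the cutoff value here is an arbitrary choice, and apply is assumed to handle index alignment itself):
# compute A|p> as a new MPS - its bond dimension is multiplied by that of A
Ap = A.apply(p)
Ap.compress(cutoff=1e-10)
Ap.show()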
5.3. Building Hamiltonians
There are various built-in MPO Hamiltonians:
MPO_ham_ising
MPO_ham_XY
MPO_ham_heis
MPO_ham_mbl
These all accept a cyclic argument to enable periodic boundary conditions (PBC), and an S argument to set the size of spin. For generating other spin Hamiltonians see SpinHam1D, or consider using the raw constructor of MatrixProductOperator.
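For example, a minimal sketch of building one of these (the j keyword and defaults are assumptions; check the individual function signatures):
# a 30-site spin-1/2 Heisenberg MPO with periodic boundary conditions
H_heis = qtn.MPO_ham_heis(30, j=1.0, cyclic=True)
H_heis.show()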
5.4. Quick DMRG2 Intro
First we build a Hamiltonian term by term (though we could just use MPO_ham_heis):
builder = qtn.SpinHam1D(S=1)
builder += 1/2, '+', '-'
builder += 1/2, '-', '+'
builder += 1, 'Z', 'Z'
H = builder.build_mpo(L=100)
Then we construct the 2-site DMRG object (DMRG2), with the Hamiltonian MPO, a default sequence of maximum bond dimensions, and a bond compression cutoff:
dmrg = qtn.DMRG2(H, bond_dims=[10, 20, 100, 100, 200], cutoffs=1e-10)
The DMRG object will automatically detect OBC/PBC. Now we can solve to a certain absolute energy tolerance, showing progress and a schematic of the final state:
dmrg.solve(tol=1e-6, verbosity=1)
1, R, max_bond=(10/10), cutoff:1e-10
100%|###########################################| 99/99 [00:01<00:00, 78.98it/s]
Energy: -138.70438890115457 ... not converged.
2, R, max_bond=(10/20), cutoff:1e-10
100%|##########################################| 99/99 [00:00<00:00, 163.85it/s]
Energy: -138.9365878885854 ... not converged.
3, R, max_bond=(20/100), cutoff:1e-10
100%|###########################################| 99/99 [00:01<00:00, 67.98it/s]
Energy: -138.94004531702979 ... not converged.
4, R, max_bond=(58/100), cutoff:1e-10
100%|###########################################| 99/99 [00:05<00:00, 17.74it/s]
Energy: -138.94008547699534 ... not converged.
5, R, max_bond=(92/200), cutoff:1e-10
100%|###########################################| 99/99 [00:07<00:00, 14.04it/s]
Energy: -138.94008604061847 ... converged!
True
dmrg.state.show(max_width=80)
3 9 27 54 65 74 78 82 86 89 91 94 95 95 95 95 95 95 94 93 93 93 93 92
... >─>─>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>── ...
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
...
91 91 91 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
... >──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──> ...
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
...
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 9
... ──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>─ ...
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
...
0 90 90 90 90 90 90 90 91 94 96 97 97 98 97 96 96 95 92 90 87 83 78 73
... ─>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>──>── ...
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
...
64 53 27 9 3
>──>──>──>─>─●
│ │ │ │ │ │
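The converged energy and ground state are then available as attributes; a short sketch of using them (the operator and sites chosen here are arbitrary, and correlation is assumed as above):
import quimb as qu
E0 = dmrg.energy  # the converged ground state energy
gs = dmrg.state   # the ground state itself, a MatrixProductState
# e.g. a connected spin-spin correlator in the ground state (S=1 chain)
Sz = qu.spin_operator('Z', S=1)
gs.correlation(Sz, 40, 60)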
There are many options stored in the dict DMRG.opts - an explanation of each of these is given in get_default_opts(), and it may be necessary to tweak these to achieve the best performance/accuracy, especially for PBC (see Examples).
Note
Performance Tips
Make sure numpy is linked to a fast BLAS (e.g. the MKL version that comes with conda).
Install slepc4py to use as the iterative eigensolver; it's faster than scipy.
If the Hamiltonian is real, compile and use a real version of SLEPc (set the environment variable PETSC_ARCH before launch).
Periodic systems are in some ways easier to solve if longer, since this reduces correlations the ‘long way round’.
5.5. Quick TEBD Intro
Time Evolving Block Decimation (TEBD) requires not an MPO but a specification of the local, interacting term(s) of a Hamiltonian. This is encapsulated in the LocalHam1D object, which is initialized with the sum of two-site terms H2, and one-site terms H1 (if any). LocalHam1D objects can also be built directly from a SpinHam1D instance using the build_local_ham() method.
There are also the following built-in LocalHam1D Hamiltonians:
ham_1d_heis
ham_1d_ising
ham_1d_XY
Here we build a LocalHam1D using a SpinHam1D:
builder = qtn.SpinHam1D(S=1 / 2)
builder.add_term(1.0, 'Z', 'Z')
builder.add_term(0.9, 'Y', 'Y')
builder.add_term(0.8, 'X', 'X')
builder.add_term(0.6, 'Z')
# we could now call builder.build_local_ham(20); here we instead use a built-in:
H = qtn.ham_1d_heis(20, bz=0.1)
# check the two site term
H.terms[0, 1]
array([[ 0.175, 0. , 0. , 0. ],
[ 0. , -0.275, 0.5 , 0. ],
[ 0. , 0.5 , -0.225, 0. ],
[ 0. , 0. , 0. , 0.325]])
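Note the one-site bz field has been folded into the neighboring two-site terms (which is why the diagonal above differs slightly from a bare Heisenberg term); any pair's term can be inspected the same way:
# a bulk term, for comparison with the boundary term above
H.terms[9, 10]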
Then we set up an initial state and the TEBD object itself, which mimics the general API of quimb.Evolution:
psi0 = qtn.MPS_neel_state(20)
tebd = qtn.TEBD(psi0, H)
Now we are ready to evolve. By setting a tol, the required timestep dt is computed for us:
tebd.update_to(T=3, tol=1e-3)
t=3, max-bond=34: 100%|##########| 100/100 [00:02<00:00, 49.45%/s]
After the evolution we can see that entanglement has been generated throughout the chain:
tebd.pt.show()
2 4 8 16 29 34 33 34 33 34 33 34 33 34 29 16 8 4 2
>─>─>─>──>──>──>──>──>──>──>──>──>──>──>──>──>─>─>─●
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
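Intermediate states can also be stepped through during an evolution; a sketch using the at_times generator (the times and tolerance here are arbitrary choices, continuing on from T=3):
# keep evolving, measuring <Z> at the central site along the way
mzs = [
    psi_t.magnetization(10)
    for psi_t in tebd.at_times([3.5, 4.0, 4.5, 5.0], tol=1e-3)
]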
A more complete demonstration can be found in the Examples.
5.6. Gates: compute local quantities and simulate circuits
On top of the builtin methods mentioned earlier (entropy(), schmidt_gap(), magnetization(), correlation(), logneg_subsys(), etc.), many other quantities are encapsulated by the gate() method, which works on any 1D tensor network vector (MPS, MERA, etc.). This ‘applies’ a given operator to 1 or more sites, whilst maintaining the ‘physical’, outer indices. This not only directly allows quantum circuit style simulation but also makes local quantities (i.e. non-MPO) easy to compute:
import quimb as qu
Z = qu.pauli('Z')
# compute <psi0|Z_i|psi0> for neel state above
[
psi0.gate(Z, i).H @ psi0
for i in range(10)
]
[(1+0j),
(-1+0j),
(1+0j),
(-1+0j),
(1+0j),
(-1+0j),
(1+0j),
(-1+0j),
(1+0j),
(-1+0j)]
There are four ways in which a gate can be applied:
Lazily (contract=False) - the gate is added to the tensor network but nothing is contracted. This is the default.
Lazily with split (contract='split-gate') - the gate is split before it is added to the network.
Eagerly (contract=True) - the gate is contracted into the tensor network. If the gate acts on more than one site this will produce larger tensors.
Swap and Split (contract='swap+split') - sites will be swapped until adjacent, the gate will be applied and the resulting tensor split, then the sites swapped back into their original positions. This explicitly maintains the exact structure of an MPS (at the cost of increasing bond dimension), unlike the other methods.
Here’s a quantum computation style demonstration of the lazy method:
import quimb as qu
# some operators to apply
H = qu.hadamard()
CNOT = qu.controlled('not')
# set up an initial register of qubits
n = 10
psi0 = qtn.MPS_computational_state('0' * n, tags='PSI0')
# apply hadamard to each site
for i in range(n):
psi0.gate_(H, i, tags='H')
# apply CNOT to even pairs
for i in range(0, n, 2):
psi0.gate_(CNOT, (i, i + 1), tags='CNOT')
# apply CNOT to odd pairs
for i in range(1, n - 1, 2):
psi0.gate_(CNOT, (i, i + 1), tags='CNOT')
Note we have used the inplace gate_ (with a trailing underscore), which modifies the original psi0 object. However, psi0 has its physical site indices maintained such that overall it looks like the same object:
sorted(psi0.outer_inds())
['k0', 'k1', 'k2', 'k3', 'k4', 'k5', 'k6', 'k7', 'k8', 'k9']
(psi0.H & psi0) ^ all
0.9999999999999981
But the network now contains the gates as additional tensors:
psi0.draw(color=['PSI0', 'H', 'CNOT'], show_tags=False, layout='neato')
With the swap and split method, MPS form is always maintained, which allows a canonical form and thus optimal trimming of singular values:
n = 10
psi0 = qtn.MPS_computational_state('0' * n)
for i in range(n):
# 'swap+split' is ignored for one-site gates
psi0.gate_(H, i, contract='swap+split')
# use Z-phase to create entanglement
Rz = qu.phase_gate(0.42)
for i in range(n):
psi0.gate_(Rz, i, contract='swap+split')
for i in range(0, n, 2):
psi0.gate_(CNOT, (i, i + 1), contract='swap+split')
for i in range(1, n - 1, 2):
psi0.gate_(CNOT, (i, i + 1), contract='swap+split')
# act with one long-range CNOT
psi0.gate_(CNOT, (2, n - 2), contract='swap+split')
MatrixProductState(tensors=10, indices=19, L=10, max_bond=4)
[interactive representation of the 10 complex tensors elided]
We still have an MPS, but with increased bond dimension:
psi0.show()
2 2 4 4 4 4 4 2 2
>─>─>─>─>─>─>─●─●─<
│ │ │ │ │ │ │ │ │ │
Finally, the eager (contract=True) method works fairly simply:
psi0_CNOT = psi0.gate(CNOT, (1, n - 2), contract=True)
psi0_CNOT.draw(color=[psi0.site_tag(i) for i in range(n)])
Where we can see that the gate, site 1, and site 8 have been combined into a new rank-6 tensor.
A much more detailed run-through of quantum circuit simulation using tensor networks and the Circuit object can be found in the example Quantum Circuits.