7. Quantum Circuits

quimb has powerful support for simulating quantum circuits via its ability to represent and contract arbitrary geometry tensor networks. However, because its representation is generally neither the full wavefunction (like many other simulators) nor a specific TN (for example an MPS or PEPS, like some other simulators), using it is a bit different and can require some extra thought.

Specifically, the computational memory and effort are very sensitive not only to what you want to compute, but also to how long you are willing to spend computing how to compute it - essentially, pre-processing.

Note

All of which is to say: you are unfortunately quite unlikely to achieve the best performance without some tweaking of the default arguments.

Nonetheless, here’s a quick preview of the kind of circuit that many classical simulators might struggle with - an 80 qubit GHZ-state prepared using a completely randomly ordered sequence of CNOTs:

%config InlineBackend.figure_formats = ['svg']

import random
import quimb as qu
import quimb.tensor as qtn

N = 80
circ = qtn.Circuit(N)

# randomly permute the order of qubits
regs = list(range(N))
random.shuffle(regs)

# hadamard on one of the qubits
circ.apply_gate('H', regs[0])

# chain of cnots to generate GHZ-state
for i in range(N - 1):
    circ.apply_gate('CNOT', regs[i], regs[i + 1])

# apply multi-controlled NOT
circ.apply_gate('X', regs[-1], controls=regs[:-1])

# sample it a few times
for b in circ.sample(1):
    print(b)
11111111111111111111111110111111111111111111111111111111111111111111111111111111

As mentioned above, various pre-processing steps need to occur (which will happen on the first run if not explicitly called). The results of these are cached such that the more you sample the more the simulation should speed up:

%%time
# sample it 8 times
for b in circ.sample(8):
    print(b)
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
11111111111111111111111110111111111111111111111111111111111111111111111111111111
11111111111111111111111110111111111111111111111111111111111111111111111111111111
11111111111111111111111110111111111111111111111111111111111111111111111111111111
11111111111111111111111110111111111111111111111111111111111111111111111111111111
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
CPU times: user 794 ms, sys: 137 μs, total: 794 ms
Wall time: 793 ms

Collect some statistics:

%%time
from collections import Counter

# sample it 100 times, count results:
Counter(circ.sample(100))
CPU times: user 254 ms, sys: 45.3 ms, total: 299 ms
Wall time: 286 ms
Counter({'00000000000000000000000000000000000000000000000000000000000000000000000000000000': 65,
         '11111111111111111111111110111111111111111111111111111111111111111111111111111111': 35})

7.1. Simulation Steps

Here’s an overview of the general steps for a tensor network quantum circuit simulation:

  1. Build the tensor network representation of the circuit. This involves taking the initial state (by default the product state \( | 000 \ldots 00 \rangle \)) and adding tensors representing the gates to it, possibly performing low-rank decompositions on them if beneficial.

  2. Form the entire tensor network of the quantity you actually want to compute. This might include:

    • the full, dense, wavefunction (i.e. a single tensor)

    • a local expectation value or reduced density matrix

    • a marginal probability distribution to sample bitstrings from, mimicking a real quantum computer (this is what is happening above)

    • the fidelity with a target state or unitary, maybe to use automatic differentiation to train the parameters of a given circuit to perform a specific task

  3. Perform local simplifications on the tensor network to make it easier (possibly trivial!) to contract. This step, whilst efficient in the complexity sense, can still introduce some significant overhead.

  4. Find a contraction path for this simplified tensor network. This is a series of pairwise tensor contractions that turn the tensor network into a single tensor, represented by a binary contraction tree. The memory required for the intermediate tensors can be checked in advance at this stage.

  5. Optionally slice (or ‘chunk’) the contraction, breaking it into many independent, smaller contractions, either to fit memory constraints or to introduce embarrassing parallelism.

  6. Perform the contraction! Up until this point the tensors are generally very small and so can be easily passed to some other library with which to perform the actual contraction (for example, one with GPU support).

Warning

The overall computational effort and memory required in this last step is very sensitive (we are talking possibly orders and orders of magnitude) to how well one finds the so-called ‘contraction path’ or ‘contraction tree’ - which itself can take some effort. The overall simulation is thus a careful balancing of time spent (a) simplifying, (b) path finding, and (c) contracting.

Note

It’s also important to note that this last step is where the exponential slow-down expected for generic quantum circuits will appear. Unless the circuit is trivial in some way, the tensor network simplification and path finding can only ever shave off a (potentially very significant) prefactor from an underlying exponential scaling.
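To make these steps concrete, here is a minimal sketch of steps 2-6 for a single amplitude, using only the rehearsal interface that is covered in detail later in this section:

# steps 2-4: form the target TN, simplify it, and find a contraction path
rehs = circ.amplitude_rehearse()
tn, tree = rehs['tn'], rehs['tree']

# check the width (log2 memory) and cost found before committing (steps 4-5)
print(tree.contraction_width(), tree.contraction_cost(log=10))

# step 6: perform the actual contraction using the found tree
tn.contract(all, optimize=tree, output_inds=())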

7.2. Building the Circuit

The main relevant object is Circuit. Under the hood this uses gate_TN_1D(), which applies an operator on some number of sites to any notionally 1D tensor network (not just an MPS), whilst maintaining the outer indices (e.g. 'k0', 'k1', 'k2', ...). The various options for applying the operator and propagating tags to it (if not contracted in) can be found in gate_TN_1D(). Note that the ‘1D’ nature of the TN is just for indexing; gates can be applied to arbitrary combinations of sites within this ‘register’.
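For instance, gate_TN_1D can be called directly on an MPS, outside of any Circuit (a minimal sketch):

# apply an X gate to site 2 of a small computational-basis MPS, in place
mps = qtn.MPS_computational_state('0000')
qtn.gate_TN_1D(mps, qu.pauli('X'), 2, inplace=True)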

The following is a basic example of building a quantum circuit TN by applying a variety of gates to, for visualization purposes, nearest neighbors in a chain.

# 10 qubits and tag the initial wavefunction tensors
circ = qtn.Circuit(N=10)

# initial layer of hadamards
for i in range(10):
    circ.apply_gate('H', i, gate_round=0)

# 8 rounds of entangling gates
for r in range(1, 9):

    # even pairs
    for i in range(0, 10, 2):
        circ.apply_gate('CX', i, i + 1, gate_round=r)

    # Y-rotations
    for i in range(10):
        circ.apply_gate('RZ', 1.234, i, gate_round=r)

    # odd pairs
    for i in range(1, 9, 2):
        circ.apply_gate('CZ', i, i + 1, gate_round=r)

    # X-rotations
    for i in range(10):
        circ.apply_gate('RX', 1.234, i, gate_round=r)

# final layer of hadamards
for i in range(10):
    circ.apply_gate('H', i, gate_round=r + 1)

circ
<Circuit(n=10, num_gates=252, gate_opts={'contract': 'auto-split-gate', 'propagate_tags': 'register'})>

The basic tensor network representing the state is stored in the .psi attribute, which we can then visualize:

circ.psi.draw(color=['PSI0', 'H', 'CX', 'RZ', 'RX', 'CZ'])

Note that by default the CNOT and CZ gates have been split via a rank-2 spatial decomposition into two parts, each acting on a single site but connected by a new bond. We can also visualize the default (propagate_tags='register') method for adding site tags to the applied operators:

circ.psi.draw(color=[f'I{i}' for i in range(10)])

Or, since we supplied gate_round as a keyword (which is optional), the tensors are also tagged by round:

circ.psi.draw(color=['PSI0'] + [f'ROUND_{i}' for i in range(10)])

All of these might be helpful when addressing only certain tensors:

# select the subnetwork of tensors with *all* following tags
circ.psi.select(['CX', 'I3', 'ROUND_3'], which='all')
TensorNetworkGenVector(tensors=1, indices=3)
Tensor(shape=(2, 2, 2), inds=[_119073AASdN, _119073AASda, _119073AASdM], tags={GATE_69, ROUND_3, CX, I3}),backend=numpy, dtype=complex128, data=array([[[ 0.84089642+0.j, 0. +0.j], [ 0. +0.j, 0.84089642+0.j]], [[-0. +0.j, -0.84089642+0.j], [-0.84089642+0.j, 0. +0.j]]])

Note

The tensor(s) of each gate is/are also individually tagged like [f'GATE_{g}' for g in range(circ.num_gates)].
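For example, to select the tensor(s) of a single gate by this tag (a quick sketch):

# the CX applied as gate 10 was split into two tensors, both tagged GATE_10
circ.psi.select('GATE_10')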

The full list of currently implemented gates is here:

print("\n".join(sorted(qtn.circuit.ALL_GATES)))
CCNOT
CCX
CCY
CCZ
CNOT
CRX
CRY
CRZ
CSWAP
CU1
CU2
CU3
CX
CY
CZ
FREDKIN
FS
FSIM
FSIMG
GIVENS
GIVENS2
H
HZ_1_2
IDEN
IS
ISWAP
RX
RXX
RY
RYY
RZ
RZZ
S
SDG
SU4
SWAP
T
TDG
TOFFOLI
U1
U2
U3
W_1_2
X
X_1_2
Y
Y_1_2
Z
Z_1_2

7.3. Parametrized Gates

Of these gates, any which take parameters - ['RX', 'RY', 'RZ', 'U3', 'FSIM', 'RZZ', ...] - can be ‘parametrized’, which adds the gate to the network as a PTensor. The main use of this is that when optimizing a TN, for example, the parameters that generate the tensor data will be optimized rather than the tensor data itself.

circ_param = qtn.Circuit(6)

for l in range(3):
    for i in range(0, 6, 2):
        circ_param.apply_gate('FSIM', random.random(), random.random(), i, i + 1, parametrize=True, contract=False)
    for i in range(1, 5, 2):
        circ_param.apply_gate('FSIM', random.random(), random.random(), i, i + 1, parametrize=True, contract=False)
    for i in range(6):
        circ_param.apply_gate('U3', random.random(), random.random(), random.random(), i, parametrize=True)

circ_param.psi.draw(color=['PSI0', 'FSIM', 'U3'])

We’ve used the contract=False option, which doesn’t try to split the gate tensor in any way, so here there is now a single tensor per two-qubit gate. In fact, for 'FSIM' with random parameters no low-rank decomposition would occur anyway, but this is also the only mode compatible with parametrized tensors:

circ_param.psi['GATE_0']
PTensor(shape=(2, 2, 2, 2), inds=[_119073AASiy, _119073AASiu, _119073AASio, _119073AASip], tags={GATE_0, FSIM, I0, I1}),backend=numpy, dtype=None, data=array([[[[1. +0.j , 0. +0.j ], [0. +0.j , 0. +0.j ]], [[0. +0.j , 0.87547496+0.j ], [0. -0.48326349j, 0. +0.j ]]], [[[0. +0.j , 0. -0.48326349j], [0.87547496+0.j , 0. +0.j ]], [[0. +0.j , 0. +0.j ], [0. +0.j , 0.94853626-0.31666855j]]]])

For most tasks like contraction these are transparently handled like normal tensors:

circ_param.amplitude('101001')
(-0.07115178600720336+0.02949649184357368j)
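The underlying parameters remain accessible on the PTensor itself - these are what an optimizer would actually vary (a quick sketch):

# the raw parameters that generate the first FSIM gate's tensor data
circ_param.psi['GATE_0'].params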

7.4. Forming the Target Tensor Network

You can access the wavefunction tensor network \(U |0\rangle^{\otimes{N}}\) or more generally \(U |\psi_0\rangle\) with Circuit.psi or just the unitary, \(U\), with Circuit.uni, and then manipulate and contract these yourself. However, there are built-in methods for constructing and contracting the tensor network to perform various common tasks.
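For example, a minimal sketch of grabbing both objects and contracting the norm yourself ('auto-hq' being a standard contraction path preset):

psi = circ.psi    # TN form of U|psi_0>
uni = circ.uni    # TN form of U itself

# e.g. contract the norm <psi|psi> ourselves - should be ~1.0
(psi.H & psi).contract(all, optimize='auto-hq', output_inds=())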

7.4.1. Compute an amplitude

amplitude

This computes a single wavefunction amplitude coefficient, or transition amplitude:

\[ c_x = \langle x | U | \psi_0 \rangle \]

with \(x=0101010101 \ldots\), for example. The probability of sampling \(x\) from this circuit is \(|c_x|^2\).

Example usage:

circ.amplitude('0101010101')
(-0.0062675896452940244+0.012702244544177444j)

7.4.2. Compute a local expectation

local_expectation

For an operator \(G_{\bar{q}}\) acting on qubits \(\bar{q}\), this computes:

\[ \langle \psi_{\bar{q}} | G_{\bar{q}} | \psi_{\bar{q}} \rangle \]

where \(\psi_{\bar{q}}\) is the circuit wavefunction including only the gates in the ‘reverse lightcone’ (i.e. the causal cone) of the qubits \(\bar{q}\) - the remaining gates cancel with their conjugates to the identity and are removed from the TN used.

Example usage:

circ.local_expectation(qu.pauli('Z') & qu.pauli('Z'), (4, 5))
-0.018188965185456228

You can compute several individual expectations on the same sites by supplying a list (they are computed in a single contraction):

circ.local_expectation(
    [qu.pauli('X') & qu.pauli('X'),
     qu.pauli('Y') & qu.pauli('Y'),
     qu.pauli('Z') & qu.pauli('Z')],
     where=(4, 5),
)
((-0.005784719259097598+4.9439619065339e-17j),
 (0.05890188167924251+1.6046192152785466e-17j),
 (-0.018188965185456277-1.474514954580286e-17j))

7.4.3. Compute a reduced density matrix

partial_trace

This similarly takes a subset of the qubits, \(\bar{q}\), and contracts the wavefunction with its bra, but now only the qubits outside of \(\bar{q}\), producing a reduced density matrix:

\[ \rho_{\bar{q}} = \mathrm{Tr}_{\bar{p}} | \psi_{\bar{q}} \rangle \langle \psi_{\bar{q}} | \]

where the partial trace is over \(\bar{p}\), the complementary set of qubits to \(\bar{q}\). Obviously, once you have \(\rho_{\bar{q}}\) you can compute many different local expectations from it, so this can be more efficient than repeatedly calling local_expectation().

Example usage:

circ.partial_trace((4, 5)).round(3)
array([[ 0.252+0.j   ,  0.013+0.011j, -0.019+0.007j, -0.016-0.003j],
       [ 0.013-0.011j,  0.255+0.j   ,  0.013+0.014j,  0.02 +0.017j],
       [-0.019-0.007j,  0.013-0.014j,  0.254+0.j   ,  0.019+0.012j],
       [-0.016+0.003j,  0.02 -0.017j,  0.019-0.012j,  0.239+0.j   ]])
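For example, reusing the density matrix above for several expectations at once, with quimb's dense expec function (a small sketch):

rho = circ.partial_trace((4, 5))

# compute several local expectations from the same density matrix
for G in [qu.pauli('X') & qu.pauli('X'),
          qu.pauli('Y') & qu.pauli('Y'),
          qu.pauli('Z') & qu.pauli('Z')]:
    print(qu.expec(G, rho))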

7.4.4. Compute a marginal probability distribution

compute_marginal

This method computes the probability distribution over some qubits, \(\bar{q}\), conditioned on some partial fixed result on qubits \(\bar{f}\) (which can be empty).

\[ p(\bar{q} | x_{\bar{f}}) = \mathrm{diag} \mathrm{Tr}_{\bar{p}} \langle x_{\bar{f}} | \psi_{\bar{f} \cup \bar{q}} \rangle \langle \psi_{\bar{f} \cup \bar{q}} | x_{\bar{f}} \rangle \]

Here only the causal cone relevant to \(\bar{f} \cup \bar{q}\) is needed, with the remaining qubits, \(\bar{p}\), being traced out. We directly take the diagonal within the contraction using hyper-indices (a COPY-tensor) to avoid forming the full reduced density matrix. The result is a \(2^{|\bar{q}|}\) dimensional tensor containing the probabilities for each bit-string \(x_{\bar{q}}\), given that we have already ‘measured’ \(x_{\bar{f}}\).

Example usage:

p = circ.compute_marginal((1, 2), fix={0: '1', 3: '0', 4: '1'}, dtype='complex128')
p
array([[0.03422455, 0.02085596],
       [0.03080204, 0.02780321]])
qtn.circuit.sample_bitstring_from_prob_ndarray(p / p.sum())
'01'

7.4.5. Generate unbiased samples

sample

The main use of Circuit.compute_marginal is as a subroutine for generating unbiased samples from circuits. We first pick some group of qubits, \(\bar{q_A}\), to ‘measure’, then condition on the resulting bitstring \(x_{\bar{q_A}}\) to compute the marginal on the next group of qubits, \(\bar{q_B}\), and so forth. Eventually we reach the ‘final marginal’, where we no longer need to trace any qubits out, so instead we can compute:

\[ p(\bar{q_Z} | x_{\bar{q_A}} x_{\bar{q_B}} \ldots) = |\langle x_{\bar{q_A}} x_{\bar{q_B}} \ldots | \psi \rangle|^2 \]

Since the ‘bra’ representing the partial bit-string only acts on some of the qubits, this object is still a \(2^{|\bar{q_Z}|}\) dimensional tensor, which we sample from to get the final bit-string \(x_{\bar{q_Z}}\). The overall sample generated is then the concatenation of all these bit-strings:

\[ x_{\bar{q_A}} x_{\bar{q_B}} \ldots x_{\bar{q_Z}} \]

As such, to generate a sample once we have put our qubits into \(N_g\) groups, we need to perform \(N_g\) contractions. The contractions near the beginning are generally easier since we only need the causal cone for a small number of qubits, and the contractions towards the end are easier since we have largely or fully severed the bonds between the ket and bra by conditioning.
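Schematically, a single step of this chain can be performed by hand with compute_marginal from the previous subsection (in practice Circuit.sample chooses the groups and ordering automatically):

# marginal on a first group of qubits, nothing fixed yet
p_A = circ.compute_marginal((0, 1))
x_A = qtn.circuit.sample_bitstring_from_prob_ndarray(p_A / p_A.sum())

# condition on that outcome to compute the next group's marginal
p_B = circ.compute_marginal((2, 3), fix={0: x_A[0], 1: x_A[1]})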

This is generally more expensive than computing local quantities but there are a couple of reprieves:

  1. Because of causality we are free to choose the order and groupings of the qubits in whichever way is most efficient.

The automatic choice is to start at the qubit(s) with the smallest reverse lightcone and greedily expand (see the section below). Grouping the qubits together can have a large beneficial impact on overall computation time, but note the required memory then scales at least as \(2^{|\bar{q}|}\).

Note

You can set the group size to be that of the whole system, which is equivalent to sampling from the full wavefunction. If you want to do this, it would be more efficient to call Circuit.simulate_counts, which doesn’t draw the samples individually.

  2. Once we have computed a particular marginal we can cache the result, meaning that if we come across the same sub-string again we don’t need to contract anything - the trivial example being the first marginal we compute.

branching

The second point is easy to understand if we think of the sampling process as the repeated exploration of a probability tree, as above - shown for 3 qubits grouped individually, with a first sample of \(011\) drawn. If the next sample we drew was \(010\), we wouldn’t have to perform any more contractions, since we’d be following already explored branches. In the extreme case of the GHZ-state at the top of this page, there are only two branches, so once we have generated the all-zeros and all-ones results we won’t need to perform any more contractions.

Example usage:

for b in circ.sample(10, group_size=3):
    print(b)
0011011010
0101100000
0100001010
0110011111
1000000001
1011000100
1111010101
0011101001
0011010001
0101110000

7.4.6. Generate samples from a chaotic circuit

sample_chaotic

Some circuits can be assumed to produce chaotic results, and a useful property of these is that if you remove (partially trace) a certain number of qubits, the remaining marginal probability distribution is close to uniform. This is like saying that as we travel along the probability tree depicted above, the probabilities are all very similar until we reach ‘the last few qubits’, whose marginal distribution then depends sensitively on the bit-string generated so far.

If we know roughly what number of qubits, \(m\), suffices for this property to hold, we can uniformly sample bit-strings for the first \(f = N - m\) qubits, and then we only need to contract the ‘final marginal’ from above. In other words, we only need to compute and sample from:

\[ p( \bar{m} | x_{\bar{f}} ) = |\langle x_{\bar{f}} | \psi \rangle|^2 \]

Where \(\bar{m}\) is the set of marginal qubits, and \(\bar{f}\) is the set of qubits fixed to a random bit-string. If \(m\) is not too large, this is generally a very similar cost to that of computing a single amplitude.

Note

This is the relevant method for classically simulating the results in “Quantum supremacy using a programmable superconducting processor”.

Example usage:

for b in circ.sample_chaotic(10, marginal_qubits=5):
    print(b)
1000111100
1001110110
0101101101
1110100010
0101111000
0001101011
0000111011
0110001111
0110111010
0110101110

Five of these qubits will now be sampled completely randomly.

7.4.7. Compute the dense vector representation of the state

to_dense

In other words just contract the core circ.psi object into a single tensor:

\[ U | \psi_0 \rangle \rightarrow |\psi_{\mathrm{dense}}\rangle \]

Where \(|\psi_{\mathrm{dense}}\rangle\) is a column vector. Unlike other simulators, however, the contraction order here isn’t defined by the order the gates were applied in, meaning the full wavefunction does not necessarily need to be formed until the last few contractions.

Hint

For small to medium circuits, the benefits of doing this compared with standard, ‘Schrödinger-style’ simulation might be negligible (since the overall scaling is still limited by the number of qubits). Indeed, the savings are likely outweighed by the pre-processing overhead if you are only running the circuit geometry once.

Example usage:

circ.to_dense()
[[ 0.022278+0.044826j]
 [ 0.047567+0.001852j]
 [-0.028239+0.01407j ]
 ...
 [ 0.016   -0.008447j]
 [-0.025437-0.015225j]
 [-0.033285-0.030653j]]

7.4.8. Rehearsals

Each of the above methods can perform a trial run, where the tensor networks and contraction paths are generated and intermediates possibly cached, but the main contraction is not performed. Either supply rehearse=True or use the corresponding rehearsal methods, e.g. amplitude_rehearse().

These each return a dict with the tensor network that would be contracted in the main part of the computation (with the key 'tn'), and the cotengra.ContractionTree object describing the contraction path found for that tensor network (with the key 'tree'). For example:

rehs = circ.amplitude_rehearse()

# contraction width
W = rehs['tree'].contraction_width()
W
7.0

A width in the upper twenties is roughly the limit for standard (~10GB) amounts of RAM.
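Since the width is the log2 size of the largest intermediate tensor, it translates directly into a rough peak memory estimate (a back-of-envelope sketch, assuming complex128 entries at 16 bytes each):

# rough peak memory implied by the contraction width
print(f"~{2**W * 16 / 1e9:.1e} GB")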

# contraction cost
# N.B.
#       * 2  to get real dtype FLOPs
#       * 8  to get complex dtype FLOPs (relevant for most QC)
C = rehs['tree'].contraction_cost(log=10)
C
3.93791890264778
# perform contraction
rehs['tn'].contract(all, optimize=rehs['tree'], output_inds=())
(0.0074846830625545525+0.030157252558037258j)

sample_rehearse() and sample_chaotic_rehearse() both return a dict of dicts, where the keys of the top dict are the (ordered) groups of marginal qubits used, and the values are the rehearsal dicts as above.

rehs = circ.sample_rehearse(group_size=3)
rehs.keys()
dict_keys([(0, 1, 2), (3, 4, 9), (5, 6, 7), (8,)])
rehs[(3, 4, 9)].keys()
dict_keys(['tn', 'tree', 'W', 'C'])
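This makes it easy to check, for example, the width and cost of each marginal's contraction before running anything:

# inspect the contraction width and cost found for each group of qubits
for where, info in rehs.items():
    print(where, info['W'], info['C'])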

7.4.9. Unitary Reverse Lightcone Cancellation

In several of the examples above we made use of the ‘reverse lightcone’, i.e. the set of gates that have a causal effect on a particular set of output qubits, \(\bar{q}\), to work with a potentially much smaller TN representation of the wavefunction:

\[ | \psi_{\bar{q}} \rangle \]

This can simply be understood as cancellation of the gate unitaries at the boundary where the bra and ket meet:

\[ U^{\dagger} U = \mathbb{1} \]

if there are no operators or projectors breaking this bond between the bra and ket. Whilst such simplifications can be found by the local simplification methods (see below), it’s easier and quicker to drop these gates explicitly.

You can see which gate tags are in the reverse lightcone of which regions of qubits by calling:

# just show the first 10...
lc_tags = circ.get_reverse_lightcone_tags(where=(0,))
lc_tags[:10]
('PSI0',
 'GATE_0',
 'GATE_1',
 'GATE_2',
 'GATE_3',
 'GATE_4',
 'GATE_5',
 'GATE_6',
 'GATE_7',
 'GATE_8')
circ.psi.draw(color=lc_tags)
_images/74822364f5fb4d7bb3ab709a5d85b574e2586600231fa23d3f7e0ed9b798fcea.svg

We can visualize the effect that selecting only these tensors, \(| \psi \rangle \rightarrow | \psi_{\bar{q}} \rangle\), has on the norm with the following:

# get the reverse lightcone wavefunction of qubit 0
psi_q0 = circ.get_psi_reverse_lightcone(where=(0,))

# plot its norm
(psi_q0.H & psi_q0).draw(color=['PSI0'] + [f'ROUND_{i}' for i in range(10)])
_images/5ff7227e90ffd37547bc9cfe6d21b2f8b6be149a18f2cb82bffdd33b1415bf48.svg

Note

Although we have specified gate rounds here, this is not required to find the reverse lightcones, and indeed arbitrary geometry is handled too.

7.5. Locally Simplifying the Tensor Network (the simplify_sequence kwarg)

All of the main circuit methods take a simplify_sequence kwarg that controls the local tensor network simplifications performed on the target TN object before the main contraction. The kwarg is a string of letters which is cycled through until convergence, with each letter corresponding to a different method:

  • 'R' - rank_simplify: contract pairs of tensors when doing so doesn’t increase the total rank

  • 'D' - diagonal_reduce: replace diagonal tensor axes with hyper-indices

  • 'C' - column_reduce: project out tensor axes with only a single non-zero column

  • 'A' - antidiag_gauge: flip anti-diagonal axes so that they can then be diagonally reduced

  • 'S' - split_simplify: split tensors exhibiting a low-rank structure

The final object thus depends both on which letters are included and the order they are specified in - 'ADCRS' is the default.

As an example, here is the amplitude tensor network of the circuit above, with only ‘rank simplification’ (contracting neighboring tensors that won’t increase rank) performed:

(
    circ
    # get the tensor network
    .amplitude_rehearse(simplify_sequence='R')['tn']
    # plot it with each qubit register highlighted
    .draw(color=[f'I{q}' for q in range(10)])
)
_images/a6aeb93858700869db47f9b8bf55fca658b6409311c27d98de60e7d952a671de.svg

You can see that only 3+ dimensional tensors remain. Now if we turn on all the simplification methods we get an even smaller tensor network:

(
    circ
    # get the tensor network
    .amplitude_rehearse(simplify_sequence='ADCRS')['tn']
    # plot it with each qubit register highlighted
    .draw(color=[f'I{q}' for q in range(10)])
)
_images/adc8571a46ca43f145f014de2c19dcd51c8fb32b343ae8c5a75fc7be760dde38.svg

And we also now have hyper-indices - indices shared by more than two tensors - that have been introduced by the TensorNetwork.diagonal_reduce method.
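One quick way to quantify the effect is simply to count the tensors remaining under each sequence (a small sketch; the exact numbers depend on the circuit):

# compare how many tensors remain under different simplification sequences
for seq in ['R', 'ADCRS']:
    tn = circ.amplitude_rehearse(simplify_sequence=seq)['tn']
    print(f"{seq}: {tn.num_tensors} tensors")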

Hint

Of the five methods, only TensorNetwork.rank_simplify doesn’t require looking inside the tensors at the sparsity structure. This means that, at least for the moment, it is the only method that can be back-propagated through, for example.

The five methods combined can have a significant effect on the complexity of the main TN to be contracted; in the most extreme case they can reduce it to a scalar:

norm = circ.psi.H & circ.psi
norm
TensorNetworkGen(tensors=668, indices=802)
Tensor(shape=(2), inds=[_119073AASbJ], tags={I0, PSI0}),backend=numpy, dtype=complex128, data=array([1.-0.j, 0.-0.j])
Tensor(shape=(2), inds=[_119073AASbK], tags={I1, PSI0}),backend=numpy, dtype=complex128, data=array([1.-0.j, 0.-0.j])
Tensor(shape=(2, 2), inds=[_119073AASbT, _119073AASbJ], tags={GATE_0, ROUND_0, H, I0}),backend=numpy, dtype=complex128, data=array([[ 0.70710678-0.j, 0.70710678-0.j], [ 0.70710678-0.j, -0.70710678-0.j]])
Tensor(shape=(2, 2, 2), inds=[_119073AASbh, _119073AASbT, b], tags={GATE_10, ROUND_1, CX, I0}),backend=numpy, dtype=complex128, data=array([[[ 1.18920712-0.j, 0. -0.j], [ 0. -0.j, 0. -0.j]], [[ 0. -0.j, 0. -0.j], [ 0. -0.j, -1.18920712-0.j]]])

...

norm.full_simplify_(seq='ADCRS')
TensorNetworkGen(tensors=87, indices=67)
Tensor(shape=(2, 2), inds=[_119073AAYDm, _119073AAYDn], tags={GATE_138, ROUND_5, RZ, I7, GATE_144, CZ, I8}),backend=numpy, dtype=complex128, data=array([[ 0.8156179-0.57859091j, 0.8156179-0.57859091j], [ 0.8156179+0.57859091j, -0.8156179-0.57859091j]])
Tensor(shape=(2, 2), inds=[_119073AASfo, _119073AAYDl], tags={GATE_136, ROUND_5, RZ, I5, GATE_143, CZ, I6}),backend=numpy, dtype=complex128, data=array([[ 0.8156179-0.57859091j, 0.8156179-0.57859091j], [ 0.8156179+0.57859091j, -0.8156179-0.57859091j]])
Tensor(shape=(2, 2, 2), inds=[_119073AAYEi, _119073AAYEj, _119073AAYDo], tags={GATE_168, ROUND_6, RZ, I8, GATE_169, I9, GATE_154, ROUND_5, RX, GATE_159, CX}),backend=numpy, dtype=complex128, data=array([[[ 2.26649549e-01-6.47317875e-01j, -4.59200617e-01-1.60782850e-01j], [-2.51780799e-18-4.86535026e-01j, 6.85850166e-01+2.33613923e-17j]], [[-2.51780799e-18+4.86535026e-01j, -6.85850166e-01+2.33613923e-17j], [-2.26649549e-01-6.47317875e-01j, -4.59200617e-01+1.60782850e-01j]]])

...
Tensor(shape=(2, 2), inds=[_119073AASfr, _119073AASew], tags={GATE_130, ROUND_5, CX, I8, GATE_124, ROUND_4, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AASfr, _119073AASfs, _119073AASex], tags={GATE_139, ROUND_5, RZ, I8, GATE_140, I9, GATE_125, ROUND_4, RX, GATE_130, CX}),backend=numpy, dtype=complex128, data=array([[[ 2.26649549e-01+6.47317875e-01j, -4.59200617e-01+1.60782850e-01j], [-2.51780799e-18+4.86535026e-01j, 6.85850166e-01-2.33613923e-17j]], [[-2.51780799e-18-4.86535026e-01j, -6.85850166e-01-2.33613923e-17j], [-2.26649549e-01+6.47317875e-01j, -4.59200617e-01-1.60782850e-01j]]])
Tensor(shape=(2, 2), inds=[_119073AASfp, _119073AASeu], tags={GATE_129, ROUND_5, CX, I6, GATE_122, ROUND_4, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AASfp, _119073AASev, _119073AASfq], tags={GATE_137, ROUND_5, RZ, I6, GATE_123, ROUND_4, RX, I7, GATE_129, CX}),backend=numpy, dtype=complex128, data=array([[[ 0.55939167+0.39682667j, -0.28150475+0.39682667j], [-0.28150475+0.39682667j, 0.55939167+0.39682667j]], [[-0.28150475-0.39682667j, -0.55939167+0.39682667j], [-0.55939167+0.39682667j, -0.28150475-0.39682667j]]])
Tensor(shape=(2, 2), inds=[_119073AAYDj, _119073AASes], tags={GATE_128, ROUND_5, CX, I4, GATE_120, ROUND_4, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AAYDj, _119073AASet, _119073AASfo], tags={GATE_135, ROUND_5, RZ, I4, GATE_121, ROUND_4, RX, I5, GATE_128, CX}),backend=numpy, dtype=complex128, data=array([[[ 0.55939167+0.39682667j, -0.28150475+0.39682667j], [-0.28150475+0.39682667j, 0.55939167+0.39682667j]], [[-0.28150475-0.39682667j, -0.55939167+0.39682667j], [-0.55939167+0.39682667j, -0.28150475-0.39682667j]]])
Tensor(shape=(2, 2), inds=[_119073AASew, _119073AASeB], tags={GATE_101, ROUND_4, CX, I8, GATE_95, ROUND_3, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AASew, _119073AASex, _119073AASeC], tags={GATE_110, ROUND_4, RZ, I8, GATE_111, I9, GATE_96, ROUND_3, RX, GATE_101, CX}),backend=numpy, dtype=complex128, data=array([[[ 2.26649549e-01+6.47317875e-01j, -4.59200617e-01+1.60782850e-01j], [-2.51780799e-18+4.86535026e-01j, 6.85850166e-01-2.33613923e-17j]], [[-2.51780799e-18-4.86535026e-01j, -6.85850166e-01-2.33613923e-17j], [-2.26649549e-01+6.47317875e-01j, -4.59200617e-01-1.60782850e-01j]]])
Tensor(shape=(2, 2), inds=[_119073AASeu, _119073AASdz], tags={GATE_100, ROUND_4, CX, I6, GATE_93, ROUND_3, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AASeu, _119073AASeA, _119073AASev], tags={GATE_108, ROUND_4, RZ, I6, GATE_94, ROUND_3, RX, I7, GATE_100, CX}),backend=numpy, dtype=complex128, data=array([[[ 0.55939167+0.39682667j, -0.28150475+0.39682667j], [-0.28150475+0.39682667j, 0.55939167+0.39682667j]], [[-0.28150475-0.39682667j, -0.55939167+0.39682667j], [-0.55939167+0.39682667j, -0.28150475-0.39682667j]]])
Tensor(shape=(2, 2), inds=[_119073AASes, _119073AASdx], tags={GATE_99, ROUND_4, CX, I4, GATE_91, ROUND_3, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AASes, _119073AASdy, _119073AASet], tags={GATE_106, ROUND_4, RZ, I4, GATE_92, ROUND_3, RX, I5, GATE_99, CX}),backend=numpy, dtype=complex128, data=array([[[ 0.55939167+0.39682667j, -0.28150475+0.39682667j], [-0.28150475+0.39682667j, 0.55939167+0.39682667j]], [[-0.28150475-0.39682667j, -0.55939167+0.39682667j], [-0.55939167+0.39682667j, -0.28150475-0.39682667j]]])
Tensor(shape=(2, 2), inds=[_119073AAYCm, _119073AASdv], tags={GATE_98, ROUND_4, CX, I2, GATE_89, ROUND_3, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AAYCm, _119073AASdw, _119073AASer], tags={GATE_104, ROUND_4, RZ, I2, GATE_90, ROUND_3, RX, I3, GATE_98, CX}),backend=numpy, dtype=complex128, data=array([[[ 0.55939167+0.39682667j, -0.28150475+0.39682667j], [-0.28150475+0.39682667j, 0.55939167+0.39682667j]], [[-0.28150475-0.39682667j, -0.55939167+0.39682667j], [-0.55939167+0.39682667j, -0.28150475-0.39682667j]]])
Tensor(shape=(2, 2), inds=[_119073AASeB, _119073AASdG], tags={GATE_72, ROUND_3, CX, I8, GATE_66, ROUND_2, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AASeB, _119073AASeC, _119073AASdH], tags={GATE_81, ROUND_3, RZ, I8, GATE_82, I9, GATE_67, ROUND_2, RX, GATE_72, CX}),backend=numpy, dtype=complex128, data=array([[[ 2.26649549e-01+6.47317875e-01j, -4.59200617e-01+1.60782850e-01j], [-2.51780799e-18+4.86535026e-01j, 6.85850166e-01-2.33613923e-17j]], [[-2.51780799e-18-4.86535026e-01j, -6.85850166e-01-2.33613923e-17j], [-2.26649549e-01+6.47317875e-01j, -4.59200617e-01-1.60782850e-01j]]])
Tensor(shape=(2, 2), inds=[_119073AASdz, _119073AASdE], tags={GATE_71, ROUND_3, CX, I6, GATE_64, ROUND_2, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AASdz, _119073AASdF, _119073AASeA], tags={GATE_79, ROUND_3, RZ, I6, GATE_65, ROUND_2, RX, I7, GATE_71, CX}),backend=numpy, dtype=complex128, data=array([[[ 0.55939167+0.39682667j, -0.28150475+0.39682667j], [-0.28150475+0.39682667j, 0.55939167+0.39682667j]], [[-0.28150475-0.39682667j, -0.55939167+0.39682667j], [-0.55939167+0.39682667j, -0.28150475-0.39682667j]]])
Tensor(shape=(2, 2), inds=[_119073AASdx, _119073AASdC], tags={GATE_70, ROUND_3, CX, I4, GATE_62, ROUND_2, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AASdx, _119073AASdD, _119073AASdy], tags={GATE_77, ROUND_3, RZ, I4, GATE_63, ROUND_2, RX, I5, GATE_70, CX}),backend=numpy, dtype=complex128, data=array([[[ 0.55939167+0.39682667j, -0.28150475+0.39682667j], [-0.28150475+0.39682667j, 0.55939167+0.39682667j]], [[-0.28150475-0.39682667j, -0.55939167+0.39682667j], [-0.55939167+0.39682667j, -0.28150475-0.39682667j]]])
Tensor(shape=(2, 2), inds=[_119073AASdv, _119073AASdA], tags={GATE_69, ROUND_3, CX, I2, GATE_60, ROUND_2, RX}),backend=numpy, dtype=complex128, data=array([[ 0.96993861-0.j , 0. +0.68806443j], [ 0. -0.68806443j, -0.96993861+0.j ]])
Tensor(shape=(2, 2, 2), inds=[_119073AASdv, _119073AASdB, _119073AASdw], tags={GATE_75, ROUND_3, RZ, I2, GATE_61, ROUND_2, RX, I3, GATE_69, CX}),backend=numpy, dtype=complex128, data=array([[[ 0.55939167+0.39682667j, -0.28150475+0.39682667j], [-0.28150475+0.39682667j, 0.55939167+0.39682667j]], [[-0.28150475-0.39682667j, -0.55939167+0.39682667j], [-0.55939167+0.39682667j, -0.28150475-0.39682667j]]])
Tensor(shape=(2, 2), inds=[_119073AASdG, _119073AASdF], tags={GATE_43, ROUND_2, CX, I8, GATE_51, RZ, I7, GATE_57, CZ}),backend=numpy, dtype=complex128, data=array([[ 0.96993861+0.68806443j, 0.96993861-0.68806443j], [-0.96993861-0.68806443j, 0.96993861-0.68806443j]])
Tensor(shape=(2, 2, 2), inds=[_119073AASdE, _119073AAScK, _119073AASdF], tags={GATE_50, ROUND_2, RZ, I6, GATE_36, ROUND_1, RX, I7, GATE_42, CX}),backend=numpy, dtype=complex128, data=array([[[ 0.55939167+0.39682667j, -0.28150475+0.39682667j], [-0.28150475+0.39682667j, 0.55939167+0.39682667j]], [[-0.28150475-0.39682667j, -0.55939167+0.39682667j], [-0.55939167+0.39682667j, -0.28150475-0.39682667j]]])
Tensor(shape=(2, 2, 2), inds=[_119073AASdG, _119073AAScK, _119073AASdH], tags={GATE_52, ROUND_2, RZ, I8, GATE_14, ROUND_1, CX, GATE_22, I7, GATE_28, CZ, GATE_37, RX, GATE_23, GATE_8, ROUND_0, H, GATE_53, I9, GATE_38, GATE_43, GATE_24, GATE_9, GATE_21, I6, GATE_6, GATE_7, GATE_13}),backend=numpy, dtype=complex128, data=array([[[ 0.5027836 +0.11370736j, -0.02428325-0.12148106j], [-0.02428325-0.12148106j, -0.02428325+0.01722628j]], [[ 0.02428325+0.01722628j, 0.1226808 +0.01722628j], [-0.02428325+0.12148106j, 0.142557 +0.49537684j]]])
Tensor(shape=(2, 2, 2), inds=[_119073AAYBC, _119073AAYAG, _119073AAYBD], tags={GATE_52, ROUND_2, RZ, I8, GATE_14, ROUND_1, CX, GATE_22, I7, GATE_28, CZ, GATE_37, RX, GATE_23, GATE_8, ROUND_0, H, GATE_53, I9, GATE_38, GATE_43, GATE_24, GATE_9, GATE_21, I6, GATE_6, GATE_7, GATE_13}),backend=numpy, dtype=complex128, data=array([[[ 0.5027836 -0.11370736j, -0.02428325+0.12148106j], [-0.02428325+0.12148106j, -0.02428325-0.01722628j]], [[ 0.02428325-0.01722628j, 0.1226808 -0.01722628j], [-0.02428325-0.12148106j, 0.142557 -0.49537684j]]])
Tensor(shape=(2, 2, 2), inds=[_119073AASdB, _119073AASdC, _119073AASdA], tags={GATE_47, ROUND_2, RZ, I3, GATE_55, CZ, I4, GATE_41, CX, GATE_33, ROUND_1, RX, GATE_12, GATE_18, GATE_26, GATE_19, GATE_4, ROUND_0, H, GATE_5, I5, GATE_46, I2, GATE_32, GATE_40, GATE_17, GATE_2, GATE_3, GATE_11}),backend=numpy, dtype=complex128, data=array([[[-0.53798078+3.63434853e-01j, 0.28016385-2.68095775e-17j], [ 0.09258438+2.64423745e-01j, -0.16523255+6.27858598e-01j]], [[-0.28016385+7.81422312e-17j, -0.64718797-5.15358662e-02j], [-0.16523255+6.27858598e-01j, 0.09258438-2.64423745e-01j]]])
Tensor(shape=(2, 2, 2), inds=[_119073AASdD, _119073AASdE, _119073AASdC], tags={GATE_49, ROUND_2, RZ, I5, GATE_56, CZ, I6, GATE_42, CX, GATE_35, ROUND_1, RX, GATE_13, GATE_20, GATE_27, GATE_21, GATE_6, ROUND_0, H, GATE_7, I7, GATE_48, I4, GATE_34, GATE_41, GATE_19, GATE_4, GATE_5, GATE_12}),backend=numpy, dtype=complex128, data=array([[[-0.53798078+3.63434853e-01j, 0.28016385-2.68095775e-17j], [ 0.09258438+2.64423745e-01j, -0.16523255+6.27858598e-01j]], [[-0.28016385+7.81422312e-17j, -0.64718797-5.15358662e-02j], [-0.16523255+6.27858598e-01j, 0.09258438-2.64423745e-01j]]])
Tensor(shape=(2, 2, 2), inds=[_119073AAYAx, _119073AAYAy, _119073AAYAw], tags={GATE_47, ROUND_2, RZ, I3, GATE_55, CZ, I4, GATE_41, CX, GATE_33, ROUND_1, RX, GATE_12, GATE_18, GATE_26, GATE_19, GATE_4, ROUND_0, H, GATE_5, I5, GATE_46, I2, GATE_32, GATE_40, GATE_17, GATE_2, GATE_3, GATE_11}),backend=numpy, dtype=complex128, data=array([[[-0.53798078-3.63434853e-01j, 0.28016385+2.68095775e-17j], [ 0.09258438-2.64423745e-01j, -0.16523255-6.27858598e-01j]], [[-0.28016385-7.81422312e-17j, -0.64718797+5.15358662e-02j], [-0.16523255-6.27858598e-01j, 0.09258438+2.64423745e-01j]]])
Tensor(shape=(2, 2, 2), inds=[_119073AAYAz, _119073AAYBA, _119073AAYAy], tags={GATE_49, ROUND_2, RZ, I5, GATE_56, CZ, I6, GATE_42, CX, GATE_35, ROUND_1, RX, GATE_13, GATE_20, GATE_27, GATE_21, GATE_6, ROUND_0, H, GATE_7, I7, GATE_48, I4, GATE_34, GATE_41, GATE_19, GATE_4, GATE_5, GATE_12}),backend=numpy, dtype=complex128, data=array([[[-0.53798078-3.63434853e-01j, 0.28016385+2.68095775e-17j], [ 0.09258438-2.64423745e-01j, -0.16523255-6.27858598e-01j]], [[-0.28016385-7.81422312e-17j, -0.64718797+5.15358662e-02j], [-0.16523255-6.27858598e-01j, 0.09258438+2.64423745e-01j]]])
Tensor(shape=(2, 2, 2), inds=[_119073AAShh, _119073AAYFe, _119073AAYEj], tags={GATE_197, ROUND_7, RZ, I8, GATE_198, I9, GATE_183, ROUND_6, RX, GATE_188, CX, GATE_212, GATE_217, ROUND_8, GATE_227, GATE_241, GATE_251, ROUND_9, H}),backend=numpy, dtype=complex128, data=array([[[-1.90588793e-01+5.44327281e-01j, 3.86140152e-01+1.35201722e-01j], [ 2.11721571e-18+4.09125559e-01j, -5.76728946e-01-1.96445111e-17j]], [[ 2.11721571e-18-4.09125559e-01j, 5.76728946e-01-1.96445111e-17j], [ 1.90588793e-01+5.44327281e-01j, 3.86140152e-01-1.35201722e-01j]]])
Tensor(shape=(2, 2), inds=[_119073AAShh, _119073AASgm], tags={GATE_188, ROUND_7, CX, I8, GATE_182, ROUND_6, RX, GATE_217, ROUND_8, GATE_211, GATE_212, I9, GATE_227, RZ, GATE_241, GATE_251, ROUND_9, H, GATE_226, GATE_240, GATE_250, GATE_231, CZ, I7, GATE_225}),backend=numpy, dtype=complex128, data=array([[ 0.9407809 -0.66738026j, 0.47343266+0.66738026j], [-0.47343266-0.66738026j, -0.9407809 +0.66738026j]])
Tensor(shape=(2, 2), inds=[_119073AAShh, _119073AAYEi], tags={GATE_188, ROUND_7, CX, I8, GATE_182, ROUND_6, RX, GATE_196, RZ, I7, GATE_202, CZ}),backend=numpy, dtype=complex128, data=array([[-9.69938606e-01+0.00000000e+00j, 0.00000000e+00+6.88064432e-01j], [-5.07964202e-18-6.88064432e-01j, 9.69938606e-01-7.16058072e-18j]])
Tensor(shape=(2, 2), inds=[_119073AAYEh, _119073AAYEi], tags={GATE_167, ROUND_6, RZ, I7, GATE_173, CZ, I8, GATE_195, ROUND_7, I6, GATE_181, RX, GATE_187, CX, GATE_210, GATE_216, ROUND_8, GATE_224, GATE_238, GATE_248, ROUND_9, H, GATE_225, GATE_239, GATE_249, GATE_226, GATE_231, GATE_196, GATE_202}),backend=numpy, dtype=complex128, data=array([[-0.62892738+0.44615459j, -0.62892738+0.44615459j], [-0.62892738-0.44615459j, 0.62892738+0.44615459j]])
Tensor(shape=(2, 2), inds=[_119073AASgk, _119073AASfp], tags={GATE_158, ROUND_6, CX, I6, GATE_151, ROUND_5, RX, GATE_187, ROUND_7, GATE_180, GATE_195, RZ, GATE_181, I7, GATE_210, GATE_216, ROUND_8, GATE_224, GATE_238, GATE_248, ROUND_9, H, GATE_225, GATE_239, GATE_249, GATE_226, I8, GATE_231, CZ, GATE_196, GATE_202, GATE_209, GATE_230, I5, GATE_223, GATE_194, GATE_201, GATE_208, GATE_215, GATE_222, I4, GATE_236, GATE_246, GATE_237, GATE_247}),backend=numpy, dtype=complex128, data=array([[ 1.15345789e+00-5.31640927e-17j, 3.77140584e-17+8.18251118e-01j], [-4.66651456e-17-8.18251118e-01j, -1.15345789e+00+6.57821044e-17j]])
Tensor(shape=(2, 2), inds=[_119073AASgk, _119073AAYDl], tags={GATE_158, ROUND_6, CX, I6, GATE_151, ROUND_5, RX, GATE_165, RZ, I5, GATE_172, CZ}),backend=numpy, dtype=complex128, data=array([[-9.69938606e-01+0.00000000e+00j, 0.00000000e+00+6.88064432e-01j], [ 5.07964202e-18-6.88064432e-01j, 9.69938606e-01+7.16058072e-18j]])
Tensor(shape=(2, 2), inds=[_119073AASfo, _119073AASfp], tags={GATE_136, ROUND_5, RZ, I5, GATE_143, CZ, I6, GATE_164, ROUND_6, I4, GATE_150, RX, GATE_157, CX, GATE_193, ROUND_7, GATE_179, GATE_186, GATE_194, GATE_201, GATE_208, GATE_215, ROUND_8, GATE_222, GATE_236, GATE_246, ROUND_9, H, GATE_223, GATE_237, GATE_247, GATE_224, GATE_230, GATE_165, GATE_172}),backend=numpy, dtype=complex128, data=array([[ 0.61545076+0.43659441j, 0.61545076+0.43659441j], [ 0.61545076-0.43659441j, -0.61545076+0.43659441j]])
Tensor(shape=(2, 2), inds=[_119073AAYDj, _119073AAYCo], tags={GATE_128, ROUND_5, CX, I4, GATE_120, ROUND_4, RX, GATE_157, ROUND_6, GATE_149, GATE_164, RZ, GATE_150, I5, GATE_193, ROUND_7, GATE_179, GATE_186, GATE_194, GATE_201, CZ, I6, GATE_208, GATE_215, ROUND_8, GATE_222, GATE_236, GATE_246, ROUND_9, H, GATE_223, GATE_237, GATE_247, GATE_224, GATE_230, GATE_165, GATE_172, GATE_178, GATE_192, I3, GATE_200, GATE_206, GATE_214, GATE_220, I2, GATE_234, GATE_244, GATE_221, GATE_235, GATE_245, GATE_229, GATE_207, GATE_163, GATE_171, GATE_191, GATE_177, GATE_185}),backend=numpy, dtype=complex128, data=array([[ 1.15345789e+00-6.10152182e-17j, -4.32835657e-17-8.18251118e-01j], [ 5.45687982e-17+8.18251118e-01j, -1.15345789e+00+7.69235868e-17j]])
Tensor(shape=(2, 2, 2), inds=[_119073AAYDj, _119073AAYCp, _119073AASfo], tags={GATE_135, ROUND_5, RZ, I4, GATE_121, ROUND_4, RX, I5, GATE_128, CX, GATE_134, I3, GATE_142, CZ}),backend=numpy, dtype=complex128, data=array([[[-0.55939167+0.39682667j, 0.28150475+0.39682667j], [ 0.28150475+0.39682667j, -0.55939167+0.39682667j]], [[ 0.28150475-0.39682667j, 0.55939167+0.39682667j], [ 0.55939167+0.39682667j, 0.28150475-0.39682667j]]])
Tensor(shape=(2, 2), inds=[_119073AASer, _119073AASes], tags={GATE_105, ROUND_4, RZ, I3, GATE_113, CZ, I4, GATE_133, ROUND_5, I2, GATE_119, RX, GATE_127, CX, GATE_162, ROUND_6, GATE_148, GATE_156, GATE_163, GATE_171, GATE_191, ROUND_7, GATE_177, GATE_185, GATE_192, GATE_200, GATE_206, GATE_214, ROUND_8, GATE_220, GATE_234, GATE_244, ROUND_9, H, GATE_221, GATE_235, GATE_245, GATE_222, GATE_229, GATE_134, GATE_142}),backend=numpy, dtype=complex128, data=array([[ 0.61212697+0.43423655j, 0.61212697+0.43423655j], [ 0.61212697-0.43423655j, -0.61212697+0.43423655j]])
Tensor(shape=(2, 2), inds=[_119073AAYCm, _119073AAYBr], tags={GATE_98, ROUND_4, CX, I2, GATE_89, ROUND_3, RX, GATE_127, ROUND_5, GATE_118, GATE_133, RZ, GATE_119, I3, GATE_162, ROUND_6, GATE_148, GATE_156, GATE_163, GATE_171, CZ, I4, GATE_191, ROUND_7, GATE_177, GATE_185, GATE_192, GATE_200, GATE_206, GATE_214, ROUND_8, GATE_220, GATE_234, GATE_244, ROUND_9, H, GATE_221, GATE_235, GATE_245, GATE_222, GATE_229, GATE_134, GATE_142, GATE_132, I1, GATE_141, GATE_160, I0, GATE_146, GATE_155, GATE_161, GATE_170, GATE_189, GATE_175, GATE_184, GATE_190, GATE_199, GATE_219, GATE_233, GATE_243, GATE_204, GATE_213, GATE_218, GATE_228, GATE_147, GATE_176, GATE_205}),backend=numpy, dtype=complex128, data=array([[ 1.02947075e+00+1.00729510e-17j, 7.14564743e-18-7.30295919e-01j], [-1.41994610e-17+7.30295919e-01j, -1.02947075e+00-2.00164473e-17j]])
Tensor(shape=(2, 2, 2), inds=[_119073AAYCm, _119073AAYBs, _119073AASer], tags={GATE_104, ROUND_4, RZ, I2, GATE_90, ROUND_3, RX, I3, GATE_98, CX, GATE_103, I1, GATE_112, CZ}),backend=numpy, dtype=complex128, data=array([[[-0.55939167+0.39682667j, 0.28150475+0.39682667j], [ 0.28150475+0.39682667j, -0.55939167+0.39682667j]], [[ 0.28150475-0.39682667j, 0.55939167+0.39682667j], [ 0.55939167+0.39682667j, 0.28150475-0.39682667j]]])
Tensor(shape=(2, 2, 2), inds=[_119073AASdu, _119073AAYAw, _119073AASdA], tags={GATE_97, ROUND_4, CX, I0, GATE_87, ROUND_3, RX, GATE_73, RZ, GATE_59, ROUND_2, I1, GATE_68, GATE_45, GATE_54, CZ, I2, GATE_40, GATE_31, ROUND_1, GATE_11, GATE_39, GATE_16, GATE_25, GATE_30, GATE_10, GATE_0, ROUND_0, H, GATE_1, GATE_15, GATE_29, GATE_44, GATE_58, GATE_17, GATE_2, GATE_3, I3, GATE_126, ROUND_5, GATE_116, GATE_155, ROUND_6, GATE_145, GATE_184, ROUND_7, GATE_174, GATE_213, ROUND_8, GATE_203, GATE_218, GATE_232, GATE_242, ROUND_9, GATE_219, GATE_233, GATE_243, GATE_204, GATE_220, GATE_228, GATE_189, GATE_175, GATE_190, GATE_199, GATE_160, GATE_146, GATE_161, GATE_170, GATE_131, GATE_117, GATE_132, GATE_141, GATE_156, GATE_147, GATE_185, GATE_176, GATE_214, GATE_205, GATE_206, GATE_234, GATE_244, GATE_221, GATE_235, GATE_245, GATE_222, I4, GATE_229, GATE_191, GATE_177, GATE_192, GATE_200, GATE_162, GATE_148, GATE_163, GATE_171, GATE_102, GATE_88, GATE_103, GATE_112}),backend=numpy, dtype=complex128, data=array([[[ 0.9367917 -0.66455036j, -0.01811946-0.44375959j], [ 0.41284054+0.16374854j, 0.30657993-0.21748464j]], [[ 0.4201327 -0.2980378j , -0.3613847 -0.56550071j], [ 0.41430484+0.52795972j, 1.05034446-0.74510352j]]])
Tensor(shape=(2, 2), inds=[_119073AASdu, _119073AASdv], tags={GATE_74, ROUND_3, RZ, I1, GATE_83, CZ, I2, GATE_102, ROUND_4, I0, GATE_88, RX, GATE_97, CX, GATE_131, ROUND_5, GATE_117, GATE_126, GATE_132, GATE_141, GATE_160, ROUND_6, GATE_146, GATE_155, GATE_161, GATE_170, GATE_189, ROUND_7, GATE_175, GATE_184, GATE_190, GATE_199, GATE_219, ROUND_8, GATE_233, GATE_243, ROUND_9, H, GATE_204, GATE_213, GATE_218, GATE_220, GATE_228, GATE_156, GATE_147, GATE_185, GATE_176, GATE_214, GATE_205, GATE_206, I3, GATE_234, GATE_244, GATE_221, GATE_235, GATE_245, GATE_222, I4, GATE_229, GATE_191, GATE_177, GATE_192, GATE_200, GATE_162, GATE_148, GATE_163, GATE_171, GATE_103, GATE_112}),backend=numpy, dtype=complex128, data=array([[-0.21325027+0.74103163j, -0.21325027+0.74103163j], [ 0.62892738+0.44615459j, -0.62892738-0.44615459j]])

Here, TensorNetwork.full_simplify (the method that cycles through the other five) has greatly reduced the 500 tensors via local simplifications only. Clearly we know the answer should be 1 anyway, but it’s nice to confirm it can indeed be found automatically as well:

# `...` means contract all tensors; we must specify output_inds
# explicitly since we now have a hyper tensor network
norm.contract(..., output_inds=())
0.99999999999996
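For reference, here is a minimal sketch of how such a norm tensor network can be formed and simplified from scratch (assuming circ is the Circuit instance from earlier, and quimb’s TensorNetwork.full_simplify method):

psi = circ.psi      # the tensor network of the state
norm = psi.H & psi  # <psi|psi> as a closed tensor network

# cycle through the local simplifications until nothing changes
norm = norm.full_simplify()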

7.6. Finding a Contraction Path (the optimize kwarg)

As mentioned previously, the main computational bottleneck (i.e. as we scale up circuits, the step that always becomes most expensive) is the actual contraction of the main tensor network objects, post simplification. The cost of this step (which is recorded in the rehearsal’s 'tree' objects) can be incredibly sensitive to the contraction path - the series of pairwise merges that defines how to turn the collection of tensors into a single tensor.

You control this aspect of the quantum circuit simulation via the optimize kwarg, which can take a number of different types of value:

  1. A string, specifying a cotengra registered path optimizer.

  2. A custom cotengra.HyperOptimizer instance

  3. A custom opt_einsum.PathOptimizer instance

  4. An explicit path - a sequence of pairs of ints - likely found from a previous rehearsal, for example.

Note

The default value is 'auto-hq', which is a high quality preset of cotengra’s, but this is quite unlikely to offer the best performance for large or complex circuits.

As an example we’ll show how to use each type of optimize kwarg for computing the local expectation:

\[ \langle \psi_{3, 4} | Z_3 \otimes Z_4 | \psi_{3, 4} \rangle \]
# compute the ZZ correlation on qubits 3 & 4
ZZ = qu.pauli('Z') & qu.pauli('Z')
where = (3, 4)

7.6.1. A cotengra preset

First we use the fast but low quality 'greedy' preset:

rehs = circ.local_expectation_rehearse(ZZ, where, optimize='greedy')
tn, tree = rehs['tn'], rehs['tree']
tree.contraction_cost()
222176
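The returned tree is a cotengra.ContractionTree, so other statistics of the path can also be inspected, for instance (a sketch using cotengra’s ContractionTree methods):

import math

# log2 of the size of the largest intermediate tensor
print(tree.contraction_width())

# log10 of the total number of scalar operations
print(math.log10(tree.contraction_cost()))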

Because we used a preset, the path is cached by quimb, meaning the path optimization won’t run again for the same geometry.

Now we can run the actual computation, reusing that path automatically:

%%timeit
circ.local_expectation(ZZ, where, optimize='greedy')
43.8 ms ± 713 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)

We can compare this to just performing the main contraction:

%%timeit
tn.contract(all, optimize=tree, output_inds=())
3.73 ms ± 137 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)

We see that most of the time is evidently spent preparing the TN, rather than performing the main contraction.
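The fourth kind of optimize value listed earlier, an explicit path, can be extracted from such a rehearsal. A minimal sketch, assuming cotengra’s ContractionTree.get_path method and the tree from above:

# the explicit sequence of pairwise contractions, as pairs of ints
path = tree.get_path()

# supplying it directly skips any further path optimization
circ.local_expectation(ZZ, where, optimize=path)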

7.6.2. An opt_einsum.PathOptimizer instance

You can also supply a customized PathOptimizer instance here; one example is the opt_einsum.RandomGreedy optimizer (which the 'auto-hq' preset itself calls, in fact).

import opt_einsum as oe

# up the number of repeats and make it run in parallel
opt_rg = oe.RandomGreedy(max_repeats=256, parallel=True)

rehs = circ.local_expectation_rehearse(ZZ, where, optimize=opt_rg)
tn, tree = rehs['tn'], rehs['tree']
tree.contraction_cost()
222176

Given many repeats, RandomGreedy will often find a better path than a single 'greedy' pass, though here it happens to report the same cost.

Unlike with a string preset, if we want to reuse the path found here we can access it directly via the optimizer’s .path attribute, or simply pass the returned tree itself as the optimize argument:

tn.contract(all, output_inds=(), optimize=tree)
0.04035324286371912
circ.local_expectation(ZZ, where, optimize=tree)
0.04035324286371912

We’ve shaved some time off but not much because the computation is not dominated by the contraction at this scale.

Note

If you supplied the opt_rg optimizer again, it would run for an additional 256 repeats before returning its best path. This can be useful if you want to incrementally optimize the path, checking its cost between runs, before switching to the static .path when you actually want to contract, for example.
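For instance, one might incrementally optimize along these lines (a hypothetical loop; it assumes RandomGreedy exposes its best cost via the .best dict, as opt_einsum’s random optimizers do):

import math

for _ in range(4):
    # each rehearsal runs opt_rg for another 256 repeats
    circ.local_expectation_rehearse(ZZ, where, optimize=opt_rg)
    print(math.log10(opt_rg.best['flops']))

# once satisfied, switch to the static best path found so far
path = opt_rg.path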

7.6.3. A custom cotengra.HyperOptimizer instance

opt_einsum defines an interface for custom path optimizers, which other libraries (or any user) can subclass and then supply as the optimize kwarg, remaining fully compatible with quimb. The cotengra library offers ‘hyper’-optimized contraction paths that are aimed at (and strongly recommended for) large and complex tensor networks. It provides:

  • The optimize='hyper' preset (once cotengra is imported)

  • The cotengra.HyperOptimizer single-shot path optimizer (like RandomGreedy above)

  • The cotengra.ReusableHyperOptimizer, which can be used for many contractions, and caches the results (optionally to disk)

The last is probably the most practical so we’ll demonstrate it here:

import cotengra as ctg

# ReusableHyperOptimizer takes the same kwargs as HyperOptimizer
opt = ctg.ReusableHyperOptimizer(
    max_repeats=16,
    reconf_opts={},
    parallel=False,
    progbar=True,
#     directory=True,  # if you want a persistent path cache
)

rehs = circ.local_expectation_rehearse(ZZ, where, optimize=opt)
tn, tree = rehs['tn'], rehs['tree']
tree.contract_stats()
F=4.64 C=5.94 S=10.00 P=11.63: 100%|██████████| 16/16 [00:09<00:00,  1.65it/s]
{'flops': 43442, 'write': 12995, 'size': 1024}

We can see that even for this small contraction it has improved on the RandomGreedy path cost. We could extract the explicit path here, but since we have a ReusableHyperOptimizer, the second time it is called on the same contraction it will simply retrieve the path from its cache anyway:

%%timeit
circ.local_expectation(ZZ, where, optimize=opt)
40.7 ms ± 1.25 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Again, since the main contraction is very small, we don’t see any real improvement.

cotengra also has a ContractionTree object for manipulating and visualizing the contraction paths found.

tree.plot_tent(order=True)
(<Figure size 500x500 with 3 Axes>, <Axes: >)

Here, the grey network at the bottom is the TN to be contracted, and the tree above it depicts the series of pairwise contractions, and their individual costs, needed to reach the output answer (the node at the top).

7.7. Performing the Contraction (the backend kwarg)

quimb and opt_einsum both try to be agnostic to the actual arrays they manipulate and contract. Currently, however, the tensor network that Circuit constructs and simplifies is made up of numpy.ndarray backed tensors, since these are all generally very small:

{t.size for t in tn}
{2, 4, 8}

When it comes to the actual contraction, however, where large tensors appear, it can be advantageous to use a different library to perform it. If you specify a backend kwarg, the arrays will be converted to that backend before contraction, the contraction performed, and the result converted back to numpy (see the sketch after this list). The available backends include:

  • cupy

  • jax (note this actively defaults to single precision)

  • torch

  • tensorflow
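For instance, assuming torch is installed (with CUDA support, for an actual GPU speedup), the earlier local expectation might be dispatched like so:

# perform just the main contraction with torch arrays
circ.local_expectation(ZZ, where, optimize=opt, backend='torch')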

Sampling is an excellent candidate for GPU acceleration, as the same geometry TNs are contracted over and over again; and since sampling is inherently a low precision task, single precision arrays are a natural fit.

for b in circ.sample(10, backend='jax', dtype='complex64'):
    print(b)
1011111001
1101000001
1000100011
0011101000
0010100101
1100001101
1111011010
0001111101
0111001001
1001000000

Note

Both sampling methods, Circuit.sample and Circuit.sample_chaotic, default to dtype='complex64'. The other methods default to dtype='complex128'.

7.8. Performance Checklist

Here’s a list of things to check if you want to ensure you are getting the most performance out of your circuit simulation (a combined sketch follows the list):

  • What contraction path optimizer are you using? If performance isn’t great, have you tried cotengra?

  • How are you applying the gates? For example, gate_opts={'contract': False} (no decomposition) can be better for diagonal 2-qubit gates.

  • How are you grouping the qubits? For sampling, there is usually a sweet spot for group_size. For chaotic sampling, you might try the last $M$ marginal qubits rather than the first, for example.

  • What local simplifications are you using, and in what order? For example, simplify_sequence='SADCR' is also sometimes better than the default.

  • If the computation is contraction dominated, can you run it on a GPU?
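As a rough combined illustration (every value below is a placeholder to tune for your own circuit, not a recommendation):

import cotengra as ctg
import quimb.tensor as qtn

# skip gate decomposition - sometimes better for diagonal 2-qubit gates
circ = qtn.Circuit(10, gate_opts={'contract': False})
# ... apply gates here ...

# a reusable hyper-optimizer for the contraction paths
opt = ctg.ReusableHyperOptimizer(max_repeats=64, reconf_opts={})

for b in circ.sample(
    100,
    group_size=10,              # how many qubits to sample simultaneously
    simplify_sequence='SADCR',  # which local simplifications, in which order
    optimize=opt,               # the contraction path optimizer
    backend='jax',              # perform the contractions with jax
):
    print(b)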