# Tensor Network Training of Quantum Circuits

Here we’ll run through constructing a tensor network of an ansatz quantum circuit, then training certain ‘parametrizable’ tensors representing quantum gates in that tensor network to replicate the behaviour of a target unitary.

[1]:

import quimb as qu
import quimb.tensor as qtn


## The Ansatz Circuit

First we set up the ansatz circuit and extract the tensor network. Key here is that when we supply parametrize=True to the 'U3' gate call, it injects a PTensor into the network, which lazily represents its data array with a function and set of parameters. Later, when the optimizer sees this it then knows to optimize the parameters rather than the array itself.
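To make this idea concrete, here is a minimal pure-numpy sketch of a lazily-parametrized tensor. The `LazyParamTensor` class and the `rz` example are illustrative only, not quimb's actual PTensor internals:

```python
import numpy as np

class LazyParamTensor:
    """A tensor that stores a function plus parameters, and
    regenerates its data array on demand (illustrative sketch).
    """

    def __init__(self, fn, params):
        self.fn = fn
        self.params = np.asarray(params, dtype=float)

    @property
    def data(self):
        # the array is rebuilt from the current parameters
        return self.fn(self.params)

def rz(params):
    # a single-qubit Z-rotation built from one angle
    (theta,) = params
    return np.diag([np.exp(-0.5j * theta), np.exp(+0.5j * theta)])

pt = LazyParamTensor(rz, [0.0])
pt.params[:] = [np.pi]  # an optimizer would update these in place
```

An optimizer can then treat `pt.params` as the free variables, with `pt.data` always reflecting the latest values.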

[2]:

def single_qubit_layer(circ, gate_round=None):
    """Apply a parametrizable layer of single qubit U3 gates.
    """
    for i in range(circ.N):
        # initialize with random parameters
        params = qu.randn(3, dist='uniform')
        circ.apply_gate(
            'U3', *params, i,
            gate_round=gate_round, parametrize=True)

def two_qubit_layer(circ, gate2='CZ', reverse=False, gate_round=None):
    """Apply a layer of constant entangling gates.
    """
    regs = range(0, circ.N - 1)
    if reverse:
        regs = reversed(regs)

    for i in regs:
        circ.apply_gate(
            gate2, i, i + 1, gate_round=gate_round)

def ansatz_circuit(n, depth, gate2='CZ', **kwargs):
    """Construct a circuit of single qubit and entangling layers.
    """
    circ = qtn.Circuit(n, **kwargs)

    for r in range(depth):
        # single qubit gate layer
        single_qubit_layer(circ, gate_round=r)

        # alternate between forward and backward CZ layers
        two_qubit_layer(
            circ, gate2=gate2, gate_round=r, reverse=r % 2 == 0)

    # add a final single qubit layer
    single_qubit_layer(circ, gate_round=r + 1)

    return circ


The form of the 'U3' gate (which generalizes all possible single qubit gates) can be seen in the U_gate() documentation. Now we are ready to instantiate a circuit:
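For reference, here is one common convention for the U3 matrix, written as a plain numpy function. This is a sketch only; consult U_gate() itself for quimb's exact phase convention:

```python
import numpy as np

def u3(theta, phi, lamb):
    """General single-qubit gate from three angles (one common
    convention; quimb's U_gate may differ by a global phase).
    """
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([
        [c, -np.exp(1j * lamb) * s],
        [np.exp(1j * phi) * s, np.exp(1j * (phi + lamb)) * c],
    ])

U3 = u3(0.3, 1.1, -0.7)
```

Whatever the phase convention, the result is always unitary, and three real parameters per gate are what the optimizer will train.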

[3]:

n = 6
depth = 9
gate2 = 'CZ'

circ = ansatz_circuit(n, depth, gate2=gate2)
circ

[3]:

<Circuit(n=6, n_gates=105, gate_opts={'contract': 'auto-split-gate', 'propagate_tags': 'register'})>


We can extract just the unitary part of the circuit as a tensor network like so:

[4]:

V = circ.uni


You can see it already has various tags identifying its structure (indeed enough to uniquely identify each gate):

[5]:

V.graph(color=['U3', gate2], show_inds=True)

[6]:

V.graph(color=[f'ROUND_{i}' for i in range(depth)], show_inds=True)

[7]:

V.graph(color=[f'I{i}' for i in range(n)], show_inds=True)


## The Target Unitary

Next we need a target unitary to try to digitally replicate. Here we’ll take an Ising Hamiltonian and a short time evolution. Once we have the dense (matrix) form of the target unitary $U$ we need to convert it to a tensor which we can put in a tensor network:

[8]:

# the hamiltonian
H = qu.ham_ising(n, jz=1.0, bx=0.7, cyclic=False)

# the propagator for the hamiltonian
t = 2
U_dense = qu.expm(-1j * t * H)

# 'tensorized' version of the unitary propagator
U = qtn.Tensor(
    data=U_dense.reshape([2] * (2 * n)),
    inds=[f'k{i}' for i in range(n)] + [f'b{i}' for i in range(n)],
    tags={'U_TARGET'}
)
U.graph(color=['U3', gate2, 'U_TARGET'])
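The reshape convention here is that the first $n$ legs (the `k` indices) address the rows of the matrix and the last $n$ (the `b` indices) the columns. A tiny numpy sketch with $n = 2$ and illustrative values:

```python
import numpy as np

n = 2
M = np.arange(4 ** n).reshape(2 ** n, 2 ** n)  # a (4, 4) 'matrix'

# split each matrix dimension into n two-dimensional legs: the
# first n legs address the row, the last n the column
T = M.reshape([2] * (2 * n))

# e.g. row bits (1, 0) -> row 2 and column bits (0, 1) -> column 1
```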


The core object describing how similar two unitaries are is the overlap $\mathrm{Tr}(V^{\dagger}U)$, which we can naturally visualize as a tensor network:

[9]:

(V.H & U).graph(color=['U3', gate2, 'U_TARGET'])


For our loss function we’ll normalize this and negate it (since the optimizer minimizes).

[10]:

def loss(V, U):
    return 1 - abs((V.H & U).contract(all, optimize='auto-hq')) / 2**n

# check our current unitary 'infidelity':
loss(V, U)

[10]:

0.9916803129508406


So, as expected, the two unitaries are currently not similar at all.
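We can sanity-check the normalization of this loss on plain dense matrices. A sketch, drawing a random unitary from the QR decomposition of a complex Gaussian matrix:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 2 ** 3  # dimension for n = 3 qubits

# a random unitary from the QR decomposition of a complex
# Gaussian matrix, with column phases fixed
Q, R = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

# a perfect match, even up to a global phase, gives zero loss ...
V = np.exp(0.3j) * U
loss_match = 1 - abs(np.trace(V.conj().T @ U)) / d

# ... while for any pair of unitaries the loss lies in [0, 1]
loss_identity = 1 - abs(np.trace(np.eye(d) @ U)) / d
```

The absolute value makes the loss insensitive to the global phase of $V$, which is physically irrelevant.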

## The Tensor Network Optimization

Now we are ready to construct the TNOptimizer object, with options detailed below:

[11]:

# use the autograd/jax based optimizer
tnopt = qtn.TNOptimizer(
    V,                        # the tensor network we want to optimize
    loss,                     # the function we want to minimize
    loss_constants={'U': U},  # supply U to the loss function as a constant TN
    constant_tags=[gate2],    # within V we also want to keep all the CZ gates constant
    optimizer='L-BFGS-B',     # the optimization algorithm
)


We could call optimize for pure gradient based optimization, but since unitary circuits can be tricky we’ll use optimize_basinhopping which combines gradient descent with ‘hopping’ to escape local minima:

[12]:

# allow 10 hops with 500 steps in each 'basin'
V_opt = tnopt.optimize_basinhopping(n=500, nhop=10)

  0%|          | 0/5000 [00:00<?, ?it/s]/home/jg3014/conda/lib/python3.7/site-packages/jax/lax/lax.py:1883: ComplexWarning: Casting complex values to real discards the imaginary part
lambda t, new_dtype, old_dtype: [convert_element_type(t, old_dtype)])
0.004679322242736816 [best: 0.004676163196563721] :  37%|███▋      | 1848/5000 [01:25<02:26, 21.54it/s]
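The basin hopping idea itself can be illustrated on a toy function with `scipy.optimize.basinhopping`: a local minimizer runs repeatedly, with random 'hops' of the starting point in between, accepting or rejecting each new basin. This sketch is only an illustration of the technique, not of TNOptimizer's internals:

```python
import numpy as np
from scipy.optimize import basinhopping

# a toy double-well: local minimum near x = +1, global minimum
# near x = -1, which gradient descent alone from x0 = 2 would miss
def f(x):
    return (x[0] ** 2 - 1.0) ** 2 + 0.1 * x[0]

res = basinhopping(
    f, x0=[2.0],
    minimizer_kwargs={'method': 'L-BFGS-B'},
    niter=50, stepsize=1.0, seed=7,
)
```

Each hop perturbs the current minimum by up to `stepsize`, so with enough iterations the search can cross the barrier between the two wells.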


The optimized tensor network still contains PTensor instances but now with optimized parameters. For example, here’s the tensor of the U3 gate acting on qubit-2 in round-4:

[13]:

V_opt['U3', 'I2', 'ROUND_4']

[13]:

PTensor(shape=(2, 2), inds=('_bcf61100000b5', '_bcf61100000a3'), tags={'U3', 'I2', 'ROUND_4', '__VARIABLE26__'})


We can see the parameters have been updated by the training:

[14]:

# the initial values
V['U3', 'ROUND_4', 'I2'].params

[14]:

array([0.32954757, 0.81472235, 0.94463414])

[15]:

# the optimized values
V_opt['U3', 'ROUND_4', 'I2'].params

[15]:

array([ 0.62982094,  1.26137484, -0.89128324])


We can see what gate these parameters would generate:

[16]:

qu.U_gate(*V_opt['U3', 'ROUND_4', 'I2'].params)

[16]:

[[ 0.950824-0.j       -0.19464 +0.240933j]
[ 0.094316+0.295022j  0.886448+0.343914j]]


A final sanity check we can perform is to try evolving a random state with the target unitary and trained circuit and check the fidelity between the resulting states.

First we turn the tensor network version of $V$ into a dense matrix:

[17]:

V_opt_dense = V_opt.to_dense([f'k{i}' for i in range(n)], [f'b{i}' for i in range(n)])


Next we create a random initial state, and evolve it with both the target unitary and the trained circuit:

[18]:

psi0 = qu.rand_ket(2**n)

# this is the exact state we want
psif_exact = U_dense @ psi0

# this is the state our circuit will produce if fed psi0
psif_apprx = V_opt_dense @ psi0


The (in)fidelity should broadly match our training loss:

[19]:

f"Fidelity: {100 * qu.fidelity(psif_apprx, psif_exact):.2f} %"

[19]:

'Fidelity: 99.52 %'
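For reference, with normalized pure states the fidelity computed above reduces to a simple overlap. A numpy sketch, where `rand_ket` stands in for qu.rand_ket and the square is one of two common conventions (check qu.fidelity's documentation for which it uses):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_ket(d, rng):
    # a random normalized complex vector, standing in for qu.rand_ket
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def fidelity(a, b):
    # pure-state fidelity as the squared overlap |<a|b>|^2
    # (some conventions omit the square)
    return abs(np.vdot(a, b)) ** 2

psi = rand_ket(2 ** 6, rng)
```

Identical states give fidelity 1 and orthogonal states give 0, so a value of ~99.5% confirms the trained circuit closely reproduces the target evolution.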