16. Converting a Circuit to an MPO

This example shows how to convert a (sufficiently shallow) quantum circuit into a matrix product operator (MPO).

%config InlineBackend.figure_formats = ['svg']
import quimb.tensor as qtn

First we generate some random 1D gates:

gates = qtn.circuit_gen.gates_1D_rand(10, depth=8, seed=42)

Then we construct the Circuit:

circ = qtn.Circuit.from_gates(
    gates,
    # this ensures each tensor belongs to a single site only
    gate_contract="split-gate",
    # just for cleaner tags
    tag_gate_numbers=False,
)

Next we extract just the unitary part of the tensor network:

tn_uni = circ.get_uni()
tn_uni.draw(tn_uni.site_tags, show_tags=False)
[figure: the uncontracted circuit unitary tensor network, tensors colored by site]

16.1. By direct contraction

Then we contract each group of tensors associated with each site (one color per site above):

for site in tn_uni.site_tags:
    tn_uni ^= site

tn_uni.draw(tn_uni.site_tags, show_tags=False)
[figure: the network after contracting the tensors at each site, one tensor per site]
Neighboring tensors may now share several bonds, which we fuse into single bonds:

tn_uni.fuse_multibonds_()
tn_uni.draw(tn_uni.site_tags, show_tags=False)
[figure: the resulting MPO-like chain after fusing multibonds]
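
As a quick check, the maximal bond dimension of this chain is bounded by the circuit depth: each layer of two-qubit (CZ) gates crossing a bond can at most double its dimension, so with depth 8 we expect at most 2**8 = 256. A minimal sketch using the generic max_bond method:

# largest bond dimension in the fused network (at most 2**8 = 256 here)
tn_uni.max_bond()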

Now that it has the form of an MPO, we can cast it to the actual MatrixProductOperator class:

tn_uni.view_as_(
    qtn.MatrixProductOperator,
    cyclic=False,
    L=circ.N,
)
MatrixProductOperator(tensors=10, indices=29, L=10, max_bond=256)
Tensor(shape=(2, 256, 2), inds=[b0, _595a34AAAAf, k0], tags={U3, I0, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 256, 256, 2), inds=[b1, _595a34AAAAf, _595a34AAAAp, k1], tags={U3, I1, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 256, 256, 2), inds=[b2, _595a34AAAAQ, _595a34AAAAp, k2], tags={U3, I2, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 256, 256, 2), inds=[b3, _595a34AAAAQ, _595a34AAAAV, k3], tags={U3, I3, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 256, 256, 2), inds=[b4, _595a34AAAAV, _595a34AAAAu, k4], tags={U3, I4, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 256, 256, 2), inds=[b5, _595a34AAAAa, _595a34AAAAu, k5], tags={U3, I5, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 256, 256, 2), inds=[b6, b, _595a34AAAAa, k6], tags={U3, I6, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 256, 256, 2), inds=[b7, b, _595a34AAABi, k7], tags={U3, I7, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 256, 256, 2), inds=[b8, _595a34AAABi, _595a34AAABs, k8], tags={U3, I8, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(2, 256, 2), inds=[b9, _595a34AAABs, k9], tags={U3, I9, CZ}),backend=numpy, dtype=complex128, data=...

This allows us to call MPO-specific methods, for example compression:

tn_uni.compress(cutoff=1e-6, cutoff_mode="rel")
tn_uni.show()
│4│16│64│128│125│128│64│16│4│
●─<──<──<━━━<━━━<━━━<──<──<─<
│ │  │  │   │   │   │  │  │ │
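
Since this MPO should still be (approximately) unitary, one optional sanity check for a system this small is to form the dense matrix and measure the deviation from unitarity. A minimal sketch, assuming only to_dense and numpy:

import numpy as np

# dense (2**10, 2**10) matrix - only feasible for small N
U = tn_uni.to_dense()
# deviation from unitarity introduced by the lossy compression
abs(U @ np.conj(U).T - np.eye(2**circ.N)).max()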

16.2. By fitting

An alternative approach that scales better, especially if a fixed max_bond is known ahead of time, is to directly fit a low-rank 1D TN to the unitary:

# get a fresh (uncontracted) TN representation of the circuit
tn_uni = circ.get_uni()
# compress via fitting:
tnc = qtn.tensor_network_1d_compress(
    tn_uni,
    max_bond=32,
    cutoff=0.0,
    method="fit",
    bsz=2,  # bsz=1 is cheaper per sweep, but possibly slower to converge
    max_iterations=100,
    tol=1e-6,
    progbar=True,
)
max_tdiff=8.62e-07:   8%|8         | 8/100 [00:01<00:14,  6.56it/s]

Compute the relative Frobenius norm error:

tnc.distance_normalized(tn_uni)
np.float64(0.09292853617632899)
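
The error above is set by the chosen max_bond=32. If a different accuracy/cost tradeoff is needed, one can simply sweep the bond dimension with the same call; a minimal sketch:

for mb in (16, 32, 64, 128):
    tnc_mb = qtn.tensor_network_1d_compress(
        tn_uni,
        max_bond=mb,
        cutoff=0.0,
        method="fit",
        bsz=2,
        max_iterations=100,
        tol=1e-6,
    )
    # the error should shrink as max_bond grows
    print(mb, tnc_mb.distance_normalized(tn_uni))
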
# again cast as an MPO if we want the MPO-specific methods
tnc.view_as_(
    qtn.MatrixProductOperator,
    cyclic=False,
    L=circ.N,
)
MatrixProductOperator(tensors=10, indices=29, L=10, max_bond=32)
Tensor(shape=(32, 32, 2, 2), inds=[_595a34AAAHg, _595a34AAAHh, b6, k6], tags={U3, I6, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(32, 16, 2, 2), inds=[_595a34AAAHg, _595a34AAAHi, b7, k7], tags={U3, I7, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(32, 16, 2, 2), inds=[_595a34AAAHj, _595a34AAAHk, b2, k2], tags={U3, I2, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(32, 32, 2, 2), inds=[_595a34AAAHj, _595a34AAAHl, b3, k3], tags={U3, I3, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(32, 32, 2, 2), inds=[_595a34AAAHl, _595a34AAAHm, b4, k4], tags={U3, I4, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(32, 32, 2, 2), inds=[_595a34AAAHh, _595a34AAAHm, b5, k5], tags={U3, I5, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(4, 2, 2), inds=[_595a34AAAHn, b0, k0], tags={U3, I0, CZ}),backend=numpy, dtype=complex128, data=array([[[-0.61604372+2.40819678e-17j, 0.25919934+2.30325559e-01j], [-0.05734167-3.41258809e-01j, 0.32784968-5.22517871e-01j]], [[-0.65808868-2.40819678e-17j, 0.1087671 -2.34730203e-01j], [-0.24610535+7.93029015e-02j, -0.07633141+6.53687583e-01j]], [[-0.10613633-2.75222489e-17j, -0.079237 -6.94514262e-01j], [ 0.59143999-3.73011592e-01j, -0.06566129-8.28022244e-02j]], [[ 0.41969573-3.59768069e-17j, 0.53097181-2.05615319e-01j], [-0.32049612-4.70893724e-01j, 0.34493524+2.37081819e-01j]]])
Tensor(shape=(4, 16, 2, 2), inds=[_595a34AAAHn, _595a34AAAHk, b1, k1], tags={U3, I1, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(16, 4, 2, 2), inds=[_595a34AAAHi, _595a34AAAHo, b8, k8], tags={U3, I8, CZ}),backend=numpy, dtype=complex128, data=...
Tensor(shape=(4, 2, 2), inds=[_595a34AAAHo, b9, k9], tags={U3, I9, CZ}),backend=numpy, dtype=complex128, data=array([[[ 10.46656957-0.j , -8.28078722+3.04946084j], [ -7.62832633-4.40215621j, -10.30453618-1.75636178j]], [[ -8.43641373-0.j , 3.89935365-8.36740555j], [ -7.84146634-4.86507481j, -7.0820341 +4.61468449j]], [[ -5.17145175-0.j , -6.68877199+4.18585282j], [ 5.20979118-5.94800211j, -0.82457769+5.10399981j]], [[ 7.02674189-0.j , 3.03830461-4.03398835j], [ 3.26177468-3.84660945j, 1.56292165+6.85004125j]]])
tnc.show()
│4│16│32│32│32│32│32│16│4│
>─>──>──>──>──>──>──>──>─●
│ │  │  │  │  │  │  │  │ │
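
Having the unitary in MPO form is particularly useful for applying it cheaply to matrix product states. A minimal sketch, assuming the standard MPO.apply and MPS_computational_state interfaces:

# apply the (approximate) circuit unitary to the all-zeros product state
psi0 = qtn.MPS_computational_state("0" * circ.N)
psi = tnc.apply(psi0)
psi.show()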