9. Optimizing a Tensor Network using Tensorflow

In this example we show how a general machine learning strategy can be used to optimize arbitrary tensor networks with respect to any target loss function.

We’ll take the example of maximizing the overlap between a matrix product state with periodic boundary conditions and a densely represented target state, since this problem does not have a simple, deterministic alternative.

quimb makes use of cotengra, which can contract tensors with a variety of backends, as well as autoray, which handles array operations agnostically. Here we’ll use tensorflow (v2) for the actual auto-gradient computation.

%config InlineBackend.figure_formats = ['svg']

import quimb as qu
import quimb.tensor as qtn
from quimb.tensor.optimize import TNOptimizer
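
As a brief aside (not needed for the rest of this example), autoray's backend-agnostic dispatch can be seen directly: the snippet below, which assumes tensorflow is importable, calls a numpy-style function on a tensorflow tensor.

# a minimal sketch of autoray's dispatch (assumes tensorflow is installed)
import numpy as np
import tensorflow as tf
import autoray as ar

x = tf.convert_to_tensor(np.random.rand(2, 2))
print(ar.infer_backend(x))  # -> 'tensorflow'
print(ar.do('sum', x))      # dispatched to the tensorflow equivalent of numpy.sum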

First, find a (dense) PBC groundstate, \(| gs \rangle\):

L = 16
H = qu.ham_heis(L, sparse=True, cyclic=True)
gs = qu.groundstate(H)
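
As an optional sanity check (a quick sketch, not part of the optimization itself), we can compute the energy of this state with qu.expec; for the antiferromagnetic Heisenberg ring it should come out negative:

# optional check: groundstate energy per site
qu.expec(H, gs) / L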

Then we convert it to a dense 1D ‘tensor network’:

# this converts the dense vector to an effective 1D tensor network (with only one tensor)
target = qtn.Dense1D(gs)
print(target)
Dense1D([
    Tensor(shape=(2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2), inds=('k0', 'k1', 'k2', 'k3', 'k4', 'k5', 'k6', 'k7', 'k8', 'k9', 'k10', 'k11', 'k12', 'k13', 'k14', 'k15'), tags=oset(['I0', 'I1', 'I2', 'I3', 'I4', 'I5', 'I6', 'I7', 'I8', 'I9', 'I10', 'I11', 'I12', 'I13', 'I14', 'I15'])),
], tensors=1, indices=16, L=16, max_bond=2)
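
Since \(| gs \rangle\) is normalized, contracting this one-tensor network with its own conjugate should give approximately one, which makes for a quick optional check of the conversion:

# optional check: the dense 'tensor network' still has unit norm
target.H @ target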

Next we create an initial guess random MPS, \(|\psi\rangle\):

bond_dim = 32
mps = qtn.MPS_rand_state(L, bond_dim, cyclic=True)
mps.draw()
[Figure: drawing of the randomly initialized cyclic MPS, produced by mps.draw().]
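
(If you prefer a text summary to the drawing, mps.show() prints the bond dimensions along the chain; this is entirely optional.)

mps.show()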

We now need to set up the function that ‘prepares’ our tensor network. In the current example this just means making sure the state is always normalized.

def normalize_state(psi):
    # divide by the norm, (<psi|psi>)^0.5, so the state fed to the loss is always normalized
    return psi / (psi.H @ psi) ** 0.5
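
To see what this does, we can apply it to the random MPS by hand and check the resulting norm (again just an optional sketch of a check):

# the prepared state should have unit norm
psi0 = normalize_state(mps)
psi0.H @ psi0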

Then we need to set up our ‘loss’ function: the function that returns the scalar quantity we want to minimize.

def negative_overlap(psi, target):
    return - (psi.H @ target) ** 2  # minus so as to minimize
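
We can evaluate this loss once by hand on the normalized initial guess; for a random MPS the overlap with the groundstate should be essentially zero:

# loss for the (normalized) random starting point
negative_overlap(normalize_state(mps), target)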

Now we can set up the tensor network optimizer object:

optmzr = TNOptimizer(
    mps,                                # our initial input, the tensors of which to optimize
    loss_fn=negative_overlap,
    norm_fn=normalize_state,
    loss_constants={'target': target},  # this is a constant TN to supply to loss_fn
    autodiff_backend='tensorflow',      # {'jax', 'tensorflow', 'autograd'}
    optimizer='L-BFGS-B',               # supplied to scipy.minimize
)
2023-10-13 15:16:10.049837: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-13 15:16:10.851994: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-10-13 15:16:11.991099: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-10-13 15:16:12.047657: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...

Then we are ready to optimize our tensor network! Note how we supplied the constant tensor network target - its tensors will not be changed.

mps_opt = optmzr.optimize(100)  # perform ~100 gradient descent steps
-0.999723964428 [best: -0.999723964428] : : 109it [00:19,  5.48it/s]                       

The output optimized (and normalized) tensor network has already been converted back to numpy:

mps_opt[0].backend
'numpy'
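
For a system this small we can also contract the optimized MPS down to a dense vector and compare it with \(| gs \rangle\) directly, since qu.expec between two kets returns the overlap squared (an optional cross-check):

# optional cross-check against the dense groundstate
psi_dense = mps_opt.to_dense()
qu.expec(psi_dense, gs)  # should match the magnitude of the loss above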

And we can explicitly check that the returned state indeed matches the loss shown above:

((mps_opt.H & target) ^ all) ** 2
0.9997239644280398

Other things to think about might be:

  • try other scipy optimizers for the optimizer= option

  • try other autodiff backends for the autodiff_backend= option

    • 'jax' - likely the best performance but slow to compile the initial computation

    • 'autograd' - numpy based, cpu-only optimization

    • 'torch' - (pytorch), quick compilation and decent performance, though no complex support (yet?)

  • using single precision data for better GPU acceleration (a combined sketch follows this list)
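
For instance, a hypothetical variation combining the last two points (single precision data plus the 'jax' backend) might look like the following sketch, assuming jax is installed; the names mps_f32, target_f32 and optmzr_jax are just illustrative:

# a sketch only: single precision data with the jax backend (assumes jax is installed)
mps_f32 = qtn.MPS_rand_state(L, bond_dim, cyclic=True, dtype='float32')
target_f32 = qtn.Dense1D(gs.astype('float32'))

optmzr_jax = TNOptimizer(
    mps_f32,
    loss_fn=negative_overlap,
    norm_fn=normalize_state,
    loss_constants={'target': target_f32},
    autodiff_backend='jax',
    optimizer='L-BFGS-B',
)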

We can also keep optimizing:

mps_opt = optmzr.optimize(100)  # perform another ~100 gradient descent steps
-0.999923989141 [best: -0.999923989141] : : 108it [00:20,  5.30it/s]