4. Example - Tensor Renormalization Group (TRG)

TRG[1, 2, 3] is a tensor network algorithm for computing partition functions of 2D classical spin models, using real-space renormalization. It is simple but quite powerful, and forms the basis for many more advanced algorithms.

In its simplest form it only requires manipulating a few tensors, so it does not need any of the quimb functionality for handling large and complex-geometry networks. However, implementing it here does demonstrate:

  • the basic low-level tensor operations of contracting, decomposing and relabelling indices (sketched briefly after this list);

  • the more advanced feature of treating a small tensor network transparently as a ‘lazy’ tensor, enabling more efficient iterative operations, e.g. low-rank decompositions at large bond dimension.
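As a quick illustration of those low-level primitives, here is a minimal sketch on a couple of small random tensors (the index names and shapes here are arbitrary, chosen just for demonstration):

import numpy as np
import quimb.tensor as qtn

# two random tensors sharing the index 'b'
x = qtn.Tensor(np.random.rand(4, 5), inds=('a', 'b'))
y = qtn.Tensor(np.random.rand(5, 6), inds=('b', 'c'))

# form the lazy network of both and contract it to a single tensor
xy = (x | y).contract()

# decompose it again with a truncated SVD, creating a new shared bond 'b'
u, v = xy.split(left_inds=['a'], get='tensors', bond_ind='b', max_bond=4)

# relabel an index (the trailing underscore denotes in-place)
u.reindex_({'a': 'l'})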

4.1. Define the algorithm

The following function runs the entire algorithm and is pretty extensively commented:

import quimb.tensor as qtn
from autoray import do
from math import log, log1p, cosh, sinh, cos, pi


def TRG(
    beta, 
    chi, 
    iterations, 
    j=1.0,
    h=0.0,
    cutoff=0.0,
    lazy=False,
    to_backend=None, 
    progbar=False, 
    **split_opts
):
    """Run the TRG algorithm on the square lattice.
    
    Parameters
    ----------
    beta : float
        Inverse temperature.
    chi : int
        The maximum bond dimension.
    iterations : int
        The number of iterations, the overall effective lattice size is then
        ``(2**iterations, 2**iterations)``, with PBC.
    j : float, optional
        The coupling constant.
    h : float, optional
        The external field.
    cutoff : float, optional
        The cutoff for the bond truncations.
    lazy : bool, optional
        Whether to explicitly contract the effective site tensor at each 
        iteration (``False``), or treat it lazily as the loop from the last
        iteration, allowing a more efficient iterative decomposition at large
        ``chi``.
    to_backend : callable, optional
        A function that takes a numpy array and converts it to the desired
        backend tensor.
    progbar : bool, optional
        Whether to show a progress bar.
    split_opts
        Additional options supplied to the tensor splitting method.

    Returns
    -------
    f : scalar
        The free energy per site.
    """
    if lazy and cutoff == 0.0:
        # by default use a low-rank iterative decomposition
        split_opts.setdefault('method', 'svds')
    
    # setup the initial single site array, allowing custom backends
    t = qtn.tensor_builder.classical_ising_T_matrix(beta, j=j, h=h, directions='lrud')
    if to_backend is not None:
        t = to_backend(t)

    # This is the effective lattice
    #
    #      u          u   
    #      |          |    
    #   l--A--r .. l--A--r
    #      |          |    
    #      d          d    
    #      :          :  
    #      u          u   
    #      |          |    
    #   l--A--r .. l--A--r
    #      |          |    
    #      d          d    
    #
    A = qtn.Tensor(t, ('d', 'l', 'u', 'r'))
    
    # track the very large overall scalar in log with this
    exponent = 0.0
    
    if progbar:
        import tqdm
        its = tqdm.trange(2 * iterations)
    else:
        its = range(2 * iterations)
    
    for i in its:

        # split site tensor in two ways:
        #     u                    u
        #     |                    |
        #  l--A--r  ->  l--AL~~b~~AU--r
        #     |            |
        #     d            d
        AL, AU = A.split(
            left_inds=['d', 'l'], get='tensors', bond_ind='b', 
            max_bond=chi, cutoff=cutoff, **split_opts)
        #     u            u
        #     |            |
        #  l--A--r  ->  l--BU~~b~~BL--r
        #     |                   |
        #     d                   d
        BU, BL = A.split(
            left_inds=['l', 'u'], get='tensors', bond_ind='b', 
            max_bond=chi, cutoff=cutoff, **split_opts)
        
        # reindex to form a plaquette
        #             u
        #  l ~~BL--AL~~
        #      |    |       w/ inner loop indices: dp, lp, up, rp
        #    ~~AU--BU~~ r
        #    d
        AU.reindex_({'b': 'd', 'r': 'dp', 'u': 'lp'})
        BL.reindex_({'b': 'l', 'd': 'lp', 'r': 'up'})
        AL.reindex_({'b': 'u', 'l': 'up', 'd': 'rp'})
        BU.reindex_({'b': 'r', 'u': 'rp', 'l': 'dp'})

        # we can just form the TN of this loop and treat it like a tensor
        A = (AU | BL | AL | BU)
        if not lazy:
            # ... or contract to dense A tensor explicitly
            A = A.contract()
        
        # bookkeeping: move normalization into the separately tracked 'exponent'
        nfact = A.largest_element()
        A /= nfact
        exponent *= 2  # the new A contains two copies of the old, doubling its log scale
        exponent += do('log', nfact)
        
    # perform the final periodic trace
    mantissa = A.trace(['u', 'd'], ['l', 'r'])
    
    # combine with the separately tracked exponent
    logZ = do('log', mantissa) + exponent
    N = 2**(iterations * 2)
    
    return - logZ / (N * beta)

Note we are mostly just manipulating a few objects at the Tensor level. However, our main object A can actually be a TensorNetwork, since the methods used on it here, namely split, largest_element, trace and scalar division, have exactly the same signature and usage for both classes.
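A minimal sketch of this duck typing on a random tensor (illustrative only, not part of the algorithm; split also works on both classes, which is exactly what lazy=True exploits above):

import numpy as np
import quimb.tensor as qtn

ta = qtn.Tensor(np.random.rand(2, 2, 2, 2), inds=('d', 'l', 'u', 'r'))
tn = qtn.TensorNetwork([ta])  # the same data wrapped as a 1-tensor network

for A in (ta, tn):
    # identical calls for Tensor and TensorNetwork
    A = A / A.largest_element()
    print(A.trace(['u', 'd'], ['l', 'r']))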

4.2. Run the algorithm

We can run the function with a pretty large chi if we use the lazy iterative feature (which doesn’t affect accuracy):

chi = 64

# the critical temperature is known analytically
beta = log1p(2**0.5) / 2

f = TRG(
    beta=beta,
    chi=chi,    
    iterations=16,  # L = 2**16
    lazy=True,      # lazily treat loop TN as new tensor
    progbar=True,
)
f
100%|██████████| 32/32 [01:33<00:00,  2.92s/it]
-2.1096509964871495
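To convince yourself that the lazy path really doesn’t change the numbers, here is a quick sanity-check sketch at small sizes (the loose tolerance allows for the dense and iterative SVDs resolving near-degenerate singular values slightly differently):

# compare the dense and lazy paths at small, fast-to-run sizes
f_dense = TRG(beta, chi=16, iterations=8, lazy=False)
f_lazy = TRG(beta, chi=16, iterations=8, lazy=True)
assert abs(1 - f_lazy / f_dense) < 1e-6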

4.3. Check against exact result

The exact free energy is also known analytically in the thermodynamic limit[4, 5]. For reference, the expression evaluated below (Onsager’s result for the isotropic square lattice) is:
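-\beta f = \ln 2 + \frac{1}{8\pi^2} \int_0^{2\pi} \int_0^{2\pi}
\ln\left[\cosh^2(2\beta j) - \sinh(2\beta j)\left(\cos\theta_1 + \cos\theta_2\right)\right]
\, d\theta_1 \, d\theta_2

We can compute this here as a check, using nested one-dimensional quadratures: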

def free_energy_2d_exact(beta, j=1.0):
    from scipy.integrate import quad
    
    def inner1(theta1, theta2):
        return log(
            cosh(2 * beta * j)**2 -
            sinh(2 * beta * j) * cos(theta1) -
            sinh(2 * beta * j) * cos(theta2)
        )
    
    def inner2(theta2):
        return quad(
            lambda theta1: inner1(theta1, theta2),
            0, 2 * pi,
        )[0]
    
    I = quad(inner2, 0, 2 * pi)[0]
    return -(log(2) + I / (8 * pi**2)) / beta

fex = free_energy_2d_exact(beta)

So our relative error is given by:

err = 1 - f / fex
err
7.02111621064816e-08

4.4. Extensions

This is pretty decent, though methods which take the environment into account when truncating can do even better. Things you might try:

  • use a GPU backend (pass to_backend); this might require method='svd' and lazy=False, as in the sketch after this list

  • use other iterative SVD methods (e.g. 'isvd' or 'rsvd') and play with lazy

  • use fit() to optimize the projectors at each step
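As a starting point for the first two suggestions, here is a minimal sketch of a backend converter, assuming a CUDA-capable machine with torch installed (the dtype and device are illustrative choices):

import torch

def to_backend(x):
    # convert a numpy array into a single precision GPU torch tensor
    return torch.as_tensor(x, dtype=torch.float32, device='cuda')

f = TRG(
    beta=beta,
    chi=chi,
    iterations=16,
    lazy=False,      # contract each plaquette densely on the GPU
    method='svd',    # dense SVD, since the iterative solvers may be CPU-only
    to_backend=to_backend,
)

Similarly, passing e.g. method='rsvd' or method='isvd' together with lazy=True swaps in one of the other iterative low-rank decompositions.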