{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "51f916b5-d6eb-4aad-9d56-b0f9e82f0595",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
"(tensor-network-basics)=\n",
"\n",
"# Basics\n",
"\n",
"\n",
"The tensor network functionality in `quimb` aims to be versatile and interactive, whilst not compromising ease and efficiency. Some key features are the following:\n",
"\n",
"- a core tensor network object orientated around handling arbitrary graph geometries including hyper edges\n",
"- a flexible tensor tagging system to structure and organise these\n",
"- dispatching of array operations to [`autoray`](https://autoray.readthedocs.io) so that many backends can be used\n",
"- automated contraction of many tensors using [`cotengra`](https://cotengra.readthedocs.io)\n",
"- automated drawing of arbitrary tensor networks\n",
"\n",
"Roughly speaking ``quimb`` works on five levels, each useful for different tasks:\n",
"\n",
"1. **'array'** objects: the underlying **CPU, GPU or abstract data** that ``quimb`` manipulates using [`autoray`](https://github.com/jcmgray/autoray);\n",
"2. [`Tensor`](quimb.tensor.tensor_core.Tensor) objects: these wrap the array, labelling the dimensions as **indices** and also carrying an arbitrary number of **tags** identifying them, such as their position in a lattice etc.;\n",
"3. [`TensorNetwork`](quimb.tensor.tensor_core.TensorNetwork) objects: these store a collection of tensors, tracking all index and tag locations and allowing methods based on the network structure;\n",
"4. **Specialized tensor networks**, such as [`MatrixProductState`](quimb.tensor.tensor_1d.MatrixProductState), that promise a particular structure, enabling specialized methods;\n",
"5. **High level interfaces and algorithms,** such as [`Circuit`](quimb.tensor.circuit.Circuit) and [`DMRG2`](quimb.tensor.tensor_dmrg.DMRG2), which handle manipulating one or more tensor networks for you.\n",
"\n",
"A more detailed breakdown of this design can be found on the page {ref}`tensor-network-design`. This page introduces the basic ideas of the first three levels."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9f2756c6-1df3-4ff4-9054-7fb7423176e4",
"metadata": {},
"outputs": [],
"source": [
"%config InlineBackend.figure_formats = ['svg']\n",
"import quimb as qu\n",
"import quimb.tensor as qtn"
]
},
{
"cell_type": "markdown",
"id": "0b3e1355-f577-429e-b65e-1ca43dde5c54",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
"## Creating Tensors\n",
"\n",
"To create a Tensor you just need:\n",
"\n",
"* `data` - a raw array, and\n",
"* `inds` - a set of 'indices' to label each dimension with.\n",
"\n",
"Whilst naming the dimensions is useful so you don't have to remember which axis is which, the crucial point is that tensors simply sharing the same index name automatically form a 'bond' or implicit contraction when put together. Tensors can also carry an arbitrary number of identifiers - `tags` - which you can use to refer to single or groups of tensors once they are embedded in networks.\n",
"\n",
"For example, let's create the singlet state in tensor form, i.e., an index for each qubit:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f8e575b2-8f76-40ca-8eb3-8a00ec3be204",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"Tensor(shape=(2, 2), inds=[k0, k1], tags={KET}),backend=numpy, dtype=complex128, data=array([[ 0. +0.j, 0.70710678+0.j],\n",
" [-0.70710678+0.j, 0. +0.j]])"
],
"text/plain": [
"Tensor(shape=(2, 2), inds=('k0', 'k1'), tags=oset(['KET']))"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data = qu.bell_state('psi-').reshape(2, 2)\n",
"inds = ('k0', 'k1')\n",
"tags = ('KET',)\n",
"\n",
"ket = qtn.Tensor(data=data, inds=inds, tags=tags)\n",
"ket"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2cb31903-aef9-4653-8e17-20cd092952e6",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"ket.draw()"
]
},
{
"cell_type": "markdown",
"id": "88d2446b-bfda-4442-a75c-4c83291f6e58",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
"This is pretty much like a n-dimensional array, but with a few key differences:\n",
"\n",
"1. Methods manipulating dimensions use the names of indices, thus provided these are labelled correctly, their specific permutation doesn't matter.\n",
"2. The underlying `data` object can be anything that [autoray](https://github.com/jcmgray/autoray) supports - for example a symbolic or GPU array. These are passed around by reference and assumed to be immutable.\n",
"\n",
"Some common [`Tensor`](quimb.tensor.tensor_core.Tensor) methods are:\n",
"\n",
"- [`Tensor.reindex`](quimb.tensor.tensor_core.Tensor.reindex)\n",
"- [`Tensor.retag`](quimb.tensor.tensor_core.Tensor.retag)\n",
"- [`Tensor.fuse`](quimb.tensor.tensor_core.Tensor.fuse)\n",
"- [`Tensor.squeeze`](quimb.tensor.tensor_core.Tensor.squeeze)\n",
"- [`Tensor.gate`](quimb.tensor.tensor_core.Tensor.gate)\n",
"- [`Tensor.isel`](quimb.tensor.tensor_core.Tensor.isel)\n",
"- [`Tensor.new_ind`](quimb.tensor.tensor_core.Tensor.new_ind)\n",
"- [`Tensor.transpose`](quimb.tensor.tensor_core.Tensor.transpose)\n",
"- [`Tensor.trace`](quimb.tensor.tensor_core.Tensor.trace)\n",
"- [`Tensor.norm`](quimb.tensor.tensor_core.Tensor.norm)\n",
"\n",
":::{hint}\n",
"Many of these have inplace versions with an underscore appended,\n",
"so that `ket.transpose_('k1', 'k0')` would perform a\n",
"tranposition on `ket` directly, rather than making a new\n",
"tensor. The same convention is used for many\n",
"[`TensorNetwork`](quimb.tensor.tensor_core.TensorNetwork) methods.\n",
":::\n",
"\n",
"Let's also create some tensor paulis, with indices that act on the bell state and map the physical indices into two new ones:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cee2ae1c-1c7e-442c-8c44-59baed4508b2",
"metadata": {},
"outputs": [],
"source": [
"X = qtn.Tensor(qu.pauli('X'), inds=('k0', 'b0'), tags=['PAULI', 'X', '0'])\n",
"Y = qtn.Tensor(qu.pauli('Y'), inds=('k1', 'b1'), tags=['PAULI', 'Y', '1'])"
]
},
{
"cell_type": "markdown",
"id": "be211a49-32b5-4512-a916-7639590f2bae",
"metadata": {},
"source": [
"And finally, a random 'bra' to complete the inner product:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cbea620b-317a-4de2-9272-e4c758feb6ff",
"metadata": {},
"outputs": [],
"source": [
"bra = qtn.Tensor(qu.rand_ket(4).reshape(2, 2), inds=('b0', 'b1'), tags=['BRA'])"
]
},
{
"cell_type": "markdown",
"id": "db757fb1-bd13-4f65-8978-c6dd11d18303",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
"Note how repeating an index name is all that is required to define a contraction.\n",
"If you want to join two tensors and have the index generated automatically\n",
"you can use the function [`qtn.connect`](quimb.tensor.tensor_core.connect)."
]
},
{
"cell_type": "markdown",
"id": "e5bba4e1-7799-4dd9-b088-a31709a68712",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
":::{note}\n",
"Indices and tags should be strings - though this is currently not enforced.\n",
"A useful convention is to keep `inds` lower case, and `tags` upper.\n",
"Whilst the order of `tags` doesn't specifically matter, it is kept internally as a ordered set - [`oset`](quimb.utils.oset) - so that all operations are deterministic.\n",
":::\n",
"\n",
"## Creating Tensor Networks\n",
"\n",
"We can now combine these into a [`TensorNetwork`](quimb.tensor.tensor_core.TensorNetwork) using the `&` operator overload:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c3af54e0-0fd8-4429-90a4-767287a57475",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"TensorNetwork([\n",
" Tensor(shape=(2, 2), inds=('k0', 'k1'), tags=oset(['KET'])),\n",
" Tensor(shape=(2, 2), inds=('k0', 'b0'), tags=oset(['PAULI', 'X', '0'])),\n",
" Tensor(shape=(2, 2), inds=('k1', 'b1'), tags=oset(['PAULI', 'Y', '1'])),\n",
" Tensor(shape=(2, 2), inds=('b0', 'b1'), tags=oset(['BRA'])),\n",
"], tensors=4, indices=4)\n"
]
}
],
"source": [
"TN = ket.H & X & Y & bra\n",
"print(TN)"
]
},
{
"cell_type": "markdown",
"id": "aaee6ec9-7bba-4d70-893a-7a8fd69ad75f",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
"(note that `.H` conjugates the data but leaves the indices).\n",
"We could also use the [`TensorNetwork`](quimb.tensor.tensor_core.TensorNetwork)\n",
"constructor, which takes any sequence of tensors and/or tensor networks,\n",
"and has various advanced options.\n",
"\n",
"The geometry of this network is completely defined by the repeated\n",
"indices forming edges between tensors, as well as arbitrary tags\n",
"identifying the tensors. The internal data of the tensor network\n",
"allows efficient access to any tensors based on their `tags` or `inds`.\n",
"\n",
":::{warning}\n",
"In order to naturally maintain networks geometry, bonds (repeated\n",
"indices) can be mangled when two tensor networks are combined.\n",
"As a result of this, only exterior indices are guaranteed to\n",
"keep their absolute value - since these define the overall object.\n",
"The [`qtn.bonds`](quimb.tensor.tensor_core.bonds) function can be used to\n",
"find the names of indices connecting tensors if explicitly required.\n",
":::\n",
"\n",
"Some common [`TensorNetwork`](quimb.tensor.tensor_core.TensorNetwork) methods\n",
"are:\n",
"\n",
"- [`TensorNetwork.reindex`](quimb.tensor.tensor_core.TensorNetwork.reindex)\n",
"- [`TensorNetwork.retag`](quimb.tensor.tensor_core.TensorNetwork.retag)\n",
"- [`TensorNetwork.gate_inds`](quimb.tensor.tensor_core.TensorNetwork.gate_inds)\n",
"- [`TensorNetwork.isel`](quimb.tensor.tensor_core.TensorNetwork.isel)\n",
"- [`TensorNetwork.trace`](quimb.tensor.tensor_core.TensorNetwork.trace)\n",
"- [`TensorNetwork.contract`](quimb.tensor.tensor_core.TensorNetwork.contract)\n",
"- [`TensorNetwork.norm`](quimb.tensor.tensor_core.TensorNetwork.norm)\n",
"\n",
"Note that many of these are shared with\n",
"[`Tensor`](quimb.tensor.tensor_core.Tensor), meaning it is possible to\n",
"write many agnostic functions that operate on either.\n",
"\n",
"Any network can also be drawn using\n",
"[`TensorNetwork.draw`](quimb.tensor.tensor_core.TensorNetwork.draw), which will pick\n",
"a layout and also represent bond size as edge thickness, and optionally\n",
"color the nodes based on `tags`."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "16c76c97-2640-46e3-a724-7a69d595658a",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"TN.draw(color=['KET', 'PAULI', 'BRA', 'X', 'Y'], figsize=(4, 4), show_inds='all')"
]
},
{
"cell_type": "markdown",
"id": "cdc25ee4-5a83-4241-8e25-cb761a0654f3",
"metadata": {},
"source": [
"Note the tags can be used to identify both paulis at once. But they could also be uniquely identified using their ``'X'`` and ``'Y'`` tags respectively:"
]
},
{
"cell_type": "markdown",
"id": "b48d4fc3-b7df-48c9-9d5a-4146561e60e5",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
"A detailed guide to drawing can be found on the page {ref}`tensor-network-drawing`.\n",
"\n",
"(tn-creation-graph-style)=\n",
"\n",
"## Graph Orientated Tensor Network Creation\n",
"\n",
"Another way to create tensor networks is define the tensors (nodes)\n",
"first and the make indices (edges) afterwards. This is mostly enabled\n",
"by the functions [`Tensor.new_ind`](quimb.tensor.tensor_core.Tensor.new_ind) and\n",
"[`Tensor.new_bond`](quimb.tensor.tensor_core.Tensor.new_bond). Take for example\n",
"making a small periodic matrix product state with bond dimension 7:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "ac74eca3-0f61-492f-8a0b-391ef1317655",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"L = 10\n",
"\n",
"# create the nodes, by default just the scalar 1.0\n",
"tensors = [qtn.Tensor() for _ in range(L)]\n",
"\n",
"for i in range(L):\n",
" # add the physical indices, each of size 2\n",
" tensors[i].new_ind(f'k{i}', size=2)\n",
"\n",
" # add bonds between neighbouring tensors, of size 7\n",
" tensors[i].new_bond(tensors[(i + 1) % L], size=7)\n",
"\n",
"mps = qtn.TensorNetwork(tensors)\n",
"mps.draw()"
]
},
{
"cell_type": "markdown",
"id": "1f61668c-9605-496b-b1f1-d67ffe96d7b0",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
":::{hint}\n",
"You can also add tensors or tensor networks in-place to an existing\n",
"tensor using the following syntax:\n",
"\n",
"```python3\n",
"# add a copy of ``t``\n",
"tn &= t\n",
"\n",
"# add ``t`` virtually\n",
"tn |= t\n",
"```\n",
":::\n",
"\n",
"## Contraction\n",
"\n",
"Actually performing the implicit sum-of-products that a tensor network represents\n",
"involves *contraction*. Specifically, to turn a tensor network of $N$ tensors\n",
"into a single tensor with the same 'outer' shape and indices requires a sequence of\n",
"$N - 1$ pairwise contractions. The cost of doing these intermediate contractions\n",
"can be **very** sensitive to the exact sequence, or *'path'*, chosen. `quimb`\n",
"automates both the path finding stage and the actual contraction stage, but there is\n",
"a non-trivial tradeoff between how long one spends finding the path, and how long the\n",
"actual contraction takes.\n",
"\n",
":::{warning}\n",
"By default, `quimb` employs a basic greedy path\n",
"optimizer that has very little overhead, but won't nearly be optimal on large or complex\n",
"graphs. See {ref}`tensor-network-contraction` for more details.\n",
":::\n",
"\n",
"To fully contract a network we can use the `^` operator, and the `...` object:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "088405f6-6c11-4e77-8daf-dcd6f9cc8281",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.556024406455527+0.28213663229389124j)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"TN ^ ..."
]
},
{
"cell_type": "markdown",
"id": "03e3a18a-2f0c-4ae3-ab61-63a56e1757ad",
"metadata": {},
"source": [
"Or if you only want to contract tensors with a specific set of tags, such as the two pauli operators,\n",
"supply a tag or sequence of tags:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ea5e2905-bbb2-4f48-b801-648a174db267",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"TensorNetwork([\n",
" Tensor(shape=(2, 2), inds=('k0', 'k1'), tags=oset(['KET'])),\n",
" Tensor(shape=(2, 2), inds=('b0', 'b1'), tags=oset(['BRA'])),\n",
" Tensor(shape=(2, 2, 2, 2), inds=('k0', 'b0', 'k1', 'b1'), tags=oset(['PAULI', 'X', '0', 'Y', '1'])),\n",
"], tensors=3, indices=4)\n"
]
}
],
"source": [
"TNc = TN ^ 'PAULI'\n",
"TNc.draw('PAULI')\n",
"print(TNc)"
]
},
{
"cell_type": "markdown",
"id": "2aecb912-418c-49e2-afed-79c96db6ca40",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
"Note how the `tags` of the Paulis have been merged on the new tensor.\n",
"\n",
"The are many other contraction related options detailed in {ref}`tensor-network-contraction`, but the\n",
"core function that all of these are routed through is [`tensor_contract`](quimb.tensor.tensor_core.tensor_contract).\n",
"If you need to pass contraction options, such as the path finding strategy kwarg `optimize`, you'll\n",
"need to call the method like `tn.contract(..., optimize='auto-hq')`.\n",
"\n",
"One useful shorthand is the 'matmul' operator, `@`, which directly contracts two tensors:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "e2aa08f9-db49-4b03-be59-28393cf5f186",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"Tensor(shape=(2, 4), inds=[a, b], tags={A, B}),backend=numpy, dtype=float64, data=array([[-1.07482966, 1.40801906, 0.97244287, 2.06712511],\n",
" [-1.19598176, -1.1037286 , -0.15011799, 1.1052763 ]])"
],
"text/plain": [
"Tensor(shape=(2, 4), inds=('a', 'b'), tags=oset(['A', 'B']))"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ta = qtn.rand_tensor([2, 3], inds=['a', 'x'], tags='A')\n",
"tb = qtn.rand_tensor([4, 3], inds=['b', 'x'], tags='B')\n",
"\n",
"# matrix multiplication but with indices aligned automatically\n",
"ta @ tb"
]
},
{
"cell_type": "markdown",
"id": "b2554816-853e-4bf9-bf4e-83dc3a1d5d04",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"source": [
"Since the conjugate of a tensor network keeps the same outer indices,\n",
"this means that `tn.H @ tn` is always the frobenius norm squared\n",
"($\\mathrm{Tr} A^{\\dagger}A$) of any tensor network."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "b136e4aa-9ad6-4bc1-af35-ba53d319f5b4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.9999999999999999"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# get a normalized tensor network\n",
"psi = qtn.MPS_rand_state(10, 7)\n",
"\n",
"# compute its norm squared\n",
"psi.H @ psi # == (tn.H & tn) ^ ..."
]
},
{
"cell_type": "markdown",
"id": "7963ae63-db42-4dab-b6fe-b2898a3a68f3",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"## Decomposition\n",
"\n",
"A key part of many tensor network algorithms are various linear algebra\n",
"decompositions of tensors viewed as operators mapping one set of 'left'\n",
"indices to the remaining 'right' indices. This functionality is handled\n",
"by the core function [`tensor_split`](quimb.tensor.tensor_core.tensor_split)."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "227e4bfd-3b58-4879-b588-9571fe09ded4",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"# create a tensor with 5 legs\n",
"t = qtn.rand_tensor([2, 3, 4, 5, 6], inds=['a', 'b', 'c', 'd', 'e'])\n",
"t.draw(figsize=(3, 3))"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "db070ecd-d78e-4db4-ad40-4735cea49f18",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"# split the tensor, by grouping some indices as 'left'\n",
"tn = t.split(['a', 'c', 'd'])\n",
"tn.draw(figsize=(3, 3))"
]
},
{
"cell_type": "markdown",
"id": "691cda75-a0ec-4bc8-b87e-df5db5aa931a",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"There are many options that can be passed to {func}`~quimb.tensor.tensor_core.tensor_split` such as:\n",
"\n",
"- which decomposition to use\n",
"- how or whether to absorb any singular values\n",
"- specific `tags` or `inds` to introduce\n",
"\n",
"Often it is convenient to perform a decomposition within a tensor network, here we decompose\n",
"a central tensor of an MPS, introducing a new tensor that will sit on the bond:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "2b21499c-40be-426a-9d3b-0226025580e5",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"psi = qtn.MPS_rand_state(6, 10)\n",
"psi.draw()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "af444a50-b491-4895-89fa-7f1c861b3644",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"psi.split_tensor(\n",
" # identify the tensor\n",
" tags='I2',\n",
" # here we give the right indices\n",
" left_inds=None,\n",
" right_inds=[psi.bond(2, 3)],\n",
" # a new tag for the right factor\n",
" rtags='NEW',\n",
")\n",
"\n",
"psi.draw('NEW')"
]
},
{
"cell_type": "markdown",
"id": "724772fe-8265-4f09-9414-eb742f14525b",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"### Gauging\n",
"\n",
"A key aspect of tensor networks is the freedom to 'gauge' bonds - locally change\n",
"tensors whilst keeping the overall tensor network object invariant. For example,\n",
"contracting $G^{-1}$ and $G$ into adjacent tensors (you can do this\n",
"explicitly with [`TensorNetwork.insert_gauge`](quimb.tensor.tensor_core.TensorNetwork.insert_gauge)).\n",
"\n",
"A common example is 'isometrizing' a tensor with respect to all but one of its\n",
"indices and absorbing an appropriate factor into its neighbor to take this transformation\n",
"into account. A common way to do this is via the QR decomposition:\n",
"\n",
"```{image} _static/canonize-bond.png\n",
":width: 1000px\n",
"```\n",
"\n",
"where the inward arrows imply:\n",
"\n",
"```{image} _static/isometric-tensor.png\n",
":width: 300px\n",
"```\n",
"\n",
"In `quimb` this is called 'canonizing' for short. When used in an MPS for example,\n",
"it can used to put the TN into a *canonical* form, but this is generally not possible\n",
"for 'loopy' TNs. The core function that performs this is\n",
"[`qtn.tensor_canonize_bond`](quimb.tensor.tensor_core.tensor_canonize_bond)."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "d0a57a78-c06b-4cdb-afd8-5fae91409189",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"ta = qtn.rand_tensor([4, 4, 4], ['a', 'b', 'c'], 'A')\n",
"tb = qtn.rand_tensor([4, 4, 4, 4], ['c', 'd', 'e', 'f'], 'B')\n",
"\n",
"qtn.tensor_canonize_bond(ta, tb)\n",
"\n",
"(ta | tb).draw(['A', 'B'], figsize=(4, 4), show_inds='all')"
]
},
{
"cell_type": "markdown",
"id": "8b990d65-559b-47e8-bd54-282cd4e2357a",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"You can also perform this within a TN with the method\n",
"[`TensorNetwork.canonize_between`](quimb.tensor.tensor_core.TensorNetwork.canonize_between)\n",
"or perform many such operations inwards on an automatically\n",
"generated spanning tree using\n",
"[`TensorNetwork.canonize_around`](quimb.tensor.tensor_core.TensorNetwork.canonize_around).\n",
"\n",
"### Compressing\n",
"\n",
"A similar operation is to 'compress' one or more bonds shared by\n",
"two tensors. The basic method for doing this is to contract\n",
"two 'reduced factors' from the neighboring tensors and perform\n",
"a truncated SVD on this central bond tensor before re-absorbing\n",
"the new decomposed parts either left or right:\n",
"\n",
"```{image} _static/basic-compress.png\n",
":width: 1000px\n",
"```\n",
"\n",
"The maximum bond dimension kept is often denoted $\\chi$.\n",
"The core function that handles this is\n",
"[`qtn.tensor_compress_bond`](quimb.tensor.tensor_core.tensor_compress_bond)."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "1f706cb8-37e0-4f81-b6e3-9e582afe474c",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"ta = qtn.rand_tensor([4, 4, 10], ['a', 'b', 'c'], 'A')\n",
"tb = qtn.rand_tensor([10, 4, 4, 4], ['c', 'd', 'e', 'f'], 'B')\n",
"(ta | tb).draw(['A', 'B'], figsize=(4, 4), show_inds='bond-size')"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "18180bab-1d2e-4a3f-aa0e-398c098f985f",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"# perform the compression\n",
"qtn.tensor_compress_bond(ta, tb, max_bond=2, absorb='left')\n",
"\n",
"# should now see the bond has been reduced in size to 2\n",
"(ta | tb).draw(['A', 'B'], figsize=(4, 4), show_inds='bond-size')"
]
},
{
"cell_type": "markdown",
"id": "31e643cd-e2d9-41e0-b67d-b39a9ebd4fe4",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"Again, there are many options for controlling the compression that can be found in\n",
"{func}`~quimb.tensor.tensor_core.tensor_compress_bond`, which in turn calls\n",
"{func}`~quimb.tensor.tensor_core.tensor_split` on the central factor.\n",
"\n",
"In order to perform compressions in a tensor network based on tags, one can call\n",
"{meth}`~quimb.tensor.tensor_core.TensorNetwork.compress_between`, which also has\n",
"options for taking the environment into account and is one of the main drivers\n",
"for 2D contraction for example.\n",
"\n",
"### `TNLinearOperator`\n",
"\n",
"Tensor networks can represent very large multi-linear operators implicitly - these are\n",
"often too large to form a dense representation of explicitly. However, many iterative\n",
"algorithms exist that only require the action of this operator on a vector, and this can\n",
"be cast as a contraction that is usually much cheaper.\n",
"\n",
"Consider the following contrived TN with overall shape\n",
"`(1000, 1000, 1000, 1000)`, but which is evidently low rank:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "a84c0de6-3dcc-4968-a354-ba96acbd62de",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"ta = qtn.rand_tensor([1000, 2, 2], inds=['a', 'w', 'x'], tags='A')\n",
"tb = qtn.rand_tensor([1000, 2, 2], inds=['b', 'w', 'y'], tags='B')\n",
"tc = qtn.rand_tensor([1000, 2, 2], inds=['c', 'x', 'z'], tags='C')\n",
"td = qtn.rand_tensor([1000, 2, 2], inds=['d', 'y', 'z'], tags='D')\n",
"tn = (ta | tb | tc | td)\n",
"tn.draw(['A', 'B', 'C', 'D'], show_inds='all')"
]
},
{
"cell_type": "markdown",
"id": "96df8da5-2c83-4b40-b0f2-e4d75f87405a",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"View this as a {class}`~quimb.tensor.tensor_core.TNLinearOperator` which is a\n",
"subclass of a {class}`scipy.sparse.linalg.LinearOperator` and so can be supplied\n",
"anywhere they can:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "1a3bcae3-17e3-477a-9f9a-a3f34c4866cc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<1000000x1000000 TNLinearOperator with dtype=float64>"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tnlo = tn.aslinearoperator(['a', 'b'], ['c', 'd'])\n",
"tnlo"
]
},
{
"cell_type": "markdown",
"id": "19e8585d-eb1d-4814-bb1f-599226801bc5",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"I.e. it maps a vector of size 1,000,000 spanning the indices `'a'` and `'b'`\n",
"to a vector of size 1,000,000 spanning the indices `'c'` and `'d'`.\n",
"\n",
"We can supply it to iterative functions directly:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "57136381-6ed2-4135-94bb-3a49dd81cc58",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([2900.03154824+0.j])"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qu.eigvals(tnlo, k=1, which='LM')"
]
},
{
"cell_type": "markdown",
"id": "12c05935-a7eb-4fdb-b0e9-88343b6ee69f",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"Or the function {func}`~quimb.tensor.tensor_core.tensor_split` also\n",
"accepts a {class}`~quimb.tensor.tensor_core.TNLinearOperator` instead\n",
"of a {class}`~quimb.tensor.tensor_core.Tensor`."
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "49269f52-20b9-484c-87c7-bef4acd14413",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"tn_decomp = qtn.tensor_split(\n",
" tnlo,\n",
" left_inds=tnlo.left_inds,\n",
" right_inds=tnlo.right_inds,\n",
" # make sure we supply a iterative method\n",
" method='svds', # {'rsvd', 'isvd', 'eigs', ...}\n",
" max_bond=4,\n",
")\n",
"\n",
"tn_decomp.draw(['A', 'B', 'C', 'D'], show_inds='bond-size')"
]
},
{
"cell_type": "markdown",
"id": "51ae6010-83b4-4330-8217-f7eedf119351",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"Again, this operation can be performed within a TN based on tags using the method\n",
"{meth}`~quimb.tensor.tensor_core.TensorNetwork.replace_with_svd`. Or you can treat\n",
"an entire tensor network as an operator to be decomposed with\n",
"{meth}`~quimb.tensor.tensor_core.TensorNetwork.split`.\n",
"\n",
":::{warning}\n",
"You are of course here still limited by the size of the left and right\n",
"vector spaces, which while generally are square root the size of the\n",
"dense operator, nonetheless grow exponentially in number of indices.\n",
":::\n",
"\n",
"## Selection\n",
"\n",
"Many methods use `tags` to specify which tensors to operator on within a TN.\n",
"This is often used in conjuction with a `which` kwarg specifying how to match\n",
"the tags. The following illustrates the options for a 2D TN which has both\n",
"rows and columns tagged:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "2a60349b-1ec3-4be5-8296-a59545c29eba",
"metadata": {},
"outputs": [],
"source": [
"tn = qtn.TN2D_rand(5, 5, D=4)"
]
},
{
"cell_type": "markdown",
"id": "1edad091-4af5-48d6-990e-590d03312557",
"metadata": {},
"source": [
"Get tensors which have **all** of the tags:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "bb157352-624b-494c-aa97-d8fb5f4c2d0f",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"tn.select(tags=['X2', 'Y3'], which='all').add_tag('ALL')\n",
"tn.draw('ALL', figsize=(3, 3))"
]
},
{
"cell_type": "markdown",
"id": "4fb3d3af-180e-4f3e-844f-9d31140a6172",
"metadata": {},
"source": [
"Get tensors which *don't* have **all** of the tags:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "9bf6efec-c02d-4af1-86f2-e657464b6a3e",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"tn.select(tags=['X2', 'Y3'], which='!all').add_tag('!ALL')\n",
"tn.draw('!ALL', figsize=(3, 3))"
]
},
{
"cell_type": "markdown",
"id": "deabe48d-218b-4197-8e92-457aff1e44ea",
"metadata": {},
"source": [
"Get tensors which have **any** of the tags:"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "49c89b3a-05ee-4ffc-9bc0-a4ea39f69d8b",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"tn.select(tags=['X2', 'Y3'], which='any').add_tag('ANY')\n",
"tn.draw('ANY', figsize=(3, 3))"
]
},
{
"cell_type": "markdown",
"id": "48f5f275-faf3-409b-b327-c256d40e33cc",
"metadata": {},
"source": [
"Get tensors which *don't* have **any** of the tags:"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "926bf5c8-85b5-4244-90a8-74cf1de46a27",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"tn.select(tags=['X2', 'Y3'], which='!any').add_tag('!ANY')\n",
"tn.draw('!ANY', figsize=(3, 3))"
]
},
{
"cell_type": "markdown",
"id": "beffc29c-a696-4dd0-8123-ef3ac7245d28",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"Several methods exist for returning the tagged tensors directly:\n",
"\n",
"- [`TensorNetwork.select`](quimb.tensor.tensor_core.TensorNetwork.select) - get a tagged region as a tensor network\n",
"- [`TensorNetwork.select_tensors`](quimb.tensor.tensor_core.TensorNetwork.select_tensors) - get tagged tensors directly\n",
"- [`TensorNetwork.partition`](quimb.tensor.tensor_core.TensorNetwork.partition) - partition into a two tensor networks\n",
"- [`TensorNetwork.partition_tensors`](quimb.tensor.tensor_core.TensorNetwork.partition_tensors) - partition into two groups of tensors directly\n",
"- [`TensorNetwork.select_local`](quimb.tensor.tensor_core.TensorNetwork.select_local) - get a local region as a tensor network\n",
"\n",
"Using the `tn[tags]` syntax is like calling `tn.select_tensors(tags, which='all')`:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "6a2c5e06-36d9-479a-acc2-7673e92550e4",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"Tensor(shape=(4, 4, 4, 4), inds=[_835204AAABZ, _835204AAABb, _835204AAABc, _835204AAABT], tags={I2,3, X2, Y3, ALL, ANY}),backend=numpy, dtype=float64, data=..."
],
"text/plain": [
"Tensor(shape=(4, 4, 4, 4), inds=('_835204AAABZ', '_835204AAABb', '_835204AAABc', '_835204AAABT'), tags=oset(['I2,3', 'X2', 'Y3', 'ALL', 'ANY']))"
]
},
"execution_count": 29,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tn['X2', 'Y3']"
]
},
{
"cell_type": "markdown",
"id": "92807b3c-63dc-49ce-9c23-0e1fea472672",
"metadata": {},
"source": [
"Although some special tensor networks also accept a lattice coordinate here as well:"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "2a9d0254-3aeb-4b08-8242-34e580a49406",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"Tensor(shape=(4, 4, 4, 4), inds=[_835204AAABZ, _835204AAABb, _835204AAABc, _835204AAABT], tags={I2,3, X2, Y3, ALL, ANY}),backend=numpy, dtype=float64, data=..."
],
"text/plain": [
"Tensor(shape=(4, 4, 4, 4), inds=('_835204AAABZ', '_835204AAABb', '_835204AAABc', '_835204AAABT'), tags=oset(['I2,3', 'X2', 'Y3', 'ALL', 'ANY']))"
]
},
"execution_count": 30,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tn[2, 3]"
]
},
{
"cell_type": "markdown",
"id": "fc373c17-3d26-4aaf-9780-ee6ac73622a8",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
":::{hint}\n",
"You can iterate over all tensors in a tensor network like so:\n",
"\n",
"```python3\n",
"for t in tn:\n",
" ...\n",
"```\n",
":::\n",
"\n",
"Behind the scenes, tensors in a tensor network are stored using a unique `tid` - an integer\n",
"denoting which node they are in the network. See {ref}`tensor-network-design` for a more low\n",
"level description.\n",
"\n",
"### Virtual vs Copies\n",
"\n",
"An important concept when selecting tensors from tensor networks is whether the operation\n",
"takes a copy of the tensors, or is simply viewing them 'virtually'. In the examples above\n",
"because {meth}`~quimb.tensor.tensor_core.TensorNetwork.select` defaults to `virtual=True`,\n",
"when we modified the tensors by adding the tags `('ALL', '!ALL', 'ANY', '!ANY')`, we\n",
"are modifying the tensors in the original network too. Similarly, when we construct a\n",
"tensor network like so:"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "040ac949-eb3c-4d59-b883-d0b3d5c47dee",
"metadata": {},
"outputs": [],
"source": [
"tn = qtn.TensorNetwork([ta, tb, tc, td], virtual=True)"
]
},
{
"cell_type": "markdown",
"id": "596ecf1e-615a-43c5-8b32-eebe638b731f",
"metadata": {},
"source": [
"which is equivalent to"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "2a30c5c5-10d4-4be7-9085-1df8c7689e0b",
"metadata": {},
"outputs": [],
"source": [
"tn = (ta | tb | tc | td)"
]
},
{
"cell_type": "markdown",
"id": "941a79bc-bd1d-4b0c-9f42-cbc7245870b6",
"metadata": {},
"source": [
"the new TN is *viewing* those tensors and so changes to them will affect ``tn``\n",
"and vice versa. Note this is *not* the default behaviour."
]
},
{
"cell_type": "markdown",
"id": "b8e5c1e7-35ac-4ea9-9ce7-cad5bab6f45a",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
":::{warning}\n",
"The underlying tensor arrays - `t.data` - are always assumed to be immutable\n",
"and for efficiency are never copied by default in `quimb`.\n",
":::\n",
"\n",
"## Modification\n",
"\n",
"Whilst many high level functions handle all the tensor modifications for you, at some\n",
"point you will likely want to directly update, for example, the data in a\n",
"{class}`~quimb.tensor.tensor_core.Tensor`. The low-level method for performing arbitrary\n",
"changes to a tensors `data`, `inds` and `tags` is\n",
"{meth}`~quimb.tensor.tensor_core.Tensor.modify`.\n",
"\n",
"The reason that this is encapsulated thus, which is something to be aware of, is so that\n",
"the tensor can let any tensor networks viewing it know of the changes. For example, often\n",
"we want to iterate over tensors in a TN and change them on this atomic level, but not have\n",
"to worry about the TN's efficient maps which keep track of all the indices and tags going\n",
"out of sync."
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "1af2f302-c742-47a0-95ca-1a24d63d7e60",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"tn = qtn.TN_rand_reg(12, 3, 3, seed=42)\n",
"tn.draw()"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "6172d29d-3dda-495b-aeff-9ea75e31a4bd",
"metadata": {},
"outputs": [
{
"data": {
"image/svg+xml": [
""
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"# add a random dangling index to each tensor\n",
"for t in tn:\n",
" t.new_ind(qtn.rand_uuid(), size=2)\n",
"\n",
"tn.draw()"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "155c4539-7be9-48b0-a17e-44c3460a7f4a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"('_835204AAACJ',\n",
" '_835204AAACK',\n",
" '_835204AAACL',\n",
" '_835204AAACM',\n",
" '_835204AAACN',\n",
" '_835204AAACO',\n",
" '_835204AAACP',\n",
" '_835204AAACQ',\n",
" '_835204AAACR',\n",
" '_835204AAACS',\n",
" '_835204AAACT',\n",
" '_835204AAACU')"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# the TN efficiently keeps track of all indices and tags still\n",
"tn.outer_inds()"
]
},
{
"cell_type": "markdown",
"id": "31fd7c7e-63e0-4984-99db-31857caf6a4b",
"metadata": {
"raw_mimetype": "text/restructuredtext",
"tags": []
},
"source": [
"See the page {ref}`tensor-network-design` for more details."
]
}
],
"metadata": {
"celltoolbar": "Raw Cell Format",
"kernelspec": {
"display_name": "py312",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}