3. Drawing¶
quimb has a lot of functionality for drawing tensor networks that can be useful for debugging, interactive development, and producing figures etc. This page is a general overview of various options, mostly centered around the method TensorNetwork.draw. Underneath, this calls networkx, which itself uses matplotlib.
%config InlineBackend.figure_formats = ['svg']
import matplotlib.pyplot as plt
import quimb.tensor as qtn
We’ll use a 3D grid tensor network as our basic example.
Lx = Ly = Lz = 4
D = 2
tn = qtn.TN3D_rand(Lx, Ly, Lz, D=D)
By default bonds are drawn proportional to log2 of their dimension, whereas nodes are fixed in size.
tn.draw()
By default index names are not shown and tensor tags are only shown for small tensor networks; these can both be controlled manually like so:
qtn.PEPS.rand(3, 3, D).draw(show_tags=True, show_inds=True)
If you want to see inner index names (bonds) as well as the outer index names you need to use show_inds='all':
qtn.PEPS.rand(3, 3, D).draw(show_tags=False, show_inds='all')
3.1. Coloring¶
The first argument to draw is color=, which can either be a single tag or a sequence of tags:
# add the same tag to every tensor
tn.add_tag('CUBE')
# color that tag and each corner of our TN
color = ['CUBE'] + [
f'I{i},{j},{k}'
for i in (0, Lx - 1)
for j in (0, Ly - 1)
for k in (0, Lz - 1)
]
tn.draw(color)
If you have many tags, or are simply only interested in drawing the colors, you can supply the legend=False option to turn off the legend.
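For example, a minimal sketch reusing the color list defined above:
tn.draw(color, legend=False)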
quimb can show tensors which have multiple matching tags - the style is controlled by the kwarg multi_tag_style, which should be one of {"auto", "pie", "nest", "average", "last"}.
t = qtn.rand_tensor([2, 3, 4, 5], 'abcd', ['W', 'X', 'Y', 'Z'])
fig, axs = plt.subplots(1, 4)
for i, multi_tag_style in enumerate(["pie", "nest", "average", "last"]):
    t.draw(
        ['W', 'X', 'Y', 'Z'],
        multi_tag_style=multi_tag_style,
        ax=axs[i],
        node_scale=2,
        title=multi_tag_style,
        legend=False,
    )
Hint
quimb tries to produce a sequence of colors that are reasonably locally distinguishable but also have some global ordering when using many colors. These are based on the palette designed with color blindness in mind by Okabe & Ito. You can supply custom colors with the custom_colors= kwarg.
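For instance, a minimal sketch (the specific hex color here is purely illustrative):
tn.draw(color=['CUBE'], custom_colors=['#0072b2'])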
3.2. Highlighting indices¶
You can visualize a subset of indices by supplying a sequence of them to the highlight_inds= kwarg like so:
# get a central tensor and its indices
tag = f"I{Lx // 2},{Ly // 2},{Lz // 2}"
t = tn[tag]
inds = t.inds
tn.draw(color=tag, highlight_inds=inds, edge_scale=2)
The color can be controlled with highlight_inds_color.
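For example, a small sketch (the RGBA value here is arbitrary):
tn.draw(color=tag, highlight_inds=inds, highlight_inds_color=(0.0, 0.6, 1.0, 0.8), edge_scale=2)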
3.3. Highlighting tids¶
While tensors can carry arbitrary tags and can usually be identified by these, it is sometimes useful to be able to highlight tensors based on their underlying tids - each of which is a unique integer representing a node in the hypergraph.
# get the first plane of tensor tids
tids = list(tn.tensor_map.keys())[:Lx * Ly]
tids
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
The color can be controlled with highlight_tids_color:
tn.draw(highlight_tids=tids, highlight_tids_color=(1.0, 0.0, 0.5, 0.5))
3.4. Auto coloring edges¶
You can auto color the edges of a tensor network by supplying edge_color=True; the colors will match the fancy HTML repr colors:
tn2d = qtn.TN2D_rand(3, 3, 4)
tn2d
TensorNetwork2D(tensors=9, indices=12, Lx=3, Ly=3, max_bond=4)
tn2d.draw(edge_color=True, show_inds='all')
3.5. Positioning tensors¶
3.5.1. Automatic layouts¶
The automatic layout strategy quimb adopts (layout="auto") is to lay the tensors out using some relatively efficient scheme, before 'relaxing' the positions using a (slower) force repulsion algorithm into something usually more natural.
Relevant options are:

- layout: if "auto", use the options below, else specify a layout directly (with no relaxation, i.e. set iterations=0).
- initial_layout: if using relaxation, the starting layout (the default for which is 'kamada_kawai' for small graphs and 'spectral' for large graphs).
- iterations: controls the number of force repulsion steps.
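For example, a minimal sketch combining these options on the 3D grid TN from above (the particular values are arbitrary):
tn.draw(initial_layout='spectral', iterations=50)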
Another decent networkx choice for the initial layout that you might try if 'kamada_kawai' isn't producing good results is 'spectral'. You should also be able to specify most of the networkx layout algorithms.
If you have pygraphviz installed then you can use the layouts "neato", "sfdp" and "dot".
fig, axs = plt.subplots(ncols=5, figsize=(12, 3))
for ax, layout in zip(
    axs,
    [
        'kamada_kawai',
        'spectral',
        'spiral',
        'neato',
        'dot',
    ]
):
    tn.draw(tn.site_tags, ax=ax, layout=layout)
    ax.set_title(layout)
    ax.set_aspect("equal")
    ax.axis("off")
plt.show()
plt.close()
3.5.2. Force Repulsion options¶
For the force repulsion layout, you can supply the spring constant k, which can have a significant effect on the layout:
tn.draw(iterations=100, k=0.01)
You can also fix specific tensors (by either a tid or a set of tags that uniquely identifies that tensor):
fix = {
'I0,0,0': (0, 0),
'I0,0,1': (0, 1),
'I1,0,0': (1, 0),
'I1,0,1': (1, 1),
}
# when fixing tensors you often have to play with ``k``
tn.draw(k=0.001, fix=fix, color=fix.keys())
If you have forceatlas2 (fa2) installed then you can specify to use it, rather than the slower networkx force repulsion algorithm, above a certain threshold of nodes (by default 1000) with the option use_forceatlas2=1000.
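A sketch of how that might look (assuming fa2 is installed; the threshold value here is arbitrary):
# relax with forceatlas2 whenever the TN has 500 or more nodes
tn.draw(use_forceatlas2=500)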
3.5.3. Manually Specifying¶
You can also simply specify all positions manually using the fix kwarg. Here's that illustrated with an axonometric projection:
import math
def get_3d_pos(i, j, k, a=22, b=45, p=0.2):
    return (
        + i * math.cos(math.pi * a / 180) + j * math.cos(math.pi * b / 180) / 2**p,
        - i * math.sin(math.pi * a / 180) + j * math.sin(math.pi * b / 180) / 2**p + k
    )
pos = {
f'I{i},{j},{k}': get_3d_pos(i, j, k)
for i in range(Lx)
for j in range(Ly)
for k in range(Lz)
}
tn.draw(fix=pos, color=pos.keys())
If you want to retrieve an automatic positioning, e.g. for repeated use in an animation, you can pass the get='pos' option, which simply returns the positions as a dict mapping each tid to a 2D coordinate:
pos = tn.draw(get='pos')
pos[0], pos[1], pos[2]
(array([-0.19272384, 0.13715879]),
array([-0.51843386, 0.13071046]),
array([-0.87322452, 0.11081561]))
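Those positions can then be fed straight back in, for example via fix (assuming fix also accepts the tid keys returned by get='pos'):
# redraw with exactly the same layout, skipping any further relaxation
tn.draw(fix=pos, iterations=0)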
3.6. Hyper-edges¶
Hyper edges (indices which appear on 3 or more tensors) are represented as separate 'nodes' of zero size - since they are equivalent to placing a multi-dimensional COPY-tensor in such locations.
htn = qtn.HTN3D_classical_ising_partition_function(3, 3, 3, beta=0.22)
htn.draw(edge_color=True, show_inds='all')
Another way to visualize such hyperedges, using 'rubber bands', is provided by hypernetx - both the ind_map of a tensor network and the pos generated by draw are directly compatible:
import hypernetx
H = hypernetx.Hypergraph(htn.ind_map)
hypernetx.draw(H, pos=htn.draw(get='pos'))
3.7. Spanning trees¶
Various algorithms in quimb make use of a tree generated by spanning out from a particular region; these can be visualized with draw_tree_span:
span_opts = {
'max_distance': 3,
'distance_sort': 'min',
'ndim_sort': 'max',
}
qtn.TN2D_rand(7, 7, 3).draw_tree_span(
tags=['I2,3', 'I2,2'], which='any', **span_opts
)
3.8. Interaction with matplotlib¶
You can either add other stuff to the figure that quimb creates, or you can supply a matplotlib axis to add the tensor network drawing to directly.
The return_fig=True option allows you to modify the figure or save it to file:
fig = tn.draw(return_fig=True)
fig.patch.set_alpha(1.0)
fig.set_facecolor('yellow')
fig.suptitle("Yellow World")
Text(0.5, 0.98, 'Yellow World')
This could be saved with e.g.:
fig.savefig('my-tn-drawing.png', bbox_inches='tight', dpi=300)
The ax=ax option allows you to add to an existing plot:
fig, axs = plt.subplots(10, 10, figsize=(8, 8))
for ax in axs.flat:
    tn = qtn.TN_rand_reg(n=12, reg=3, D=2)
    tn.draw(tn.tags, ax=ax, legend=False, show_tags=False)
    ax.axis('off')
3.9. 3D plotting and other backends¶
A subset of the drawing functionality is supported for plotting backends other than the default 'matplotlib' (which uses quimb.schematic.Drawing under the hood). The other backends are:
backend="matplotlib3d"
(shorthandtn.draw_3d()
) plotting a TN in 3D can be useful for debugging the geometry. If run in a script or with the appropriate notebook backend, the 3D matplotlib plot can be rotated and zoomed etc.
Plotting with the 2D or 3D plotly backend is mostly useful for allowing zooming and panning/rotating, as well as tooltips showing tags, indices and dimensions etc.:
backend="plotly"
(shorthandtn.draw_interactive()
)backend="plotly, dim=3"
(shorthandtn.draw_3d_interactive()
)
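As a quick sketch, the shorthand methods listed above can be called directly (the latter two require plotly to be installed):
tn.draw_3d()              # backend="matplotlib3d"
tn.draw_interactive()     # backend="plotly"
tn.draw_3d_interactive()  # backend="plotly", dim=3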
3.10. ‘Publication style’ figures¶
There are various settings you will likely want to tweak to generate neat 'publication style' figures, but probably the most noticeable is explicitly laying out the nodes with fix.
Here we demonstrate drawing a PEPS with various options, in particular, embedding it in a 3D space so no edges are overlapping.
psi = qtn.PEPS.rand(6, 6, 4)
# fix the site tensors in one plane
fix = {
psi.site_tag(i, j): get_3d_pos(i, j, 1)
for i, j in psi.gen_site_coos()
}
# fix the site indices in the plane below
fix.update({
psi.site_ind(i, j): get_3d_pos(i, j, 0.5)
for i, j in psi.gen_site_coos()
})
# specify a tensor and its neighbors
tag0 = 'I2,2'
tags = ['I2,1', 'I1,2', 'I2,3', 'I3,2']
# create some arrows
psi.canonize_around_(tag0, max_distance=1)
# draw, with some manual style settings
psi.draw(
color=(tag0, *tags),
custom_colors=[(0.8, 0.3, 0.7)] + [(0.3, 0.8, 0.2)] * 4,
fix=fix,
edge_color='black',
edge_alpha=1.0,
edge_scale=1.0,
arrow_opts={
'center': 0.6,
'linewidth': 2,
'length': 0.15,
},
node_scale=1.5,
node_outline_darkness=0.0,
node_outline_size=1.5,
node_hatch={tag: '////' for tag in tags},
node_shape={
tag0: 's',
'I2,1': '^',
'I1,2': '>',
'I2,3': 'v',
'I3,2': '<',
},
legend=False,
)
If you want to manually specify all elements and positions, without any reference to an actual TensorNetwork object, and with support for pseudo-3D plotting, then check out the quimb.schematic.Drawing functionality, with examples here: schematic - manual drawing.