Installation

quimb is available on both pypi and conda-forge. While quimb is itself a pure python package, the recommended distribution is mambaforge, which makes it easy to install the various backend array libraries and their dependencies.

Installing with pip:

pip install quimb

Installing with conda:

conda install -c conda-forge quimb

Installing with mambaforge:

mamba install quimb

Hint

Mamba is a faster drop-in replacement for conda, and the mambaforge distribution comes pre-configured with only the conda-forge channel, which further simplifies and speeds up installing dependencies.

Installing the latest version directly from github:

If you want to check out the latest features and fixes, you can install directly from the github repository:

pip install -U git+https://github.com/jcmgray/quimb.git

Installing a local, editable development version:

If you want to make changes to the source code and test them out, you can install a local editable version of the package:

git clone https://github.com/jcmgray/quimb.git
pip install --no-deps -U -e quimb/

Required Dependencies

The core packages quimb requires are numpy, scipy, numba, psutil and cytoolz.

For ease and performance (i.e. mkl compiled libraries), conda is the recommended distribution with which to install these.

In addition, the tensor network library, quimb.tensor, requires:

cotengra, which efficiently optimizes and performs tensor contraction expressions. It can be installed with pip or from conda-forge, and is a required dependency since various parts of the core quimb module now make use of tensor network functionality behind the scenes.

autoray, which allows backend agnostic numeric code for various tensor network operations, so that many array libraries other than numpy can be used. It can be installed via pip from pypi or via conda from conda-forge.

Optional Dependencies

Plotting tensor networks as colored graphs with weighted edges requires matplotlib and networkx.

Fast, multi-threaded random number generation no longer requires randomgen (as of numpy>=1.17), though randomgen's bit generators can still be used.
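As a brief sketch of what numpy>=1.17 provides natively, the new Generator interface (backed by the PCG64 bit generator by default) replaces the need for randomgen:

```python
import numpy as np

# numpy >= 1.17 ships its own modern bit generators (PCG64 by default),
# so randomgen is no longer needed for fast random number generation
rng = np.random.default_rng(seed=42)

# seeded generators are reproducible across runs
samples = rng.standard_normal(4)
print(samples)
```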

Finally, fast and optionally distributed partial eigen-solving, SVD, exponentiation etc. can be accelerated with slepc4py and its dependencies (mpi4py, petsc and petsc4py, slepc).

To install these from conda-forge, with complex dtype specified for example, use:

mamba install -c conda-forge mpi4py petsc=*=*complex* petsc4py slepc=*=*complex* slepc4py

For best performance of some routines (e.g. shift-invert eigen-solving), petsc must be configured with certain options. pip can handle this compilation and installation; for example, the following script installs everything necessary on Ubuntu:

#!/bin/bash

# install build tools, OpenMPI, and OpenBLAS
sudo apt install -y openmpi-bin libopenmpi-dev gfortran bison flex cmake valgrind curl autoconf libopenblas-base libopenblas-dev

# optimization flags, e.g. for intel you might want "-O3 -xHost"
export OPTFLAGS="-O3 -march=native -s -DNDEBUG"

# petsc options, here configured for complex scalars
export PETSC_CONFIGURE_OPTIONS="--with-scalar-type=complex --download-mumps --download-scalapack --download-parmetis --download-metis --COPTFLAGS='$OPTFLAGS' --CXXOPTFLAGS='$OPTFLAGS' --FOPTFLAGS='$OPTFLAGS'"

# make sure all four packages use the same version
export PETSC_VERSION=3.14.0
pip install petsc==$PETSC_VERSION --no-binary :all:
pip install petsc4py==$PETSC_VERSION --no-binary :all:
pip install slepc==$PETSC_VERSION --no-binary :all:
pip install slepc4py==$PETSC_VERSION --no-binary :all:

Note

For the most control and best performance it is recommended to compile and install these (apart from MPI if you are e.g. on a cluster) manually - see the PETSc instructions. It is possible to compile several versions of PETSc/SLEPc side by side, for example a --with-scalar-type=complex and/or a --with-precision=single version, naming them with different values of PETSC_ARCH. When loading PETSc/SLEPc, quimb respects PETSC_ARCH if it is set, but it cannot dynamically switch between them.
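Since quimb reads PETSC_ARCH only when PETSc/SLEPc is first loaded, a named build must be selected before any import that triggers that load. A minimal sketch, where "arch-complex-opt" is a hypothetical arch name standing in for one of your own side-by-side PETSc builds:

```python
import os

# PETSC_ARCH selects which side-by-side PETSc build is loaded; set it
# before petsc4py (and hence quimb's slepc backend) is first imported.
# "arch-complex-opt" is a hypothetical name from your own PETSc builds.
os.environ["PETSC_ARCH"] = "arch-complex-opt"

# ... only now import quimb and use its slepc-backed routines ...
```

Because the arch is fixed at load time, switching between e.g. a real and a complex build requires a fresh python process.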