1. Installation

quimb itself is a pure Python package and can now be found on PyPI:

pip install quimb

However, it is recommended to first install the main dependencies using e.g. conda, as described below. The code is hosted on GitHub and, if the dependencies are satisfied, a development version can be installed directly from there with pip:

pip install --no-deps -U git+https://github.com/jcmgray/quimb.git@develop
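
Alternatively, as a rough sketch (assuming the dependencies are already installed), the repository can be cloned and an editable development copy installed:

# clone the repository and install it in editable (development) mode
git clone https://github.com/jcmgray/quimb.git
cd quimb
pip install --no-deps -e .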

1.1. Required Dependencies

The core packages quimb requires are:

For ease and performance (i.e. MKL-compiled libraries), conda is the recommended distribution with which to install these.
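
As a purely illustrative example (assuming, say, that numpy and scipy are among these core dependencies - substitute the actual list), this looks like:

conda install numpy scipy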

In addition, the tensor network library, quimb.tensor, requires:

opt_einsum efficiently optimizes tensor contraction expressions. It can be installed with pip or from conda-forge, and is a required dependency since various parts of the core quimb module now make use of tensor-network functionality behind the scenes.

autoray allows backend-agnostic numeric code for various tensor network operations, so that many libraries other than numpy can be used. It is currently only installable via pip from PyPI.
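
For example, both can be installed together from PyPI with:

pip install opt_einsum autoray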

1.2. Optional Dependencies

Plotting tensor networks as colored graphs with weighted edges requires:

Fast, multi-threaded random number generation no longer requires randomgen (https://github.com/bashtage/randomgen) with numpy>=1.17, though its bit generators can still be used.

Finally, fast and optionally distributed partial eigen-solving, SVD, exponentiation etc. can be accelerated with slepc4py and its dependencies:

It is recommended to compile and install these yourself (see below), apart from MPI, which will generally already be provided if you are e.g. on a cluster.

For best performance of some routines (e.g. shift-invert eigen-solving), PETSc must be configured with certain options. Here is a rough overview of the steps for installing the above into a directory $SRC_DIR, with MPI and mpi4py already installed. $PATH_TO_YOUR_BLAS_LAPACK_LIB should point to e.g. OpenBLAS (libopenblas.so) or the MKL library (libmkl_rt.so). $COMPILE_FLAGS should be optimizations chosen for your compiler, e.g. for gcc "-O3 -march=native -s -DNDEBUG", or for icc "-O3 -xHost", etc.
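
As an illustrative sketch only, these variables might be set like so (the paths and flags are examples and should be adapted to your own system and compiler):

# example environment setup - adjust paths and flags for your machine
export SRC_DIR=$HOME/src
export PATH_TO_YOUR_BLAS_LAPACK_LIB=/usr/lib/libopenblas.so
export COMPILE_FLAGS="-O3 -march=native -s -DNDEBUG"
mkdir -p $SRC_DIR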

1.2.1. Build PETSc

cd $SRC_DIR
git clone https://bitbucket.org/petsc/petsc.git

export PETSC_DIR=$SRC_DIR/petsc
export PETSC_ARCH=arch-auto-complex

cd petsc
python2 ./configure \
  --download-mumps \
  --download-scalapack \
  --download-parmetis \
  --download-metis \
  --download-ptscotch \
  --with-debugging=0 \
  --with-blas-lapack-lib=$PATH_TO_YOUR_BLAS_LAPACK_LIB \
  COPTFLAGS="$COMPILE_FLAGS" \
  CXXOPTFLAGS="$COMPILE_FLAGS" \
  FOPTFLAGS="$COMPILE_FLAGS" \
  --with-scalar-type=complex
make all
make test
make streams NPMAX=4

1.2.2. Build SLEPc

cd $SRC_DIR
git clone https://bitbucket.org/slepc/slepc.git
export SLEPC_DIR=$SRC_DIR/slepc
cd slepc
python2 ./configure
make
make test

1.2.3. Build the python interfaces

cd $SRC_DIR
git clone https://bitbucket.org/petsc/petsc4py.git
git clone https://bitbucket.org/slepc/slepc4py.git

cd $SRC_DIR/petsc4py
python setup.py build
python setup.py install

cd $SRC_DIR/slepc4py
python setup.py build
python setup.py install
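
As an optional, illustrative sanity check, you can verify that the bindings import correctly:

python -c "import petsc4py, slepc4py; print(petsc4py.__version__, slepc4py.__version__)"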

Note

It is possible to compile several versions of PETSc/SLEPc side by side, for example a --with-scalar-type=real version, naming them with different values of PETSC_ARCH. When loading PETSc/SLEPc, quimb respects PETSC_ARCH if it is set, but it cannot dynamically switch between them.
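
For example, a rough sketch of adding a real-scalar build alongside the complex one above (the "..." stands for the remaining configure options shown earlier):

cd $PETSC_DIR
export PETSC_ARCH=arch-auto-real
python2 ./configure --with-scalar-type=real ...
make all

SLEPc (and possibly the python bindings) would also need building for the new arch; setting PETSC_ARCH to e.g. arch-auto-complex or arch-auto-real before importing quimb then selects which build is loaded.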