quimb.linalg.mpi_launcher

Manages the spawning of MPI processes to send to the various solvers.

Attributes

_NUM_THREAD_WORKERS

QUIMB_MPI_LAUNCHED

ALLOW_SPAWN

NUM_MPI_WORKERS

Classes

SyncroFuture

SynchroMPIPool

An object that looks like a concurrent.futures executor but actually distributes tasks in a round-robin fashion to MPI workers, before broadcasting the results to each other.

CachedPoolWithShutdown

Decorator for caching the mpi pool when called with the equivalent args, and shutting down previous ones when not needed.

GetMPIBeforeCall

Wrap a function to automatically get the correct communicator before it is called, and to set the comm_self kwarg to allow forced self mode.

SpawnMPIProcessesFunc

Automatically wrap a function to be executed in parallel by a pool of mpi workers.

Functions

eigs_slepc(A, k, *[, B, which, sigma, isherm, P, v0, ...])

Solve an eigenproblem using the advanced SLEPc eigensystem solver.

svds_slepc(A[, k, ncv, return_vecs, SVDType, ...])

Find the singular values of the sparse matrix A.

mfn_multiply_slepc(mat, vec[, fntype, MFNType, comm, ...])

Compute the action of func(mat) @ vec.

ssolve_slepc(A, y[, isherm, comm, maxiter, tol, ...])

can_use_mpi_pool()

Function to determine whether we are allowed to call get_mpi_pool.

bcast(result, comm, result_rank)

Broadcast a result to all workers, dispatching to proper MPI (rather than pickled) communication if the result is a numpy array.

get_mpi_pool([num_workers, num_threads])

Get the MPI executor pool, with specified number of processes and threads per process.

Module Contents

quimb.linalg.mpi_launcher.eigs_slepc(A, k, *, B=None, which=None, sigma=None, isherm=True, P=None, v0=None, ncv=None, return_vecs=True, sort=True, EPSType=None, return_all_conv=False, st_opts=None, tol=None, maxiter=None, l_win=None, comm=None)[source]

Solve an eigenproblem using the advanced SLEPc eigensystem solver.

Parameters:
  • A (dense-matrix, sparse-matrix, LinearOperator or callable) – Operator to solve.

  • k (int, optional) – Number of requested eigenpairs.

  • B (dense-matrix, sparse-matrix, LinearOperator or callable) – The RHS operator defining a generalized eigenproblem.

  • which ({"LM", "SM", "LR", "LA", "SR", "SA", "LI", "SI", "TM", "TR", "TI"}) – Which eigenpairs to target. See scipy.sparse.linalg.eigs().

  • sigma (float, optional) – Target eigenvalue; if sigma is set but which is not, which='TR' is implied.

  • isherm (bool, optional) – Whether the problem is hermitian.

  • P (dense-matrix, sparse-matrix, LinearOperator or callable) – Perform the eigensolve in the subspace defined by this projector.

  • v0 (1D-array like, optional) – Initial iteration vector, e.g., informed guess at eigenvector.

  • ncv (int, optional) – Subspace size, defaults to min(20, 2 * k).

  • return_vecs (bool, optional) – Whether to return the eigenvectors.

  • sort (bool, optional) – Whether to sort the eigenpairs in ascending real value.

  • EPSType ({'krylovschur', 'gd', 'lobpcg', 'jd', 'ciss'}, optional) – SLEPc eigensolver type to use, see slepc4py.EPSType.

  • return_all_conv (bool, optional) – Whether to return converged eigenpairs beyond the requested subspace size.

  • st_opts (dict, optional) – Options to send to the eigensolver's internal inverter.

  • tol (float, optional) – Tolerance.

  • maxiter (int, optional) – Maximum number of iterations.

  • comm (mpi4py communicator, optional) – MPI communicator, defaults to COMM_SELF for a single process solve.

Returns:

  • lk ((k,) array) – The eigenvalues.

  • vk ((m, k) array) – Corresponding eigenvectors (if return_vecs=True).
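
For orientation, a minimal single-process call might look like the following sketch. It assumes a working petsc4py/slepc4py install; the random hermitian test operator is built with scipy purely for illustration.

>>> import scipy.sparse as sp
>>> from quimb.linalg.mpi_launcher import eigs_slepc
>>> A = sp.random(256, 256, density=0.05, format='csr')
>>> A = A + A.conj().T  # symmetrize so that isherm=True holds
>>> lk, vk = eigs_slepc(A, k=4, which='SA')  # 4 smallest-algebraic pairs
>>> lk.shape, vk.shape  # per the docs: (4,) and (256, 4)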

quimb.linalg.mpi_launcher.svds_slepc(A, k=6, ncv=None, return_vecs=True, SVDType='cross', return_all_conv=False, tol=None, maxiter=None, comm=None)[source]

Find the singular values of the sparse matrix A.

Parameters:
  • A (sparse matrix in csr format) – The matrix to solve.

  • k (int) – Number of requested singular values.

  • SVDType ({"cross", "cyclic", "lanczos", "trlanczos"}) – Solver method to use.

Returns:

  • U ((m, k) array) – Left singular vectors (if return_vecs=True) as columns.

  • s ((k,) array) – Singular values.

  • VH ((k, n) array) – Right singular vectors (if return_vecs=True) as rows.
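
A hedged sketch along the same lines (again assuming petsc4py/slepc4py are available):

>>> import scipy.sparse as sp
>>> from quimb.linalg.mpi_launcher import svds_slepc
>>> A = sp.random(100, 80, density=0.1, format='csr')
>>> U, s, VH = svds_slepc(A, k=5)
>>> U.shape, s.shape, VH.shape  # per the docs: (100, 5), (5,), (5, 80)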

quimb.linalg.mpi_launcher.mfn_multiply_slepc(mat, vec, fntype='exp', MFNType='AUTO', comm=None, isherm=False)[source]

Compute the action of func(mat) @ vec.

Parameters:
  • mat (operator) – Operator to compute function action of.

  • vec (vector-like) – Vector to compute matrix function action on.

  • fntype ({'exp', 'sqrt', 'log'}, optional) – Function to use.

  • MFNType ({'krylov', 'expokit'}, optional) – Method of computing the matrix function action; 'expokit' is only available for fntype='exp'.

  • comm (mpi4py.MPI.Comm instance, optional) – The mpi communicator.

  • isherm (bool, optional) – If mat is known to be hermitian, this might speed things up in some circumstances.

Returns:

fvec – The vector output of func(mat) @ vec.

Return type:

array
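
For instance, the action of the matrix exponential on a vector might be computed as in this sketch (the hermitian test operator is arbitrary):

>>> import numpy as np
>>> import scipy.sparse as sp
>>> from quimb.linalg.mpi_launcher import mfn_multiply_slepc
>>> mat = sp.random(64, 64, density=0.1, format='csr')
>>> mat = mat + mat.conj().T  # hermitian, so isherm=True is safe
>>> vec = np.random.randn(64)
>>> fvec = mfn_multiply_slepc(mat, vec, fntype='exp', isherm=True)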

quimb.linalg.mpi_launcher.ssolve_slepc(A, y, isherm=True, comm=None, maxiter=None, tol=None, KSPType='preonly', PCType='lu', PCFactorSolverType='mumps')[source]
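
Judging from the signature, this appears to solve the sparse linear system A @ x = y with PETSc's KSP, defaulting to a direct LU factorization via MUMPS. A hedged sketch (that the solution vector is returned is an assumption):

>>> import numpy as np
>>> import scipy.sparse as sp
>>> from quimb.linalg.mpi_launcher import ssolve_slepc
>>> A = sp.random(50, 50, density=0.2, format='csr')
>>> A = A + A.conj().T + 10 * sp.identity(50)  # well-conditioned, hermitian
>>> y = np.random.randn(50)
>>> x = ssolve_slepc(A, y)  # assumed to return the solution of A @ x = y
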
quimb.linalg.mpi_launcher._NUM_THREAD_WORKERS
quimb.linalg.mpi_launcher.QUIMB_MPI_LAUNCHED
quimb.linalg.mpi_launcher.ALLOW_SPAWN
quimb.linalg.mpi_launcher.NUM_MPI_WORKERS
quimb.linalg.mpi_launcher.can_use_mpi_pool()[source]

Function to determine whether we are allowed to call get_mpi_pool.

quimb.linalg.mpi_launcher.bcast(result, comm, result_rank)[source]

Broadcast a result to all workers, dispatching to proper MPI (rather than pickled) communication if the result is a numpy array.
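
A hedged sketch of use under mpi4py, assuming the broadcast object is returned on every rank:

>>> import numpy as np
>>> from mpi4py import MPI
>>> from quimb.linalg.mpi_launcher import bcast
>>> comm = MPI.COMM_WORLD
>>> result = np.arange(4.0) if comm.Get_rank() == 0 else None
>>> result = bcast(result, comm, result_rank=0)  # now defined on all ranks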

class quimb.linalg.mpi_launcher.SyncroFuture(result, result_rank, comm)[source]
result()[source]
static cancel()[source]
class quimb.linalg.mpi_launcher.SynchroMPIPool[source]

An object that looks like a concurrent.futures executor but actually distributes tasks in a round-robin fashion to MPI workers, before broadcasting the results to each other.

submit(fn, *args, **kwargs)[source]
shutdown()[source]
class quimb.linalg.mpi_launcher.CachedPoolWithShutdown(pool_fn)[source]

Decorator for caching the mpi pool when called with the equivalent args, and shutting down previous ones when not needed.

__call__(num_workers=None, num_threads=1)[source]
quimb.linalg.mpi_launcher.get_mpi_pool(num_workers=None, num_threads=1)

Get the MPI executor pool, with specified number of processes and threads per process.
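
A sketch of typical use, guarded by can_use_mpi_pool() (the submitted function here is arbitrary):

>>> from quimb.linalg.mpi_launcher import can_use_mpi_pool, get_mpi_pool
>>> if can_use_mpi_pool():
...     pool = get_mpi_pool(num_workers=2)
...     future = pool.submit(sum, [1, 2, 3])
...     total = future.result()  # executor-style future, per the classes above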

class quimb.linalg.mpi_launcher.GetMPIBeforeCall(fn)[source]

Bases: object

Wrap a function to automatically get the correct communicator before it is called, and to set the comm_self kwarg to allow forced self mode.

This is called by every mpi process before the function evaluation.

__call__(*args, comm_self=False, wait_for_workers=None, **kwargs)[source]
Parameters:
  • args – Supplied to self.fn.

  • comm_self (bool, optional) – Whether to force use of MPI.COMM_SELF.

  • wait_for_workers (int, optional) – If set, wait for the communicator to have this many workers; this can help catch errors regarding expected worker numbers.

  • kwargs – Supplied to self.fn.

class quimb.linalg.mpi_launcher.SpawnMPIProcessesFunc(fn)[source]

Bases: object

Automatically wrap a function to be executed in parallel by a pool of mpi workers.

This is only called by the master mpi process in manual mode, only by the (non-mpi) spawning process in automatic mode, or by all processes in syncro mode.

__call__(*args, num_workers=None, num_threads=1, mpi_pool=None, spawn_all=USE_SYNCRO or not ALREADY_RUNNING_AS_MPI, **kwargs)[source]
Parameters:
  • args – Supplied to self.fn.

  • num_workers (int, optional) – How many processes in total should run the function in parallel.

  • num_threads (int, optional) – How many (OMP) threads each process should use.

  • mpi_pool (pool-like, optional) – If not None (default), submit function to this pool.

  • spawn_all (bool, optional) – Whether to spawn all the parallel processes (True), or only num_workers - 1 of them, so that the current process can also do work.

  • kwargs – Supplied to self.fn.

Return type:

The output of fn from the master process.

quimb.linalg.mpi_launcher.eigs_slepc_mpi[source]
quimb.linalg.mpi_launcher.eigs_slepc_spawn[source]
quimb.linalg.mpi_launcher.svds_slepc_mpi[source]
quimb.linalg.mpi_launcher.svds_slepc_spawn[source]
quimb.linalg.mpi_launcher.mfn_multiply_slepc_mpi[source]
quimb.linalg.mpi_launcher.mfn_multiply_slepc_spawn[source]
quimb.linalg.mpi_launcher.ssolve_slepc_mpi[source]
quimb.linalg.mpi_launcher.ssolve_slepc_spawn[source]
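
These module-level attributes are, presumably, the solver functions above pre-wrapped with GetMPIBeforeCall (the *_mpi variants) and additionally with SpawnMPIProcessesFunc (the *_spawn variants), so the latter accept the extra num_workers/num_threads keywords documented for SpawnMPIProcessesFunc.__call__. A hedged sketch:

>>> import scipy.sparse as sp
>>> from quimb.linalg.mpi_launcher import eigs_slepc_spawn
>>> A = sp.random(256, 256, density=0.05, format='csr')
>>> A = A + A.conj().T  # hermitian test operator
>>> lk, vk = eigs_slepc_spawn(A, k=4, num_workers=2)  # run on 2 MPI workers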