quimb.linalg.mpi_launcher

Manages the spawning of MPI processes to send to the various solvers.

Attributes

Classes

SyncroFuture

SynchroMPIPool

An object that looks like a concurrent.futures executor but actually distributes tasks to MPI workers in a round-robin fashion, before broadcasting the results to all workers.

CachedPoolWithShutdown

Decorator for caching the MPI pool when called with the equivalent args, and shutting down previous ones when not needed.

GetMPIBeforeCall

Wrap a function to automatically get the correct communicator before it is called, and to set the comm_self kwarg to allow forced self mode.

SpawnMPIProcessesFunc

Automatically wrap a function to be executed in parallel by a pool of MPI workers.

Functions

can_use_mpi_pool()

Function to determine whether we are allowed to call get_mpi_pool.

bcast(result, comm, result_rank)

Broadcast a result to all workers, dispatching to proper MPI (rather than pickled) communication if the result is a numpy array.

get_mpi_pool([num_workers, num_threads])

Get the MPI executor pool, with specified number of processes and threads per process.

Module Contents

quimb.linalg.mpi_launcher.QUIMB_MPI_LAUNCHED
quimb.linalg.mpi_launcher.ALLOW_SPAWN
quimb.linalg.mpi_launcher.NUM_MPI_WORKERS
quimb.linalg.mpi_launcher.can_use_mpi_pool()[source]

Function to determine whether we are allowed to call get_mpi_pool.

quimb.linalg.mpi_launcher.bcast(result, comm, result_rank)[source]

Broadcast a result to all workers, dispatching to proper MPI (rather than pickled) communication if the result is a numpy array.

class quimb.linalg.mpi_launcher.SyncroFuture(result, result_rank, comm)[source]
_result
result_rank
comm
result()[source]
static cancel()[source]
class quimb.linalg.mpi_launcher.SynchroMPIPool[source]

An object that looks like a concurrent.futures executor but actually distributes tasks to MPI workers in a round-robin fashion, before broadcasting the results to all workers.

comm
size
rank
counter
_max_workers
submit(fn, *args, **kwargs)[source]
shutdown()[source]
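The round-robin dispatch that SynchroMPIPool performs can be sketched with a toy, MPI-free class: each rank keeps a shared submission counter, and a rank only executes the tasks whose turn matches its own rank (in the real pool the result is then broadcast so every rank sees it). The names and structure below are illustrative, not quimb's actual internals.

```python
from concurrent.futures import Future

class RoundRobinSketch:
    """Toy illustration of round-robin task dispatch (no real MPI)."""

    def __init__(self, size, rank):
        self.size = size    # total number of MPI workers
        self.rank = rank    # this process's rank
        self.counter = 0    # shared submission counter (same on every rank)

    def submit(self, fn, *args, **kwargs):
        # whose turn is it? every rank computes the same answer
        this_turn = self.counter % self.size
        self.counter += 1
        fut = Future()
        if this_turn == self.rank:
            # it is our turn: actually run the task
            fut.set_result(fn(*args, **kwargs))
        else:
            # another rank runs it; the real pool would receive the
            # result via an MPI broadcast instead of None
            fut.set_result(None)
        return fut
```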
class quimb.linalg.mpi_launcher.CachedPoolWithShutdown(pool_fn)[source]

Decorator for caching the mpi pool when called with the equivalent args, and shutting down previous ones when not needed.

_settings = '__UNINITIALIZED__'
_pool_fn
__call__(num_workers=None, num_threads=1)[source]
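The caching behaviour can be sketched in a few lines: the wrapper remembers the last (num_workers, num_threads) settings, returns the cached pool when called with the same ones, and shuts the old pool down before building a new one when they change. This is a simplified sketch, not quimb's implementation.

```python
class CachedPoolSketch:
    """Simplified sketch of caching an executor pool with shutdown
    of stale pools when the requested settings change."""

    def __init__(self, pool_fn):
        self._settings = "__UNINITIALIZED__"
        self._pool_fn = pool_fn
        self._pool = None

    def __call__(self, num_workers=None, num_threads=1):
        settings = (num_workers, num_threads)
        if settings != self._settings:
            # settings changed: retire the previous pool first
            if self._pool is not None:
                self._pool.shutdown()
            self._pool = self._pool_fn(num_workers, num_threads)
            self._settings = settings
        return self._pool
```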
quimb.linalg.mpi_launcher.get_mpi_pool(num_workers=None, num_threads=1)

Get the MPI executor pool, with specified number of processes and threads per process.

class quimb.linalg.mpi_launcher.GetMPIBeforeCall(fn)[source]

Bases: object

Wrap a function to automatically get the correct communicator before it is called, and to set the comm_self kwarg to allow forced self mode.

This is called by every mpi process before the function evaluation.

fn
__call__(*args, comm_self=False, wait_for_workers=None, **kwargs)[source]
Parameters:
  • args – Supplied to self.fn

  • comm_self (bool, optional) – Whether to force use of MPI.COMM_SELF

  • wait_for_workers (int, optional) – If set, wait for the communicator to have this many workers, this can help to catch some errors regarding expected worker numbers.

  • kwargs – Supplied to self.fn

class quimb.linalg.mpi_launcher.SpawnMPIProcessesFunc(fn)[source]

Bases: object

Automatically wrap a function to be executed in parallel by a pool of mpi workers.

This is only called by the master mpi process in manual mode, only by the (non-mpi) spawning process in automatic mode, or by all processes in syncro mode.

fn
__call__(*args, num_workers=None, num_threads=1, mpi_pool=None, spawn_all=USE_SYNCRO or not ALREADY_RUNNING_AS_MPI, **kwargs)[source]
Parameters:
  • args – Supplied to self.fn.

  • num_workers (int, optional) – How many total processes should run the function in parallel.

  • num_threads (int, optional) – How many (OMP) threads each process should use.

  • mpi_pool (pool-like, optional) – If not None (default), submit function to this pool.

  • spawn_all (bool, optional) – Whether all the parallel processes should be spawned (True), or num_workers - 1, so that the current process can also do work.

  • kwargs – Supplied to self.fn.

Return type:

fn output from the master process.
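The spawn_all parameter's effect on the number of newly launched processes can be stated as a one-line rule. The helper below is purely illustrative (it is not part of quimb's API): with spawn_all=False the calling process also acts as a worker, so one fewer process is spawned.

```python
def n_procs_to_spawn(num_workers, spawn_all):
    """Illustrative helper: how many extra MPI processes to launch.

    If spawn_all is False, the current process does work too,
    so only num_workers - 1 new processes are needed.
    """
    return num_workers if spawn_all else num_workers - 1
```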

quimb.linalg.mpi_launcher.eigs_slepc_mpi[source]
quimb.linalg.mpi_launcher.eigs_slepc_spawn[source]
quimb.linalg.mpi_launcher.svds_slepc_mpi[source]
quimb.linalg.mpi_launcher.svds_slepc_spawn[source]
quimb.linalg.mpi_launcher.mfn_multiply_slepc_mpi[source]
quimb.linalg.mpi_launcher.mfn_multiply_slepc_spawn[source]
quimb.linalg.mpi_launcher.ssolve_slepc_mpi[source]
quimb.linalg.mpi_launcher.ssolve_slepc_spawn[source]