quimb.tensor.belief_propagation.bp_common

Classes

RollingDiffMean

Tracker for the absolute rolling mean of diffs between values, to assess effective convergence of BP above actual message tolerance.

BeliefPropagationCommon

Common interfaces for belief propagation algorithms.

Functions

prod(xs)

Product of all elements in xs.

initialize_hyper_messages(tn[, fill_fn, smudge_factor])

Initialize messages for belief propagation; this is equivalent to doing a single round of belief propagation with uniform messages.

combine_local_contractions(values[, backend, ...])

Combine a product of local contractions into a single value, avoiding overflow/underflow by accumulating the mantissa and exponent separately.

contract_hyper_messages(tn, messages[, ...])

Estimate the contraction of tn given messages, via the exponential of the Bethe free entropy.

compute_index_marginal(tn, ind, messages)

Compute the marginal for a single index given messages.

compute_tensor_marginal(tn, tid, messages)

Compute the marginal for the region surrounding a single tensor/factor given messages.

compute_all_index_marginals_from_messages(tn, messages)

Compute all index marginals from belief propagation messages.

normalize_message_pair(mi, mj)

Normalize a pair of messages such that <mi|mj> = 1 and <mi|mi> = <mj|mj> (but in general != 1).

maybe_get_thread_pool(thread_pool)

Get a thread pool if requested.

create_lazy_community_edge_map(tn[, site_tags, ...])

For lazy BP algorithms, create the data structures describing the effective graph of the lazily grouped ‘sites’ given by site_tags.

auto_add_indices(tn, regions)

Make sure all indices incident to any tensor in each region are included in the region.

process_loop_series_expansion_weights(weights[, ...])

Assuming a normalized BP fixed point, take a series of loop weights and iteratively compute the free energy by requiring self-consistency with exponential suppression factors.

Module Contents

quimb.tensor.belief_propagation.bp_common.prod(xs)

Product of all elements in xs.

class quimb.tensor.belief_propagation.bp_common.RollingDiffMean(size=16)

Tracker for the absolute rolling mean of diffs between values, to assess effective convergence of BP above actual message tolerance.

size = 16
diffs = []
last_x = None
dxsum = 0.0
update(x)
absmeandiff()
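
The attributes above suggest a simple sliding-window implementation. A minimal sketch of how such a tracker could work (an illustration, not necessarily quimb's exact code):

    class RollingDiffMeanSketch:
        """Track the absolute rolling mean of diffs between successive values."""

        def __init__(self, size=16):
            self.size = size
            self.diffs = []
            self.last_x = None
            self.dxsum = 0.0

        def update(self, x):
            # record the diff to the previous value, keeping a window of `size`
            if self.last_x is not None:
                dx = x - self.last_x
                self.diffs.append(dx)
                self.dxsum += dx / self.size
            if len(self.diffs) > self.size:
                self.dxsum -= self.diffs.pop(0) / self.size
            self.last_x = x

        def absmeandiff(self):
            # near zero once values merely fluctuate about a constant level
            return abs(self.dxsum)
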
class quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon(tn: quimb.tensor.TensorNetwork, *, damping=0.0, update='sequential', normalize=None, distance=None, contract_every=None, inplace=False)

Common interfaces for belief propagation algorithms.

Parameters:
  • tn (TensorNetwork) – The tensor network to perform belief propagation on.

  • damping (float or callable, optional) – The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being damping * old + (1 - damping) * new (see the sketch after this parameter list). This makes convergence more reliable but slower.

  • update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence, but parallel can possibly converge to different solutions.

  • normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None, choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, and is used by default for complex dtypes.

  • distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None, choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, and is used by default for complex dtypes if phased normalization is not already being used.

  • contract_every (int, optional) – If not None, ‘contract’ (via BP) the tensor network every contract_every iterations. The resulting values are stored in zvals at corresponding points zval_its.

  • inplace (bool, optional) – Whether to perform any operations inplace on the input tensor network.
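
As a concrete illustration of the damping rule described above (a sketch of the mixing step only, not of any full update scheme):

    def damp(old, new, damping=0.1):
        # mix part of the old message into the new one:
        # final = damping * old + (1 - damping) * new
        return damping * old + (1 - damping) * new
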

tn
backend
dtype
sign = 1.0
exponent
property damping
update = 'sequential'
property normalize
property distance
contract_every = None
n = 0
converged = False
mdiffs = []
rdiffs = []
zval_its = []
zvals = []
_maybe_contract()
run(max_iterations=1000, diis=False, tol=5e-06, tol_abs=None, tol_rolling_diff=None, info=None, progbar=False)
Parameters:
  • max_iterations (int, optional) – The maximum number of iterations to perform.

  • diis (bool or dict, optional) – Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {max_history, beta, rcond}.

  • tol (float, optional) – The convergence tolerance for messages.

  • tol_abs (float, optional) – The absolute convergence tolerance for maximum message update distance, if not given then taken as tol.

  • tol_rolling_diff (float, optional) – The rolling mean convergence tolerance for maximum message update distance, if not given then taken as tol. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking.

  • info (dict, optional) – If supplied, the following information will be added to it: converged (bool), iterations (int), max_mdiff (float), rolling_abs_mean_diff (float).

  • progbar (bool, optional) – Whether to show a progress bar.
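
A hedged usage sketch, assuming bp is an instance of some concrete subclass (the info keys are those documented above):

    info = {}
    bp.run(max_iterations=500, tol=1e-8, diis=True, info=info, progbar=True)
    print(info["converged"], info["iterations"], info["max_mdiff"])
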

plot(zvals_yscale='asinh', **kwargs)
property mdiff
abstractmethod iterate(tol=1e-06)

Perform a single iteration of belief propagation. Subclasses should implement this method, returning either max_mdiff or a dictionary containing max_mdiff and any other relevant information:

    {
        "nconv": nconv,
        "ncheck": ncheck,
        "max_mdiff": max_mdiff,
    }

abstractmethod contract(strip_exponent=False, check_zero=True, **kwargs)

Contract the tensor network and return the resulting value.
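
A minimal sketch of the subclassing contract implied by the two abstract methods (the update and contraction logic here are placeholders, not a real BP scheme):

    from quimb.tensor.belief_propagation.bp_common import BeliefPropagationCommon

    class MyBP(BeliefPropagationCommon):
        def iterate(self, tol=1e-6):
            nconv = ncheck = 0
            max_mdiff = 0.0
            # ... update each message here, tracking the largest change ...
            return {"nconv": nconv, "ncheck": ncheck, "max_mdiff": max_mdiff}

        def contract(self, strip_exponent=False, check_zero=True, **kwargs):
            # ... combine local contractions into the BP estimate ...
            raise NotImplementedError
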

__repr__()
quimb.tensor.belief_propagation.bp_common.initialize_hyper_messages(tn, fill_fn=None, smudge_factor=1e-12)

Initialize messages for belief propagation; this is equivalent to doing a single round of belief propagation with uniform messages.

Parameters:
  • tn (TensorNetwork) – The tensor network to initialize messages for.

  • fill_fn (callable, optional) – A function to fill the messages with, of signature fill_fn(shape).

  • smudge_factor (float, optional) – A small number to add to the messages to avoid numerical issues.

Returns:

messages – The initial messages. For every index and tensor id pair there will be a message in each direction, with keys (ix, tid) and (tid, ix).

Return type:

dict
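
A hedged usage sketch (the random regular network here is just an example input):

    import quimb.tensor as qtn
    from quimb.tensor.belief_propagation.bp_common import initialize_hyper_messages

    tn = qtn.TN_rand_reg(10, 3, D=2)  # any TensorNetwork should do
    messages = initialize_hyper_messages(tn)
    # assumed key convention: messages[ix, tid] flows index -> tensor,
    # messages[tid, ix] flows tensor -> index
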

quimb.tensor.belief_propagation.bp_common.combine_local_contractions(values, backend=None, strip_exponent=False, check_zero=True, mantissa=None, exponent=None)

Combine a product of local contractions into a single value, avoiding overflow/underflow by accumulating the mantissa and exponent separately.

Parameters:
  • values (sequence of (scalar, int)) – The values to combine, each paired with the integer power to which it is raised.

  • backend (str, optional) – The backend to use. Inferred from the first value if not given.

  • strip_exponent (bool, optional) – Whether to return the mantissa and exponent separately.

  • check_zero (bool, optional) – Whether to check for zero values and return zero early.

  • mantissa (float, optional) – The initial mantissa to accumulate into.

  • exponent (float, optional) – The initial exponent to accumulate into.

Returns:

result – The combined value, or the mantissa and exponent separately.

Return type:

float or (float, float)
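
The core idea can be sketched as follows (an illustration of the mantissa/exponent accumulation, not quimb's implementation; returning the pair (mantissa, exponent) instead would correspond to strip_exponent=True):

    import math

    def combine_sketch(values, mantissa=1.0, exponent=0.0):
        for value, power in values:
            if value == 0.0:
                return 0.0  # the check_zero short-circuit
            # split each factor into a mantissa and a base-10 exponent
            e = math.floor(math.log10(abs(value)))
            m = value / 10.0**e
            mantissa *= m**power
            exponent += e * power
        return mantissa * 10.0**exponent
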

quimb.tensor.belief_propagation.bp_common.contract_hyper_messages(tn, messages, strip_exponent=False, check_zero=True, backend=None)

Estimate the contraction of tn given messages, via the exponential of the Bethe free entropy.
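
For reference, the Bethe estimate referred to here can be written in the standard factor-graph convention (an assumption about the exact form used), treating tensors as factors and indices as variables:

    \[
    Z_{\mathrm{BP}} = \prod_{t} Z_t \prod_{i} Z_i^{\,1 - d_i},
    \qquad
    Z_t = \sum_{x_{\partial t}} T_t(x_{\partial t}) \prod_{i \in \partial t} m_{i \to t}(x_i),
    \qquad
    Z_i = \sum_{x_i} \prod_{t \ni i} m_{t \to i}(x_i),
    \]

where d_i is the number of tensors sharing index i, so that Z_BP is the exponential of the Bethe free entropy.
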

quimb.tensor.belief_propagation.bp_common.compute_index_marginal(tn, ind, messages)

Compute the marginal for a single index given messages.

Parameters:
  • tn (TensorNetwork) – The tensor network to compute the marginal for.

  • ind (str) – The index to compute the marginal for.

  • messages (dict) – The messages to use, which should match tn.

Returns:

marginal – The marginal probability distribution for the index ind.

Return type:

array_like
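
In spirit this is (a sketch, assuming messages keyed (tid, ix) for tensor-to-index flow, and that tn.ind_map maps each index to its incident tensor ids):

    import numpy as np

    def index_marginal_sketch(tn, ind, messages):
        m = np.array(1.0)
        for tid in tn.ind_map[ind]:      # tensors incident to this index
            m = m * messages[tid, ind]   # product of incoming messages
        return m / m.sum()               # normalize to a distribution
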

quimb.tensor.belief_propagation.bp_common.compute_tensor_marginal(tn, tid, messages)

Compute the marginal for the region surrounding a single tensor/factor given messages.

Parameters:
  • tn (TensorNetwork) – The tensor network to compute the marginal for.

  • tid (int) – The tensor id to compute the marginal for.

  • messages (dict) – The messages to use, which should match tn.

Returns:

marginal – The marginal probability distribution for the tensor/factor tid.

Return type:

array_like
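
A corresponding sketch (assuming index-to-tensor messages keyed (ix, tid), and using Tensor.multiply_index_diagonal_, which scales a tensor along one index):

    def tensor_marginal_sketch(tn, tid, messages):
        t = tn.tensor_map[tid].copy()
        for ix in t.inds:
            # weight the tensor by each incoming index -> tensor message
            t.multiply_index_diagonal_(ix, messages[ix, tid])
        m = t.data
        return m / m.sum()
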

quimb.tensor.belief_propagation.bp_common.compute_all_index_marginals_from_messages(tn, messages)

Compute all index marginals from belief propagation messages.

Parameters:
  • tn (TensorNetwork) – The tensor network to compute marginals for.

  • messages (dict) – The belief propagation messages.

Returns:

marginals – The marginals for each index.

Return type:

dict
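
Equivalent in spirit to (a sketch):

    marginals = {
        ix: compute_index_marginal(tn, ix, messages) for ix in tn.ind_map
    }
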

quimb.tensor.belief_propagation.bp_common.normalize_message_pair(mi, mj)

Normalize a pair of messages such that <mi|mj> = 1 and <mi|mi> = <mj|mj> (but in general != 1).
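
One scaling that satisfies both conditions, for real messages with positive overlap (a sketch; the complex case needs care with conjugation):

    import numpy as np

    def normalize_message_pair_sketch(mi, mj):
        nij = np.vdot(mi, mj)   # overlap <mi|mj>, assumed > 0
        nii = np.vdot(mi, mi)   # <mi|mi>
        njj = np.vdot(mj, mj)   # <mj|mj>
        a = np.sqrt(nij) * (nii / njj) ** 0.25
        b = np.sqrt(nij) * (njj / nii) ** 0.25
        # now <mi/a|mj/b> = 1 and <mi/a|mi/a> = <mj/b|mj/b>
        return mi / a, mj / b
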

quimb.tensor.belief_propagation.bp_common.maybe_get_thread_pool(thread_pool)

Get a thread pool if requested.

quimb.tensor.belief_propagation.bp_common.create_lazy_community_edge_map(tn, site_tags=None, rank_simplify=True)

For lazy BP algorithms, create the data structures describing the effective graph of the lazily grouped ‘sites’ given by site_tags.

quimb.tensor.belief_propagation.bp_common.auto_add_indices(tn, regions)

Make sure all indices incident to any tensor in each region are included in the region.

quimb.tensor.belief_propagation.bp_common.process_loop_series_expansion_weights(weights, mantissa=1.0, exponent=0.0, multi_excitation_correct=True, maxiter_correction=100, tol_correction=1e-14, strip_exponent=False, return_all=False)

Assuming a normalized BP fixed point, take a series of loop weights, and iteratively compute the free energy by requiring self-consistency with exponential suppression factors. See https://arxiv.org/abs/2409.03108.
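
A hedged usage sketch (the return convention is inferred from the signature, so treat it as an assumption):

    # `weights` would come from enumerating loop corrections at a BP fixed point
    f = process_loop_series_expansion_weights(weights)
    # or keep mantissa and exponent separate to avoid over/underflow:
    m, e = process_loop_series_expansion_weights(weights, strip_exponent=True)
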