quimb.experimental.belief_propagation

Belief propagation (BP) routines. There are three ways to categorize BP, and each combination of the categories below gives a potentially valid specific algorithm.

1-norm vs 2-norm BP

  • 1-norm (normal): BP runs directly on the tensor network, messages have size d where d is the size of the bond(s) connecting two tensors or regions.

  • 2-norm (quantum): BP runs on the squared tensor network, messages have size d^2 where d is the size of the bond(s) connecting two tensors or regions. Each local tensor or region is partially traced (over dangling indices) with its conjugate to create a single node.
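
As a concrete illustration of the message sizes (a pure numpy sketch, not the quimb implementation), consider a single tensor with two bonds of dimension d: its 1-norm message is a size-d vector, while its 2-norm message, formed by pairing the tensor with its conjugate, lives on the doubled bond and is a d x d matrix:

```python
import numpy as np

d = 3
rng = np.random.default_rng(42)
T = rng.normal(size=(d, d))   # a tensor with two bond indices

# 1-norm: messages are vectors of size d
m_in = np.ones(d)             # incoming message on one bond
m_out = T @ m_in              # outgoing message on the other bond
assert m_out.shape == (d,)

# 2-norm: pair T with its conjugate; messages live on the doubled
# bond and are d x d matrices (i.e. vectors of size d^2)
M_in = np.eye(d)              # incoming 2-norm message (identity init)
M_out = np.einsum('ab,bj,ij->ai', T, M_in, T.conj())
assert M_out.shape == (d, d)
```

With the identity as incoming message, `M_out` here reduces to `T @ T.conj().T`, which is Hermitian and positive semi-definite, as 2-norm messages generally are at a fixed point.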

Graph vs Hypergraph BP

  • Graph (simple): the tensor network lives on a graph, where indices either appear on two tensors (a bond), or appear on a single tensor (are outputs). In this case, messages are exchanged directly between tensors.

  • Hypergraph: the tensor network lives on a hypergraph, where indices can appear on any number of tensors. In this case, the update procedure has two parts: first, all ‘tensor’ messages are computed; these are then used in the second step to compute all the ‘index’ messages, which are then fed back into the ‘tensor’ message update, and so forth. For 2-norm BP one likely needs to specify which indices are outputs and should be traced over.

The hypergraph case of course includes the graph case, but since the ‘index’ message update is simply the identity, it is convenient to have a separate simpler implementation, where the standard TN bond vs physical index definitions hold.
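
A minimal sketch of the two-step hyper update, using plain numpy on a single hyper index shared by three rank-1 tensors (illustrative only, not the quimb implementation):

```python
import numpy as np

d = 2
rng = np.random.default_rng(0)
# one hyper index 'x' of size d shared by three vectors (rank-1 tensors)
tensors = [rng.random(d) for _ in range(3)]

# step 1: 'tensor' messages -- each tensor contracts over all its other
# indices (here it has none) and sends the result along the hyper index
t_msgs = [t.copy() for t in tensors]

# step 2: 'index' messages -- the hyper index sends each tensor the
# elementwise product of the messages from all *other* tensors
prod = np.prod(t_msgs, axis=0)
i_msgs = [prod / m for m in t_msgs]   # divide out own contribution
```

On this single-hyperedge network BP is exact: summing `prod` over the index recovers the full contraction `np.einsum('x,x,x->', *tensors)`.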

Dense vs Vectorized vs Lazy BP

  • Dense: each node is a single tensor, or pair of tensors for 2-norm BP. If all multibonds have been fused, then each message is a vector (1-norm case) or matrix (2-norm case).

  • Vectorized: the same as the above, but all matching tensor and message updates are stacked and performed simultaneously. This can be enormously more efficient for large numbers of small tensors.

  • Lazy: each node is potentially a tensor network itself, with arbitrary inner structure and number of bonds connecting to other nodes. The messages are generally tensors, and each update is a lazy contraction, which is potentially much cheaper / requires less memory than forming the ‘dense’ node for large tensors.

(There is also the MPS flavor where each node has a 1D structure and the messages are matrix product states, with updates involving compression.)

Overall that gives 12 possible BP flavors, some implemented here:

  • [x] (HD1BP) hyper, dense, 1-norm - this is the standard BP algorithm

  • [x] (HD2BP) hyper, dense, 2-norm

  • [x] (HV1BP) hyper, vectorized, 1-norm

  • [ ] (HV2BP) hyper, vectorized, 2-norm

  • [ ] (HL1BP) hyper, lazy, 1-norm

  • [ ] (HL2BP) hyper, lazy, 2-norm

  • [x] (D1BP) simple, dense, 1-norm - simple BP for simple tensor networks

  • [x] (D2BP) simple, dense, 2-norm - this is the standard PEPS BP algorithm

  • [ ] (V1BP) simple, vectorized, 1-norm

  • [ ] (V2BP) simple, vectorized, 2-norm

  • [x] (L1BP) simple, lazy, 1-norm

  • [x] (L2BP) simple, lazy, 2-norm

The 2-norm methods can be used to compress bonds or estimate the 2-norm. The 1-norm methods can be used to estimate the 1-norm, i.e. the contracted value. Both methods can be used to compute index marginals and thus perform sampling.
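
To illustrate how the 1-norm methods estimate the contracted value, here is a hand-rolled BP sweep on a three-tensor chain in plain numpy (a conceptual mini version of the dense 1-norm scheme, not the library code). On a tree like this, BP is exact:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.random(d)          # leaf tensor on bond i
B = rng.random((d, d))     # middle tensor on bonds i, j
C = rng.random(d)          # leaf tensor on bond j

# on a tree, converged messages are found in one sweep in from the leaves
m_A_to_B = A               # message along bond i
m_C_to_B = C               # message along bond j
m_B_to_A = B @ m_C_to_B    # absorb message on j, send along i
m_B_to_C = m_A_to_B @ B    # absorb message on i, send along j

# BP estimate: product of local contractions divided by bond
# normalizations (the inner products of opposing messages)
Z_A = A @ m_B_to_A
Z_B = m_A_to_B @ B @ m_C_to_B
Z_C = C @ m_B_to_C
Z_i = m_A_to_B @ m_B_to_A
Z_j = m_B_to_C @ m_C_to_B
Z_bp = Z_A * Z_B * Z_C / (Z_i * Z_j)

Z_exact = np.einsum('i,ij,j->', A, B, C)
assert np.isclose(Z_bp, Z_exact)   # exact on a tree
```

On loopy networks the same formula gives an approximation rather than the exact value.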

The vectorized methods can be extremely fast for large numbers of small tensors, but do currently require all dimensions to match.
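
A sketch of why matching dimensions matter: when all tensors and messages share a shape, they can be stacked along a batch axis and updated with a single batched einsum (illustrative numpy only, not the quimb internals):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 2                      # many small messages, all of size d
T = rng.random((n, d, d))           # a stack of n matching 2-index tensors
M = rng.random((n, d))              # a stack of n incoming messages

# one batched einsum updates all n outgoing messages simultaneously,
# instead of looping over n separate small contractions
M_new = np.einsum('nab,nb->na', T, M)

# identical to the unbatched loop, just much faster for large n
loop = np.stack([T[k] @ M[k] for k in range(n)])
assert np.allclose(M_new, loop)
```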

The dense and lazy methods can converge messages locally, i.e. only update messages adjacent to messages which have changed.

Submodules

Classes

D1BP

Dense (as in one tensor per site) 1-norm (as in for 'classical' systems) belief propagation algorithm.

D2BP

Dense (as in one tensor per site) 2-norm (as in for wavefunctions and operators) belief propagation.

HD1BP

Object interface for hyper, dense, 1-norm belief propagation. This is standard belief propagation in tensor network form.

HV1BP

Object interface for hyper, vectorized, 1-norm belief propagation. This is the fast version possible when there are many small tensors of matching size.

L1BP

Lazy 1-norm belief propagation. BP is run between groups of tensors defined by site_tags.

L2BP

Lazy (as in multiple uncontracted tensors per site) 2-norm (as in for wavefunctions and operators) belief propagation.

RegionGraph

Functions

initialize_hyper_messages(tn[, fill_fn, smudge_factor])

Initialize messages for belief propagation; this is equivalent to doing a single round of belief propagation with uniform messages.

contract_d1bp(tn[, max_iterations, tol, damping, ...])

Estimate the contraction of standard tensor network tn using dense 1-norm belief propagation.

compress_d2bp(tn, max_bond[, cutoff, cutoff_mode, ...])

Compress the tensor network tn using dense 2-norm belief propagation.

contract_d2bp(tn[, messages, output_inds, optimize, ...])

Estimate the norm squared of tn using dense 2-norm belief propagation.

sample_d2bp(tn[, output_inds, messages, ...])

Sample a configuration from tn using dense 2-norm belief propagation.

contract_hd1bp(tn[, messages, max_iterations, tol, ...])

Estimate the contraction of tn with hyper, dense, 1-norm belief propagation.

sample_hd1bp(tn[, messages, output_inds, ...])

Sample all indices of a tensor network using repeated belief propagation runs and decimation.

contract_hv1bp(tn[, messages, max_iterations, tol, ...])

Estimate the contraction of tn with hyper, vectorized, 1-norm belief propagation.

sample_hv1bp(tn[, messages, output_inds, ...])

Sample all indices of a tensor network using repeated belief propagation runs and decimation.

contract_l1bp(tn[, max_iterations, tol, site_tags, ...])

Estimate the contraction of tn using lazy 1-norm belief propagation.

compress_l2bp(tn, max_bond[, cutoff, cutoff_mode, ...])

Compress tn using lazy belief propagation, producing a tensor network with a single tensor per site.

contract_l2bp(tn[, site_tags, damping, update, ...])

Estimate the norm squared of tn using lazy belief propagation.

Package Contents

quimb.experimental.belief_propagation.initialize_hyper_messages(tn, fill_fn=None, smudge_factor=1e-12)

Initialize messages for belief propagation; this is equivalent to doing a single round of belief propagation with uniform messages.

Parameters:
  • tn (TensorNetwork) – The tensor network to initialize messages for.

  • fill_fn (callable, optional) – A function to fill the messages with, of signature fill_fn(shape).

  • smudge_factor (float, optional) – A small number to add to the messages to avoid numerical issues.

Returns:

messages – The initial messages. For every index and tensor id pair, there will be a message to and from with keys (ix, tid) and (tid, ix).

Return type:

dict
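
A toy sketch of the resulting message structure, with a hypothetical `tn_inds` mapping standing in for a real TensorNetwork (the helper `init_messages` here is illustrative, not the library function):

```python
import numpy as np

# hypothetical mini network: map each tensor id to its index sizes
tn_inds = {0: {'a': 2, 'b': 3}, 1: {'a': 2}, 2: {'b': 3}}

def init_messages(tn_inds, fill_fn=None):
    """Create uniform messages both ways for every (index, tensor id) pair."""
    fill_fn = fill_fn or np.ones
    messages = {}
    for tid, inds in tn_inds.items():
        for ix, d in inds.items():
            messages[ix, tid] = fill_fn((d,))   # index -> tensor message
            messages[tid, ix] = fill_fn((d,))   # tensor -> index message
    return messages

msgs = init_messages(tn_inds)
assert ('a', 0) in msgs and (0, 'a') in msgs
assert msgs['b', 2].shape == (3,)
```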

class quimb.experimental.belief_propagation.D1BP(tn, messages=None, damping=0.0, update='sequential', local_convergence=True, message_init_function=None)

Bases: quimb.experimental.belief_propagation.bp_common.BeliefPropagationCommon

Dense (as in one tensor per site) 1-norm (as in for ‘classical’ systems) belief propagation algorithm. Allows message reuse. This version assumes no hyper indices (i.e. a standard tensor network). This is the simplest version of belief propagation.

Parameters:
  • tn (TensorNetwork) – The tensor network to run BP on.

  • messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.

  • damping (float, optional) – The damping factor to use, 0.0 means no damping.

  • update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially or in parallel.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • fill_fn (callable, optional) – If specified, use this function to fill in the initial messages.

tn

The target tensor network.

Type:

TensorNetwork

messages

The current messages. The key is a tuple of the index and tensor id that the message is being sent to.

Type:

dict[(str, int), array_like]

key_pairs

A dictionary mapping the key of a message to the key of the message propagating in the opposite direction.

Type:

dict[(str, int), (str, int)]

tn
damping = 0.0
local_convergence = True
update = 'sequential'
backend
_normalize
_distance
touched
key_pairs
iterate(tol=5e-06)
normalize_messages()

Normalize all messages such that for each bond <m_i|m_j> = 1 and <m_i|m_i> = <m_j|m_j> (but in general != 1).
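
One way to realize this condition for a single pair of vector messages (a sketch; the library's implementation may differ in detail): rescale each message so the cross inner product is one and the two self inner products match.

```python
import numpy as np

def normalize_pair(mi, mj):
    """Rescale so that <mi|mj> = 1 and <mi|mi> = <mj|mj>."""
    sij = mi @ mj          # cross inner product
    si = mi @ mi           # self inner products
    sj = mj @ mj
    a = (sj / si) ** 0.25 / sij ** 0.5
    b = (si / sj) ** 0.25 / sij ** 0.5
    return a * mi, b * mj

rng = np.random.default_rng(3)
mi, mj = rng.random(4), rng.random(4)
mi, mj = normalize_pair(mi, mj)
assert np.isclose(mi @ mj, 1.0)
assert np.isclose(mi @ mi, mj @ mj)
```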

get_gauged_tn()

Gauge the original TN by inserting the BP-approximated transfer matrix eigenvectors, which may be complex. The BP-contraction of this gauged network is then simply the product of zeroth entries of each tensor.

contract(strip_exponent=False)
quimb.experimental.belief_propagation.contract_d1bp(tn, max_iterations=1000, tol=5e-06, damping=0.0, update='sequential', local_convergence=True, strip_exponent=False, info=None, progbar=False, **contract_opts)

Estimate the contraction of standard tensor network tn using dense 1-norm belief propagation.

Parameters:
  • tn (TensorNetwork) – The tensor network to contract, it should have no dangling or hyper indices.

  • max_iterations (int, optional) – The maximum number of iterations to run for.

  • tol (float, optional) – The convergence tolerance for messages.

  • damping (float, optional) – The damping parameter to use, defaults to no damping.

  • update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially or in parallel.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).

  • info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.

  • progbar (bool, optional) – Whether to show a progress bar.

class quimb.experimental.belief_propagation.D2BP(tn, messages=None, output_inds=None, optimize='auto-hq', damping=0.0, update='sequential', local_convergence=True, **contract_opts)

Bases: quimb.experimental.belief_propagation.bp_common.BeliefPropagationCommon

Dense (as in one tensor per site) 2-norm (as in for wavefunctions and operators) belief propagation. Allows message reuse. This version assumes no hyper indices (i.e. a standard PEPS-like tensor network).

Potential use cases for D2BP on a PEPS-like tensor network are:

  • globally compressing it from bond dimension D to D'

  • eagerly applying gates and locally compressing back to D

  • sampling configurations

  • estimating the norm of the tensor network

Parameters:
  • tn (TensorNetwork) – The tensor network to form the 2-norm of and run BP on.

  • messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.

  • output_inds (set[str], optional) – The indices to consider as output (dangling) indices of the tn. Computed automatically if not specified.

  • optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.

  • damping (float, optional) – The damping factor to use, 0.0 means no damping.

  • update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • contract_opts – Other options supplied to cotengra.array_contract.

tn
contract_opts
damping = 0.0
local_convergence = True
update = 'sequential'
backend
_normalize
_distance
touch_map
touched
exprs
update_touched_from_tids(*tids)

Specify that the messages for the given tids have changed.

update_touched_from_tags(tags, which='any')

Specify that the messages touching tags have changed.

update_touched_from_inds(inds, which='any')

Specify that the messages touching inds have changed.

iterate(tol=5e-06)

Perform a single iteration of dense 2-norm belief propagation.

compute_marginal(ind)

Compute the marginal for the index ind.

contract(strip_exponent=False)

Estimate the total contraction, i.e. the 2-norm.

Parameters:

strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).

Return type:

scalar or (scalar, float)

compress(max_bond, cutoff=0.0, cutoff_mode=4, renorm=0, inplace=False)

Compress the initial tensor network using the current messages.
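
A plain-numpy sketch of the standard construction behind such message-based compression (quimb's actual projector computation may differ in detail): factor the positive message 'environments' on a bond, SVD their product, and build truncating projectors to insert on that bond.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4

def sqrt_factor(E):
    # symmetric PSD square-root factor R with E = R.T @ R
    s, W = np.linalg.eigh(E)
    return (W * np.sqrt(np.clip(s, 0, None))) @ W.T

# positive-definite 'message' environments on either side of a bond
X, Y = rng.random((d, d)), rng.random((d, d))
E_L, E_R = X @ X.T + np.eye(d), Y @ Y.T + np.eye(d)

R_L, R_R = sqrt_factor(E_L), sqrt_factor(E_R)
U, S, Vt = np.linalg.svd(R_L @ R_R.T)

k = d  # keep all singular values here; choose k < d to truncate the bond
P_left = R_R.T @ Vt[:k].T / np.sqrt(S[:k])
P_right = (U[:, :k] / np.sqrt(S[:k])).T @ R_L

# with no truncation, the inserted projector pair is the identity on the bond
assert np.allclose(P_left @ P_right, np.eye(d))
```

With `k < d` the pair `P_left @ P_right` becomes the optimal rank-k insertion with respect to these environments.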

quimb.experimental.belief_propagation.compress_d2bp(tn, max_bond, cutoff=0.0, cutoff_mode='rsum2', renorm=0, messages=None, output_inds=None, optimize='auto-hq', damping=0.0, update='sequential', local_convergence=True, max_iterations=1000, tol=5e-06, inplace=False, info=None, progbar=False, **contract_opts)

Compress the tensor network tn using dense 2-norm belief propagation.

Parameters:
  • tn (TensorNetwork) – The tensor network to form the 2-norm of, run BP on and then compress.

  • max_bond (int) – The maximum bond dimension to compress to.

  • cutoff (float, optional) – The cutoff to use when compressing.

  • cutoff_mode (int, optional) – The cutoff mode to use when compressing.

  • messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.

  • max_iterations (int, optional) – The maximum number of iterations to perform.

  • tol (float, optional) – The convergence tolerance for messages.

  • output_inds (set[str], optional) – The indices to consider as output (dangling) indices of the tn. Computed automatically if not specified.

  • optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.

  • damping (float, optional) – The damping parameter to use, defaults to no damping.

  • update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • inplace (bool, optional) – Whether to perform the compression inplace.

  • info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Other options supplied to cotengra.array_contract.

Return type:

TensorNetwork

quimb.experimental.belief_propagation.contract_d2bp(tn, messages=None, output_inds=None, optimize='auto-hq', damping=0.0, update='sequential', local_convergence=True, max_iterations=1000, tol=5e-06, strip_exponent=False, info=None, progbar=False, **contract_opts)

Estimate the norm squared of tn using dense 2-norm belief propagation.

Parameters:
  • tn (TensorNetwork) – The tensor network to form the 2-norm of and run BP on.

  • messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.

  • max_iterations (int, optional) – The maximum number of iterations to perform.

  • tol (float, optional) – The convergence tolerance for messages.

  • output_inds (set[str], optional) – The indices to consider as output (dangling) indices of the tn. Computed automatically if not specified.

  • optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.

  • damping (float, optional) – The damping parameter to use, defaults to no damping.

  • update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).

  • info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Other options supplied to cotengra.array_contract.

Return type:

scalar or (scalar, float)

quimb.experimental.belief_propagation.sample_d2bp(tn, output_inds=None, messages=None, max_iterations=100, tol=0.01, bias=None, seed=None, local_convergence=True, progbar=False, **contract_opts)

Sample a configuration from tn using dense 2-norm belief propagation.

Parameters:
  • tn (TensorNetwork) – The tensor network to sample from.

  • output_inds (set[str], optional) – Which indices to sample.

  • messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.

  • max_iterations (int, optional) – The maximum number of iterations to perform, per marginal.

  • tol (float, optional) – The convergence tolerance for messages.

  • bias (float, optional) – Bias the sampling towards more locally likely bit-strings. This is done by raising the probability of each bit-string to this power.

  • seed (int, optional) – A random seed for reproducibility.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Other options supplied to cotengra.array_contract.

Returns:

  • config (dict[str, int]) – The sampled configuration, a mapping of output indices to values.

  • tn_config (TensorNetwork) – The tensor network with the sampled configuration applied.

  • omega (float) – The BP probability of the sampled configuration.

class quimb.experimental.belief_propagation.HD1BP(tn, messages=None, damping=None, smudge_factor=1e-12)

Bases: quimb.experimental.belief_propagation.bp_common.BeliefPropagationCommon

Object interface for hyper, dense, 1-norm belief propagation. This is standard belief propagation in tensor network form.

Parameters:
  • tn (TensorNetwork) – The tensor network to run BP on.

  • messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.

  • smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.

tn
backend
smudge_factor = 1e-12
damping = None
messages = None
iterate(**kwargs)
get_gauged_tn()

Assuming the supplied tensor network has no hyper or dangling indices, gauge it by inserting the BP-approximated transfer matrix eigenvectors, which may be complex. The BP-contraction of this gauged network is then simply the product of zeroth entries of each tensor.

contract(strip_exponent=False)

Estimate the total contraction, i.e. the exponential of the ‘Bethe free entropy’.
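
For a simple (non-hyper) tensor network, the Bethe estimate of the contraction value takes the following standard form (written here for orientation, not quoted from quimb): Z_v is the local contraction of tensor v with all its incoming messages, and Z_e is the inner product of the two opposing messages on edge e.

```latex
Z_{\mathrm{BP}} \;=\; \frac{\prod_{v} Z_v}{\prod_{e} Z_e},
\qquad
F_{\mathrm{Bethe}} \;=\; \log Z_{\mathrm{BP}}
  \;=\; \sum_{v} \log Z_v \;-\; \sum_{e} \log Z_e .
```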

quimb.experimental.belief_propagation.contract_hd1bp(tn, messages=None, max_iterations=1000, tol=5e-06, damping=0.0, smudge_factor=1e-12, strip_exponent=False, info=None, progbar=False)

Estimate the contraction of tn with hyper, dense, 1-norm belief propagation, via the exponential of the Bethe free entropy.

Parameters:
  • tn (TensorNetwork) – The tensor network to run BP on, can have hyper indices.

  • messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.

  • max_iterations (int, optional) – The maximum number of iterations to perform.

  • tol (float, optional) – The convergence tolerance for messages.

  • damping (float, optional) – The damping factor to use, 0.0 means no damping.

  • smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.

  • strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).

  • info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.

  • progbar (bool, optional) – Whether to show a progress bar.

Return type:

scalar or (scalar, float)

quimb.experimental.belief_propagation.sample_hd1bp(tn, messages=None, output_inds=None, max_iterations=1000, tol=0.01, damping=0.0, smudge_factor=1e-12, bias=False, seed=None, progbar=False)

Sample all indices of a tensor network using repeated belief propagation runs and decimation.

Parameters:
  • tn (TensorNetwork) – The tensor network to sample.

  • messages (dict, optional) – The current messages. For every index and tensor id pair, there should be a message to and from with keys (ix, tid) and (tid, ix). If not given, then messages are initialized as uniform.

  • output_inds (sequence of str, optional) – The indices to sample. If not given, then all indices are sampled.

  • max_iterations (int, optional) – The maximum number of iterations for each message passing run.

  • tol (float, optional) – The convergence tolerance for each message passing run.

  • smudge_factor (float, optional) – A small number to add to each message to avoid zeros. Making this large is similar to adding a temperature, which can aid convergence but likely produces less accurate marginals.

  • bias (bool or float, optional) – Whether to bias the sampling towards the largest marginal. If False (the default), then indices are sampled proportional to their marginals. If True, then each index is ‘sampled’ to be its largest weight value always. If a float, then the local probability distribution is raised to this power before sampling.

  • thread_pool (bool, int or ThreadPoolExecutor, optional) – Whether to use a thread pool for parallelization. If an integer, then this is the number of threads to use. If True, then the number of threads is set to the number of cores. If a ThreadPoolExecutor, then this is used directly.

  • seed (int, optional) – A random seed to use for the sampling.

  • progbar (bool, optional) – Whether to show a progress bar.

Returns:

  • config (dict[str, int]) – The sample configuration, mapping indices to values.

  • tn_config (TensorNetwork) – The tensor network with all index values (or just those in output_inds if supplied) selected. Contracting this tensor network (which will just be a sequence of scalars if all index values have been sampled) gives the weight of the sample, e.g. it should be 1 for a SAT problem and valid assignment.

  • omega (float) – The probability of choosing this sample (i.e. product of marginal values). Useful possibly for importance sampling.
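
The decimation loop can be caricatured in a few lines of numpy, with hypothetical precomputed marginals standing in for the BP-estimated ones:

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical marginals for three binary indices, as BP might estimate them
marginals = {'a': np.array([0.7, 0.3]),
             'b': np.array([0.5, 0.5]),
             'c': np.array([0.1, 0.9])}

config, omega = {}, 1.0
for ix, p in marginals.items():
    v = rng.choice(len(p), p=p)   # sample this index from its marginal
    config[ix] = int(v)
    omega *= p[v]                 # accumulate the proposal probability
    # in real decimation, the sampled index would now be fixed in the
    # network and BP re-run to refresh the remaining marginals
```

The accumulated `omega` is what makes the samples usable for importance sampling.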

class quimb.experimental.belief_propagation.HV1BP(tn, messages=None, smudge_factor=1e-12, damping=0.0, thread_pool=False)

Bases: quimb.experimental.belief_propagation.bp_common.BeliefPropagationCommon

Object interface for hyper, vectorized, 1-norm belief propagation. This is the fast version of belief propagation possible when there are many small tensors of matching size.

Parameters:
  • tn (TensorNetwork) – The tensor network to run BP on.

  • messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.

  • smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.

  • thread_pool (bool or int, optional) – Whether to use a thread pool for parallelization, if True use the default number of threads, if an integer use that many threads.

tn
backend
smudge_factor = 1e-12
damping = 0.0
pool = None
iterate(**kwargs)
get_messages()

Get messages in individual form from the batched stacks.

contract(strip_exponent=False)
quimb.experimental.belief_propagation.contract_hv1bp(tn, messages=None, max_iterations=1000, tol=5e-06, smudge_factor=1e-12, damping=0.0, strip_exponent=False, info=None, progbar=False)

Estimate the contraction of tn with hyper, vectorized, 1-norm belief propagation, via the exponential of the Bethe free entropy.

Parameters:
  • tn (TensorNetwork) – The tensor network to run BP on, can have hyper indices.

  • messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.

  • max_iterations (int, optional) – The maximum number of iterations to perform.

  • tol (float, optional) – The convergence tolerance for messages.

  • smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.

  • damping (float, optional) – The damping factor to use, 0.0 means no damping.

  • strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).

  • info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.

  • progbar (bool, optional) – Whether to show a progress bar.

Return type:

scalar or (scalar, float)

quimb.experimental.belief_propagation.sample_hv1bp(tn, messages=None, output_inds=None, max_iterations=1000, tol=0.01, damping=0.0, smudge_factor=1e-12, bias=False, seed=None, progbar=False)

Sample all indices of a tensor network using repeated belief propagation runs and decimation.

Parameters:
  • tn (TensorNetwork) – The tensor network to sample.

  • messages (dict, optional) – The current messages. For every index and tensor id pair, there should be a message to and from with keys (ix, tid) and (tid, ix). If not given, then messages are initialized as uniform.

  • output_inds (sequence of str, optional) – The indices to sample. If not given, then all indices are sampled.

  • max_iterations (int, optional) – The maximum number of iterations for each message passing run.

  • tol (float, optional) – The convergence tolerance for each message passing run.

  • smudge_factor (float, optional) – A small number to add to each message to avoid zeros. Making this large is similar to adding a temperature, which can aid convergence but likely produces less accurate marginals.

  • bias (bool or float, optional) – Whether to bias the sampling towards the largest marginal. If False (the default), then indices are sampled proportional to their marginals. If True, then each index is ‘sampled’ to be its largest weight value always. If a float, then the local probability distribution is raised to this power before sampling.

  • thread_pool (bool, int or ThreadPoolExecutor, optional) – Whether to use a thread pool for parallelization. If an integer, then this is the number of threads to use. If True, then the number of threads is set to the number of cores. If a ThreadPoolExecutor, then this is used directly.

  • seed (int, optional) – A random seed to use for the sampling.

  • progbar (bool, optional) – Whether to show a progress bar.

Returns:

  • config (dict[str, int]) – The sample configuration, mapping indices to values.

  • tn_config (TensorNetwork) – The tensor network with all index values (or just those in output_inds if supplied) selected. Contracting this tensor network (which will just be a sequence of scalars if all index values have been sampled) gives the weight of the sample, e.g. it should be 1 for a SAT problem and valid assignment.

  • omega (float) – The probability of choosing this sample (i.e. product of marginal values). Useful possibly for importance sampling.

class quimb.experimental.belief_propagation.L1BP(tn, site_tags=None, damping=0.0, update='sequential', local_convergence=True, optimize='auto-hq', message_init_function=None, **contract_opts)

Bases: quimb.experimental.belief_propagation.bp_common.BeliefPropagationCommon

Lazy 1-norm belief propagation. BP is run between groups of tensors defined by site_tags. The message updates are lazy contractions.

Parameters:
  • tn (TensorNetwork) – The tensor network to run BP on.

  • site_tags (sequence of str, optional) – The tags identifying the sites in tn, each tag forms a region, which should not overlap. If the tensor network is structured, then these are inferred automatically.

  • damping (float, optional) – The damping parameter to use, defaults to no damping.

  • update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.

  • contract_opts – Other options supplied to cotengra.array_contract.

backend
damping = 0.0
local_convergence = True
update = 'sequential'
optimize = 'auto-hq'
contract_opts
touched
_abs
_max
_sum
_norm
_normalize
_distance
messages
contraction_tns
iterate(tol=5e-06)
contract(strip_exponent=False)
normalize_messages()

Normalize all messages such that for each bond <m_i|m_j> = 1 and <m_i|m_i> = <m_j|m_j> (but in general != 1).

quimb.experimental.belief_propagation.contract_l1bp(tn, max_iterations=1000, tol=5e-06, site_tags=None, damping=0.0, update='sequential', local_convergence=True, optimize='auto-hq', strip_exponent=False, info=None, progbar=False, **contract_opts)

Estimate the contraction of tn using lazy 1-norm belief propagation.

Parameters:
  • tn (TensorNetwork) – The tensor network to contract.

  • max_iterations (int, optional) – The maximum number of iterations to perform.

  • tol (float, optional) – The convergence tolerance for messages.

  • site_tags (sequence of str, optional) – The tags identifying the sites in tn, each tag forms a region. If the tensor network is structured, then these are inferred automatically.

  • damping (float, optional) – The damping parameter to use, defaults to no damping.

  • update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.

  • progbar (bool, optional) – Whether to show a progress bar.

  • strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).

  • info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.

  • contract_opts – Other options supplied to cotengra.array_contract.

class quimb.experimental.belief_propagation.L2BP(tn, site_tags=None, damping=0.0, update='sequential', local_convergence=True, optimize='auto-hq', **contract_opts)

Bases: quimb.experimental.belief_propagation.bp_common.BeliefPropagationCommon

Lazy (as in multiple uncontracted tensors per site) 2-norm (as in for wavefunctions and operators) belief propagation.

Parameters:
  • tn (TensorNetwork) – The tensor network to form the 2-norm of and run BP on.

  • site_tags (sequence of str, optional) – The tags identifying the sites in tn, each tag forms a region, which should not overlap. If the tensor network is structured, then these are inferred automatically.

  • damping (float, optional) – The damping parameter to use, defaults to no damping.

  • update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.

  • contract_opts – Other options supplied to cotengra.array_contract.

backend
damping = 0.0
local_convergence = True
update = 'sequential'
optimize = 'auto-hq'
contract_opts
touched
_normalize
_symmetrize
_distance
messages
contraction_tns
iterate(tol=5e-06)
normalize_messages()

Normalize all messages such that for each bond <m_i|m_j> = 1 and <m_i|m_i> = <m_j|m_j> (but in general != 1).

contract(strip_exponent=False)

Estimate the contraction of the norm squared using the current messages.

partial_trace(site, normalized=True, optimize='auto-hq')
compress(tn, max_bond=None, cutoff=5e-06, cutoff_mode='rsum2', renorm=0, lazy=False)

Compress the state tn, assumed to match this L2BP instance, using the messages stored.

quimb.experimental.belief_propagation.compress_l2bp(tn, max_bond, cutoff=0.0, cutoff_mode='rsum2', max_iterations=1000, tol=5e-06, site_tags=None, damping=0.0, update='sequential', local_convergence=True, optimize='auto-hq', lazy=False, inplace=False, info=None, progbar=False, **contract_opts)

Compress tn using lazy belief propagation, producing a tensor network with a single tensor per site.

Parameters:
  • tn (TensorNetwork) – The tensor network to form the 2-norm of, run BP on and then compress.

  • max_bond (int) – The maximum bond dimension to compress to.

  • cutoff (float, optional) – The cutoff to use when compressing.

  • cutoff_mode (str, optional) – The cutoff mode to use when compressing.

  • max_iterations (int, optional) – The maximum number of iterations to perform.

  • tol (float, optional) – The convergence tolerance for messages.

  • site_tags (sequence of str, optional) – The tags identifying the sites in tn, each tag forms a region. If the tensor network is structured, then these are inferred automatically.

  • damping (float, optional) – The damping parameter to use, defaults to no damping.

  • update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.

  • lazy (bool, optional) – Whether to perform the compression lazily, i.e. to leave the computed compression projectors uncontracted.

  • inplace (bool, optional) – Whether to perform the compression inplace.

  • info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Other options supplied to cotengra.array_contract.

Return type:

TensorNetwork
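
The bond truncation underlying such message-guided compression can be sketched as follows (a hypothetical numpy helper; the actual projector construction used by compress_l2bp may differ): given the pair of converged PSD messages on a bond, build oblique projectors from the SVD of the product of their square-root factors:

```python
import numpy as np

def psqrt(m):
    """A square-root factor R of a PSD matrix m, such that m = R @ R.T."""
    s, W = np.linalg.eigh(m)
    return W * np.sqrt(np.clip(s, 0.0, None))

def bond_projectors(ma, mb, max_bond):
    """Truncation projectors for a single bond from its two PSD messages."""
    Ra, Rb = psqrt(ma), psqrt(mb)
    U, s, VH = np.linalg.svd(Ra.T @ Rb)
    k = slice(0, max_bond)
    s_isqrt = 1.0 / np.sqrt(s[k])
    Pa = Rb @ VH[k].T * s_isqrt  # absorbed into the tensor on side a
    Pb = Ra @ U[:, k] * s_isqrt  # absorbed into the tensor on side b
    return Pa, Pb

rng = np.random.default_rng(7)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
ma, mb = A @ A.T, B @ B.T
Pa, Pb = bond_projectors(ma, mb, max_bond=4)
```

Without truncation (max_bond equal to the bond dimension), inserting Pa @ Pb.T into the bond leaves the product of the square-root environments unchanged; truncating keeps only the dominant singular directions of that product.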

quimb.experimental.belief_propagation.contract_l2bp(tn, site_tags=None, damping=0.0, update='sequential', local_convergence=True, optimize='auto-hq', max_iterations=1000, tol=5e-06, strip_exponent=False, info=None, progbar=False, **contract_opts)

Estimate the norm squared of tn using lazy belief propagation.

Parameters:
  • tn (TensorNetwork) – The tensor network to estimate the norm squared of.

  • site_tags (sequence of str, optional) – The tags identifying the sites in tn, each tag forms a region.

  • damping (float, optional) – The damping parameter to use, defaults to no damping.

  • update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.

  • local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.

  • optimize (str or PathOptimizer, optional) – The contraction strategy to use.

  • max_iterations (int, optional) – The maximum number of iterations to perform.

  • tol (float, optional) – The convergence tolerance for messages.

  • strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).

  • info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.

  • progbar (bool, optional) – Whether to show a progress bar.

  • contract_opts – Other options supplied to cotengra.array_contract.

class quimb.experimental.belief_propagation.RegionGraph(regions=(), autocomplete=True)
lookup
parents
children
counts
property regions
neighbor_regions(region)

Get all regions that intersect with the given region.

add_region(region)

Add a new region and update parent-child relationships.

Parameters:

region (Sequence[Hashable]) – The new region to add.

autocomplete()

Add all missing intersecting sub-regions.

autoextend(regions=None)

Extend this region graph upwards by adding in all pairwise unions of regions. If regions is specified, only pairs drawn from this set are considered.

get_parents(region)

Get the direct parents of the given region: its ancestors that do not contain any other region which itself contains the given region.

get_children(region)

Get all regions that are contained by the given region, but are not contained by any other descendants of the given region.

get_ancestors(region)

Get all regions that contain the given region, not just direct parents.

get_descendents(region)

Get all regions that are contained by the given region, not just direct children.

get_count(region)

Get the count of the given region, i.e. the correct weighting to apply when summing over all regions to avoid overcounting.
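
These counting numbers follow the standard Möbius rule: the count of a region is 1 minus the sum of the counts of all regions that strictly contain it, so that overall every element is weighted exactly once. A minimal standalone sketch with hypothetical helper names (not RegionGraph's implementation):

```python
def ancestors(region, regions):
    """All other regions that strictly contain ``region``."""
    return [a for a in regions if region < a]

def counting_numbers(regions):
    """Mobius counting numbers: c_R = 1 - sum of counts of ancestors of R."""
    regions = sorted(regions, key=len, reverse=True)  # largest regions first
    counts = {}
    for r in regions:
        counts[r] = 1 - sum(counts[a] for a in ancestors(r, regions))
    return counts

regions = [frozenset({1, 2}), frozenset({2, 3}), frozenset({2})]
counts = counting_numbers(regions)
```

Here the small region {2} receives count -1, cancelling its double counting by the two larger regions that both contain it.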

get_total_count()
get_level(region)

Get the level of the given region, i.e. the distance to an ancestor with no parents.

draw(pos=None, a=20, scale=1.0, radius=0.1, **drawing_opts)
__repr__()