quimb.experimental.belief_propagation.hd1bp¶
Hyper dense belief propagation for arbitrary quimb tensor networks. This is the classic 1-norm version of belief propagation, which treats the tensor network directly as a factor graph. Messages are processed one at a time.
TODO:
[ ] implement ‘touching’, so that only necessary messages are updated
[ ] implement sequential update
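As a minimal sketch of the typical workflow (the small hand-built loop network here is purely illustrative; BP is approximate on loopy networks):

```python
import numpy as np
import quimb.tensor as qtn
from quimb.experimental.belief_propagation.hd1bp import contract_hd1bp

# a tiny 3-tensor loop network with bond indices a, b, c
rng = np.random.default_rng(42)
tn = qtn.TensorNetwork([
    qtn.Tensor(rng.random((2, 2)), inds=("a", "b")),
    qtn.Tensor(rng.random((2, 2)), inds=("b", "c")),
    qtn.Tensor(rng.random((2, 2)), inds=("c", "a")),
])

# BP estimate of the full contraction vs the exact value
Z_bp = contract_hd1bp(tn)
Z_exact = tn.contract()
```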
Module Contents¶
Classes¶
- HD1BP – Object interface for hyper, dense, 1-norm belief propagation. This is standard belief propagation in tensor network form.
Functions¶
- compute_all_hyperind_messages_prod – Given a set of messages ms incident to a single index, compute the corresponding next output messages, using the ‘product’ implementation.
- compute_all_hyperind_messages_tree – Given a set of messages ms incident to a single index, compute the corresponding next output messages, using the ‘tree’ implementation.
- compute_all_tensor_messages_shortcuts
- compute_all_tensor_messages_prod – Given a set of messages ms incident to a tensor with data x, compute the corresponding next output messages, using the ‘prod’ implementation.
- compute_all_tensor_messages_tree – Given a set of messages ms incident to a tensor with data x, compute the corresponding next output messages, using the ‘tree’ implementation.
- iterate_belief_propagation_basic – Run a single iteration of belief propagation. This is the basic version that does not vectorize contractions.
- contract_hd1bp – Estimate the contraction of tn with hyper, dense, 1-norm belief propagation.
- run_belief_propagation_hd1bp – Run belief propagation on a tensor network until it converges. This is the basic version that does not vectorize contractions.
- sample_hd1bp – Sample all indices of a tensor network using repeated belief propagation runs and decimation.
- quimb.experimental.belief_propagation.hd1bp.compute_all_hyperind_messages_prod(ms, smudge_factor=1e-12)¶
Given a set of messages ms incident to a single index, compute the corresponding next output messages, using the ‘product’ implementation.
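The ‘product’ strategy computes each leave-one-out product by forming the total elementwise product once and dividing each message back out, with smudge_factor guarding against division by zero. An illustrative re-implementation of the idea (not the library code):

```python
import numpy as np

def leave_one_out_prod(ms, smudge_factor=1e-12):
    # total elementwise product of all incoming messages
    total = np.prod(ms, axis=0)
    # outgoing message i = product of every message except ms[i],
    # obtained by dividing ms[i] back out of the total
    return [total / (m + smudge_factor) for m in ms]
```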
- quimb.experimental.belief_propagation.hd1bp.compute_all_hyperind_messages_tree(ms)¶
Given a set of messages ms incident to a single index, compute the corresponding next output messages, using the ‘tree’ implementation.
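The ‘tree’ strategy avoids division entirely, so messages containing exact zeros are handled safely. The following prefix/suffix sketch captures the same division-free idea, though the library’s tree implementation may differ in detail:

```python
import numpy as np

def leave_one_out_nodiv(ms):
    n = len(ms)
    # prefix[i] = product of ms[:i], suffix[i] = product of ms[i:]
    prefix = [np.ones_like(ms[0])]
    for m in ms:
        prefix.append(prefix[-1] * m)
    suffix = [np.ones_like(ms[0])]
    for m in reversed(ms):
        suffix.append(suffix[-1] * m)
    suffix.reverse()
    # outgoing message i = product of all messages except ms[i]
    return [prefix[i] * suffix[i + 1] for i in range(n)]
```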
- quimb.experimental.belief_propagation.hd1bp.compute_all_tensor_messages_shortcuts(x, ms, ndim)¶
- quimb.experimental.belief_propagation.hd1bp.compute_all_tensor_messages_prod(x, ms, backend=None, smudge_factor=1e-12)¶
Given a set of messages ms incident to a tensor with data x, compute the corresponding next output messages, using the ‘prod’ implementation.
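Illustratively, the ‘prod’ strategy absorbs every incoming message into the tensor once, then for each index sums over all the other dimensions and divides that index’s own message back out (again a sketch of the idea, not the library code):

```python
import numpy as np

def tensor_messages_prod(x, ms, smudge_factor=1e-12):
    # absorb each incoming message along its axis
    xm = x
    for axis, m in enumerate(ms):
        shape = [1] * x.ndim
        shape[axis] = -1
        xm = xm * m.reshape(shape)
    # outgoing message for axis i: sum over all other axes, then
    # divide the axis-i message back out
    out = []
    for axis, m in enumerate(ms):
        other = tuple(a for a in range(x.ndim) if a != axis)
        out.append(xm.sum(axis=other) / (m + smudge_factor))
    return out
```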
- quimb.experimental.belief_propagation.hd1bp.compute_all_tensor_messages_tree(x, ms, backend=None)¶
Given a set of messages ms incident to a tensor with data x, compute the corresponding next output messages, using the ‘tree’ implementation.
- quimb.experimental.belief_propagation.hd1bp.iterate_belief_propagation_basic(tn, messages, damping=None, smudge_factor=1e-12, tol=None)¶
Run a single iteration of belief propagation. This is the basic version that does not vectorize contractions.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
messages (dict) – The current messages. For every index and tensor id pair, there should be a message to and from, with keys (ix, tid) and (tid, ix).
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note that when this happens the numerator will also be zero.
- Returns:
new_messages – The new messages.
- Return type:
dict
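For example, the uniform initial messages that run_belief_propagation_hd1bp builds when messages=None could be constructed by hand like this (a sketch assuming a quimb TensorNetwork tn as above; the exact normalization convention is an assumption):

```python
import numpy as np
from quimb.experimental.belief_propagation.hd1bp import (
    iterate_belief_propagation_basic,
)

# one message per direction for every (index, tensor id) pair
messages = {}
for ix, tids in tn.ind_map.items():
    d = tn.ind_size(ix)
    for tid in tids:
        messages[ix, tid] = np.ones(d) / d  # normalization is an assumption
        messages[tid, ix] = np.ones(d) / d

# one BP sweep updating all messages
messages = iterate_belief_propagation_basic(tn, messages)
```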
- class quimb.experimental.belief_propagation.hd1bp.HD1BP(tn, messages=None, damping=None, smudge_factor=1e-12)¶
Bases:
quimb.experimental.belief_propagation.bp_common.BeliefPropagationCommon
Object interface for hyper, dense, 1-norm belief propagation. This is standard belief propagation in tensor network form.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.
damping (float, optional) – The damping factor to use, 0.0 means no damping.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note that when this happens the numerator will also be zero.
- iterate(**kwargs)¶
- get_gauged_tn()¶
Assuming the supplied tensor network has no hyper or dangling indices, gauge it by inserting the BP-approximated transfer matrix eigenvectors, which may be complex. The BP-contraction of this gauged network is then simply the product of zeroth entries of each tensor.
- contract(strip_exponent=False)¶
Estimate the total contraction, i.e. the exponential of the ‘Bethe free entropy’.
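A minimal sketch of driving the object interface by hand (assuming a TensorNetwork tn as above):

```python
from quimb.experimental.belief_propagation.hd1bp import HD1BP

bp = HD1BP(tn)          # uniform initial messages
for _ in range(100):    # a fixed number of sweeps, for illustration
    bp.iterate()
Z_bp = bp.contract()    # exponential of the Bethe free entropy
```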
- quimb.experimental.belief_propagation.hd1bp.contract_hd1bp(tn, messages=None, max_iterations=1000, tol=5e-06, damping=0.0, smudge_factor=1e-12, strip_exponent=False, info=None, progbar=False)¶
Estimate the contraction of tn with hyper, dense, 1-norm belief propagation, via the exponential of the Bethe free entropy.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on, can have hyper indices.
messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
damping (float, optional) – The damping factor to use, 0.0 means no damping.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note that when this happens the numerator will also be zero.
strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
progbar (bool, optional) – Whether to show a progress bar.
- Return type:
scalar or (scalar, float)
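For example (the base-10 convention for the stripped exponent is an assumption here):

```python
from quimb.experimental.belief_propagation.hd1bp import contract_hd1bp

info = {}
mantissa, exponent = contract_hd1bp(
    tn,                  # a quimb TensorNetwork, as above
    max_iterations=500,
    tol=1e-6,
    damping=0.1,
    strip_exponent=True,
    info=info,
)
# assuming the stripped exponent follows a base-10 convention:
Z_bp = mantissa * 10**exponent
```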
- quimb.experimental.belief_propagation.hd1bp.run_belief_propagation_hd1bp(tn, messages=None, max_iterations=1000, tol=5e-06, damping=0.0, smudge_factor=1e-12, info=None, progbar=False)¶
Run belief propagation on a tensor network until it converges. This is the basic version that does not vectorize contractions.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
messages (dict, optional) – The current messages. For every index and tensor id pair, there should be a message to and from, with keys (ix, tid) and (tid, ix). If not given, then messages are initialized as uniform.
max_iterations (int, optional) – The maximum number of iterations to run for.
tol (float, optional) – The convergence tolerance.
damping (float, optional) – The damping factor to use, 0.0 means no damping.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note that when this happens the numerator will also be zero.
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
progbar (bool, optional) – Whether to show a progress bar.
- Returns:
messages (dict) – The final messages.
converged (bool) – Whether the algorithm converged.
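For example, running to convergence and retrying with damping if the messages fail to settle (a sketch, assuming a TensorNetwork tn as above):

```python
from quimb.experimental.belief_propagation.hd1bp import (
    run_belief_propagation_hd1bp,
)

messages, converged = run_belief_propagation_hd1bp(
    tn, max_iterations=1000, tol=5e-6, progbar=True
)
if not converged:
    # damping often helps oscillating messages settle
    messages, converged = run_belief_propagation_hd1bp(
        tn, messages=messages, damping=0.5
    )
```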
- quimb.experimental.belief_propagation.hd1bp.sample_hd1bp(tn, messages=None, output_inds=None, max_iterations=1000, tol=0.01, damping=0.0, smudge_factor=1e-12, bias=False, seed=None, progbar=False)¶
Sample all indices of a tensor network using repeated belief propagation runs and decimation.
- Parameters:
tn (TensorNetwork) – The tensor network to sample.
messages (dict, optional) – The current messages. For every index and tensor id pair, there should be a message to and from, with keys (ix, tid) and (tid, ix). If not given, then messages are initialized as uniform.
output_inds (sequence of str, optional) – The indices to sample. If not given, then all indices are sampled.
max_iterations (int, optional) – The maximum number of iterations for each message passing run.
tol (float, optional) – The convergence tolerance for each message passing run.
damping (float, optional) – The damping factor to use, 0.0 means no damping.
smudge_factor (float, optional) – A small number to add to each message to avoid zeros. Making this large is similar to adding a temperature, which can aid convergence but likely produces less accurate marginals.
bias (bool or float, optional) – Whether to bias the sampling towards the largest marginal. If False (the default), then indices are sampled proportional to their marginals. If True, then each index is always ‘sampled’ to its largest weight value. If a float, then the local probability distribution is raised to this power before sampling.
thread_pool (bool, int or ThreadPoolExecutor, optional) – Whether to use a thread pool for parallelization. If an integer, then this is the number of threads to use. If True, then the number of threads is set to the number of cores. If a ThreadPoolExecutor, then this is used directly.
seed (int, optional) – A random seed to use for the sampling.
progbar (bool, optional) – Whether to show a progress bar.
- Returns:
config (dict[str, int]) – The sample configuration, mapping indices to values.
tn_config (TensorNetwork) – The tensor network with all index values (or just those in output_inds, if supplied) selected. Contracting this tensor network (which will just be a sequence of scalars if all index values have been sampled) gives the weight of the sample, e.g. it should be 1 for a SAT problem and a valid assignment.
omega (float) – The probability of choosing this sample (i.e. the product of marginal values), possibly useful for importance sampling.
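A minimal usage sketch (assuming a TensorNetwork tn whose indices one wants to sample):

```python
from quimb.experimental.belief_propagation.hd1bp import sample_hd1bp

config, tn_config, omega = sample_hd1bp(tn, seed=42)

# config maps each index name to its sampled value
print(config)

# contracting the configured network gives the weight of this sample
w = tn_config.contract()
```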