quimb.experimental.belief_propagation.hv1bp¶
Hyper, vectorized, 1-norm, belief propagation.
Classes¶
- `HV1BP` – Object interface for hyper, vectorized, 1-norm, belief propagation.
Functions¶
- `initialize_messages_batched` – Initialize batched messages for belief propagation, as the uniform distribution.
- `_compute_all_hyperind_messages_tree_batched`
- `_compute_all_hyperind_messages_prod_batched`
- `_compute_all_tensor_messages_tree_batched` – Compute all output messages for a stacked tensor and messages.
- `_compute_all_tensor_messages_prod_batched`
- `_compute_output_single_t`
- `_compute_output_single_m`
- `_compute_outputs_batched` – Given stacked messages and tensors, compute stacked output messages.
- `_update_output_to_input_single_batched`
- `_update_outputs_to_inputs_batched` – Update the stacked input messages from the stacked output messages.
- `_extract_messages_from_inputs_batched` – Get all messages as a dict from the batch stacked input form.
- `iterate_belief_propagation_batched`
- `contract_hv1bp` – Estimate the contraction of `tn` with hyper, vectorized, 1-norm belief propagation.
- `run_belief_propagation_hv1bp` – Run belief propagation on a tensor network until it converges.
- `sample_hv1bp` – Sample all indices of a tensor network using repeated belief propagation runs and decimation.
Module Contents¶
- quimb.experimental.belief_propagation.hv1bp.initialize_messages_batched(tn, messages=None)¶
Initialize batched messages for belief propagation, as the uniform distribution.
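As a rough illustration of what "uniform" initialization means here (the real function stacks same-sized messages into batched arrays; this sketch, with a hypothetical `uniform_messages` helper, just shows the per-message values):

```python
import numpy as np

def uniform_messages(shapes):
    """Toy sketch: one flat, normalized message per key.

    ``shapes`` maps each message key (e.g. ``(ix, tid)``) to the size of
    the index the message flows along.  The batched implementation stores
    these stacked together, but the initial values are the same.
    """
    return {key: np.ones(d) / d for key, d in shapes.items()}

msgs = uniform_messages({("a", 0): 2, ("b", 1): 3})
# each message is flat and sums to 1
```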
- quimb.experimental.belief_propagation.hv1bp._compute_all_hyperind_messages_tree_batched(bm)¶
- quimb.experimental.belief_propagation.hv1bp._compute_all_hyperind_messages_prod_batched(bm, smudge_factor=1e-12)¶
- quimb.experimental.belief_propagation.hv1bp._compute_all_tensor_messages_tree_batched(bx, bm)¶
Compute all output messages for a stacked tensor and messages.
- quimb.experimental.belief_propagation.hv1bp._compute_all_tensor_messages_prod_batched(bx, bm, smudge_factor=1e-12)¶
- quimb.experimental.belief_propagation.hv1bp._compute_output_single_t(bm, bx, _reshape, _sum, smudge_factor=1e-12)¶
- quimb.experimental.belief_propagation.hv1bp._compute_output_single_m(bm, _reshape, _sum, smudge_factor=1e-12)¶
- quimb.experimental.belief_propagation.hv1bp._compute_outputs_batched(batched_inputs, batched_tensors=None, smudge_factor=1e-12, _pool=None)¶
Given stacked messages and tensors, compute stacked output messages.
- quimb.experimental.belief_propagation.hv1bp._update_output_to_input_single_batched(bi, bo, maskin, maskout, _max, _sum, _abs, damping=0.0)¶
- quimb.experimental.belief_propagation.hv1bp._update_outputs_to_inputs_batched(batched_inputs, batched_outputs, masks, damping=0.0, _pool=None)¶
Update the stacked input messages from the stacked output messages.
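The damped update can be sketched as a convex mix of the old and new messages. The max-abs normalization below is an assumption suggested by the `_max`/`_abs` arguments of the real function, not a confirmed detail, and `damped_update` is a hypothetical helper:

```python
import numpy as np

def damped_update(m_in, m_out, damping=0.0):
    """Mix the previous input message with the freshly computed output
    message, then normalize (here by the largest absolute entry)."""
    m = damping * m_in + (1.0 - damping) * m_out
    return m / np.max(np.abs(m))

old = np.array([0.5, 0.5])
new = np.array([0.9, 0.1])
mixed = damped_update(old, new, damping=0.5)
```

With `damping=0.0` the old message is ignored entirely; values closer to 1 slow the update down, which can stabilize oscillating message passing.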
- quimb.experimental.belief_propagation.hv1bp._extract_messages_from_inputs_batched(batched_inputs_m, batched_inputs_t, input_locs_m, input_locs_t)¶
Get all messages as a dict from the batch stacked input form.
- quimb.experimental.belief_propagation.hv1bp.iterate_belief_propagation_batched(batched_inputs_m, batched_inputs_t, batched_tensors, masks_m, masks_t, smudge_factor=1e-12, damping=0.0, _pool=None)¶
- class quimb.experimental.belief_propagation.hv1bp.HV1BP(tn, messages=None, smudge_factor=1e-12, damping=0.0, thread_pool=False)¶
Bases:
quimb.experimental.belief_propagation.bp_common.BeliefPropagationCommon
Object interface for hyper, vectorized, 1-norm, belief propagation. This is a fast version of belief propagation, possible when there are many small tensors of matching size.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note that when this happens the numerator will also be zero.
damping (float, optional) – The damping factor to use, 0.0 means no damping.
thread_pool (bool or int, optional) – Whether to use a thread pool for parallelization: if True, use the default number of threads; if an integer, use that many threads.
- tn¶
- backend¶
- smudge_factor = 1e-12¶
- damping = 0.0¶
- pool = None¶
- iterate(**kwargs)¶
- get_messages()¶
Get messages in individual form from the batched stacks.
- contract(strip_exponent=False)¶
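The role of `smudge_factor` can be seen in the "divide out" trick that the `_prod` variants above hint at: the product of all messages but one is computed as the total product divided by that message. The sketch below (with a hypothetical `exclude_one_products` helper) shows how the smudge factor turns an otherwise undefined 0/0 into 0:

```python
import numpy as np

def exclude_one_products(ms, smudge_factor=1e-12):
    """For each row i, compute the product of every other row, via
    total-product / row-i.  ``smudge_factor`` guards the division when a
    message entry is zero, in which case the numerator is zero too."""
    total = np.prod(ms, axis=0)
    return total / (ms + smudge_factor)

ms = np.array([[1.0, 0.0],
               [0.5, 0.2],
               [0.4, 0.3]])
outs = exclude_one_products(ms)
```

Without the smudge term, the zero entry in the first message would make its exclude-one product 0/0 (NaN) rather than the correct 0.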
- quimb.experimental.belief_propagation.hv1bp.contract_hv1bp(tn, messages=None, max_iterations=1000, tol=5e-06, smudge_factor=1e-12, damping=0.0, strip_exponent=False, info=None, progbar=False)¶
Estimate the contraction of tn with hyper, vectorized, 1-norm belief propagation, via the exponential of the Bethe free entropy.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on, can have hyper indices.
messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.
damping (float, optional) – The damping factor to use, 0.0 means no damping.
strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
progbar (bool, optional) – Whether to show a progress bar.
- Return type:
scalar or (scalar, float)
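The `(mantissa, exponent)` pair presumably follows the usual base-10 convention, i.e. the contraction value is `mantissa * 10**exponent`, which keeps very large or very small results representable in floating point. A minimal sketch of that convention (the `strip_exponent` helper here is illustrative, not the library's internal function):

```python
import math

def strip_exponent(value):
    """Split a scalar into (mantissa, exponent) with
    value == mantissa * 10**exponent (assumed base-10 convention)."""
    exponent = math.floor(math.log10(abs(value)))
    return value / 10**exponent, float(exponent)

m, e = strip_exponent(3.2e120)
```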
- quimb.experimental.belief_propagation.hv1bp.run_belief_propagation_hv1bp(tn, messages=None, max_iterations=1000, tol=5e-06, damping=0.0, smudge_factor=1e-12, info=None, progbar=False)¶
Run belief propagation on a tensor network until it converges.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
messages (dict, optional) – The current messages. For every index and tensor id pair, there should be a message to and from with keys (ix, tid) and (tid, ix). If not given, then messages are initialized as uniform.
max_iterations (int, optional) – The maximum number of iterations to run for.
tol (float, optional) – The convergence tolerance.
damping (float, optional) – The damping factor to use, 0.0 means no damping.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
progbar (bool, optional) – Whether to show a progress bar.
- Returns:
messages (dict) – The final messages.
converged (bool) – Whether the algorithm converged.
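The outer loop implied by `max_iterations` and `tol` can be sketched as a generic fixed-point iteration: apply the message update until the largest elementwise change drops below `tol`, or give up after `max_iterations`. This is a schematic sketch with a hypothetical `run_until_converged` helper and a toy update, not the library's loop:

```python
import numpy as np

def run_until_converged(update, m0, max_iterations=1000, tol=5e-6):
    """Repeatedly apply ``update`` until the max elementwise change is
    below ``tol``; return the result and whether convergence occurred."""
    m = m0
    for _ in range(max_iterations):
        m_new = update(m)
        if np.max(np.abs(m_new - m)) < tol:
            return m_new, True
        m = m_new
    return m, False

# toy update with fixed point [0.5, 0.5]
def update(m):
    return 0.5 * m + 0.25

m, converged = run_until_converged(update, np.array([1.0, 0.0]))
```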
- quimb.experimental.belief_propagation.hv1bp.sample_hv1bp(tn, messages=None, output_inds=None, max_iterations=1000, tol=0.01, damping=0.0, smudge_factor=1e-12, bias=False, seed=None, progbar=False)¶
Sample all indices of a tensor network using repeated belief propagation runs and decimation.
- Parameters:
tn (TensorNetwork) – The tensor network to sample.
messages (dict, optional) – The current messages. For every index and tensor id pair, there should be a message to and from with keys (ix, tid) and (tid, ix). If not given, then messages are initialized as uniform.
output_inds (sequence of str, optional) – The indices to sample. If not given, then all indices are sampled.
max_iterations (int, optional) – The maximum number of iterations for each message passing run.
tol (float, optional) – The convergence tolerance for each message passing run.
smudge_factor (float, optional) – A small number to add to each message to avoid zeros. Making this large is similar to adding a temperature, which can aid convergence but likely produces less accurate marginals.
bias (bool or float, optional) – Whether to bias the sampling towards the largest marginal. If False (the default), then indices are sampled proportional to their marginals. If True, then each index is ‘sampled’ to be its largest weight value always. If a float, then the local probability distribution is raised to this power before sampling.
thread_pool (bool, int or ThreadPoolExecutor, optional) – Whether to use a thread pool for parallelization. If an integer, then this is the number of threads to use. If True, then the number of threads is set to the number of cores. If a ThreadPoolExecutor, then this is used directly.
seed (int, optional) – A random seed to use for the sampling.
progbar (bool, optional) – Whether to show a progress bar.
- Returns:
config (dict[str, int]) – The sample configuration, mapping indices to values.
tn_config (TensorNetwork) – The tensor network with all index values (or just those in output_inds, if supplied) selected. Contracting this tensor network (which will just be a sequence of scalars if all index values have been sampled) gives the weight of the sample, e.g. it should be 1 for a SAT problem with a valid assignment.
omega (float) – The probability of choosing this sample (i.e. the product of marginal values), possibly useful for importance sampling.
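The three `bias` modes described above can be sketched for a single index's marginal as follows (the `sample_index` helper is illustrative, not the library's internal function):

```python
import numpy as np

def sample_index(marginal, bias=False, rng=None):
    """Sample a value from a 1d marginal distribution.

    bias=False  -> sample in proportion to the marginal;
    bias=True   -> always pick the largest entry;
    bias=float  -> raise the distribution to that power, then sample.
    """
    p = np.asarray(marginal, dtype=float)
    if bias is True:
        return int(np.argmax(p))
    if bias is not False:
        p = p ** bias  # sharpen (bias > 1) or flatten (bias < 1)
    p = p / p.sum()
    rng = np.random.default_rng() if rng is None else rng
    return int(rng.choice(len(p), p=p))

val = sample_index([0.2, 0.7, 0.1], bias=True)
```

A large float `bias` behaves like a low sampling temperature, concentrating probability on the dominant value while still retaining some randomness.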