quimb.tensor.belief_propagation.d1bp¶
Belief propagation for standard tensor networks. This:
- assumes no hyper indices, only standard bonds,
- assumes a single (‘dense’) tensor per site,
- works directly on the ‘1-norm’, i.e. the scalar tensor network.
This is the simplest version of belief propagation, and is useful for simple investigations.
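For intuition: on tree-shaped networks this style of BP is exact, since every local tensor contraction and every bond (message overlap) contraction equals the full contraction value. A minimal pure-Python sketch for a three-tensor chain, with illustrative names that are not part of the quimb API:

```python
# Dense 1-norm BP on a chain T1 -a- T2 -b- T3 (a tree, so BP is exact here).
T1 = [1.0, 2.0]                    # vector with index a
T2 = [[0.5, 1.0], [2.0, 0.5]]      # matrix with indices (a, b)
T3 = [3.0, 1.0]                    # vector with index b

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# messages on each bond, in both directions
m_1to2 = T1                                      # bond a, towards T2
m_3to2 = T3                                      # bond b, towards T2
m_2to1 = [dot(row, m_3to2) for row in T2]        # towards T1, absorbing m_3to2
m_2to3 = [dot(col, m_1to2) for col in zip(*T2)]  # towards T3, absorbing m_1to2

# local tensor contractions and bond (message overlap) contractions
Z1 = dot(T1, m_2to1)
Z3 = dot(T3, m_2to3)
Z2 = sum(T2[a][b] * m_1to2[a] * m_3to2[b] for a in range(2) for b in range(2))
Za = dot(m_1to2, m_2to1)   # bond a overlap
Zb = dot(m_3to2, m_2to3)   # bond b overlap

# BP estimate: product of local contractions over product of bond contractions
Z_bp = (Z1 * Z2 * Z3) / (Za * Zb)

# exact contraction for comparison
Z_exact = sum(T1[a] * T2[a][b] * T3[b] for a in range(2) for b in range(2))
```

Because the chain has no loops, every local and bond value here equals the exact contraction, so the ratio recovers it exactly; on loopy networks the same formula gives only an approximation.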
Classes¶
D1BP – Dense (as in one tensor per site) 1-norm (as in for ‘classical’ systems) belief propagation.
Functions¶
contract_d1bp – Estimate the contraction of a standard tensor network.
Module Contents¶
- quimb.tensor.belief_propagation.d1bp.initialize_messages(tn, fill_fn=None)¶
- class quimb.tensor.belief_propagation.d1bp.D1BP(tn: quimb.tensor.TensorNetwork, *, messages=None, damping=0.0, update='sequential', normalize=None, distance=None, local_convergence=True, message_init_function=None, contract_every=None, inplace=False)¶
Bases:
quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon
Dense (as in one tensor per site) 1-norm (as in for ‘classical’ systems) belief propagation algorithm. Allows message reuse. This version assumes no hyper indices (i.e. a standard tensor network). This is the simplest version of belief propagation.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.
damping (float or callable, optional) – The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being damping * old + (1 - damping) * new. This makes convergence more reliable but slower.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence but parallel can possibly converge to different solutions.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
contract_every (int, optional) – If not None, ‘contract’ (via BP) the tensor network every contract_every iterations. The resulting values are stored in zvals at corresponding points zval_its.
inplace (bool, optional) – Whether to perform any operations inplace on the input tensor network.
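The damping rule above can be sketched directly, assuming messages are plain lists of floats (the `damp` helper is hypothetical, not part of the quimb API):

```python
# Illustrative damping update: mix some of the old message into the new one.
def damp(old, new, damping=0.5):
    # final message = damping * old + (1 - damping) * new
    return [damping * o + (1 - damping) * n for o, n in zip(old, new)]

mixed = damp([1.0, 0.0], [0.0, 1.0], damping=0.25)  # -> [0.25, 0.75]
```

With damping=0.0 the new message is used unchanged; larger values slow the update down, trading speed for stability.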
- tn¶
The target tensor network.
- Type:
TensorNetwork
- messages¶
The current messages. The key is a tuple of the index and tensor id that the message is being sent to.
- key_pairs¶
A dictionary mapping the key of a message to the key of the message propagating in the opposite direction.
- local_convergence = True¶
- touched¶
- iterate(tol=5e-06)¶
Perform a single iteration of belief propagation. Subclasses should implement this method, returning either max_mdiff or a dictionary containing max_mdiff and any other relevant information:
{
    "nconv": nconv,
    "ncheck": ncheck,
    "max_mdiff": max_mdiff,
}
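How such a return value might be assembled can be sketched with plain lists standing in for messages (illustrative only, not the quimb internals):

```python
# Illustrative shape of an iterate() return value: compare old and new
# messages, count how many have converged, and report the worst change.
def iterate_diffs(old_msgs, new_msgs, tol=5e-6):
    # per-message maximum elementwise change
    diffs = [max(abs(o - n) for o, n in zip(om, nm))
             for om, nm in zip(old_msgs, new_msgs)]
    return {
        "nconv": sum(d < tol for d in diffs),  # messages already converged
        "ncheck": len(diffs),                  # messages checked this round
        "max_mdiff": max(diffs),               # worst change this round
    }

info = iterate_diffs([[1.0, 0.0], [0.5, 0.5]],
                     [[1.0, 1e-7], [0.4, 0.6]])
```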
- normalize_message_pairs()¶
Normalize all messages such that for each bond <m_i|m_j> = 1 and <m_i|m_i> = <m_j|m_j> (but in general != 1).
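For real-valued messages with positive overlap, this pair normalization can be sketched as follows (illustrative helper, not the quimb implementation):

```python
import math

# Rescale a message pair so the mutual overlap is 1 and the self-overlaps
# agree (real, positive-overlap case only).
def normalize_pair(mi, mj):
    s = sum(x * y for x, y in zip(mi, mj))     # <m_i|m_j>
    ni = math.sqrt(sum(x * x for x in mi))     # |m_i|
    nj = math.sqrt(sum(x * x for x in mj))     # |m_j|
    # scale factors a, b chosen so a*b*s = 1 and a*ni = b*nj
    a = math.sqrt(nj / (ni * s))
    b = math.sqrt(ni / (nj * s))
    return [a * x for x in mi], [b * x for x in mj]

mi, mj = normalize_pair([2.0, 0.0], [1.0, 1.0])
```

After rescaling, <m_i|m_j> = 1 and <m_i|m_i> = <m_j|m_j>, though the self-overlaps are in general not themselves 1.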
- normalize_tensors(strip_exponent=True)¶
Normalize every local tensor contraction so that it equals 1. Gather the overall normalization factor into self.exponent and the sign into self.sign by default.
- Parameters:
strip_exponent (bool, optional) – Whether to collect the sign and exponent. If False then the value of the BP contraction is set to 1.
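The sign/exponent bookkeeping can be sketched as follows, assuming each local contraction yields a scalar z that is divided out and accumulated (the values below are hypothetical):

```python
import math

# Accumulate the sign and base-10 exponent stripped from a sequence of
# hypothetical local contraction values, so no single product overflows.
sign, exponent = 1.0, 0.0
for z in [-2000.0, 0.005, -3.0]:
    sign *= math.copysign(1.0, z)
    exponent += math.log10(abs(z))

value = sign * 10**exponent   # reconstructs the overall product
```

Tracking the product in log space this way keeps the running value finite even when the network has very many tensors.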
- get_gauged_tn()¶
Gauge the original TN by inserting the BP-approximated transfer matrix eigenvectors, which may be complex. The BP-contraction of this gauged network is then simply the product of zeroth entries of each tensor.
- get_cluster(tids)¶
Get the region of tensors given by tids, with the messages on the border contracted in, removing those dangling indices.
- Parameters:
tids (sequence of int) – The tensor ids forming a region.
- Return type:
TensorNetwork
- get_cluster_excited(tids)¶
Get the local tensor network for tids with BP messages inserted on the boundary and excitation projectors inserted on the inner bonds. See https://arxiv.org/abs/2409.03108 for more details.
- contract_loop_series_expansion(gloops=None, multi_excitation_correct=True, tol_correction=1e-12, maxiter_correction=100, strip_exponent=False, optimize='auto-hq', **contract_opts)¶
Contract the tensor network using the same procedure as in https://arxiv.org/abs/2409.03108 - “Loop Series Expansions for Tensor Networks”.
- Parameters:
gloops (int or iterable of tuples, optional) – The gloop sizes to use. If an integer, then generate all gloop sizes up to this size. If a tuple, then use these gloops.
multi_excitation_correct (bool, optional) – Whether to use the multi-excitation correction. If True, then the free energy is refined iteratively until self-consistent.
tol_correction (float, optional) – The tolerance for the multi-excitation correction.
maxiter_correction (int, optional) – The maximum number of iterations for the multi-excitation correction.
strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).
optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.
contract_opts – Other options supplied to TensorNetwork.contract.
- local_tensor_contract(tid)¶
Contract the messages around tensor tid.
- local_message_contract(ix)¶
Contract the messages at index ix.
- contract(strip_exponent=False, check_zero=True, **kwargs)¶
Estimate the contraction of the tensor network.
- contract_with_loops(max_loop_length=None, min_loop_length=1, optimize='auto-hq', strip_exponent=False, check_zero=True, **contract_opts)¶
Estimate the contraction of the tensor network, including loop corrections.
- contract_gloop_expand(gloops=None, autocomplete=True, strip_exponent=False, check_zero=True, optimize='auto-hq', combine='prod', **contract_opts)¶
Contract the tensor network using generalized loop cluster expansion.
- quimb.tensor.belief_propagation.d1bp.contract_d1bp(tn, *, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, local_convergence=True, strip_exponent=False, check_zero=True, info=None, progbar=False, **contract_opts)¶
Estimate the contraction of a standard tensor network tn using dense 1-norm belief propagation.
- Parameters:
tn (TensorNetwork) – The tensor network to contract, it should have no dangling or hyper indices.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
damping (float, optional) – The damping parameter to use, defaults to no damping.
diis (bool or dict, optional) – Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {max_history, beta, rcond}.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially or in parallel.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
tol_abs (float, optional) – The absolute convergence tolerance for maximum message update distance, if not given then taken as tol.
tol_rolling_diff (float, optional) – The rolling mean convergence tolerance for maximum message update distance, if not given then taken as tol. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trend, roughly speaking.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
strip_exponent (bool, optional) – Whether to return the mantissa and exponent separately.
check_zero (bool, optional) – Whether to check for zero values and return zero early.
info (dict, optional) – If supplied, the following information will be added to it: converged (bool), iterations (int), max_mdiff (float), rolling_abs_mean_diff (float).
progbar (bool, optional) – Whether to show a progress bar.
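With strip_exponent=True the result is returned as a (mantissa, exponent) pair rather than a single, possibly overflowing, scalar. A minimal sketch of this convention, assuming value = mantissa * 10**exponent (the helper name below is hypothetical, not part of quimb):

```python
import math

# Split a scalar into (mantissa, exponent) with value = mantissa * 10**exponent.
def strip_exponent_of(value):
    exponent = float(math.floor(math.log10(abs(value))))
    return value / 10**exponent, exponent

mantissa, exponent = strip_exponent_of(12500.0)  # -> (1.25, 4.0)
```

Returning the pair lets very large or very small contraction values (common for big networks) be handled without floating-point overflow or underflow.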