quimb.tensor.belief_propagation.bp_common
=========================================

.. py:module:: quimb.tensor.belief_propagation.bp_common


Classes
-------

.. autoapisummary::

   quimb.tensor.belief_propagation.bp_common.RollingDiffMean
   quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon


Functions
---------

.. autoapisummary::

   quimb.tensor.belief_propagation.bp_common.prod
   quimb.tensor.belief_propagation.bp_common.initialize_hyper_messages
   quimb.tensor.belief_propagation.bp_common.combine_local_contractions
   quimb.tensor.belief_propagation.bp_common.contract_hyper_messages
   quimb.tensor.belief_propagation.bp_common.compute_index_marginal
   quimb.tensor.belief_propagation.bp_common.compute_tensor_marginal
   quimb.tensor.belief_propagation.bp_common.compute_all_index_marginals_from_messages
   quimb.tensor.belief_propagation.bp_common.normalize_message_pair
   quimb.tensor.belief_propagation.bp_common.maybe_get_thread_pool
   quimb.tensor.belief_propagation.bp_common.create_lazy_community_edge_map
   quimb.tensor.belief_propagation.bp_common.auto_add_indices
   quimb.tensor.belief_propagation.bp_common.process_loop_series_expansion_weights


Module Contents
---------------

.. py:function:: prod(xs)

   Product of all elements in ``xs``.


.. py:class:: RollingDiffMean(size=16)

   Tracker for the absolute rolling mean of diffs between values, used to
   assess effective convergence of BP above the actual message tolerance.

   .. py:attribute:: size
      :value: 16

   .. py:attribute:: diffs
      :value: []

   .. py:attribute:: last_x
      :value: None

   .. py:attribute:: dxsum
      :value: 0.0

   .. py:method:: update(x)

   .. py:method:: absmeandiff()


.. py:class:: BeliefPropagationCommon(tn: quimb.tensor.TensorNetwork, *, damping=0.0, update='sequential', normalize=None, distance=None, contract_every=None, inplace=False)

   Common interfaces for belief propagation algorithms.

   :param tn: The tensor network to perform belief propagation on.
   :type tn: TensorNetwork
   :param damping: The damping factor to apply to messages. This simply mixes some part
                   of the old message into the new one, with the final message being
                   ``damping * old + (1 - damping) * new``. This makes convergence
                   more reliable but slower.
   :type damping: float or callable, optional
   :param update: Whether to update messages sequentially (newly computed messages are
                  immediately used for other updates in the same iteration round) or
                  in parallel (all messages are computed using messages from the
                  previous round only). Sequential generally helps convergence but
                  parallel can possibly converge to different solutions.
   :type update: {'sequential', 'parallel'}, optional
   :param normalize: How to normalize messages after each update. If None, choose
                     automatically. If a callable, it should take a message and return
                     the normalized message. If a string, it should be one of 'L1',
                     'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased'
                     is like 'L2' but also normalizes the phase of the message, and is
                     used by default for complex dtypes.
   :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional
   :param distance: How to compute the distance between messages to check for
                    convergence. If None, choose automatically. If a callable, it
                    should take two messages and return the distance. If a string, it
                    should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for
                    the corresponding norms. 'L2phased' is like 'L2' but also
                    normalizes the phases of the messages, and is used by default for
                    complex dtypes if phased normalization is not already being used.
   :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional
   :param contract_every: If not None, 'contract' (via BP) the tensor network every
                          ``contract_every`` iterations. The resulting values are
                          stored in ``zvals`` at corresponding points ``zval_its``.
   :type contract_every: int, optional
   :param inplace: Whether to perform any operations inplace on the input tensor
                   network.
   :type inplace: bool, optional

   .. py:attribute:: tn

   .. py:attribute:: backend

   .. py:attribute:: dtype

   .. py:attribute:: sign
      :value: 1.0

   .. py:attribute:: exponent

   .. py:property:: damping

   .. py:attribute:: update
      :value: 'sequential'

   .. py:property:: normalize

   .. py:property:: distance

   .. py:attribute:: contract_every
      :value: None

   .. py:attribute:: n
      :value: 0

   .. py:attribute:: converged
      :value: False

   .. py:attribute:: mdiffs
      :value: []

   .. py:attribute:: rdiffs
      :value: []

   .. py:attribute:: zval_its
      :value: []

   .. py:attribute:: zvals
      :value: []

   .. py:method:: _maybe_contract()

   .. py:method:: run(max_iterations=1000, diis=False, tol=5e-06, tol_abs=None, tol_rolling_diff=None, info=None, progbar=False)

      :param max_iterations: The maximum number of iterations to perform.
      :type max_iterations: int, optional
      :param diis: Whether to use direct inversion in the iterative subspace to help
                   converge the messages by extrapolating to low error guesses. If a
                   dict, should contain options for the DIIS algorithm. The relevant
                   options are {``max_history``, ``beta``, ``rcond``}.
      :type diis: bool or dict, optional
      :param tol: The convergence tolerance for messages.
      :type tol: float, optional
      :param tol_abs: The absolute convergence tolerance for maximum message update
                      distance, if not given then taken as ``tol``.
      :type tol_abs: float, optional
      :param tol_rolling_diff: The rolling mean convergence tolerance for maximum message update
                               distance, if not given then taken as ``tol``. This is used to stop
                               running when the messages are just bouncing around the same level,
                               without any overall upward or downward trend, roughly speaking.
      :type tol_rolling_diff: float, optional
      :param info: If supplied, the following information will be added to it:
                   ``converged`` (bool), ``iterations`` (int), ``max_mdiff`` (float),
                   ``rolling_abs_mean_diff`` (float).
      :type info: dict, optional
      :param progbar: Whether to show a progress bar.
      :type progbar: bool, optional

   .. py:method:: plot(zvals_yscale='asinh', **kwargs)

   .. py:property:: mdiff

   .. py:method:: iterate(tol=1e-06)
      :abstractmethod:

      Perform a single iteration of belief propagation. Subclasses should
      implement this method, returning either ``max_mdiff`` or a dictionary
      containing ``max_mdiff`` and any other relevant information::

          {
              "nconv": nconv,
              "ncheck": ncheck,
              "max_mdiff": max_mdiff,
          }

   .. py:method:: contract(strip_exponent=False, check_zero=True, **kwargs)
      :abstractmethod:

      Contract the tensor network and return the resulting value.

   .. py:method:: __repr__()
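As a concrete illustration of the update conventions documented above (not
library code), the following self-contained sketch applies the documented
damping rule ``damping * old + (1 - damping) * new`` followed by one of the
named normalization options. The helper ``damped_update`` is hypothetical,
written purely for illustration with real-valued messages:

.. code-block:: python

    import numpy as np

    def damped_update(old, new, damping=0.1, normalize="L2"):
        # mix part of the old message into the new one
        m = damping * old + (1 - damping) * new
        # normalize, mirroring the documented string options for real messages
        if normalize == "L1":
            m = m / np.sum(np.abs(m))
        elif normalize == "L2":
            m = m / np.linalg.norm(m)
        elif normalize == "Linf":
            m = m / np.max(np.abs(m))
        return m

    old = np.array([0.7, 0.3])
    new = np.array([0.5, 0.5])
    print(damped_update(old, new))  # the damped, L2-normalized message

Higher damping makes each round change the messages less, which is why the
docstring above describes convergence as more reliable but slower.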
.. py:function:: initialize_hyper_messages(tn, fill_fn=None, smudge_factor=1e-12)

   Initialize messages for belief propagation. This is equivalent to doing a
   single round of belief propagation with uniform messages.

   :param tn: The tensor network to initialize messages for.
   :type tn: TensorNetwork
   :param fill_fn: A function to fill the messages with, of signature ``fill_fn(shape)``.
   :type fill_fn: callable, optional
   :param smudge_factor: A small number to add to the messages to avoid numerical issues.
   :type smudge_factor: float, optional
   :returns: **messages** -- The initial messages. For every index and tensor id pair, there
             will be a message to and from, with keys ``(ix, tid)`` and ``(tid, ix)``.
   :rtype: dict
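The ``(ix, tid)`` / ``(tid, ix)`` keying convention of the returned
``messages`` dict can be sketched in plain ``numpy``. Note this toy version
simply fills uniform messages directly, rather than performing the single BP
round from uniform messages that the real function is documented to do, and
the ``tn_inds`` description of the network is hypothetical:

.. code-block:: python

    import numpy as np

    # toy network description: tensor id -> {index name: dimension}
    tn_inds = {
        0: {"a": 2, "b": 3},
        1: {"b": 3},
    }

    messages = {}
    for tid, inds in tn_inds.items():
        for ix, d in inds.items():
            # one message from the tensor to the index, and one back
            messages[tid, ix] = np.ones(d) / d
            messages[ix, tid] = np.ones(d) / d

    for key, m in messages.items():
        print(key, m)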
.. py:function:: combine_local_contractions(values, backend=None, strip_exponent=False, check_zero=True, mantissa=None, exponent=None)

   Combine a product of local contractions into a single value, avoiding
   overflow/underflow by accumulating the mantissa and the exponent
   separately.

   :param values: The values to combine, each paired with the power it should be raised to.
   :type values: sequence of (scalar, int)
   :param backend: The backend to use. Inferred from the first value if not given.
   :type backend: str, optional
   :param strip_exponent: Whether to return the mantissa and exponent separately.
   :type strip_exponent: bool, optional
   :param check_zero: Whether to check for zero values and return zero early.
   :type check_zero: bool, optional
   :param mantissa: The initial mantissa to accumulate into.
   :type mantissa: float, optional
   :param exponent: The initial exponent to accumulate into.
   :type exponent: float, optional
   :returns: **result** -- The combined value, or the mantissa and exponent separately.
   :rtype: float or (float, float)


.. py:function:: contract_hyper_messages(tn, messages, strip_exponent=False, check_zero=True, backend=None)

   Estimate the contraction of ``tn`` given ``messages``, via the exponential
   of the Bethe free entropy.


.. py:function:: compute_index_marginal(tn, ind, messages)

   Compute the marginal for a single index given ``messages``.

   :param tn: The tensor network to compute the marginal for.
   :type tn: TensorNetwork
   :param ind: The index to compute the marginal for.
   :type ind: int
   :param messages: The messages to use, which should match ``tn``.
   :type messages: dict
   :returns: **marginal** -- The marginal probability distribution for the index ``ind``.
   :rtype: array_like


.. py:function:: compute_tensor_marginal(tn, tid, messages)

   Compute the marginal for the region surrounding a single tensor/factor
   given ``messages``.

   :param tn: The tensor network to compute the marginal for.
   :type tn: TensorNetwork
   :param tid: The tensor id to compute the marginal for.
   :type tid: int
   :param messages: The messages to use, which should match ``tn``.
   :type messages: dict
   :returns: **marginal** -- The marginal probability distribution for the tensor/factor ``tid``.
   :rtype: array_like


.. py:function:: compute_all_index_marginals_from_messages(tn, messages)

   Compute all index marginals from belief propagation messages.

   :param tn: The tensor network to compute marginals for.
   :type tn: TensorNetwork
   :param messages: The belief propagation messages.
   :type messages: dict
   :returns: **marginals** -- The marginals for each index.
   :rtype: dict


.. py:function:: normalize_message_pair(mi, mj)

   Normalize a pair of messages such that ``<mi|mj> = 1`` and
   ``<mi|mi> = <mj|mj>`` (but in general != 1).


.. py:function:: maybe_get_thread_pool(thread_pool)

   Get a thread pool if requested.


.. py:function:: create_lazy_community_edge_map(tn, site_tags=None, rank_simplify=True)

   For lazy BP algorithms, create the data structures describing the
   effective graph of the lazily grouped 'sites' given by ``site_tags``.


.. py:function:: auto_add_indices(tn, regions)

   Make sure all indices incident to any tensor in each region are included
   in the region.
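To make the mantissa/exponent accumulation performed by
``combine_local_contractions`` above concrete, here is a minimal standalone
sketch (a hypothetical ``combine_sketch``, not the library implementation)
that multiplies ``(value, power)`` pairs while keeping the magnitude in a
base-10 exponent, so that very large and very small local values cannot
overflow or underflow:

.. code-block:: python

    import numpy as np

    def combine_sketch(values, strip_exponent=False):
        mantissa, exponent = 1.0, 0.0
        for v, p in values:
            if v == 0.0:
                # mirrors the documented ``check_zero`` early return
                return 0.0
            # fold the sign into the mantissa, the magnitude into the exponent
            mantissa *= np.sign(v) ** p
            exponent += p * np.log10(np.abs(v))
        if strip_exponent:
            return mantissa, exponent
        return mantissa * 10**exponent

    # values whose direct product would overflow/underflow float64
    values = [(1e-150, 2), (2e200, 1), (0.5, -1)]
    print(combine_sketch(values, strip_exponent=True))  # ~(1.0, -99.4)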
.. py:function:: process_loop_series_expansion_weights(weights, mantissa=1.0, exponent=0.0, multi_excitation_correct=True, maxiter_correction=100, tol_correction=1e-14, strip_exponent=False, return_all=False)

   Assuming a normalized BP fixed point, take a series of loop weights and
   iteratively compute the free energy by requiring self-consistency with
   exponential suppression factors. See https://arxiv.org/abs/2409.03108.
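For orientation only: the simplest way such loop weights enter a free energy
estimate is the single-excitation truncation, in which the BP value is
corrected by a factor of ``1 + sum(weights)``. The sketch below (a
hypothetical ``naive_loop_combine``) shows just that zeroth-order combination
together with the mantissa/exponent bookkeeping; it deliberately does not
reproduce the self-consistent multi-excitation correction described above and
in the cited paper:

.. code-block:: python

    import numpy as np

    def naive_loop_combine(weights, mantissa=1.0, exponent=0.0, strip_exponent=False):
        # single-excitation truncation: correction factor is 1 + sum of weights
        z = 1.0 + np.sum(weights)
        # accumulate into the supplied mantissa and base-10 exponent
        mantissa = mantissa * np.sign(z)
        exponent = exponent + np.log10(np.abs(z))
        if strip_exponent:
            return mantissa, exponent
        return mantissa * 10**exponent

    print(naive_loop_combine([0.01, -0.003, 0.0004], strip_exponent=True))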