quimb.tensor.belief_propagation
===============================

.. py:module:: quimb.tensor.belief_propagation

.. autoapi-nested-parse::

   Belief propagation (BP) routines. There are three potential categorizations of BP, and each combination of them is a potentially valid specific algorithm.

   1-norm vs 2-norm BP
   -------------------

   - 1-norm (normal): BP runs directly on the tensor network, messages have size ``d`` where ``d`` is the size of the bond(s) connecting two tensors or regions.
   - 2-norm (quantum): BP runs on the squared tensor network, messages have size ``d^2`` where ``d`` is the size of the bond(s) connecting two tensors or regions. Each local tensor or region is partially traced (over dangling indices) with its conjugate to create a single node.

   Graph vs Hypergraph BP
   ----------------------

   - Graph (simple): the tensor network lives on a graph, where indices either appear on two tensors (a bond), or appear on a single tensor (are outputs). In this case, messages are exchanged directly between tensors.
   - Hypergraph: the tensor network lives on a hypergraph, where indices can appear on any number of tensors. In this case, the update procedure has two parts: first all 'tensor' messages are computed, then these are used in the second step to compute all the 'index' messages, which are then fed back into the 'tensor' message update, and so forth. For 2-norm BP one likely needs to specify which indices are outputs and should be traced over.

   The hypergraph case of course includes the graph case, but since the 'index' message update is simply the identity, it is convenient to have a separate, simpler implementation, where the standard TN bond vs physical index definitions hold.

   Dense vs Vectorized vs Lazy BP
   ------------------------------

   - Dense: each node is a single tensor, or pair of tensors for 2-norm BP. If all multibonds have been fused, then each message is a vector (1-norm case) or matrix (2-norm case).
   - Vectorized: the same as the above, but all matching tensor updates and message updates are stacked and performed simultaneously. This can be enormously more efficient for large numbers of small tensors.
   - Lazy: each node is potentially a tensor network itself with arbitrary inner structure and number of bonds connecting to other nodes. The messages are generally tensors and each update is a lazy contraction, which is potentially much cheaper / requires less memory than forming the 'dense' node for large tensors. (There is also the MPS flavor where each node has a 1D structure and the messages are matrix product states, with updates involving compression.)

   Overall that gives 12 possible BP flavors, some implemented here:

   - [x] (HD1BP) hyper, dense, 1-norm - this is the standard BP algorithm
   - [x] (HD2BP) hyper, dense, 2-norm
   - [x] (HV1BP) hyper, vectorized, 1-norm
   - [ ] (HV2BP) hyper, vectorized, 2-norm
   - [ ] (HL1BP) hyper, lazy, 1-norm
   - [ ] (HL2BP) hyper, lazy, 2-norm
   - [x] (D1BP) simple, dense, 1-norm - simple BP for simple tensor networks
   - [x] (D2BP) simple, dense, 2-norm - this is the standard PEPS BP algorithm
   - [ ] (V1BP) simple, vectorized, 1-norm
   - [ ] (V2BP) simple, vectorized, 2-norm
   - [x] (L1BP) simple, lazy, 1-norm
   - [x] (L2BP) simple, lazy, 2-norm

   The 2-norm methods can be used to compress bonds or estimate the 2-norm. The 1-norm methods can be used to estimate the 1-norm, i.e. the contracted value. Both methods can be used to compute index marginals and thus perform sampling.
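As a rough orientation, the two norm flavors map onto two common tasks. The following is a minimal sketch (the random network constructors are from the main ``quimb.tensor`` namespace and the sizes shown are illustrative only):

.. code-block:: python

    import quimb.tensor as qtn
    import quimb.tensor.belief_propagation as qbp

    # 1-norm BP: estimate the contracted value of a closed network
    tn = qtn.TN2D_rand(6, 6, D=2, seed=42)
    z = qbp.contract_d1bp(tn)

    # 2-norm BP: estimate the norm squared of a PEPS-like state, with BP
    # run on the <psi|psi> network formed internally
    psi = qtn.PEPS.rand(6, 6, bond_dim=2, seed=42)
    n2 = qbp.contract_d2bp(psi)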
The vectorized methods can be extremely fast for large numbers of small tensors, but do currently require all dimensions to match. The dense and lazy methods can converge messages *locally*, i.e. only update messages adjacent to messages which have changed.

Submodules
----------

.. toctree::
   :maxdepth: 1

   /autoapi/quimb/tensor/belief_propagation/bp_common/index
   /autoapi/quimb/tensor/belief_propagation/d1bp/index
   /autoapi/quimb/tensor/belief_propagation/d2bp/index
   /autoapi/quimb/tensor/belief_propagation/diis/index
   /autoapi/quimb/tensor/belief_propagation/hd1bp/index
   /autoapi/quimb/tensor/belief_propagation/hv1bp/index
   /autoapi/quimb/tensor/belief_propagation/l1bp/index
   /autoapi/quimb/tensor/belief_propagation/l2bp/index
   /autoapi/quimb/tensor/belief_propagation/regions/index

Classes
-------

.. autoapisummary::

   quimb.tensor.belief_propagation.D1BP
   quimb.tensor.belief_propagation.D2BP
   quimb.tensor.belief_propagation.HD1BP
   quimb.tensor.belief_propagation.HV1BP
   quimb.tensor.belief_propagation.L1BP
   quimb.tensor.belief_propagation.L2BP
   quimb.tensor.belief_propagation.RegionGraph

Functions
---------

.. autoapisummary::

   quimb.tensor.belief_propagation.combine_local_contractions
   quimb.tensor.belief_propagation.initialize_hyper_messages
   quimb.tensor.belief_propagation.contract_d1bp
   quimb.tensor.belief_propagation.compress_d2bp
   quimb.tensor.belief_propagation.contract_d2bp
   quimb.tensor.belief_propagation.sample_d2bp
   quimb.tensor.belief_propagation.contract_hd1bp
   quimb.tensor.belief_propagation.sample_hd1bp
   quimb.tensor.belief_propagation.contract_hv1bp
   quimb.tensor.belief_propagation.sample_hv1bp
   quimb.tensor.belief_propagation.contract_l1bp
   quimb.tensor.belief_propagation.compress_l2bp
   quimb.tensor.belief_propagation.contract_l2bp

Package Contents
----------------

.. py:function:: combine_local_contractions(values, backend=None, strip_exponent=False, check_zero=True, mantissa=None, exponent=None)

   Combine a product of local contractions into a single value, avoiding overflow/underflow by accumulating the mantissa and exponent separately.

   :param values: The values to combine, each with a power to be raised to.
   :type values: sequence of (scalar, int)
   :param backend: The backend to use. Inferred from the first value if not given.
   :type backend: str, optional
   :param strip_exponent: Whether to return the mantissa and exponent separately.
   :type strip_exponent: bool, optional
   :param check_zero: Whether to check for zero values and return zero early.
   :type check_zero: bool, optional
   :param mantissa: The initial mantissa to accumulate into.
   :type mantissa: float, optional
   :param exponent: The initial exponent to accumulate into.
   :type exponent: float, optional
   :returns: **result** -- The combined value, or the mantissa and exponent separately.
   :rtype: float or (float, float)

.. py:function:: initialize_hyper_messages(tn, fill_fn=None, smudge_factor=1e-12)

   Initialize messages for belief propagation, this is equivalent to doing a single round of belief propagation with uniform messages.

   :param tn: The tensor network to initialize messages for.
   :type tn: TensorNetwork
   :param fill_fn: A function to fill the messages with, of signature ``fill_fn(shape)``.
   :type fill_fn: callable, optional
   :param smudge_factor: A small number to add to the messages to avoid numerical issues.
   :type smudge_factor: float, optional
   :returns: **messages** -- The initial messages. For every index and tensor id pair, there will be a message to and from with keys ``(ix, tid)`` and ``(tid, ix)``.
   :rtype: dict
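For example, ``combine_local_contractions`` forms the product of each value raised to its accompanying power. A minimal sketch, assuming the separated exponent is base 10 (consistent with the log10 convention noted for ``D2BP.normalize_tensors`` below):

.. code-block:: python

    from quimb.tensor.belief_propagation import combine_local_contractions

    # hypothetical local contraction values with counting powers, e.g.
    # tensor/region terms counting +1 and shared bond terms counting -1
    values = [(2.0, 1), (3.0, 1), (0.5, -1)]

    # combined directly: 2.0**1 * 3.0**1 * 0.5**(-1) = 12.0
    z = combine_local_contractions(values)

    # or keep mantissa and exponent separate to avoid overflow/underflow
    m, e = combine_local_contractions(values, strip_exponent=True)
    # z == m * 10**e

..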
py:class:: D1BP(tn: quimb.tensor.TensorNetwork, *, messages=None, damping=0.0, update='sequential', normalize=None, distance=None, local_convergence=True, message_init_function=None, contract_every=None, inplace=False)

   Bases: :py:obj:`quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon`

   Dense (as in one tensor per site) 1-norm (as in for 'classical' systems) belief propagation algorithm. Allows message reuse. This version assumes no hyper indices (i.e. a standard tensor network). This is the simplest version of belief propagation.

   :param tn: The tensor network to run BP on.
   :type tn: TensorNetwork
   :param messages: The initial messages to use, effectively defaults to all ones if not specified.
   :type messages: dict[(str, int), array_like], optional
   :param damping: The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being ``damping * old + (1 - damping) * new``. This makes convergence more reliable but slower.
   :type damping: float or callable, optional
   :param update: Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence but parallel can possibly converge to different solutions.
   :type update: {'sequential', 'parallel'}, optional
   :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes.
   :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional
   :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
   :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional
   :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
   :type local_convergence: bool, optional
   :param contract_every: If not None, 'contract' (via BP) the tensor network every ``contract_every`` iterations. The resulting values are stored in ``zvals`` at corresponding points ``zval_its``.
   :type contract_every: int, optional
   :param inplace: Whether to perform any operations inplace on the input tensor network.
   :type inplace: bool, optional

   .. attribute:: tn

      The target tensor network.

      :type: TensorNetwork

   .. attribute:: messages

      The current messages. The key is a tuple of the index and tensor id that the message is being sent to.

      :type: dict[(str, int), array_like]

   .. attribute:: key_pairs

      A dictionary mapping the key of a message to the key of the message propagating in the opposite direction.

      :type: dict[(str, int), (str, int)]

   .. py:attribute:: local_convergence
      :value: True

   .. py:attribute:: touched

   .. py:attribute:: key_pairs

   .. py:method:: iterate(tol=5e-06)

      Perform a single iteration of belief propagation.
Subclasses should implement this method, returning either `max_mdiff` or a dictionary containing `max_mdiff` and any other relevant information:

          {
              "nconv": nconv,
              "ncheck": ncheck,
              "max_mdiff": max_mdiff,
          }

   .. py:method:: normalize_message_pairs()

      Normalize all messages such that for each bond ``<mi|mj> = 1`` and ``<mi|mi> = <mj|mj>`` (but in general != 1).

   .. py:method:: normalize_tensors(strip_exponent=True)

      Normalize every local tensor contraction so that it equals 1. Gather the overall normalization factor into ``self.exponent`` and the sign into ``self.sign`` by default.

      :param strip_exponent: Whether to collect the sign and exponent. If ``False`` then the value of the BP contraction is set to 1.
      :type strip_exponent: bool, optional

   .. py:method:: get_gauged_tn()

      Gauge the original TN by inserting the BP-approximated transfer matrix eigenvectors, which may be complex. The BP-contraction of this gauged network is then simply the product of zeroth entries of each tensor.

   .. py:method:: get_cluster(tids)

      Get the region of tensors given by `tids`, with the messages on the border contracted in, removing those dangling indices.

      :param tids: The tensor ids forming a region.
      :type tids: sequence of int
      :rtype: TensorNetwork

   .. py:method:: get_cluster_excited(tids)

      Get the local tensor network for ``tids`` with BP messages inserted on the boundary and excitation projectors inserted on the inner bonds. See https://arxiv.org/abs/2409.03108 for more details.

   .. py:method:: contract_loop_series_expansion(gloops=None, multi_excitation_correct=True, tol_correction=1e-12, maxiter_correction=100, strip_exponent=False, optimize='auto-hq', **contract_opts)

      Contract the tensor network using the same procedure as in https://arxiv.org/abs/2409.03108 - "Loop Series Expansions for Tensor Networks".

      :param gloops: The gloop sizes to use. If an integer, then generate all gloop sizes up to this size. If a tuple, then use these gloops.
      :type gloops: int or iterable of tuples, optional
      :param multi_excitation_correct: Whether to use the multi-excitation correction. If ``True``, then the free energy is refined iteratively until self consistent.
      :type multi_excitation_correct: bool, optional
      :param tol_correction: The tolerance for the multi-excitation correction.
      :type tol_correction: float, optional
      :param maxiter_correction: The maximum number of iterations for the multi-excitation correction.
      :type maxiter_correction: int, optional
      :param strip_exponent: Whether to strip the exponent from the final result. If ``True`` then the returned result is ``(mantissa, exponent)``.
      :type strip_exponent: bool, optional
      :param optimize: The path optimizer to use when contracting the messages.
      :type optimize: str or PathOptimizer, optional
      :param contract_opts: Other options supplied to ``TensorNetwork.contract``.

   .. py:method:: local_tensor_contract(tid)

      Contract the messages around tensor ``tid``.

   .. py:method:: local_message_contract(ix)

      Contract the messages at index ``ix``.

   .. py:method:: contract(strip_exponent=False, check_zero=True, **kwargs)

      Estimate the contraction of the tensor network.

   .. py:method:: contract_with_loops(max_loop_length=None, min_loop_length=1, optimize='auto-hq', strip_exponent=False, check_zero=True, **contract_opts)

      Estimate the contraction of the tensor network, including loop corrections.

   .. py:method:: contract_gloop_expand(gloops=None, autocomplete=True, strip_exponent=False, check_zero=True, optimize='auto-hq', combine='prod', **contract_opts)

      Contract the tensor network using generalized loop cluster expansion.
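A minimal usage sketch of the object interface (the ``run`` method is assumed to be inherited from ``BeliefPropagationCommon``, and ``TN2D_rand`` is from the main ``quimb.tensor`` namespace):

.. code-block:: python

    import quimb.tensor as qtn
    from quimb.tensor.belief_propagation import D1BP

    tn = qtn.TN2D_rand(4, 4, D=3, seed=42)

    bp = D1BP(tn)
    bp.run(max_iterations=100, tol=1e-6)  # assumed base class method

    # plain BP estimate of the full contraction
    z0 = bp.contract()

    # refined estimate including loop corrections, with generalized
    # loops ('gloops') generated up to size 4
    z1 = bp.contract_loop_series_expansion(gloops=4)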
.. py:function:: contract_d1bp(tn, *, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, local_convergence=True, strip_exponent=False, check_zero=True, info=None, progbar=False, **contract_opts)

   Estimate the contraction of the standard tensor network ``tn`` using dense 1-norm belief propagation.

   :param tn: The tensor network to contract, it should have no dangling or hyper indices.
   :type tn: TensorNetwork
   :param max_iterations: The maximum number of iterations to perform.
   :type max_iterations: int, optional
   :param tol: The convergence tolerance for messages.
   :type tol: float, optional
   :param damping: The damping parameter to use, defaults to no damping.
   :type damping: float, optional
   :param diis: Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {`max_history`, `beta`, `rcond`}.
   :type diis: bool or dict, optional
   :param update: Whether to update messages sequentially or in parallel.
   :type update: {'sequential', 'parallel'}, optional
   :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes.
   :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional
   :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
   :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional
   :param tol_abs: The absolute convergence tolerance for maximum message update distance, if not given then taken as ``tol``.
   :type tol_abs: float, optional
   :param tol_rolling_diff: The rolling mean convergence tolerance for maximum message update distance, if not given then taken as ``tol``. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking.
   :type tol_rolling_diff: float, optional
   :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
   :type local_convergence: bool, optional
   :param strip_exponent: Whether to return the mantissa and exponent separately.
   :type strip_exponent: bool, optional
   :param check_zero: Whether to check for zero values and return zero early.
   :type check_zero: bool, optional
   :param info: If supplied, the following information will be added to it: ``converged`` (bool), ``iterations`` (int), ``max_mdiff`` (float), ``rolling_abs_mean_diff`` (float).
   :type info: dict, optional
   :param progbar: Whether to show a progress bar.
   :type progbar: bool, optional
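For large networks the contraction value can easily over- or underflow a float, so the mantissa and exponent can be requested separately. A minimal sketch, assuming a base-10 exponent as used elsewhere in this module:

.. code-block:: python

    import quimb.tensor as qtn
    from quimb.tensor.belief_propagation import contract_d1bp

    tn = qtn.TN2D_rand(8, 8, D=3, seed=42)

    info = {}
    mantissa, exponent = contract_d1bp(
        tn,
        strip_exponent=True,  # return (mantissa, exponent), not a scalar
        info=info,            # collect convergence information
    )
    # the estimated value is mantissa * 10**exponent
    print(info["converged"], info["iterations"])

..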
py:class:: D2BP(tn, *, messages=None, output_inds=None, optimize='auto-hq', damping=0.0, update='sequential', normalize=None, distance=None, local_convergence=True, contract_every=None, inplace=False, **contract_opts)

   Bases: :py:obj:`quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon`

   Dense (as in one tensor per site) 2-norm (as in for wavefunctions and operators) belief propagation. Allows message reuse. This version assumes no hyper indices (i.e. a standard PEPS like tensor network).

   Potential use cases for D2BP and a PEPS like tensor network are:

   - globally compressing it from bond dimension ``D`` to ``D'``
   - eagerly applying gates and locally compressing back to ``D``
   - sampling configurations
   - estimating the norm of the tensor network

   :param tn: The tensor network to form the 2-norm of and run BP on.
   :type tn: TensorNetwork
   :param messages: The initial messages to use, effectively defaults to all ones if not specified.
   :type messages: dict[(str, int), array_like], optional
   :param output_inds: The indices to consider as output (dangling) indices of the tn. Computed automatically if not specified.
   :type output_inds: set[str], optional
   :param optimize: The path optimizer to use when contracting the messages.
   :type optimize: str or PathOptimizer, optional
   :param damping: The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being ``damping * old + (1 - damping) * new``. This makes convergence more reliable but slower.
   :type damping: float or callable, optional
   :param update: Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence but parallel can possibly converge to different solutions.
   :type update: {'sequential', 'parallel'}, optional
   :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes.
   :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional
   :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
   :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional
   :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
   :type local_convergence: bool, optional
   :param contract_every: If not None, 'contract' (via BP) the tensor network every ``contract_every`` iterations. The resulting values are stored in ``zvals`` at corresponding points ``zval_its``.
   :type contract_every: int, optional
   :param inplace: Whether to perform any operations inplace on the input tensor network.
   :type inplace: bool, optional
   :param contract_opts: Other options supplied to ``cotengra.array_contract``.

..
py:attribute:: contract_opts

   .. py:attribute:: local_convergence
      :value: True

   .. py:attribute:: touch_map

   .. py:attribute:: touched

   .. py:attribute:: exprs

   .. py:method:: build_expr(ix)

   .. py:method:: update_touched_from_tids(*tids)

      Specify that the messages for the given ``tids`` have changed.

   .. py:method:: update_touched_from_tags(tags, which='any')

      Specify that the messages touching ``tags`` have changed.

   .. py:method:: update_touched_from_inds(inds, which='any')

      Specify that the messages touching ``inds`` have changed.

   .. py:method:: iterate(tol=5e-06)

      Perform a single iteration of dense 2-norm belief propagation.

   .. py:method:: compute_marginal(ind)

      Compute the marginal for the index ``ind``.

   .. py:method:: normalize_message_pairs()

      Normalize a pair of messages such that ``<mi|mj> = 1`` and ``<mi|mi> = <mj|mj>`` (but in general != 1).

   .. py:method:: local_tensor_contract(tid)

      Contract the local region of the tensor at ``tid``.

   .. py:method:: normalize_tensors(strip_exponent=True)

      Normalize the tensors in the tensor network such that their 2-norm is 1. If ``strip_exponent`` is ``True`` then accrue the phase and exponent (log10) into the ``sign`` and ``exponent`` attributes of the D2BP object (the default), contract methods can then reinsert these factors when returning the final result.

   .. py:method:: contract(strip_exponent=False, check_zero=True)

      Estimate the total contraction, i.e. the 2-norm.

      :param strip_exponent: Whether to strip the exponent from the final result. If ``True`` then the returned result is ``(mantissa, exponent)``.
      :type strip_exponent: bool, optional
      :rtype: scalar or (scalar, float)

   .. py:method:: get_cluster_excited(tids=None, partial_trace_map=(), exclude=())

      Get the local norm tensor network for ``tids`` with BP messages inserted on the boundary and excitation projectors inserted on the inner bonds. See arxiv.org/abs/2409.03108 for more details.

      :param tids: The tensor ids to include in the cluster.
      :type tids: iterable of hashable
      :param partial_trace_map: A remapping of ket indices to bra indices to perform an effective partial trace.
      :type partial_trace_map: dict[str, str], optional
      :param exclude: A set of bond indices to exclude from inserting excitation projectors on, e.g. when forming a reduced density matrix.
      :type exclude: iterable of str, optional
      :rtype: TensorNetwork

   .. py:method:: contract_loop_series_expansion(gloops=None, multi_excitation_correct=True, tol_correction=1e-12, maxiter_correction=100, strip_exponent=False, optimize='auto-hq', **contract_opts)

      Contract the norm of the tensor network using the same procedure as in https://arxiv.org/abs/2409.03108 - "Loop Series Expansions for Tensor Networks".

      :param gloops: The gloop sizes to use. If an integer, then generate all gloop sizes up to this size. If a tuple, then use these gloops.
      :type gloops: int or iterable of tuples, optional
      :param multi_excitation_correct: Whether to use the multi-excitation correction. If ``True``, then the free energy is refined iteratively until self consistent.
      :type multi_excitation_correct: bool, optional
      :param tol_correction: The tolerance for the multi-excitation correction.
      :type tol_correction: float, optional
      :param maxiter_correction: The maximum number of iterations for the multi-excitation correction.
      :type maxiter_correction: int, optional
      :param strip_exponent: Whether to strip the exponent from the final result. If ``True`` then the returned result is ``(mantissa, exponent)``.
:type strip_exponent: bool, optional
      :param optimize: The path optimizer to use when contracting the messages.
      :type optimize: str or PathOptimizer, optional
      :param contract_opts: Other options supplied to ``TensorNetwork.contract``.

   .. py:method:: partial_trace_loop_series_expansion(where, gloops=None, normalized=True, grow_from='alldangle', strict_size=True, multi_excitation_correct=True, optimize='auto-hq', **contract_opts)

      Compute the reduced density matrix for the sites specified by ``where`` using the loop series expansion method from https://arxiv.org/abs/2409.03108 - "Loop Series Expansions for Tensor Networks".

      :param where: The sites to form the reduced density matrix of.
      :type where: sequence[hashable]
      :param gloops: The generalized loops to use, or an integer to automatically generate all up to a certain size. If None, use the smallest non-trivial size.
      :type gloops: int or iterable of tuples, optional
      :param normalized: Whether to normalize the final density matrix.
      :type normalized: bool, optional
      :param grow_from: How to grow the generalized loops from the specified ``where``:

         - 'alldangle': clusters up to max size, where target sites are allowed to dangle.
         - 'all': clusters where the loop, up to max size, has to include *all* target sites.
         - 'any': clusters where the loop, up to max size, can include *any* of the target sites. Remaining target sites are added as extras.

         By default 'alldangle'.
      :type grow_from: {'alldangle', 'all', 'any'}, optional
      :param strict_size: Whether to enforce the maximum size of the generalized loops, only relevant for `grow_from="any"`.
      :type strict_size: bool, optional
      :param multi_excitation_correct: Whether to use the multi-excitation correction. If ``True``, then the free energy is refined iteratively until self consistent.
      :type multi_excitation_correct: bool, optional
      :param optimize: The path optimizer to use when contracting the messages.
      :type optimize: str or PathOptimizer, optional
      :param contract_opts: Other options supplied to ``TensorNetwork.contract``.

   .. py:method:: contract_gloop_expand(gloops=None, autocomplete=True, optimize='auto-hq', strip_exponent=False, check_zero=True, info=None, progbar=False, **contract_opts)

   .. py:method:: compress(max_bond, cutoff=0.0, cutoff_mode=4, renorm=0, inplace=False)

      Compress the initial tensor network using the current messages.

   .. py:method:: gauge_insert(tn, smudge=1e-12)

      Insert the sqrt of messages on the boundary of a part of the main BP TN.

      :param tn: The tensor network to insert the messages into.
      :type tn: TensorNetwork
      :param smudge: Smudge factor to avoid numerical issues, the eigenvalues of the messages are clipped to be at least the largest eigenvalue times this factor.
      :type smudge: float, optional
      :returns: The sequence of tensors, indices and inverse gauges to apply to reverse the gauges applied.
      :rtype: list[tuple[Tensor, str, array_like]]

   .. py:method:: gauge_temp(tn, ungauge_outer=True)

      Context manager to temporarily gauge a tensor network, presumably a subnetwork of the main BP network, using the current messages, and then un-gauge it afterwards.

      :param tn: The tensor network to gauge.
      :type tn: TensorNetwork
      :param ungauge_outer: Whether to un-gauge the outer indices of the tensor network.
      :type ungauge_outer: bool, optional

   .. py:method:: gate_(G, where, max_bond=None, cutoff=0.0, cutoff_mode='rsum2', renorm=0, tn=None, **gate_opts)

      Apply a gate to the tensor network at the specified sites, using the current messages to gauge the tensors.

..
py:method:: get_cluster_norm(tids, partial_trace_map=())

      Get the local norm tensor network for ``tids`` with BP messages inserted on the boundary. Optionally open some physical indices up to perform an effective partial trace.

      :param tids: The tensor ids to include in the cluster.
      :type tids: iterable of hashable
      :param partial_trace_map: A remapping of ket indices to bra indices to perform an effective partial trace.
      :type partial_trace_map: dict[str, str], optional
      :rtype: TensorNetwork

   .. py:method:: partial_trace(where, normalized=True, tids_region=None, get='matrix', bra_ind_id=None, optimize='auto-hq', **contract_opts)

      Get the reduced density matrix for the sites specified by ``where``, with the remaining network approximated by messages on the boundary.

      :param where: The sites to form the reduced density matrix of.
      :type where: sequence[hashable]
      :param get: The type of object to return. If 'tn', return the uncontracted tensor network object. If 'tensor', return the labelled density operator as a `Tensor`. If 'array', return the unfused raw array with 2 * len(where) dimensions. If 'matrix', fuse the ket and bra indices and return this 2D matrix.
      :type get: {'tn', 'tensor', 'array', 'matrix'}, optional
      :param bra_ind_id: If ``get="tn"``, how to label the bra indices. If None, use the default based on the current site_ind_id.
      :type bra_ind_id: str, optional
      :param optimize: The path optimizer to use when contracting the tensor network.
      :type optimize: str or PathOptimizer, optional
      :param contract_opts: Other options supplied to ``TensorNetwork.contract``.
      :rtype: TensorNetwork or Tensor or array

   .. py:method:: partial_trace_gloop_expand(where, gloops=None, combine='sum', normalized=True, grow_from='alldangle', strict_size=True, optimize='auto-hq', **contract_opts)

      Compute a reduced density matrix for the sites specified by ``where`` using the generalized loop cluster expansion.

      :param where: The sites to form the reduced density matrix of.
      :type where: sequence[hashable]
      :param gloops: The generalized loops to use, or an integer to automatically generate all up to a certain size. If None, use the smallest non-trivial size.
      :type gloops: int or iterable of tuples, optional
      :param combine: How to combine the contributions from each generalized loop. If 'sum', use coefficient weighted addition. If 'prod', use power weighted multiplication.
      :type combine: {'sum', 'prod'}, optional
      :param normalized: Whether to normalize the density matrix. If True or "local", normalize each cluster density matrix by its trace. If "separate", normalize the final density matrix by its trace (usually less accurate). If False, do not normalize.
      :type normalized: bool or {"local", "separate"}, optional
      :param grow_from: How to grow the generalized loops from the specified ``where``:

         - 'alldangle': clusters up to max size, where target sites are allowed to dangle.
         - 'all': clusters where the loop, up to max size, has to include *all* target sites.
         - 'any': clusters where the loop, up to max size, can include *any* of the target sites. Remaining target sites are added as extras.

         By default 'alldangle'.
      :type grow_from: {'alldangle', 'all', 'any'}, optional
      :param strict_size: Whether to enforce the maximum size of the generalized loops, only relevant for `grow_from="any"`.
      :type strict_size: bool, optional
      :param optimize: The path optimizer to use when contracting the tensor network.
      :type optimize: str or PathOptimizer, optional
      :param contract_opts: Other options supplied to ``TensorNetwork.contract``.
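A typical D2BP workflow on a PEPS. A minimal sketch (``run`` is assumed from ``BeliefPropagationCommon``, ``PEPS.rand`` is from the main ``quimb.tensor`` namespace, and ``compress`` is assumed to return the compressed network when ``inplace=False``):

.. code-block:: python

    import quimb.tensor as qtn
    from quimb.tensor.belief_propagation import D2BP

    psi = qtn.PEPS.rand(6, 6, bond_dim=4, seed=42)

    bp = D2BP(psi)  # forms and runs on the <psi|psi> network
    bp.run()        # assumed base class method

    # estimate of the norm squared
    n2 = bp.contract()

    # single site reduced density matrix, with the environment
    # approximated by the converged messages
    rho = bp.partial_trace([(3, 3)], normalized=True)

    # compress the original PEPS from bond dimension 4 down to 2
    psi_c = bp.compress(max_bond=2)

..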
py:function:: compress_d2bp(tn, max_bond, cutoff=0.0, cutoff_mode='rsum2', renorm=0, messages=None, output_inds=None, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, local_convergence=True, optimize='auto-hq', inplace=False, info=None, progbar=False, **contract_opts) Compress the tensor network ``tn`` using dense 2-norm belief propagation. :param tn: The tensor network to form the 2-norm of, run BP on and then compress. :type tn: TensorNetwork :param max_bond: The maximum bond dimension to compress to. :type max_bond: int :param cutoff: The cutoff to use when compressing. :type cutoff: float, optional :param cutoff_mode: The cutoff mode to use when compressing. :type cutoff_mode: int, optional :param renorm: Whether to renormalize the singular values when compressing. :type renorm: float, optional :param messages: The initial messages to use, effectively defaults to all ones if not specified. :type messages: dict[(str, int), array_like], optional :param output_inds: The indices to consider as output (dangling) indices of the tn. Computed automatically if not specified. :type output_inds: set[str], optional :param max_iterations: The maximum number of iterations to perform. :type max_iterations: int, optional :param tol: The convergence tolerance for messages. :type tol: float, optional :param damping: The damping parameter to use, defaults to no damping. :type damping: float, optional :param diis: Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {`max_history`, `beta`, `rcond`}. :type diis: bool or dict, optional :param update: Whether to update messages sequentially or in parallel. :type update: {'sequential', 'parallel'}, optional :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes. :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used. :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional :param tol_abs: The absolute convergence tolerance for maximum message update distance, if not given then taken as ``tol``. :type tol_abs: float, optional :param tol_rolling_diff: The rolling mean convergence tolerance for maximum message update distance, if not given then taken as ``tol``. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking. :type tol_rolling_diff: float, optional :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them. 
:type local_convergence: bool, optional :param optimize: The path optimizer to use when contracting the messages. :type optimize: str or PathOptimizer, optional :param inplace: Whether to perform the compression inplace. :type inplace: bool, optional :param info: If specified, update this dictionary with information about the belief propagation run. :type info: dict, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :param contract_opts: Other options supplied to ``cotengra.array_contract``. :rtype: TensorNetwork .. py:function:: contract_d2bp(tn, *, messages=None, output_inds=None, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, local_convergence=True, optimize='auto-hq', strip_exponent=False, check_zero=True, info=None, progbar=False, **contract_opts) Estimate the norm squared of ``tn`` using dense 2-norm belief propagation (no hyper indices). :param tn: The tensor network to form the 2-norm of and run BP on. :type tn: TensorNetwork :param messages: The initial messages to use, effectively defaults to all ones if not specified. :type messages: dict[(str, int), array_like], optional :param output_inds: The indices to consider as output (dangling) indices of the tn. Computed automatically if not specified. :type output_inds: set[str], optional :param max_iterations: The maximum number of iterations to perform. :type max_iterations: int, optional :param tol: The convergence tolerance for messages. :type tol: float, optional :param damping: The damping parameter to use, defaults to no damping. :type damping: float, optional :param diis: Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {`max_history`, `beta`, `rcond`}. :type diis: bool or dict, optional :param update: Whether to update messages sequentially or in parallel. :type update: {'sequential', 'parallel'}, optional :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes. :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used. :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional :param tol_abs: The absolute convergence tolerance for maximum message update distance, if not given then taken as ``tol``. :type tol_abs: float, optional :param tol_rolling_diff: The rolling mean convergence tolerance for maximum message update distance, if not given then taken as ``tol``. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking. 
:type tol_rolling_diff: float, optional :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them. :type local_convergence: bool, optional :param optimize: The path optimizer to use when contracting the messages. :type optimize: str or PathOptimizer, optional :param strip_exponent: Whether to return the mantissa and exponent separately. :type strip_exponent: bool, optional :param check_zero: Whether to check for zero values and return zero early. :type check_zero: bool, optional :param info: If supplied, the following information will be added to it: ``converged`` (bool), ``iterations`` (int), ``max_mdiff`` (float), ``rolling_abs_mean_diff`` (float). :type info: dict, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :param contract_opts: Other options supplied to ``cotengra.array_contract``. :rtype: scalar or (scalar, float) .. py:function:: sample_d2bp(tn, output_inds=None, messages=None, max_iterations=100, tol=0.01, bias=None, seed=None, optimize='auto-hq', damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, local_convergence=True, progbar=False, **contract_opts) Sample a configuration from ``tn`` using dense 2-norm belief propagation. :param tn: The tensor network to sample from. :type tn: TensorNetwork :param output_inds: Which indices to sample. :type output_inds: set[str], optional :param messages: The initial messages to use, effectively defaults to all ones if not specified. :type messages: dict[(str, int), array_like], optional :param max_iterations: The maximum number of iterations to perform, per marginal. :type max_iterations: int, optional :param tol: The convergence tolerance for messages. :type tol: float, optional :param bias: Bias the sampling towards more locally likely bit-strings. This is done by raising the probability of each bit-string to this power. :type bias: float, optional :param seed: A random seed for reproducibility. :type seed: int, optional :param optimize: The path optimizer to use when contracting the messages. :type optimize: str or PathOptimizer, optional :param damping: The damping parameter to use, defaults to no damping. :type damping: float, optional :param diis: Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {`max_history`, `beta`, `rcond`}. :type diis: bool or dict, optional :param update: Whether to update messages sequentially or in parallel. :type update: {'sequential', 'parallel'}, optional :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes. :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 
'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
   :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional
   :param tol_abs: The absolute convergence tolerance for maximum message update distance, if not given then taken as ``tol``.
   :type tol_abs: float, optional
   :param tol_rolling_diff: The rolling mean convergence tolerance for maximum message update distance, if not given then taken as ``tol``. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking.
   :type tol_rolling_diff: float, optional
   :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
   :type local_convergence: bool, optional
   :param progbar: Whether to show a progress bar.
   :type progbar: bool, optional
   :param contract_opts: Other options supplied to ``cotengra.array_contract``.
   :returns: * **config** (*dict[str, int]*) -- The sampled configuration, a mapping of output indices to values.
             * **tn_config** (*TensorNetwork*) -- The tensor network with the sampled configuration applied.
             * **omega** (*float*) -- The BP probability of the sampled configuration.

.. py:class:: HD1BP(tn, *, messages=None, damping=0.0, update='sequential', normalize=None, distance=None, smudge_factor=1e-12, inplace=False)

   Bases: :py:obj:`quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon`

   Object interface for hyper, dense, 1-norm belief propagation. This is standard belief propagation in tensor network form.

   :param tn: The tensor network to run BP on.
   :type tn: TensorNetwork
   :param messages: Initial messages to use, if not given then uniform messages are used.
   :type messages: dict, optional
   :param damping: The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being ``damping * old + (1 - damping) * new``. This makes convergence more reliable but slower.
   :type damping: float or callable, optional
   :param update: Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence but parallel can possibly converge to different solutions.
   :type update: {'sequential', 'parallel'}, optional
   :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes.
   :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional
   :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
:type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional
   :param smudge_factor: A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.
   :type smudge_factor: float, optional
   :param inplace: Whether to perform any operations inplace on the input tensor network.
   :type inplace: bool, optional

   .. py:attribute:: smudge_factor
      :value: 1e-12

   .. py:attribute:: messages
      :value: None

   .. py:method:: iterate(tol=None)

      Perform a single iteration of belief propagation. Subclasses should implement this method, returning either `max_mdiff` or a dictionary containing `max_mdiff` and any other relevant information:

          {
              "nconv": nconv,
              "ncheck": ncheck,
              "max_mdiff": max_mdiff,
          }

   .. py:method:: get_gauged_tn()

      Assuming the supplied tensor network has no hyper or dangling indices, gauge it by inserting the BP-approximated transfer matrix eigenvectors, which may be complex. The BP-contraction of this gauged network is then simply the product of zeroth entries of each tensor.

   .. py:method:: contract(strip_exponent=False, check_zero=True)

      Estimate the total contraction, i.e. the exponential of the 'Bethe free entropy'.

   .. py:method:: normalize_messages()

      Normalize all messages such that the 'region contraction' of a single hyper index is 1.

   .. py:method:: get_cluster(r, virtual=True, autocomplete=True)

      Get the tensor network of a region ``r``, with all boundary messages attached.

      :param r: The region to get, given as a sequence of indices or tensor ids.
      :type r: sequence of int or str
      :param virtual: Whether to view the original tensors (`virtual=True`, the default) or take copies (`virtual=False`).
      :type virtual: bool, optional
      :param autocomplete: Whether to automatically include all indices attached to the tensors in the region, or just the ones given in ``r``.
      :type autocomplete: bool, optional
      :rtype: TensorNetwork

   .. py:method:: contract_gloop_expand(gloops=None, strip_exponent=False, check_zero=True, optimize='auto-hq', progbar=False, **contract_opts)

.. py:function:: contract_hd1bp(tn, messages=None, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, smudge_factor=1e-12, strip_exponent=False, check_zero=True, info=None, progbar=False)

   Estimate the contraction of ``tn`` with hyper, dense, 1-norm belief propagation, via the exponential of the Bethe free entropy.

   :param tn: The tensor network to run BP on, can have hyper indices.
   :type tn: TensorNetwork
   :param messages: Initial messages to use, if not given then uniform messages are used.
   :type messages: dict, optional
   :param max_iterations: The maximum number of iterations to perform.
   :type max_iterations: int, optional
   :param tol: The convergence tolerance for messages.
   :type tol: float, optional
   :param damping: The damping parameter to use, defaults to no damping.
   :type damping: float, optional
   :param diis: Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {`max_history`, `beta`, `rcond`}.
   :type diis: bool or dict, optional
   :param update: Whether to update messages sequentially or in parallel.
   :type update: {'sequential', 'parallel'}, optional
   :param normalize: How to normalize messages after each update. If None choose automatically.
If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes. :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used. :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional :param tol_abs: The absolute convergence tolerance for maximum message update distance, if not given then taken as ``tol``. :type tol_abs: float, optional :param tol_rolling_diff: The rolling mean convergence tolerance for maximum message update distance, if not given then taken as ``tol``. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking. :type tol_rolling_diff: float, optional :param smudge_factor: A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero. :type smudge_factor: float, optional :param strip_exponent: Whether to strip the exponent from the final result. If ``True`` then the returned result is ``(mantissa, exponent)``. :type strip_exponent: bool, optional :param check_zero: Whether to check for zero values and return zero early. :type check_zero: bool, optional :param info: If specified, update this dictionary with information about the belief propagation run. :type info: dict, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :rtype: scalar or (scalar, float) .. py:function:: sample_hd1bp(tn, messages=None, output_inds=None, max_iterations=1000, tol=0.01, damping=0.0, smudge_factor=1e-12, bias=False, seed=None, progbar=False) Sample all indices of a tensor network using repeated belief propagation runs and decimation. :param tn: The tensor network to sample. :type tn: TensorNetwork :param messages: The current messages. For every index and tensor id pair, there should be a message to and from with keys ``(ix, tid)`` and ``(tid, ix)``. If not given, then messages are initialized as uniform. :type messages: dict, optional :param output_inds: The indices to sample. If not given, then all indices are sampled. :type output_inds: sequence of str, optional :param max_iterations: The maximum number of iterations for each message passing run. :type max_iterations: int, optional :param tol: The convergence tolerance for each message passing run. :type tol: float, optional :param smudge_factor: A small number to add to each message to avoid zeros. Making this large is similar to adding a temperature, which can aid convergence but likely produces less accurate marginals. :type smudge_factor: float, optional :param bias: Whether to bias the sampling towards the largest marginal. If ``False`` (the default), then indices are sampled proportional to their marginals. If ``True``, then each index is 'sampled' to be its largest weight value always. 
If a float, then the local probability distribution is raised to this power before sampling.
   :type bias: bool or float, optional
   :param thread_pool: Whether to use a thread pool for parallelization. If an integer, then this is the number of threads to use. If ``True``, then the number of threads is set to the number of cores. If a ``ThreadPoolExecutor``, then this is used directly.
   :type thread_pool: bool, int or ThreadPoolExecutor, optional
   :param seed: A random seed to use for the sampling.
   :type seed: int, optional
   :param progbar: Whether to show a progress bar.
   :type progbar: bool, optional
   :returns: * **config** (*dict[str, int]*) -- The sample configuration, mapping indices to values.
             * **tn_config** (*TensorNetwork*) -- The tensor network with all index values (or just those in `output_inds` if supplied) selected. Contracting this tensor network (which will just be a sequence of scalars if all index values have been sampled) gives the weight of the sample, e.g. should be 1 for a SAT problem and valid assignment.
             * **omega** (*float*) -- The probability of choosing this sample (i.e. product of marginal values). Possibly useful for importance sampling.

.. py:class:: HV1BP(tn, *, messages=None, damping=0.0, update='parallel', normalize='L2', distance='L2', smudge_factor=1e-12, thread_pool=False, contract_every=None, inplace=False)

   Bases: :py:obj:`quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon`

   Object interface for hyper, vectorized, 1-norm, belief propagation. This is the fast version of belief propagation, possible when there are many small tensors of matching size.

   :param tn: The tensor network to run BP on.
   :type tn: TensorNetwork
   :param messages: Initial messages to use, if not given then uniform messages are used.
   :type messages: dict, optional
   :param damping: The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being ``damping * old + (1 - damping) * new``. This makes convergence more reliable but slower.
   :type damping: float or callable, optional
   :param update: Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence but parallel can possibly converge to different solutions.
   :type update: {'sequential', 'parallel'}, optional
   :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes.
   :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional
   :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
   :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional
   :param smudge_factor: A small number to add to the denominator of messages to avoid division by zero.
Note when this happens the numerator will also be zero.
   :type smudge_factor: float, optional
   :param thread_pool: Whether to use a thread pool for parallelization, if ``True`` use the default number of threads, if an integer use that many threads.
   :type thread_pool: bool or int, optional
   :param contract_every: If not None, 'contract' (via BP) the tensor network every ``contract_every`` iterations. The resulting values are stored in ``zvals`` at corresponding points ``zval_its``.
   :type contract_every: int, optional
   :param inplace: Whether to perform any operations inplace on the input tensor network.
   :type inplace: bool, optional

   .. py:attribute:: smudge_factor
      :value: 1e-12

   .. py:attribute:: pool
      :value: None

   .. py:property:: normalize

   .. py:property:: distance

   .. py:method:: initialize_messages_batched(messages=None)

   .. py:property:: messages

   .. py:method:: _compute_outputs_batched(batched_inputs, batched_tensors=None)

      Given stacked messages and optionally tensors, compute stacked output messages, possibly using a parallel pool.

   .. py:method:: _update_outputs_to_inputs_batched(batched_inputs, batched_outputs, masks)

      Update the stacked input messages from the stacked output messages.

   .. py:method:: iterate(tol=None)

      Perform a single iteration of belief propagation. Subclasses should implement this method, returning either `max_mdiff` or a dictionary containing `max_mdiff` and any other relevant information:

          {
              "nconv": nconv,
              "ncheck": ncheck,
              "max_mdiff": max_mdiff,
          }

   .. py:method:: get_messages_dense()

      Get messages in individual form from the batched stacks.

   .. py:method:: get_messages()

   .. py:method:: contract(strip_exponent=False, check_zero=False)

      Estimate the contraction of the tensor network using the current messages. Uses batched vectorized contractions for speed.

      :param strip_exponent: Whether to strip the exponent from the final result. If ``True`` then the returned result is ``(mantissa, exponent)``.
      :type strip_exponent: bool, optional
      :param check_zero: Whether to check for zero values and return zero early. Currently ``True`` is not implemented for HV1BP.
      :type check_zero: bool, optional
      :rtype: scalar or (scalar, float)

   .. py:method:: contract_dense(strip_exponent=False, check_zero=True)

      Slow contraction via explicitly extracting individual dense messages. This supports check_zero=True and may be useful for debugging.

.. py:function:: contract_hv1bp(tn, messages=None, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='parallel', normalize='L2', distance='L2', tol_abs=None, tol_rolling_diff=None, smudge_factor=1e-12, strip_exponent=False, check_zero=False, info=None, progbar=False)

   Estimate the contraction of ``tn`` with hyper, vectorized, 1-norm belief propagation, via the exponential of the Bethe free entropy.

   :param tn: The tensor network to run BP on, can have hyper indices.
   :type tn: TensorNetwork
   :param messages: Initial messages to use, if not given then uniform messages are used.
   :type messages: dict, optional
   :param max_iterations: The maximum number of iterations to perform.
   :type max_iterations: int, optional
   :param tol: The convergence tolerance for messages.
   :type tol: float, optional
   :param damping: The damping factor to use, 0.0 means no damping.
   :type damping: float, optional
   :param diis: Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {`max_history`, `beta`, `rcond`}.
:type diis: bool or dict, optional :param update: Whether to update messages sequentially or in parallel; only parallel updates are supported by the vectorized implementation. :type update: {'parallel'}, optional :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes. :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used. :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional :param tol_abs: The absolute convergence tolerance for maximum message update distance, if not given then taken as ``tol``. :type tol_abs: float, optional :param tol_rolling_diff: The rolling mean convergence tolerance for maximum message update distance, if not given then taken as ``tol``. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking. :type tol_rolling_diff: float, optional :param smudge_factor: A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero. :type smudge_factor: float, optional :param strip_exponent: Whether to strip the exponent from the final result. If ``True`` then the returned result is ``(mantissa, exponent)``. :type strip_exponent: bool, optional :param check_zero: Whether to check for zero values and return zero early. :type check_zero: bool, optional :param info: If specified, update this dictionary with information about the belief propagation run. :type info: dict, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :rtype: scalar or (scalar, float) .. py:function:: sample_hv1bp(tn, messages=None, output_inds=None, max_iterations=1000, tol=0.01, damping=0.0, diis=False, update='parallel', normalize='L2', distance='L2', tol_abs=None, tol_rolling_diff=None, smudge_factor=1e-12, bias=False, seed=None, progbar=False) Sample all indices of a tensor network using repeated belief propagation runs and decimation. :param tn: The tensor network to sample. :type tn: TensorNetwork :param messages: The current messages. For every index and tensor id pair, there should be a message to and from with keys ``(ix, tid)`` and ``(tid, ix)``. If not given, then messages are initialized as uniform. :type messages: dict, optional :param output_inds: The indices to sample. If not given, then all indices are sampled. :type output_inds: sequence of str, optional :param max_iterations: The maximum number of iterations for each message passing run. :type max_iterations: int, optional :param tol: The convergence tolerance for each message passing run. :type tol: float, optional :param damping: The damping factor to use, 0.0 means no damping.
:type damping: float, optional :param diis: Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {`max_history`, `beta`, `rcond`}. :type diis: bool or dict, optional :param update: Whether to update messages sequentially or in parallel; only parallel updates are supported by the vectorized implementation. :type update: {'parallel'}, optional :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes. :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used. :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional :param tol_abs: The absolute convergence tolerance for maximum message update distance, if not given then taken as ``tol``. :type tol_abs: float, optional :param tol_rolling_diff: The rolling mean convergence tolerance for maximum message update distance, if not given then taken as ``tol``. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking. :type tol_rolling_diff: float, optional :param smudge_factor: A small number to add to each message to avoid zeros. Making this large is similar to adding a temperature, which can aid convergence but likely produces less accurate marginals. :type smudge_factor: float, optional :param bias: Whether to bias the sampling towards the largest marginal. If ``False`` (the default), then indices are sampled in proportion to their marginals. If ``True``, then each index is always 'sampled' as its largest weight value. If a float, then the local probability distribution is raised to this power before sampling. :type bias: bool or float, optional :param thread_pool: Whether to use a thread pool for parallelization. If an integer, then this is the number of threads to use. If ``True``, then the number of threads is set to the number of cores. If a ``ThreadPoolExecutor``, then this is used directly. :type thread_pool: bool, int or ThreadPoolExecutor, optional :param seed: A random seed to use for the sampling. :type seed: int, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :returns: * **config** (*dict[str, int]*) -- The sample configuration, mapping indices to values. * **tn_config** (*TensorNetwork*) -- The tensor network with all index values (or just those in `output_inds` if supplied) selected. Contracting this tensor network (which will just be a sequence of scalars if all index values have been sampled) gives the weight of the sample, e.g. it should be 1 for a SAT problem and a valid assignment. * **omega** (*float*) -- The probability of choosing this sample (i.e. the product of marginal values). Possibly useful for importance sampling.
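For orientation, here is a minimal usage sketch of the vectorized interface. The random regular graph builder ``qtn.TN_rand_reg`` is only an assumed stand-in: any tensor network with many small tensors of matching bond dimension is suitable, and for the sampled marginals to be genuine probabilities the tensor entries should be non-negative, unlike in this random example.

.. code-block:: python

    import quimb.tensor as qtn
    from quimb.tensor.belief_propagation import contract_hv1bp, sample_hv1bp

    # many small, identically sized tensors: the ideal case for vectorized BP
    tn = qtn.TN_rand_reg(n=100, reg=3, D=2, seed=42)

    # estimate the contracted value, keeping mantissa and exponent separate
    # to avoid overflow/underflow for large networks
    mantissa, exponent = contract_hv1bp(tn, strip_exponent=True)

    # sample a configuration of every index via repeated BP runs + decimation
    # (here purely illustrative, since this random network is not a proper
    # probability model)
    config, tn_config, omega = sample_hv1bp(tn, seed=42)

..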
py:class:: L1BP(tn, site_tags=None, *, damping=0.0, update='sequential', normalize=None, distance=None, local_convergence=True, optimize='auto-hq', message_init_function=None, contract_every=None, inplace=False, **contract_opts) Bases: :py:obj:`quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon` Lazy 1-norm belief propagation. BP is run between groups of tensors defined by ``site_tags``. The message updates are lazy contractions. :param tn: The tensor network to run BP on. :type tn: TensorNetwork :param site_tags: The tags identifying the sites in ``tn``, each tag forms a region; the regions should not overlap. If the tensor network is structured, then these are inferred automatically. :type site_tags: sequence of str, optional :param damping: The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being ``damping * old + (1 - damping) * new``. This makes convergence more reliable but slower. :type damping: float or callable, optional :param update: Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence but parallel can possibly converge to different solutions. :type update: {'sequential', 'parallel'}, optional :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes. :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used. :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them. :type local_convergence: bool, optional :param optimize: The path optimizer to use when contracting the messages. :type optimize: str or PathOptimizer, optional :param contract_every: If not None, 'contract' (via BP) the tensor network every ``contract_every`` iterations. The resulting values are stored in ``zvals`` at corresponding points ``zval_its``. :type contract_every: int, optional :param inplace: Whether to perform any operations inplace on the input tensor network. :type inplace: bool, optional :param contract_opts: Other options supplied to ``cotengra.array_contract``. .. py:attribute:: local_convergence :value: True .. py:attribute:: optimize :value: 'auto-hq' .. py:attribute:: contract_opts .. py:attribute:: touched .. py:attribute:: messages .. py:attribute:: contraction_tns .. py:method:: iterate(tol=5e-06) Perform a single iteration of belief propagation.
Subclasses should implement this method, returning either `max_mdiff` or a dictionary containing `max_mdiff` and any other relevant information: { "nconv": nconv, "ncheck": ncheck, "max_mdiff": max_mdiff, } .. py:method:: contract(strip_exponent=False, check_zero=True) Contract the tensor network and return the resulting value. .. py:method:: normalize_message_pairs() Normalize all messages such that for each bond ``<mi|mj> = 1`` and ``<mi|mi> = <mj|mj>`` (but in general != 1). .. py:function:: contract_l1bp(tn, max_iterations=1000, tol=5e-06, site_tags=None, damping=0.0, update='sequential', diis=False, local_convergence=True, optimize='auto-hq', strip_exponent=False, info=None, progbar=False, **contract_opts) Estimate the contraction of ``tn`` using lazy 1-norm belief propagation. :param tn: The tensor network to contract. :type tn: TensorNetwork :param max_iterations: The maximum number of iterations to perform. :type max_iterations: int, optional :param tol: The convergence tolerance for messages. :type tol: float, optional :param site_tags: The tags identifying the sites in ``tn``, each tag forms a region. If the tensor network is structured, then these are inferred automatically. :type site_tags: sequence of str, optional :param damping: The damping parameter to use, defaults to no damping. :type damping: float, optional :param update: Whether to update all messages in parallel or sequentially. :type update: {'parallel', 'sequential'}, optional :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them. :type local_convergence: bool, optional :param optimize: The path optimizer to use when contracting the messages. :type optimize: str or PathOptimizer, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :param strip_exponent: Whether to strip the exponent from the final result. If ``True`` then the returned result is ``(mantissa, exponent)``. :type strip_exponent: bool, optional :param info: If specified, update this dictionary with information about the belief propagation run. :type info: dict, optional :param contract_opts: Other options supplied to ``cotengra.array_contract``. .. py:class:: L2BP(tn, site_tags=None, *, damping=0.0, update='sequential', normalize=None, distance=None, symmetrize=True, local_convergence=True, optimize='auto-hq', contract_every=None, inplace=False, **contract_opts) Bases: :py:obj:`quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon` Lazy (as in multiple uncontracted tensors per site) 2-norm (as in for wavefunctions and operators) belief propagation. :param tn: The tensor network to form the 2-norm of and run BP on. :type tn: TensorNetwork :param site_tags: The tags identifying the sites in ``tn``, each tag forms a region; the regions should not overlap. If the tensor network is structured, then these are inferred automatically. :type site_tags: sequence of str, optional :param damping: The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being ``damping * old + (1 - damping) * new``. This makes convergence more reliable but slower. :type damping: float or callable, optional :param update: Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only).
Sequential generally helps convergence but parallel can possibly converge to different solutions. :type update: {'sequential', 'parallel'}, optional :param normalize: How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phase of the message, by default used for complex dtypes. :type normalize: {'L1', 'L2', 'L2phased', 'Linf', callable}, optional :param distance: How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of 'L1', 'L2', 'L2phased', 'Linf', or 'cosine' for the corresponding norms. 'L2phased' is like 'L2' but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used. :type distance: {'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional :param symmetrize: Whether to symmetrize the messages, i.e. for each message ensure that it is Hermitian with respect to its bra and ket indices. If a callable it should take a message and return the symmetrized message. :type symmetrize: bool or callable, optional :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them. :type local_convergence: bool, optional :param optimize: The path optimizer to use when contracting the messages. :type optimize: str or PathOptimizer, optional :param contract_every: If not None, 'contract' (via BP) the tensor network every ``contract_every`` iterations. The resulting values are stored in ``zvals`` at corresponding points ``zval_its``. :type contract_every: int, optional :param inplace: Whether to perform any operations inplace on the input tensor network. :type inplace: bool, optional :param contract_opts: Other options supplied to ``cotengra.array_contract``. .. py:attribute:: local_convergence :value: True .. py:attribute:: optimize :value: 'auto-hq' .. py:attribute:: contract_opts .. py:attribute:: touched .. py:property:: symmetrize .. py:attribute:: messages .. py:attribute:: contraction_tns .. py:method:: iterate(tol=5e-06) Perform a single iteration of belief propagation. Subclasses should implement this method, returning either `max_mdiff` or a dictionary containing `max_mdiff` and any other relevant information: { "nconv": nconv, "ncheck": ncheck, "max_mdiff": max_mdiff, } .. py:method:: normalize_message_pairs() Normalize all messages such that for each bond ``<mi|mj> = 1`` and ``<mi|mi> = <mj|mj>`` (but in general != 1). This is different from normalizing each message individually. .. py:method:: contract(strip_exponent=False, check_zero=True) Estimate the contraction of the norm squared using the current messages. .. py:method:: partial_trace(site, normalized=True, optimize='auto-hq') .. py:method:: compress(tn, max_bond=None, cutoff=5e-06, cutoff_mode='rsum2', renorm=0, lazy=False) Compress the state ``tn``, assumed to match this L2BP instance, using the messages stored.
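Before the functional wrappers below, a minimal sketch of how the lazy interfaces fit together, assuming a random PEPS as the input state (``contract_l1bp`` is run here on the explicitly formed norm network, while ``L2BP`` forms the squared network internally):

.. code-block:: python

    import quimb.tensor as qtn
    from quimb.tensor.belief_propagation import L2BP, contract_l1bp

    peps = qtn.PEPS.rand(Lx=4, Ly=4, bond_dim=3, seed=42)

    # lazy 1-norm BP on the norm network, tensors grouped by site tag
    norm1 = contract_l1bp(peps.H & peps, site_tags=peps.site_tags)

    # lazy 2-norm BP works with the state directly
    bp = L2BP(peps)
    bp.run(max_iterations=100, tol=1e-6)
    norm2 = bp.contract()

    # reuse the converged messages to compress to a smaller bond dimension
    peps_small = bp.compress(peps.copy(), max_bond=2)

..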
py:function:: compress_l2bp(tn, max_bond, cutoff=0.0, cutoff_mode='rsum2', max_iterations=1000, tol=5e-06, site_tags=None, damping=0.0, update='sequential', local_convergence=True, optimize='auto-hq', lazy=False, inplace=False, info=None, progbar=False, **contract_opts) Compress ``tn`` using lazy belief propagation, producing a tensor network with a single tensor per site. :param tn: The tensor network to form the 2-norm of, run BP on and then compress. :type tn: TensorNetwork :param max_bond: The maximum bond dimension to compress to. :type max_bond: int :param cutoff: The cutoff to use when compressing. :type cutoff: float, optional :param cutoff_mode: The cutoff mode to use when compressing. :type cutoff_mode: str or int, optional :param max_iterations: The maximum number of iterations to perform. :type max_iterations: int, optional :param tol: The convergence tolerance for messages. :type tol: float, optional :param site_tags: The tags identifying the sites in ``tn``, each tag forms a region. If the tensor network is structured, then these are inferred automatically. :type site_tags: sequence of str, optional :param damping: The damping parameter to use, defaults to no damping. :type damping: float, optional :param update: Whether to update all messages in parallel or sequentially. :type update: {'parallel', 'sequential'}, optional :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them. :type local_convergence: bool, optional :param optimize: The path optimizer to use when contracting the messages. :type optimize: str or PathOptimizer, optional :param lazy: Whether to perform the compression lazily, i.e. to leave the computed compression projectors uncontracted. :type lazy: bool, optional :param inplace: Whether to perform the compression inplace. :type inplace: bool, optional :param info: If specified, update this dictionary with information about the belief propagation run. :type info: dict, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :param contract_opts: Other options supplied to ``cotengra.array_contract``. :rtype: TensorNetwork .. py:function:: contract_l2bp(tn, site_tags=None, damping=0.0, update='sequential', local_convergence=True, optimize='auto-hq', max_iterations=1000, tol=5e-06, strip_exponent=False, info=None, progbar=False, **contract_opts) Estimate the norm squared of ``tn`` using lazy belief propagation. :param tn: The tensor network to estimate the norm squared of. :type tn: TensorNetwork :param site_tags: The tags identifying the sites in ``tn``, each tag forms a region. :type site_tags: sequence of str, optional :param damping: The damping parameter to use, defaults to no damping. :type damping: float, optional :param update: Whether to update all messages in parallel or sequentially. :type update: {'parallel', 'sequential'}, optional :param local_convergence: Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them. :type local_convergence: bool, optional :param optimize: The contraction strategy to use. :type optimize: str or PathOptimizer, optional :param max_iterations: The maximum number of iterations to perform. :type max_iterations: int, optional :param tol: The convergence tolerance for messages. :type tol: float, optional :param strip_exponent: Whether to strip the exponent from the final result. If ``True`` then the returned result is ``(mantissa, exponent)``.
:type strip_exponent: bool, optional :param info: If specified, update this dictionary with information about the belief propagation run. :type info: dict, optional :param progbar: Whether to show a progress bar. :type progbar: bool, optional :param contract_opts: Other options supplied to ``cotengra.array_contract``. .. py:class:: RegionGraph(regions=(), autocomplete=True, autoprune=True) A graph of regions, where each region is a set of nodes, for use in generalized belief propagation or cluster expansion methods. :param regions: Generating regions. :type regions: Iterable[Sequence[Hashable]] :param autocomplete: Whether to automatically add all intersecting sub-regions, to guarantee a complete region graph. :type autocomplete: bool, optional :param autoprune: Whether to automatically remove all regions with a count of zero. :type autoprune: bool, optional .. py:attribute:: lookup .. py:attribute:: parents .. py:attribute:: children .. py:attribute:: info .. py:method:: reset_info() Remove all cached region properties. .. py:property:: regions .. py:method:: get_overlapping(region) Get all regions that intersect with the given region. .. py:method:: add_region(region) Add a new region and update parent-child relationships. :param region: The new region to add. :type region: Sequence[Hashable] .. py:method:: remove_region(region) Remove a region and update parent-child relationships. .. py:method:: autocomplete() Add all missing intersecting sub-regions. .. py:method:: autoprune() Remove all regions with a count of zero. .. py:method:: autoextend(regions=None) Extend this region graph upwards by adding in all pairwise unions of regions. If regions is specified, take this as one set of pairs. .. py:method:: get_parents(region) Get all ancestors that contain the given region, but do not contain any other regions that themselves contain the given region. .. py:method:: get_children(region) Get all regions that are contained by the given region, but are not contained by any other descendants of the given region. .. py:method:: get_ancestors(region) Get all regions that contain the given region, not just direct parents. .. py:method:: get_descendents(region) Get all regions that are contained by the given region, not just direct children. .. py:method:: get_coparent_pairs(region) Get all regions which are direct parents of any descendant of the given region, but not themselves descendants of the given region. .. py:method:: get_count(region) Get the count of the given region, i.e. the correct weighting to apply when summing over all regions to avoid overcounting. .. py:method:: get_total_count() Get the total count of all regions. .. py:method:: get_level(region) Get the level of the given region, i.e. the distance to an ancestor with no parents. .. py:method:: get_message_parts(pair) Get the three contribution groups for a GBP message from region `source` to region `target`. 1. The part of region `source` that is not part of `target`, i.e. the factors to include. 2. The messages that appear in the numerator of the update equation. 3. The messages that appear in the denominator of the update equation. :param source: The source region, should be a parent of `target`. :type source: Region :param target: The target region, should be a child of `source`. :type target: Region :returns: * **factors** (*Region*) -- The difference of `source` and `target`, which will include the factors to appear in the numerator of the update equation.
* **pairs_mul** (*set[(Region, Region)]*) -- The messages that appear in the numerator of the update equation, after cancelling out those that appear in the denominator. * **pairs_div** (*set[(Region, Region)]*) -- The messages that appear in the denominator of the update equation, after cancelling out those that appear in the numerator. .. py:method:: check() Run some basic consistency checks on the region graph. .. py:method:: draw(pos=None, a=20, scale=1.0, radius=0.1, **drawing_opts) .. py:method:: __repr__()
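As a brief illustration of the region graph machinery, here is a sketch with two hypothetical overlapping plaquette regions:

.. code-block:: python

    from quimb.tensor.belief_propagation import RegionGraph

    # two overlapping generating regions
    rg = RegionGraph([(0, 1, 2, 3), (2, 3, 4, 5)])

    # autocomplete has automatically added the intersection region {2, 3}
    print(rg.regions)

    # the counts weight each region such that every node is counted exactly
    # once overall: both large regions count +1, the overlap counts -1
    for r in rg.regions:
        print(r, rg.get_count(r))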