quimb.experimental.belief_propagation¶
Classes¶
- D1BP – Dense (as in one tensor per site) 1-norm (as in for 'classical' systems) belief propagation.
- D2BP – Dense (as in one tensor per site) 2-norm (as in for wavefunctions and operators) belief propagation.
- HD1BP – Object interface for hyper, dense, 1-norm belief propagation. This is standard belief propagation in tensor network form.
- HV1BP – Object interface for hyper, vectorized, 1-norm, belief propagation.
- L1BP – Lazy 1-norm belief propagation. BP is run between groups of tensors.
- L2BP – Lazy (as in multiple uncontracted tensors per site) 2-norm (as in for wavefunctions and operators) belief propagation.
- RegionGraph – A graph of regions, where each region is a set of nodes. For generalized belief propagation.
Functions¶
- combine_local_contractions – Combine a product of local contractions into a single value, avoiding overflow/underflow.
- compress_d2bp – Compress the tensor network using dense 2-norm belief propagation.
- compress_l2bp – Compress using lazy 2-norm belief propagation.
- contract_d1bp – Estimate the contraction of a standard tensor network using dense 1-norm belief propagation.
- contract_d2bp – Estimate the norm squared of a tensor network using dense 2-norm belief propagation.
- contract_hd1bp – Estimate the contraction of a tensor network with hyper, dense, 1-norm belief propagation.
- contract_hv1bp – Estimate the contraction of a tensor network with hyper, vectorized, 1-norm belief propagation.
- contract_l1bp – Estimate the contraction of a tensor network using lazy 1-norm belief propagation.
- contract_l2bp – Estimate the norm squared of a tensor network using lazy 2-norm belief propagation.
- initialize_hyper_messages – Initialize messages for belief propagation.
- sample_d2bp – Sample a configuration from a tensor network using dense 2-norm belief propagation.
- sample_hd1bp – Sample all indices of a tensor network using repeated belief propagation runs.
- sample_hv1bp – Sample all indices of a tensor network using repeated belief propagation runs.
Package Contents¶
- quimb.experimental.belief_propagation.combine_local_contractions(values, backend=None, strip_exponent=False, check_zero=True, mantissa=None, exponent=None)¶
Combine a product of local contractions into a single value, avoiding overflow/underflow by accumulating the mantissa and exponent separately.
- Parameters:
values (sequence of (scalar, int)) – The values to combine, each with a power to be raised to.
backend (str, optional) – The backend to use. Inferred from the first value if not given.
strip_exponent (bool, optional) – Whether to return the mantissa and exponent separately.
check_zero (bool, optional) – Whether to check for zero values and return zero early.
mantissa (float, optional) – The initial mantissa to accumulate into.
exponent (float, optional) – The initial exponent to accumulate into.
- Returns:
result – The combined value, or the mantissa and exponent separately.
- Return type:
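The mantissa/exponent accumulation above can be sketched in plain Python. This is a hypothetical re-implementation for illustration only (it omits the `backend`, `mantissa` and `exponent` options and is not the actual quimb source):

```python
import math

def combine_local_contractions(values, strip_exponent=False):
    # accumulate a running base-10 mantissa and exponent so that the
    # product of many large/small local contractions never over/underflows
    mantissa, exponent = 1.0, 0.0
    for value, power in values:
        if value == 0.0:
            return (0.0, 0.0) if strip_exponent else 0.0
        # split each value into its own mantissa and exponent
        e = math.floor(math.log10(abs(value)))
        m = value / 10**e
        mantissa *= m**power
        exponent += power * e
        # re-normalize so the running mantissa stays O(1)
        shift = math.floor(math.log10(abs(mantissa)))
        mantissa /= 10**shift
        exponent += shift
    if strip_exponent:
        return mantissa, exponent
    return mantissa * 10**exponent
```

For example, combining three local values each equal to `1e-200` raised to the power 2 would underflow to zero as a naive product, but here simply yields mantissa `1.0` and exponent `-1200`.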
- quimb.experimental.belief_propagation.compress_d2bp(tn, max_bond, cutoff=0.0, cutoff_mode='rsum2', renorm=0, messages=None, output_inds=None, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, local_convergence=True, optimize='auto-hq', inplace=False, info=None, progbar=False, **contract_opts)¶
Compress the tensor network tn using dense 2-norm belief propagation.
- Parameters:
tn (TensorNetwork) – The tensor network to form the 2-norm of, run BP on and then compress.
max_bond (int) – The maximum bond dimension to compress to.
cutoff (float, optional) – The cutoff to use when compressing.
cutoff_mode (int, optional) – The cutoff mode to use when compressing.
renorm (float, optional) – Whether to renormalize the singular values when compressing.
messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.
output_inds (set[str], optional) – The indices to consider as output (dangling) indices of the tn. Computed automatically if not specified.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
damping (float, optional) – The damping parameter to use, defaults to no damping.
diis (bool or dict, optional) – Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {max_history, beta, rcond}.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially or in parallel.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
tol_abs (float, optional) – The absolute convergence tolerance for maximum message update distance, if not given then taken as tol.
tol_rolling_diff (float, optional) – The rolling mean convergence tolerance for maximum message update distance, if not given then taken as tol. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.
inplace (bool, optional) – Whether to perform the compression inplace.
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
progbar (bool, optional) – Whether to show a progress bar.
contract_opts – Other options supplied to cotengra.array_contract.
- Return type:
- quimb.experimental.belief_propagation.compress_l2bp(tn, max_bond, cutoff=0.0, cutoff_mode='rsum2', max_iterations=1000, tol=5e-06, site_tags=None, damping=0.0, update='sequential', local_convergence=True, optimize='auto-hq', lazy=False, inplace=False, info=None, progbar=False, **contract_opts)¶
Compress tn using lazy belief propagation, producing a tensor network with a single tensor per site.
- Parameters:
tn (TensorNetwork) – The tensor network to form the 2-norm of, run BP on and then compress.
max_bond (int) – The maximum bond dimension to compress to.
cutoff (float, optional) – The cutoff to use when compressing.
cutoff_mode (int, optional) – The cutoff mode to use when compressing.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
site_tags (sequence of str, optional) – The tags identifying the sites in tn, each tag forms a region. If the tensor network is structured, then these are inferred automatically.
damping (float, optional) – The damping parameter to use, defaults to no damping.
update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.
lazy (bool, optional) – Whether to perform the compression lazily, i.e. to leave the computed compression projectors uncontracted.
inplace (bool, optional) – Whether to perform the compression inplace.
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
progbar (bool, optional) – Whether to show a progress bar.
contract_opts – Other options supplied to cotengra.array_contract.
- Return type:
- quimb.experimental.belief_propagation.contract_d1bp(tn, *, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, local_convergence=True, strip_exponent=False, check_zero=True, info=None, progbar=False, **contract_opts)¶
Estimate the contraction of a standard tensor network tn using dense 1-norm belief propagation.
- Parameters:
tn (TensorNetwork) – The tensor network to contract, it should have no dangling or hyper indices.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
damping (float, optional) – The damping parameter to use, defaults to no damping.
diis (bool or dict, optional) – Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {max_history, beta, rcond}.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially or in parallel.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
tol_abs (float, optional) – The absolute convergence tolerance for maximum message update distance, if not given then taken as tol.
tol_rolling_diff (float, optional) – The rolling mean convergence tolerance for maximum message update distance, if not given then taken as tol. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
strip_exponent (bool, optional) – Whether to return the mantissa and exponent separately.
check_zero (bool, optional) – Whether to check for zero values and return zero early.
info (dict, optional) – If supplied, the following information will be added to it: converged (bool), iterations (int), max_mdiff (float), rolling_abs_mean_diff (float).
progbar (bool, optional) – Whether to show a progress bar.
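On a tree-shaped network dense 1-norm BP is exact, and the estimate is the product of local tensor contractions divided by the product of bond (message inner product) contractions. A minimal numpy sketch for a three-tensor chain v1 – M – v2 (an illustration of the scheme, not the quimb implementation) shows how the pieces combine:

```python
import numpy as np

rng = np.random.default_rng(42)
v1, v2 = rng.random(4), rng.random(4)
M = rng.random((4, 4))

# converged BP messages on the two bonds of the chain
m_v1_to_M = v1          # leaf tensors just send themselves
m_v2_to_M = v2
m_M_to_v1 = M @ v2      # M absorbs the message arriving from v2
m_M_to_v2 = v1 @ M      # M absorbs the message arriving from v1

# local contractions: each tensor contracted with all incoming messages
z_v1 = v1 @ m_M_to_v1
z_M = v1 @ M @ v2
z_v2 = m_M_to_v2 @ v2

# bond contractions: inner product of the two messages on each bond
z_bond1 = m_v1_to_M @ m_M_to_v1
z_bond2 = m_M_to_v2 @ m_v2_to_M

# BP estimate: product of local terms over product of bond terms
z_bp = (z_v1 * z_M * z_v2) / (z_bond1 * z_bond2)
z_exact = v1 @ M @ v2
```

Here `z_bp` matches `z_exact`; on loopy networks the same combination is only an approximation.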
- quimb.experimental.belief_propagation.contract_d2bp(tn, *, messages=None, output_inds=None, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, local_convergence=True, optimize='auto-hq', strip_exponent=False, check_zero=True, info=None, progbar=False, **contract_opts)¶
Estimate the norm squared of tn using dense 2-norm belief propagation (no hyper indices).
- Parameters:
tn (TensorNetwork) – The tensor network to form the 2-norm of and run BP on.
messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.
output_inds (set[str], optional) – The indices to consider as output (dangling) indices of the tn. Computed automatically if not specified.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
damping (float, optional) – The damping parameter to use, defaults to no damping.
diis (bool or dict, optional) – Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {max_history, beta, rcond}.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially or in parallel.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
tol_abs (float, optional) – The absolute convergence tolerance for maximum message update distance, if not given then taken as tol.
tol_rolling_diff (float, optional) – The rolling mean convergence tolerance for maximum message update distance, if not given then taken as tol. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.
strip_exponent (bool, optional) – Whether to return the mantissa and exponent separately.
check_zero (bool, optional) – Whether to check for zero values and return zero early.
info (dict, optional) – If supplied, the following information will be added to it: converged (bool), iterations (int), max_mdiff (float), rolling_abs_mean_diff (float).
progbar (bool, optional) – Whether to show a progress bar.
contract_opts – Other options supplied to cotengra.array_contract.
- Return type:
scalar or (scalar, float)
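The diis option refers to Pulay-style direct inversion in the iterative subspace. A generic sketch of the extrapolation step (illustrative only; quimb's version also handles the max_history and beta options and complex messages) solves a small constrained least-squares problem over the recent message history:

```python
import numpy as np

def diis_extrapolate(messages, errors, rcond=1e-8):
    # Pulay DIIS: find coefficients c_i (summing to 1) minimizing
    # |sum_i c_i * error_i|^2, then mix the message history with them
    n = len(errors)
    B = np.zeros((n + 1, n + 1))
    for i, ei in enumerate(errors):
        for j, ej in enumerate(errors):
            B[i, j] = ei @ ej          # error overlap matrix
    B[:n, n] = B[n, :n] = -1.0         # Lagrange multiplier rows/cols
    rhs = np.zeros(n + 1)
    rhs[n] = -1.0                      # enforces sum(c) == 1
    coeffs = np.linalg.lstsq(B, rhs, rcond=rcond)[0][:n]
    return sum(c * m for c, m in zip(coeffs, messages))
```

With two history entries whose residuals are orthogonal and equal in norm, the extrapolation simply averages the two messages.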
- quimb.experimental.belief_propagation.contract_hd1bp(tn, messages=None, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, smudge_factor=1e-12, strip_exponent=False, check_zero=True, info=None, progbar=False)¶
Estimate the contraction of tn with hyper, dense, 1-norm belief propagation, via the exponential of the Bethe free entropy.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on, can have hyper indices.
messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
damping (float, optional) – The damping parameter to use, defaults to no damping.
diis (bool or dict, optional) – Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {max_history, beta, rcond}.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially or in parallel.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
tol_abs (float, optional) – The absolute convergence tolerance for maximum message update distance, if not given then taken as tol.
tol_rolling_diff (float, optional) – The rolling mean convergence tolerance for maximum message update distance, if not given then taken as tol. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.
strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).
check_zero (bool, optional) – Whether to check for zero values and return zero early.
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
progbar (bool, optional) – Whether to show a progress bar.
- Return type:
scalar or (scalar, float)
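The smudge_factor trick described above can be sketched as a one-line helper (an illustration of the idea, not code from quimb):

```python
import numpy as np

def safe_divide(numerator, denominator, smudge_factor=1e-12):
    # wherever the denominator is zero the numerator is zero too,
    # so the smudge simply maps 0/0 -> 0 instead of producing nan
    return numerator / (denominator + smudge_factor)
```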
- quimb.experimental.belief_propagation.contract_hv1bp(tn, messages=None, max_iterations=1000, tol=5e-06, damping=0.0, diis=False, update='parallel', normalize='L2', distance='L2', tol_abs=None, tol_rolling_diff=None, smudge_factor=1e-12, strip_exponent=False, check_zero=False, info=None, progbar=False)¶
Estimate the contraction of tn with hyper, vectorized, 1-norm belief propagation, via the exponential of the Bethe free entropy.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on, can have hyper indices.
messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
damping (float, optional) – The damping factor to use, 0.0 means no damping.
diis (bool or dict, optional) – Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {max_history, beta, rcond}.
update ({'parallel'}, optional) – The message update scheme; only parallel updates are supported for this vectorized version.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
tol_abs (float, optional) – The absolute convergence tolerance for maximum message update distance, if not given then taken as tol.
tol_rolling_diff (float, optional) – The rolling mean convergence tolerance for maximum message update distance, if not given then taken as tol. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trends, roughly speaking.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.
strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).
check_zero (bool, optional) – Whether to check for zero values and return zero early.
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
progbar (bool, optional) – Whether to show a progress bar.
- Return type:
scalar or (scalar, float)
- quimb.experimental.belief_propagation.contract_l1bp(tn, max_iterations=1000, tol=5e-06, site_tags=None, damping=0.0, update='sequential', diis=False, local_convergence=True, optimize='auto-hq', strip_exponent=False, info=None, progbar=False, **contract_opts)¶
Estimate the contraction of tn using lazy 1-norm belief propagation.
- Parameters:
tn (TensorNetwork) – The tensor network to contract.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
site_tags (sequence of str, optional) – The tags identifying the sites in tn, each tag forms a region. If the tensor network is structured, then these are inferred automatically.
damping (float, optional) – The damping parameter to use, defaults to no damping.
update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.
progbar (bool, optional) – Whether to show a progress bar.
strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
contract_opts – Other options supplied to cotengra.array_contract.
- quimb.experimental.belief_propagation.contract_l2bp(tn, site_tags=None, damping=0.0, update='sequential', local_convergence=True, optimize='auto-hq', max_iterations=1000, tol=5e-06, strip_exponent=False, info=None, progbar=False, **contract_opts)¶
Estimate the norm squared of tn using lazy belief propagation.
- Parameters:
tn (TensorNetwork) – The tensor network to estimate the norm squared of.
site_tags (sequence of str, optional) – The tags identifying the sites in tn, each tag forms a region.
damping (float, optional) – The damping parameter to use, defaults to no damping.
update ({'parallel', 'sequential'}, optional) – Whether to update all messages in parallel or sequentially.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
optimize (str or PathOptimizer, optional) – The contraction strategy to use.
max_iterations (int, optional) – The maximum number of iterations to perform.
tol (float, optional) – The convergence tolerance for messages.
strip_exponent (bool, optional) – Whether to strip the exponent from the final result. If True then the returned result is (mantissa, exponent).
info (dict, optional) – If specified, update this dictionary with information about the belief propagation run.
progbar (bool, optional) – Whether to show a progress bar.
contract_opts – Other options supplied to cotengra.array_contract.
- class quimb.experimental.belief_propagation.D1BP(tn, *, messages=None, damping=0.0, update='sequential', normalize=None, distance=None, local_convergence=True, message_init_function=None, contract_every=None, inplace=False)¶
Bases:
quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon
Dense (as in one tensor per site) 1-norm (as in for ‘classical’ systems) belief propagation algorithm. Allows message reuse. This version assumes no hyper indices (i.e. a standard tensor network). This is the simplest version of belief propagation.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.
damping (float or callable, optional) – The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being damping * old + (1 - damping) * new. This makes convergence more reliable but slower.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence but parallel can possibly converge to different solutions.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
contract_every (int, optional) – If not None, 'contract' (via BP) the tensor network every contract_every iterations. The resulting values are stored in zvals at corresponding points zval_its.
inplace (bool, optional) – Whether to perform any operations inplace on the input tensor network.
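The damping, normalize and distance options above can be combined into a single message update step. The following is an illustrative numpy sketch of one such step using the 'L2' choices, not quimb's implementation:

```python
import numpy as np

def update_message(old, new, damping=0.0):
    # damping: mix part of the old message into the freshly computed one
    mixed = damping * old + (1 - damping) * new
    # 'L2' normalization of the updated message
    mixed = mixed / np.linalg.norm(mixed)
    # 'L2' distance, used for the convergence check
    distance = np.linalg.norm(mixed - old)
    return mixed, distance
```

When the new message equals the old one the distance is zero, signalling local convergence of that message.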
- tn¶
The target tensor network.
- Type:
- messages¶
The current messages. The key is a tuple of the index and tensor id that the message is being sent to.
- key_pairs¶
A dictionary mapping the key of a message to the key of the message propagating in the opposite direction.
- local_convergence = True¶
- touched¶
- key_pairs¶
- iterate(tol=5e-06)¶
- normalize_message_pairs()¶
Normalize all messages such that for each bond <m_i|m_j> = 1 and <m_i|m_i> = <m_j|m_j> (but in general != 1).
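For two real messages with positive overlap, this pairwise normalization amounts to solving for two scale factors. A hypothetical helper illustrating the arithmetic (not the actual method):

```python
import numpy as np

def normalize_message_pair(mi, mj):
    # rescale so that <mi|mj> = 1 while making the two norms
    # equal (though in general != 1); real positive-overlap case
    t = mi @ mj
    ni, nj = np.linalg.norm(mi), np.linalg.norm(mj)
    a = (nj / (ni * t)) ** 0.5   # scale for mi
    b = 1.0 / (a * t)            # scale for mj, so a*b*t == 1
    return a * mi, b * mj
```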
- normalize_tensors(strip_exponent=True)¶
Normalize every local tensor contraction so that it equals 1. Gather the overall normalization factor into self.exponent and the sign into self.sign by default.
- Parameters:
strip_exponent (bool, optional) – Whether to collect the sign and exponent. If False then the value of the BP contraction is set to 1.
- get_gauged_tn()¶
Gauge the original TN by inserting the BP-approximated transfer matrix eigenvectors, which may be complex. The BP-contraction of this gauged network is then simply the product of zeroth entries of each tensor.
- get_cluster(tids)¶
Get the region of tensors given by tids, with the messages on the border contracted in, removing those dangling indices.
- Parameters:
tids (sequence of int) – The tensor ids forming a region.
- Return type:
- local_tensor_contract(tid)¶
Contract the messages around tensor tid.
- local_message_contract(ix)¶
Contract the messages at index ix.
- contract(strip_exponent=False, check_zero=True, **kwargs)¶
Estimate the contraction of the tensor network.
- contract_with_loops(max_loop_length=None, min_loop_length=1, optimize='auto-hq', strip_exponent=False, check_zero=True, **contract_opts)¶
Estimate the contraction of the tensor network, including loop corrections.
- contract_cluster_expansion(clusters=None, autocomplete=True, strip_exponent=False, check_zero=True, optimize='auto-hq', **contract_opts)¶
- class quimb.experimental.belief_propagation.D2BP(tn, *, messages=None, output_inds=None, optimize='auto-hq', damping=0.0, update='sequential', normalize=None, distance=None, local_convergence=True, contract_every=None, inplace=False, **contract_opts)¶
Bases:
quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon
Dense (as in one tensor per site) 2-norm (as in for wavefunctions and operators) belief propagation. Allows message reuse. This version assumes no hyper indices (i.e. a standard PEPS like tensor network).
Potential use cases for D2BP and a PEPS like tensor network are:
- globally compressing it from bond dimension D to D'
- eagerly applying gates and locally compressing back to D
- sampling configurations
- estimating the norm of the tensor network
- Parameters:
tn (TensorNetwork) – The tensor network to form the 2-norm of and run BP on.
messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.
output_inds (set[str], optional) – The indices to consider as output (dangling) indices of the tn. Computed automatically if not specified.
optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.
damping (float or callable, optional) – The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being damping * old + (1 - damping) * new. This makes convergence more reliable but slower.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence but parallel can possibly converge to different solutions.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
contract_every (int, optional) – If not None, 'contract' (via BP) the tensor network every contract_every iterations. The resulting values are stored in zvals at corresponding points zval_its.
inplace (bool, optional) – Whether to perform any operations inplace on the input tensor network.
contract_opts – Other options supplied to cotengra.array_contract.
- contract_opts¶
- local_convergence = True¶
- touch_map¶
- touched¶
- exprs¶
- build_expr(ix)¶
- update_touched_from_tids(*tids)¶
Specify that the messages for the given tids have changed.
- update_touched_from_tags(tags, which='any')¶
Specify that the messages touching the given tags have changed.
- update_touched_from_inds(inds, which='any')¶
Specify that the messages touching the given inds have changed.
- iterate(tol=5e-06)¶
Perform a single iteration of dense 2-norm belief propagation.
- compute_marginal(ind)¶
Compute the marginal for the index
ind
.
- normalize_message_pairs()¶
Normalize a pair of messages such that <mi|mj> = 1 and <mi|mi> = <mj|mj> (but in general != 1).
- contract(strip_exponent=False, check_zero=True)¶
Estimate the total contraction, i.e. the 2-norm.
- contract_cluster_expansion(clusters=None, autocomplete=True, optimize='auto-hq', strip_exponent=False, check_zero=True, info=None, progbar=False, **contract_opts)¶
- compress(max_bond, cutoff=0.0, cutoff_mode=4, renorm=0, inplace=False)¶
Compress the initial tensor network using the current messages.
- gauge_insert(tn, smudge=1e-12)¶
Insert the sqrt of messages on the boundary of a part of the main BP TN.
- Parameters:
tn (TensorNetwork) – The tensor network to insert the messages into.
smudge (float, optional) – Smudge factor to avoid numerical issues, the eigenvalues of the messages are clipped to be at least the largest eigenvalue times this factor.
- Returns:
The sequence of tensors, indices and inverse gauges to apply to reverse the gauges applied.
- Return type:
- gauge_temp(tn, ungauge_outer=True)¶
Context manager to temporarily gauge a tensor network, presumably a subnetwork of the main BP network, using the current messages, and then un-gauge it afterwards.
- Parameters:
tn (TensorNetwork) – The tensor network to gauge.
ungauge_outer (bool, optional) – Whether to un-gauge the outer indices of the tensor network.
- gate_(G, where, max_bond=None, cutoff=0.0, cutoff_mode='rsum2', renorm=0, tn=None, **gate_opts)¶
Apply a gate to the tensor network at the specified sites, using the current messages to gauge the tensors.
- class quimb.experimental.belief_propagation.HD1BP(tn, *, messages=None, damping=0.0, update='sequential', normalize=None, distance=None, smudge_factor=1e-12, inplace=False)¶
Bases:
quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon
Object interface for hyper, dense, 1-norm belief propagation. This is standard belief propagation in tensor network form.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.
damping (float or callable, optional) – The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being damping * old + (1 - damping) * new. This makes convergence more reliable but slower.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence, but parallel can possibly converge to different solutions.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.
inplace (bool, optional) – Whether to perform any operations inplace on the input tensor network.
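The damping rule stated above, damping * old + (1 - damping) * new, is a simple convex mix and can be sketched directly (hypothetical helper; note damping may also be given as a callable to the class itself):

```python
import numpy as np

def damp(old, new, damping=0.0):
    # final message = damping * old + (1 - damping) * new;
    # damping=0.0 means the new message is used unchanged
    return damping * old + (1 - damping) * new

old = np.array([1.0, 0.0])
new = np.array([0.0, 1.0])
damp(old, new, damping=0.5)  # -> array([0.5, 0.5])
```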
- smudge_factor = 1e-12¶
- messages = None¶
- iterate(tol=None)¶
- get_gauged_tn()¶
Assuming the supplied tensor network has no hyper or dangling indices, gauge it by inserting the BP-approximated transfer matrix eigenvectors, which may be complex. The BP-contraction of this gauged network is then simply the product of zeroth entries of each tensor.
- contract(strip_exponent=False, check_zero=True)¶
Estimate the total contraction, i.e. the exponential of the ‘Bethe free entropy’.
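Estimates like this multiply many local contraction values together; combine_local_contractions (documented above) avoids overflow/underflow by accumulating the mantissa and exponent separately. A toy sketch of that idea, assuming nonzero values (cf. the check_zero option):

```python
import math

def combine_sketch(values):
    # toy version of combining (value, power) pairs while keeping a
    # separate running mantissa and base-10 exponent, so that very
    # large/small intermediate products never overflow or underflow
    mantissa, exponent = 1.0, 0
    for v, power in values:
        e = math.floor(math.log10(abs(v)))  # split v = m * 10**e
        m = v / 10**e
        mantissa *= m**power
        exponent += e * power
        # renormalize the running mantissa back towards O(1)
        e2 = math.floor(math.log10(abs(mantissa)))
        mantissa /= 10**e2
        exponent += e2
    # recombine (the real function can instead return them separately
    # when strip_exponent=True)
    return mantissa * 10**exponent
```

For example, combining (2e100, 1) and (3e-100, 1) gives 6.0 without ever forming an overflowing intermediate.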
- class quimb.experimental.belief_propagation.HV1BP(tn, *, messages=None, damping=0.0, update='parallel', normalize='L2', distance='L2', smudge_factor=1e-12, thread_pool=False, contract_every=None, inplace=False)¶
Bases:
quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon
Object interface for hyper, vectorized, 1-norm belief propagation. This is the fast version of belief propagation, possible when there are many small tensors of matching size.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
messages (dict, optional) – Initial messages to use, if not given then uniform messages are used.
damping (float or callable, optional) – The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being damping * old + (1 - damping) * new. This makes convergence more reliable but slower.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence, but parallel can possibly converge to different solutions.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
smudge_factor (float, optional) – A small number to add to the denominator of messages to avoid division by zero. Note when this happens the numerator will also be zero.
thread_pool (bool or int, optional) – Whether to use a thread pool for parallelization; if True, use the default number of threads, if an integer, use that many threads.
contract_every (int, optional) – If not None, ‘contract’ (via BP) the tensor network every contract_every iterations. The resulting values are stored in zvals at corresponding points zval_its.
inplace (bool, optional) – Whether to perform any operations inplace on the input tensor network.
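The role of smudge_factor can be illustrated with the usual ‘divide out’ trick: rather than recomputing, for each output, a product that excludes one input message, one computes the total product once and divides by each input, smudging the denominator to avoid division by zero (a hypothetical helper, not the vectorized internals):

```python
import numpy as np

def outputs_from_inputs(inputs, smudge_factor=1e-12):
    # total product of all incoming messages, computed once
    total = np.prod(np.stack(inputs), axis=0)
    # divide out each input; where an input entry is zero the total is
    # also zero, so the smudged division returns zero rather than NaN
    return [total / (m + smudge_factor) for m in inputs]

ins = [np.array([0.5, 0.5]), np.array([1.0, 0.0])]
outs = outputs_from_inputs(ins)
```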
- smudge_factor = 1e-12¶
- pool = None¶
- property normalize¶
- property distance¶
- initialize_messages_batched(messages=None)¶
- property messages¶
- _compute_outputs_batched(batched_inputs, batched_tensors=None)¶
Given stacked messages and optionally tensors, compute stacked output messages, possibly using a parallel pool.
- _update_outputs_to_inputs_batched(batched_inputs, batched_outputs, masks)¶
Update the stacked input messages from the stacked output messages.
- iterate(tol=None)¶
- get_messages_dense()¶
Get messages in individual form from the batched stacks.
- get_messages()¶
- contract(strip_exponent=False, check_zero=False)¶
Estimate the contraction of the tensor network using the current messages. Uses batched vectorized contractions for speed.
- Parameters:
- Return type:
scalar or (scalar, float)
- contract_dense(strip_exponent=False, check_zero=True)¶
Slow contraction via explicitly extracting individual dense messages. This supports check_zero=True and may be useful for debugging.
- quimb.experimental.belief_propagation.initialize_hyper_messages(tn, fill_fn=None, smudge_factor=1e-12)¶
Initialize messages for belief propagation, this is equivalent to doing a single round of belief propagation with uniform messages.
- Parameters:
tn (TensorNetwork) – The tensor network to initialize messages for.
fill_fn (callable, optional) – A function to fill the messages with, of signature fill_fn(shape).
smudge_factor (float, optional) – A small number to add to the messages to avoid numerical issues.
- Returns:
messages – The initial messages. For every index and tensor id pair, there will be a message to and from with keys (ix, tid) and (tid, ix).
- Return type:
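The key structure described above, with a message in each direction for every (index, tensor id) pair, can be sketched on a toy network. The dict-of-indices representation here is made up for illustration; the real function takes a TensorNetwork:

```python
import numpy as np

# toy network: tensor ids mapped to their indices, plus index dimensions
tensors = {0: ("a", "b"), 1: ("b", "c")}
dims = {"a": 2, "b": 3, "c": 2}

# uniform (all-ones) messages in both directions for every pair,
# keyed (ix, tid) and (tid, ix) as in the docstring above
messages = {}
for tid, inds in tensors.items():
    for ix in inds:
        messages[(ix, tid)] = np.ones(dims[ix])  # index -> tensor
        messages[(tid, ix)] = np.ones(dims[ix])  # tensor -> index
```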
- class quimb.experimental.belief_propagation.L1BP(tn, site_tags=None, *, damping=0.0, update='sequential', normalize=None, distance=None, local_convergence=True, optimize='auto-hq', message_init_function=None, contract_every=None, inplace=False, **contract_opts)¶
Bases:
quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon
Lazy 1-norm belief propagation. BP is run between groups of tensors defined by site_tags. The message updates are lazy contractions.
- Parameters:
tn (TensorNetwork) – The tensor network to run BP on.
site_tags (sequence of str, optional) – The tags identifying the sites in tn; each tag forms a region, and regions should not overlap. If the tensor network is structured, these are inferred automatically.
damping (float or callable, optional) – The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being damping * old + (1 - damping) * new. This makes convergence more reliable but slower.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence, but parallel can possibly converge to different solutions.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.
contract_every (int, optional) – If not None, ‘contract’ (via BP) the tensor network every contract_every iterations. The resulting values are stored in zvals at corresponding points zval_its.
inplace (bool, optional) – Whether to perform any operations inplace on the input tensor network.
contract_opts – Other options supplied to cotengra.array_contract.
- local_convergence = True¶
- optimize = 'auto-hq'¶
- contract_opts¶
- touched¶
- messages¶
- contraction_tns¶
- iterate(tol=5e-06)¶
- contract(strip_exponent=False, check_zero=True)¶
- normalize_message_pairs()¶
Normalize all messages such that for each bond <m_i|m_j> = 1 and <m_i|m_i> = <m_j|m_j> (but in general != 1).
- class quimb.experimental.belief_propagation.L2BP(tn, site_tags=None, *, damping=0.0, update='sequential', normalize=None, distance=None, symmetrize=True, local_convergence=True, optimize='auto-hq', contract_every=None, inplace=False, **contract_opts)¶
Bases:
quimb.tensor.belief_propagation.bp_common.BeliefPropagationCommon
Lazy (as in multiple uncontracted tensors per site) 2-norm (as in for wavefunctions and operators) belief propagation.
- Parameters:
tn (TensorNetwork) – The tensor network to form the 2-norm of and run BP on.
site_tags (sequence of str, optional) – The tags identifying the sites in tn; each tag forms a region, and regions should not overlap. If the tensor network is structured, these are inferred automatically.
damping (float or callable, optional) – The damping factor to apply to messages. This simply mixes some part of the old message into the new one, with the final message being damping * old + (1 - damping) * new. This makes convergence more reliable but slower.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially (newly computed messages are immediately used for other updates in the same iteration round) or in parallel (all messages are computed using messages from the previous round only). Sequential generally helps convergence, but parallel can possibly converge to different solutions.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
symmetrize (bool or callable, optional) – Whether to symmetrize the messages, i.e. for each message ensure that it is hermitian with respect to its bra and ket indices. If a callable it should take a message and return the symmetrized message.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.
contract_every (int, optional) – If not None, ‘contract’ (via BP) the tensor network every contract_every iterations. The resulting values are stored in zvals at corresponding points zval_its.
inplace (bool, optional) – Whether to perform any operations inplace on the input tensor network.
contract_opts – Other options supplied to cotengra.array_contract.
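The symmetrize option above asks that each message be hermitian with respect to its bra and ket indices. For a matrix-shaped message this is just hermitian projection (a minimal sketch; a callable of this form can be supplied as symmetrize):

```python
import numpy as np

def symmetrize(m):
    # project a matrix-shaped message onto its hermitian part,
    # i.e. make it equal to its own conjugate transpose
    return (m + m.conj().T) / 2

m = np.array([[1.0, 2.0 + 1.0j], [0.0, 3.0]])
ms = symmetrize(m)
```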
- local_convergence = True¶
- optimize = 'auto-hq'¶
- contract_opts¶
- touched¶
- property symmetrize¶
- messages¶
- contraction_tns¶
- iterate(tol=5e-06)¶
- normalize_message_pairs()¶
Normalize all messages such that for each bond <m_i|m_j> = 1 and <m_i|m_i> = <m_j|m_j> (but in general != 1). This is different to normalizing each message.
- contract(strip_exponent=False, check_zero=True)¶
Estimate the contraction of the norm squared using the current messages.
- partial_trace(site, normalized=True, optimize='auto-hq')¶
- compress(tn, max_bond=None, cutoff=5e-06, cutoff_mode='rsum2', renorm=0, lazy=False)¶
Compress the state tn, assumed to match this L2BP instance, using the messages stored.
- class quimb.experimental.belief_propagation.RegionGraph(regions=(), autocomplete=True, autoprune=True)¶
A graph of regions, where each region is a set of nodes. For generalized belief propagation or cluster expansion methods.
- lookup¶
- parents¶
- children¶
- info¶
- reset_info()¶
Remove all cached region properties.
- property regions¶
- get_overlapping(region)¶
Get all regions that intersect with the given region.
- add_region(region)¶
Add a new region and update parent-child relationships.
- Parameters:
region (Sequence[Hashable]) – The new region to add.
- remove_region(region)¶
Remove a region and update parent-child relationships.
- autocomplete()¶
Add all missing intersecting sub-regions.
- autoprune()¶
Remove all regions with a count of zero.
- autoextend(regions=None)¶
Extend this region graph upwards by adding in all pairwise unions of regions. If regions is specified, take this as one set of pairs.
- get_parents(region)¶
Get all ancestors that contain the given region, but do not contain any other regions that themselves contain the given region.
- get_children(region)¶
Get all regions that are contained by the given region, but are not contained by any other descendents of the given region.
- get_ancestors(region)¶
Get all regions that contain the given region, not just direct parents.
- get_descendents(region)¶
Get all regions that are contained by the given region, not just direct children.
- get_coparent_pairs(region)¶
Get all regions which are direct parents of any descendant of the given region, but not themselves descendants of the given region.
- get_count(region)¶
Get the count of the given region, i.e. the correct weighting to apply when summing over all regions to avoid overcounting.
- get_total_count()¶
Get the total count of all regions.
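The counts returned by get_count are the standard inclusion-exclusion ‘counting numbers’ of generalized BP: each region counts once, minus the total count of all regions strictly containing it, so that weighted sums over regions avoid overcounting. A self-contained sketch (not the RegionGraph internals, which track parent-child structure incrementally):

```python
def region_counts(regions):
    # counting numbers: c(R) = 1 - sum of c(A) over all strict supersets A
    regions = [frozenset(r) for r in regions]
    counts = {}
    # process from largest to smallest so supersets are done first
    for r in sorted(regions, key=len, reverse=True):
        ancestors = [a for a in regions if r < a]
        counts[r] = 1 - sum(counts[a] for a in ancestors)
    return counts

counts = region_counts([{1, 2}, {2, 3}, {2}])
# {1,2} and {2,3} each count once; their shared sub-region {2}
# gets 1 - 2 = -1, cancelling the double-counting
```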
- get_level(region)¶
Get the level of the given region, i.e. the distance to an ancestor with no parents.
- get_message_parts(pair)¶
Get the three contribution groups for a GBP message from region source to region target: 1. the part of source that is not part of target, i.e. the factors to include; 2. the messages that appear in the numerator of the update equation; 3. the messages that appear in the denominator of the update equation.
- Parameters:
source (Region) – The source region, should be a parent of target.
target (Region) – The target region, should be a child of source.
- Returns:
factors (Region) – The difference of source and target, which will include the factors to appear in the numerator of the update equation.
pairs_mul (set[(Region, Region)]) – The messages that appear in the numerator of the update equation, after cancelling out those that appear in the denominator.
pairs_div (set[(Region, Region)]) – The messages that appear in the denominator of the update equation, after cancelling out those that appear in the numerator.
- check()¶
Run some basic consistency checks on the region graph.
- draw(pos=None, a=20, scale=1.0, radius=0.1, **drawing_opts)¶
- __repr__()¶
- quimb.experimental.belief_propagation.sample_d2bp(tn, output_inds=None, messages=None, max_iterations=100, tol=0.01, bias=None, seed=None, optimize='auto-hq', damping=0.0, diis=False, update='sequential', normalize=None, distance=None, tol_abs=None, tol_rolling_diff=None, local_convergence=True, progbar=False, **contract_opts)¶
Sample a configuration from tn using dense 2-norm belief propagation.
- Parameters:
tn (TensorNetwork) – The tensor network to sample from.
messages (dict[(str, int), array_like], optional) – The initial messages to use, effectively defaults to all ones if not specified.
max_iterations (int, optional) – The maximum number of iterations to perform, per marginal.
tol (float, optional) – The convergence tolerance for messages.
bias (float, optional) – Bias the sampling towards more locally likely bit-strings. This is done by raising the probability of each bit-string to this power.
seed (int, optional) – A random seed for reproducibility.
optimize (str or PathOptimizer, optional) – The path optimizer to use when contracting the messages.
damping (float, optional) – The damping parameter to use, defaults to no damping.
diis (bool or dict, optional) – Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {max_history, beta, rcond}.
update ({'sequential', 'parallel'}, optional) – Whether to update messages sequentially or in parallel.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
tol_abs (float, optional) – The absolute convergence tolerance for maximum message update distance; if not given, taken as tol.
tol_rolling_diff (float, optional) – The rolling mean convergence tolerance for maximum message update distance; if not given, taken as tol. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trend, roughly speaking.
local_convergence (bool, optional) – Whether to allow messages to locally converge - i.e. if all their input messages have converged then stop updating them.
progbar (bool, optional) – Whether to show a progress bar.
contract_opts – Other options supplied to cotengra.array_contract.
- Returns:
config (dict[str, int]) – The sampled configuration, a mapping of output indices to values.
tn_config (TensorNetwork) – The tensor network with the sampled configuration applied.
omega (float) – The BP probability of the sampled configuration.
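The bias parameter above raises each local distribution to a power before sampling, so locally likely bit-strings become more likely still. A hypothetical helper illustrating just that step:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_biased(p, bias=None):
    # sketch of the `bias` option: raise the local BP distribution
    # to a power, renormalize, then sample from it
    if bias is not None:
        p = p**bias
    p = p / p.sum()
    choice = rng.choice(len(p), p=p)
    # return the sampled value and its (possibly biased) probability,
    # the factor this draw contributes to omega
    return int(choice), p[choice]

choice, prob = sample_biased(np.array([0.9, 0.1]), bias=2.0)
# with bias=2.0 the sampling distribution becomes [0.81, 0.01] / 0.82
```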
- quimb.experimental.belief_propagation.sample_hd1bp(tn, messages=None, output_inds=None, max_iterations=1000, tol=0.01, damping=0.0, smudge_factor=1e-12, bias=False, seed=None, progbar=False)¶
Sample all indices of a tensor network using repeated belief propagation runs and decimation.
- Parameters:
tn (TensorNetwork) – The tensor network to sample.
messages (dict, optional) – The current messages. For every index and tensor id pair, there should be a message to and from with keys (ix, tid) and (tid, ix). If not given, then messages are initialized as uniform.
output_inds (sequence of str, optional) – The indices to sample. If not given, then all indices are sampled.
max_iterations (int, optional) – The maximum number of iterations for each message passing run.
tol (float, optional) – The convergence tolerance for each message passing run.
smudge_factor (float, optional) – A small number to add to each message to avoid zeros. Making this large is similar to adding a temperature, which can aid convergence but likely produces less accurate marginals.
bias (bool or float, optional) – Whether to bias the sampling towards the largest marginal. If False (the default), indices are sampled proportional to their marginals. If True, each index is always ‘sampled’ to be its largest-weight value. If a float, the local probability distribution is raised to this power before sampling.
thread_pool (bool, int or ThreadPoolExecutor, optional) – Whether to use a thread pool for parallelization. If an integer, this is the number of threads to use. If True, the number of threads is set to the number of cores. If a ThreadPoolExecutor, this is used directly.
seed (int, optional) – A random seed to use for the sampling.
progbar (bool, optional) – Whether to show a progress bar.
- Returns:
config (dict[str, int]) – The sample configuration, mapping indices to values.
tn_config (TensorNetwork) – The tensor network with all index values (or just those in output_inds if supplied) selected. Contracting this tensor network (which will just be a sequence of scalars if all index values have been sampled) gives the weight of the sample, e.g. it should be 1 for a SAT problem and valid assignment.
omega (float) – The probability of choosing this sample (i.e. product of marginal values). Useful possibly for importance sampling.
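The ‘repeated BP runs and decimation’ loop and the meaning of omega can be sketched abstractly: run BP, estimate a marginal, sample and fix that index, repeat, accumulating the probability of the draws made. The marginals below are made-up stand-ins for the ones BP would estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in BP marginals for two indices (in the real function these are
# re-estimated by a fresh BP run after each index is decimated)
marginals = {"a": np.array([0.7, 0.3]), "b": np.array([0.5, 0.5])}

config, omega = {}, 1.0
for ix, p in marginals.items():
    val = int(rng.choice(len(p), p=p))  # sample this index's value
    config[ix] = val                    # decimate: fix it from now on
    omega *= p[val]                     # probability of this draw
# omega, the product of marginal values chosen, is what makes the
# sample usable for e.g. importance sampling corrections
```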
- quimb.experimental.belief_propagation.sample_hv1bp(tn, messages=None, output_inds=None, max_iterations=1000, tol=0.01, damping=0.0, diis=False, update='parallel', normalize='L2', distance='L2', tol_abs=None, tol_rolling_diff=None, smudge_factor=1e-12, bias=False, seed=None, progbar=False)¶
Sample all indices of a tensor network using repeated belief propagation runs and decimation.
- Parameters:
tn (TensorNetwork) – The tensor network to sample.
messages (dict, optional) – The current messages. For every index and tensor id pair, there should be a message to and from with keys (ix, tid) and (tid, ix). If not given, then messages are initialized as uniform.
output_inds (sequence of str, optional) – The indices to sample. If not given, then all indices are sampled.
max_iterations (int, optional) – The maximum number of iterations for each message passing run.
tol (float, optional) – The convergence tolerance for each message passing run.
damping (float, optional) – The damping factor to use, 0.0 means no damping.
diis (bool or dict, optional) – Whether to use direct inversion in the iterative subspace to help converge the messages by extrapolating to low error guesses. If a dict, should contain options for the DIIS algorithm. The relevant options are {max_history, beta, rcond}.
update ({'parallel'}, optional) – Whether to update messages sequentially or in parallel.
normalize ({'L1', 'L2', 'L2phased', 'Linf', callable}, optional) – How to normalize messages after each update. If None choose automatically. If a callable, it should take a message and return the normalized message. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phase of the message, by default used for complex dtypes.
distance ({'L1', 'L2', 'L2phased', 'Linf', 'cosine', callable}, optional) – How to compute the distance between messages to check for convergence. If None choose automatically. If a callable, it should take two messages and return the distance. If a string, it should be one of ‘L1’, ‘L2’, ‘L2phased’, ‘Linf’, or ‘cosine’ for the corresponding norms. ‘L2phased’ is like ‘L2’ but also normalizes the phases of the messages, by default used for complex dtypes if phased normalization is not already being used.
tol_abs (float, optional) – The absolute convergence tolerance for maximum message update distance; if not given, taken as tol.
tol_rolling_diff (float, optional) – The rolling mean convergence tolerance for maximum message update distance; if not given, taken as tol. This is used to stop running when the messages are just bouncing around the same level, without any overall upward or downward trend, roughly speaking.
smudge_factor (float, optional) – A small number to add to each message to avoid zeros. Making this large is similar to adding a temperature, which can aid convergence but likely produces less accurate marginals.
bias (bool or float, optional) – Whether to bias the sampling towards the largest marginal. If False (the default), indices are sampled proportional to their marginals. If True, each index is always ‘sampled’ to be its largest-weight value. If a float, the local probability distribution is raised to this power before sampling.
thread_pool (bool, int or ThreadPoolExecutor, optional) – Whether to use a thread pool for parallelization. If an integer, this is the number of threads to use. If True, the number of threads is set to the number of cores. If a ThreadPoolExecutor, this is used directly.
seed (int, optional) – A random seed to use for the sampling.
progbar (bool, optional) – Whether to show a progress bar.
- Returns:
config (dict[str, int]) – The sample configuration, mapping indices to values.
tn_config (TensorNetwork) – The tensor network with all index values (or just those in output_inds if supplied) selected. Contracting this tensor network (which will just be a sequence of scalars if all index values have been sampled) gives the weight of the sample, e.g. it should be 1 for a SAT problem and valid assignment.
omega (float) – The probability of choosing this sample (i.e. product of marginal values). Useful possibly for importance sampling.