pypesto.sample

Sample

Draw samples from the distribution, with support for various samplers.

class pypesto.sample.AdaptiveMetropolisSampler(options: Dict | None = None)[source]

Bases: MetropolisSampler

Metropolis-Hastings sampler with adaptive proposal covariance.

__init__(options: Dict | None = None)[source]
classmethod default_options()[source]

Return the default options for the sampler.

initialize(problem: Problem, x0: ndarray)[source]

Initialize the sampler.
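
A minimal usage sketch (the toy objective, bounds, starting point, and sample count below are illustrative placeholders, not part of the API):

    import numpy as np
    import pypesto
    import pypesto.sample as sample

    # Toy two-parameter problem; replace with your own objective and bounds.
    objective = pypesto.Objective(fun=lambda x: 0.5 * np.sum(x**2))
    problem = pypesto.Problem(objective=objective, lb=[-5, -5], ub=[5, 5])

    # Adaptive Metropolis with default options; x0 is the chain's starting point.
    sampler = sample.AdaptiveMetropolisSampler()
    result = sample.sample(problem, n_samples=10000, sampler=sampler, x0=np.array([0.5, -0.5]))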

class pypesto.sample.AdaptiveParallelTemperingSampler(internal_sampler: InternalSampler, betas: Sequence[float] | None = None, n_chains: int | None = None, options: Dict | None = None)[source]

Bases: ParallelTemperingSampler

Parallel tempering sampler with an adaptive temperature schedule.

adjust_betas(i_sample: int, swapped: Sequence[bool])[source]

Update temperatures as in Vousden2016.

classmethod default_options() Dict[source]

Get default options for sampler.

class pypesto.sample.DynestySampler(sampler_args: dict | None = None, run_args: dict | None = None, dynamic: bool = True)[source]

Bases: Sampler

Use dynesty for sampling.

NB: get_samples returns MCMC-like samples, by resampling original dynesty samples according to their importance weights. This is because the original samples contain many low-likelihood samples. To work with the original samples, modify the results object with pypesto_result.sample_result = sampler.get_original_samples(), where sampler is an instance of pypesto.sample.DynestySampler. The original dynesty results object is available at sampler.results.

NB: the dynesty samplers can be customized significantly, by providing sampler_args and run_args to your pypesto.sample.DynestySampler() call. For example, code to parallelize dynesty is provided in pyPESTO’s sampler_study.ipynb notebook.

Wrapper around https://dynesty.readthedocs.io/en/stable/, see there for details.
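
A usage sketch, assuming a pypesto.Problem instance named problem is already defined; the dynesty argument values are illustrative only:

    import pypesto.sample as sample

    sampler = sample.DynestySampler(
        sampler_args={"nlive": 250},   # forwarded to the dynesty sampler's __init__
        run_args={"maxiter": 10000},   # forwarded to run_nested
        dynamic=False,                 # static nested sampling
    )
    # n_samples is not used by dynesty and can be None.
    result = sample.sample(problem, n_samples=None, sampler=sampler)

    # result.sample_result holds MCMC-like (importance-resampled) samples by default;
    # to work with the original dynesty samples instead:
    result.sample_result = sampler.get_original_samples()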

__init__(sampler_args: dict | None = None, run_args: dict | None = None, dynamic: bool = True)[source]

Initialize sampler.

Parameters:
  • sampler_args – Further keyword arguments that are passed on to the __init__ method of the dynesty sampler.

  • run_args – Further keyword arguments that are passed on to the run_nested method of the dynesty sampler.

  • dynamic – Whether to use dynamic or static nested sampling.

get_original_samples() McmcPtResult[source]

Get the original dynesty samples in the pyPESTO sample result format.

Return type:

The pyPESTO sample result.

get_samples() McmcPtResult[source]

Get MCMC-like (importance-resampled) samples in the pyPESTO sample result format.

Return type:

The pyPESTO sample result.

initialize(problem: Problem, x0: ndarray | List[ndarray]) None[source]

Initialize the sampler.

loglikelihood(x)[source]

Log-likelihood (log-probability density) function.

prior_transform(prior_sample: ndarray) ndarray[source]

Transform prior sample from unit cube to pyPESTO prior.

TODO: support priors that are not uniform. For now, a warning is raised in self.initialize.

Parameters:
  • prior_sample – The prior sample, provided by dynesty.

  • problem – The pyPESTO problem.

Return type:

The transformed prior sample.

restore_internal_sampler(filename: str) None[source]

Restore the state of the internal dynesty sampler.

Parameters:

filename – The file from which the internal sampler will be restored.

sample(n_samples: int, beta: float | None = None) None[source]

Perform nested sampling with dynesty.

save_internal_sampler(filename: str) None[source]

Save the state of the internal dynesty sampler.

This makes it easier to analyze the original dynesty samples after sampling, via restore_internal_sampler.

Parameters:

filename – The internal sampler will be saved here.

class pypesto.sample.EmceeSampler(nwalkers: int = 1, sampler_args: dict | None = None, run_args: dict | None = None)[source]

Bases: Sampler

Use emcee for sampling.

Wrapper around https://emcee.readthedocs.io/en/stable/, see there for details.
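
A usage sketch, again assuming a pypesto.Problem named problem is already defined; the walker count and run arguments are illustrative:

    import pypesto.sample as sample

    sampler = sample.EmceeSampler(
        nwalkers=20,
        run_args={"progress": True},   # forwarded to emcee.EnsembleSampler.run_mcmc
    )
    result = sample.sample(problem, n_samples=1000, sampler=sampler)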

__init__(nwalkers: int = 1, sampler_args: dict | None = None, run_args: dict | None = None)[source]

Initialize sampler.

Parameters:
  • nwalkers – The number of walkers in the ensemble.

  • sampler_args – Further keyword arguments that are passed on to emcee.EnsembleSampler.__init__.

  • run_args – Further keyword arguments that are passed on to emcee.EnsembleSampler.run_mcmc.

get_epsilon_ball_initial_state(center: ndarray, problem: Problem, epsilon: float = 0.001)[source]

Get walker initial positions as samples from an epsilon ball.

The ball is scaled in each direction according to the magnitude of the center in that direction.

It is assumed that, because vectors are generated near a good point, all generated vectors are evaluable, so evaluability is not checked.

Points that are generated outside the problem bounds will get shifted to lie on the edge of the problem bounds.

Parameters:
  • center – The center of the epsilon ball. The dimension should match the full dimension of the pyPESTO problem. This will be returned as the first position.

  • problem – The pyPESTO problem.

  • epsilon – The relative radius of the ball. e.g., if epsilon=0.5 and the center of the first dimension is at 100, then the upper and lower bounds of the epsilon ball in the first dimension will be 150 and 50, respectively.

get_samples() McmcPtResult[source]

Get the samples into the fitting pypesto format.

initialize(problem: Problem, x0: ndarray | List[ndarray]) None[source]

Initialize the sampler.

It is recommended to initialize the walkers in a small ball around an a priori preferred position, e.g. an optimized parameter vector; see the x0 parameter below.

Parameters:

x0 – The “a priori preferred position”, e.g. an optimized parameter vector (see https://emcee.readthedocs.io/en/stable/user/faq/). The first walker is placed at this position, and the remaining walkers are assigned positions sampled uniformly from a smaller ball around it. Alternatively, a set of vectors can be provided to initialize the walkers; in this case, any remaining walkers are initialized at points sampled uniformly within the problem bounds.

sample(n_samples: int, beta: float = 1.0) None[source]

Perform ensemble sampling with emcee.

class pypesto.sample.InternalSampler(options: Dict | None = None)[source]

Bases: Sampler

Sampler to be used inside a parallel tempering sampler.

The last sample can be obtained via get_last_sample and set via set_last_sample.

abstract get_last_sample() InternalSample[source]

Get the last sample in the chain.

Returns:

The last sample in the chain in the exchange format.

Return type:

internal_sample

make_internal(temper_lpost: bool)[source]

Allow the sampler to be used as an inner sampler.

Can be called by parallel tempering samplers during initialization. Default: Do nothing.

Parameters:

temper_lpost – Whether to temper the full posterior or only the likelihood.

abstract set_last_sample(sample: InternalSample)[source]

Set the last sample in the chain to the passed value.

Parameters:

sample – The sample that will replace the last sample in the chain.

class pypesto.sample.MetropolisSampler(options: Dict | None = None)[source]

Bases: InternalSampler

Simple Metropolis-Hastings sampler with fixed proposal variance.

__init__(options: Dict | None = None)[source]
classmethod default_options()[source]

Return the default options for the sampler.

get_last_sample() InternalSample[source]

Get the last sample in the chain.

Returns:

The last sample in the chain in the exchange format.

Return type:

internal_sample

get_samples() McmcPtResult[source]

Get the samples into the fitting pypesto format.

initialize(problem: Problem, x0: ndarray)[source]

Initialize the sampler.

make_internal(temper_lpost: bool)[source]

Allow the sampler to be used as an inner sampler.

Can be called by parallel tempering samplers during initialization. Default: Do nothing.

Parameters:

temper_lpost – Whether to temper the full posterior or only the likelihood.

sample(n_samples: int, beta: float = 1.0)[source]

Perform Metropolis-Hastings sampling, starting from the last recorded sample.

set_last_sample(sample: InternalSample)[source]

Set the last sample in the chain to the passed value.

Parameters:

sample – The sample that will replace the last sample in the chain.

class pypesto.sample.ParallelTemperingSampler(internal_sampler: InternalSampler, betas: Sequence[float] | None = None, n_chains: int | None = None, options: Dict | None = None)[source]

Bases: Sampler

Simple parallel tempering sampler.
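
A construction sketch with an adaptive Metropolis sampler in each tempered chain (chain count and sample number are illustrative; a pypesto.Problem named problem is assumed to exist):

    import pypesto.sample as sample

    sampler = sample.ParallelTemperingSampler(
        internal_sampler=sample.AdaptiveMetropolisSampler(),
        n_chains=4,   # alternatively, pass explicit inverse temperatures via betas
    )
    result = sample.sample(problem, n_samples=5000, sampler=sampler)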

__init__(internal_sampler: InternalSampler, betas: Sequence[float] | None = None, n_chains: int | None = None, options: Dict | None = None)[source]
adjust_betas(i_sample: int, swapped: Sequence[bool])[source]

Adjust temperature values. Default: Do nothing.

classmethod default_options() Dict[source]

Return the default options for the sampler.

get_samples() McmcPtResult[source]

Concatenate all chains.

initialize(problem: Problem, x0: ndarray | List[ndarray])[source]

Initialize all samplers.

sample(n_samples: int, beta: float = 1.0)[source]

Sample in each chain and swap samples between chains.

swap_samples() Sequence[bool][source]

Swap samples as in Vousden2016.

class pypesto.sample.Sampler(options: Dict | None = None)[source]

Bases: ABC

Sampler base class, not functional on its own.

The sampler maintains an internal chain, which is initialized in initialize, and updated in sample.

__init__(options: Dict | None = None)[source]
classmethod default_options() Dict[source]

Return the default options for the sampler.

Returns:

Default sampler options.

Return type:

default_options

abstract get_samples() McmcPtResult[source]

Get the generated samples.

abstract initialize(problem: Problem, x0: ndarray | List[ndarray])[source]

Initialize the sampler.

Parameters:
  • problem – The problem for which to sample.

  • x0 – Initial parameter vector; samplers should, but are not required to, use it.

abstract sample(n_samples: int, beta: float = 1.0)[source]

Perform sampling.

Parameters:
  • n_samples – Number of samples to generate.

  • beta – Inverse of the temperature to which the system is elevated.

classmethod translate_options(options)[source]

Translate options and fill in defaults.

Parameters:

options – Options configuring the sampler.
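
A sketch of inspecting and filling sampler options; no option keys are hard-coded here, since they differ between sampler classes:

    import pypesto.sample as sample

    # Inspect the defaults of a concrete sampler class.
    defaults = sample.AdaptiveMetropolisSampler.default_options()
    print(defaults)

    # translate_options fills unspecified entries with defaults; an empty dict
    # therefore yields the defaults themselves.
    options = sample.AdaptiveMetropolisSampler.translate_options({})
    sampler = sample.AdaptiveMetropolisSampler(options=options)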

pypesto.sample.auto_correlation(result: Result) float[source]

Calculate the autocorrelation of the MCMC chains.

Parameters:

result – The pyPESTO result object with filled sample result.

Returns:

Estimate of the integrated autocorrelation time of the MCMC chains.

Return type:

auto_correlation

pypesto.sample.calculate_ci_mcmc_sample(result: Result, ci_level: float = 0.95, exclude_burn_in: bool = True) Tuple[ndarray, ndarray][source]

Calculate parameter credibility intervals based on MCMC samples.

Parameters:
  • result – The pyPESTO result object with filled sample result.

  • ci_level – Credibility level of the interval, defaults to 0.95 (a 95% interval).

Returns:

Bounds of the MCMC percentile-based confidence interval.

Return type:

lb, ub
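
A usage sketch, assuming result holds a filled sample result (e.g. as returned by pypesto.sample.sample):

    from pypesto.sample import calculate_ci_mcmc_sample

    lb, ub = calculate_ci_mcmc_sample(result, ci_level=0.95)
    for i, (lo, hi) in enumerate(zip(lb, ub)):
        print(f"parameter {i}: [{lo:.3g}, {hi:.3g}]")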

pypesto.sample.calculate_ci_mcmc_sample_prediction(simulated_values: ndarray, ci_level: float = 0.95) Tuple[ndarray, ndarray][source]

Calculate prediction credibility intervals based on MCMC samples.

Parameters:
  • simulated_values – Simulated model states or model observables.

  • ci_level – Credibility level of the interval, defaults to 0.95 (a 95% interval).

Returns:

Bounds of the MCMC-based prediction confidence interval.

Return type:

lb, ub

pypesto.sample.effective_sample_size(result: Result) float[source]

Calculate the effective sample size of the MCMC chains.

Parameters:

result – The pyPESTO result object with filled sample result.

Returns:

Estimate of the effective sample size of the MCMC chains.

Return type:

ess

pypesto.sample.geweke_test(result: Result, zscore: float = 2.0) int[source]

Calculate the burn-in of MCMC chains.

Parameters:
  • result – The pyPESTO result object with filled sample result.

  • zscore – The Geweke test threshold.

Returns:

Iteration after which the first and the last fraction of the chain no longer differ significantly according to the Geweke test, i.e. the burn-in index.

Return type:

burn_in
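
A diagnostics sketch combining geweke_test, effective_sample_size, and auto_correlation; result is assumed to hold a filled sample result:

    from pypesto.sample import auto_correlation, effective_sample_size, geweke_test

    burn_in = geweke_test(result)        # burn-in index from the Geweke test
    tau = auto_correlation(result)       # integrated autocorrelation time estimate
    ess = effective_sample_size(result)  # effective sample size estimate
    print(f"burn-in: {burn_in}, autocorrelation: {tau:.1f}, ESS: {ess:.1f}")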

pypesto.sample.sample(problem: Problem, n_samples: int | None, sampler: Sampler | None = None, x0: ndarray | List[ndarray] | None = None, result: Result | None = None, filename: str | Callable | None = None, overwrite: bool = False) Result[source]

Perform parameter sampling.

Parameters:
  • problem – The problem to be solved.

  • n_samples – Number of samples to generate. None can be used if the sampler does not use n_samples.

  • sampler – The sampler to perform the actual sampling. If None is provided, a pypesto.sample.AdaptiveMetropolisSampler is used.

  • x0 – Initial parameter for the Markov chain. If None, the best parameter found in optimization is used. Note that some samplers require an initial parameter, some may ignore it. x0 can also be a list, to have separate starting points for parallel tempering chains.

  • result – A result to write to. If None is provided, one is created from the problem.

  • filename – Name of the hdf5 file, where the result will be saved. Default is None, which deactivates automatic saving. If set to “Auto” it will automatically generate a file named year_month_day_profiling_result.hdf5. Optionally a method, see docs for pypesto.store.auto.autosave.

  • overwrite – Whether to overwrite result/sampling in the autosave file if it already exists.

Returns:

A result with a filled-in sample_result part.

Return type:

result
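
An end-to-end sketch: optimize first, then sample starting from the best optimizer result, with autosaving enabled (the file name, start count, and sample number are illustrative; a pypesto.Problem named problem is assumed to be defined):

    import pypesto.optimize as optimize
    import pypesto.sample as sample

    result = optimize.minimize(problem, n_starts=10)
    result = sample.sample(
        problem,
        n_samples=10000,
        sampler=sample.AdaptiveMetropolisSampler(),
        result=result,               # x0 defaults to the best optimization result
        filename="sampling.hdf5",    # autosave target; None disables saving
        overwrite=True,
    )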