A sampler study¶
In this notebook, we perform a short study of how the various samplers implemented in pyPESTO behave.
The pipeline¶
First, we show a typical workflow, fully integrating the samplers with a PEtab problem, using a toy example of a conversion reaction.
[1]:
import pypesto
import pypesto.petab
import pypesto.optimize as optimize
import pypesto.sample as sample
import pypesto.visualize as visualize
import petab
# import the PEtab problem
petab_problem = petab.Problem.from_yaml(
    "conversion_reaction/conversion_reaction.yaml")

# import to pypesto
importer = pypesto.petab.PetabImporter(petab_problem)

# create the pypesto problem
problem = importer.create_problem()
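The created problem bundles the objective with the parameter bounds and scales defined in the PEtab files. As a quick sketch (using standard attributes of pyPESTO's Problem class), the imported setup can be inspected directly:

# inspect the imported problem: parameter names and bounds
print(problem.x_names)
print(problem.lb, problem.ub)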
Commonly, as a first step, optimization is performed in order to find good parameter point estimates.
[2]:
%%time
result = optimize.minimize(problem, n_starts=10)
CPU times: user 2.43 s, sys: 319 ms, total: 2.75 s
Wall time: 3.09 s
[3]:
ax = visualize.waterfall(result, size=(4,4))

Next, we perform sampling. Here, we employ a pypesto.sample.AdaptiveParallelTemperingSampler, which runs Markov chain Monte Carlo (MCMC) chains at different temperatures. For each chain, we employ a pypesto.sample.AdaptiveMetropolisSampler. For more on the samplers, see below or the API documentation.
[4]:
sampler = sample.AdaptiveParallelTemperingSampler(
    internal_sampler=sample.AdaptiveMetropolisSampler(),
    n_chains=3)
For the actual sampling, we call the pypesto.sample.sample function. By passing the result object to the function, the previously found global optimum is used as the starting point for the MCMC sampling.
[5]:
%%time
result = sample.sample(problem, n_samples=10000, sampler=sampler, result=result)
100%|██████████| 10000/10000 [01:32<00:00, 108.33it/s]
CPU times: user 1min 3s, sys: 6.2 s, total: 1min 10s
Wall time: 1min 32s
When the sampling is finished, we can analyze our results. A first thing to do is to assess the sampling burn-in:
[6]:
sample.geweke_test(result)
[6]:
0
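The test returns the index of the first sample after the burn-in phase; here, 0 means that no burn-in was detected. The index is also stored in the result object (a short sketch, assuming the burn_in attribute of pyPESTO's sample result):

# the burn-in index determined by the Geweke test
print(result.sample_result.burn_in)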
pyPESTO provides functions to analyze both the sampling process and the obtained sampling result. Visualizing the traces, e.g., makes it possible to detect burn-in phases or to fine-tune hyperparameters. First, the parameter trajectories can be visualized:
[7]:
sample.geweke_test(result)
ax = visualize.sampling_parameters_trace(result, use_problem_bounds=False)

Next, the log-posterior trace can also be visualized:
[8]:
ax = visualize.sampling_fval_trace(result)

There are various options for visualizing the result. The scatter plot shows histograms of the 1-dimensional parameter marginals and scatter plots of the 2-dimensional parameter combinations:
[9]:
ax = visualize.sampling_scatter(result, size=[13,6])

sampling_1d_marginals allows plotting e.g. kernel density estimates or histograms (internally using seaborn):
[10]:
for i_chain in range(len(result.sample_result.betas)):
    visualize.sampling_1d_marginals(
        result, i_chain=i_chain, suptitle=f"Chain: {i_chain}")

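Beyond visual inspection, chain quality can also be quantified. A minimal sketch, assuming your pyPESTO version provides sample.effective_sample_size (see the API documentation):

# compute the effective sample size of the burned-in chain
ess = sample.effective_sample_size(result)
print(f"Effective sample size: {ess}")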
That’s it for the moment on using the sampling pipeline.
1-dim test problem¶
To compare and test the various implemented samplers, we first study a 1-dimensional test problem of a Gaussian mixture density, together with a flat prior.
[11]:
import numpy as np
from scipy.stats import multivariate_normal
import seaborn as sns
import pypesto
def density(x):
    return 0.3 * multivariate_normal.pdf(x, mean=-1.5, cov=0.1) + \
        0.7 * multivariate_normal.pdf(x, mean=2.5, cov=0.2)

def nllh(x):
    return -np.log(density(x))

objective = pypesto.Objective(fun=nllh)
problem = pypesto.Problem(
    objective=objective, lb=-4, ub=5, x_names=['x'])
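As a quick sanity check (not part of the original notebook), the mixture weights 0.3 and 0.7 imply that the density integrates to approximately one:

# numerically verify that the mixture density is normalized
xs_check = np.linspace(-10, 10, 10001)
print(np.trapz([density(x) for x in xs_check], xs_check))  # ~ 1.0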
The likelihood has two well-separated modes:
[12]:
xs = np.linspace(-4, 5, 100)
ys = [density(x) for x in xs]
ax = sns.lineplot(x=xs, y=ys, color='C1')
Metropolis sampler¶
For this problem, let us try out the simplest sampler, the pypesto.sample.MetropolisSampler.
[13]:
%%time
sampler = sample.MetropolisSampler({'std': 0.5})
result = sample.sample(problem, 1e4, sampler, x0=np.array([0.5]))
100%|██████████| 10000/10000 [00:04<00:00, 2011.51it/s]
CPU times: user 4.85 s, sys: 183 ms, total: 5.04 s
Wall time: 5 s

[14]:
sample.geweke_test(result)
ax = visualize.sampling_1d_marginals(result)
ax[0][0].plot(xs, ys)
[14]:
[<matplotlib.lines.Line2D at 0x7f5715552640>]

The obtained posterior does not accurately represent the distribution, often capturing only one mode. This is because it is hard for the Markov chain to jump between the distribution’s two modes. This can be fixed by choosing a larger proposal standard deviation std:
[15]:
%%time
sampler = sample.MetropolisSampler({'std': 1})
result = sample.sample(problem, 1e4, sampler, x0=np.array([0.5]))
100%|██████████| 10000/10000 [00:04<00:00, 2026.01it/s]
CPU times: user 4.87 s, sys: 145 ms, total: 5.01 s
Wall time: 4.95 s
[16]:
sample.geweke_test(result)
ax = visualize.sampling_1d_marginals(result)
ax[0][0].plot(xs, ys)
[16]:
[<matplotlib.lines.Line2D at 0x7f5714aa5f10>]

In general, MCMC methods have difficulties exploring multimodal landscapes. One way to overcome this is to use parallel tempering. There, several chains are run in parallel, sampling the density lifted to different temperatures. At high temperatures, proposed steps are more likely to be accepted, and thus jumps between modes become more likely.
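Tempering replaces the target density by density(x)**beta for inverse temperatures beta <= 1: the smaller beta, the flatter the landscape. A minimal sketch (reusing the density defined above; not part of the original notebook) shows how this raises the Metropolis acceptance probability for a jump from a mode into the low-density valley between the modes:

# Metropolis acceptance probability for a move from a mode (x=-1.5)
# into the valley between the modes (x=0.5), under density(x)**beta
x_old, x_new = -1.5, 0.5
for beta in [1, 1e-1, 1e-2]:
    p_accept = min(1.0, (density(x_new) / density(x_old)) ** beta)
    print(f"beta={beta:.0e}: acceptance probability {p_accept:.3f}")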
Parallel tempering sampler¶
In pyPESTO, the most basic parallel tempering algorithm is the pypesto.sample.ParallelTemperingSampler. It takes an internal_sampler parameter to specify which sampler to use within the individual chains. Further, we can directly specify which inverse temperatures betas to use. When not specifying the betas explicitly, but only the number of chains n_chains, an established near-exponential decay scheme is used.
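To give an intuition for such a scheme (a sketch only; not necessarily the exact ladder pyPESTO constructs internally), inverse temperatures can be spaced exponentially between 1 and a small minimum value:

# sketch: an exponentially decaying ladder of inverse temperatures,
# from beta=1 (the target density) down to a small beta_min (nearly flat)
n_chains = 3
beta_min = 1e-2
betas = np.exp(np.linspace(0, np.log(beta_min), n_chains))
print(betas)  # [1.   0.1  0.01]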
[17]:
%%time
sampler = sample.ParallelTemperingSampler(
    internal_sampler=sample.MetropolisSampler(),
    betas=[1, 1e-1, 1e-2])

result = sample.sample(problem, 1e4, sampler, x0=np.array([0.5]))
100%|██████████| 10000/10000 [00:17<00:00, 575.00it/s]
CPU times: user 17.3 s, sys: 297 ms, total: 17.6 s
Wall time: 17.4 s
[18]:
sample.geweke_test(result)
for i_chain in range(len(result.sample_result.betas)):
    visualize.sampling_1d_marginals(
        result, i_chain=i_chain, suptitle=f"Chain: {i_chain}")

Of final interest here is the first chain, at index i_chain=0, which approximates the posterior well.
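Only this first chain (inverse temperature beta=1) samples the actual posterior; the higher-temperature chains mainly aid exploration. Its samples can be extracted directly (a sketch, assuming the (n_chains, n_iterations, n_parameters) layout of trace_x in pyPESTO's sample result):

# samples of the posterior-targeting chain at index 0
chain0 = result.sample_result.trace_x[0]
print(chain0.shape)  # (n_iterations, n_parameters)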
Adaptive Metropolis sampler¶
The problem of having to specify the proposal step variation manually can be overcome by using the pypesto.sample.AdaptiveMetropolisSampler, which iteratively adjusts the proposal steps to the function landscape.
[19]:
%%time
sampler = sample.AdaptiveMetropolisSampler()
result = sample.sample(problem, 1e4, sampler, x0=np.array([0.5]))
100%|██████████| 10000/10000 [00:06<00:00, 1526.08it/s]
CPU times: user 6.47 s, sys: 119 ms, total: 6.59 s
Wall time: 6.58 s
[20]:
sample.geweke_test(result)
ax = visualize.sampling_1d_marginals(result)

Adaptive parallel tempering sampler¶
The pypesto.sample.AdaptiveParallelTemperingSampler iteratively adjusts the temperatures to obtain good swapping rates between chains.
[21]:
%%time
sampler = sample.AdaptiveParallelTemperingSampler(
    internal_sampler=sample.AdaptiveMetropolisSampler(), n_chains=3)

result = sample.sample(problem, 1e4, sampler, x0=np.array([0.5]))
100%|██████████| 10000/10000 [00:20<00:00, 494.22it/s]
CPU times: user 20.1 s, sys: 210 ms, total: 20.3 s
Wall time: 20.3 s
[22]:
sample.geweke_test(result)
for i_chain in range(len(result.sample_result.betas)):
    visualize.sampling_1d_marginals(
        result, i_chain=i_chain, suptitle=f"Chain: {i_chain}")

[23]:
result.sample_result.betas
[23]:
array([1.0000000e+00, 2.2121804e-01, 2.0000000e-05])
Pymc3 sampler¶
[24]:
%%time
sampler = sample.Pymc3Sampler()
result = sample.sample(problem, 1e4, sampler, x0=np.array([0.5]))
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Initializing NUTS failed. Falling back to elementwise auto-assignment.
Sequential sampling (1 chains in 1 job)
Slice: [x]
Sampling 1 chain for 1_000 tune and 10_000 draw iterations (1_000 + 10_000 draws total) took 31 seconds.
Only one chain was sampled, this makes it impossible to run some convergence checks
CPU times: user 37 s, sys: 797 ms, total: 37.8 s
Wall time: 39.8 s
[25]:
sample.geweke_test(result)
for i_chain in range(len(result.sample_result.betas)):
    visualize.sampling_1d_marginals(
        result, i_chain=i_chain, suptitle=f"Chain: {i_chain}")

If not specified, pymc3 chooses an adequate sampler automatically.
2-dim test problem: Rosenbrock banana¶
The adaptive parallel tempering sampler with chains running adaptive Metropolis samplers is also able to sample from more challenging posterior distributions. To illustrate this briefly, we use the Rosenbrock function.
[26]:
import scipy.optimize as so
import pypesto

# define the objective via the Rosenbrock function
objective = pypesto.Objective(fun=so.rosen)

dim_full = 4
lb = -5 * np.ones((dim_full, 1))
ub = 5 * np.ones((dim_full, 1))

problem = pypesto.Problem(objective=objective, lb=lb, ub=ub)
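As a quick orientation (not in the original notebook), the Rosenbrock function attains its global minimum of 0 at the all-ones vector, so the posterior mass should concentrate near (1, 1, 1, 1):

# the global optimum of the Rosenbrock function lies at (1, ..., 1)
print(so.rosen(np.ones(dim_full)))  # 0.0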
[27]:
%%time
sampler = sample.AdaptiveParallelTemperingSampler(
    internal_sampler=sample.AdaptiveMetropolisSampler(), n_chains=10)

result = sample.sample(problem, 1e4, sampler, x0=np.zeros(dim_full))
100%|██████████| 10000/10000 [00:40<00:00, 244.08it/s]
CPU times: user 40.9 s, sys: 238 ms, total: 41.2 s
Wall time: 41 s
[28]:
ax = visualize.sampling_scatter(result)
ax = visualize.sampling_1d_marginals(result)
Burn in index not found in the results, the full chain will be shown.
You may want to use, e.g., 'pypesto.sample.geweke_test'.
Burn in index not found in the results, the full chain will be shown.
You may want to use, e.g., 'pypesto.sample.geweke_test'.


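As the warning suggests, the burn-in can be computed first, so that only the converged part of the chains is shown:

# determine the burn-in, then plot only the burned-in samples
sample.geweke_test(result)
ax = visualize.sampling_scatter(result)
ax = visualize.sampling_1d_marginals(result)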
[ ]: