API reference

Engines

The execution of the multistarts can be parallelized in different ways, e.g. multi-threaded or cluster-based. Note that it is not checked whether a single task itself is internally parallelized.

class pypesto.engine.Engine[source]

Bases: ABC

Abstract engine base class.

__init__()[source]
abstract execute(tasks: List[Task], progress_bar: bool = True)[source]

Execute tasks.

Parameters:
  • tasks – List of tasks to execute.

  • progress_bar – Whether to display a progress bar.

class pypesto.engine.MultiProcessEngine(n_procs: Optional[int] = None)[source]

Bases: Engine

Parallelize the task execution using multiprocessing.

Parameters:

n_procs – The maximum number of processes to use in parallel. Defaults to the number of CPUs available on the system according to os.cpu_count(). The number of processes actually used will be the minimum of n_procs and the number of tasks submitted.

__init__(n_procs: Optional[int] = None)[source]
execute(tasks: List[Task], progress_bar: bool = True)[source]

Pickle tasks and distribute work over parallel processes.

Parameters:
  • tasks – List of tasks to execute.

  • progress_bar – Whether to display a progress bar.
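
For illustration, a minimal sketch of using this engine to parallelize a multistart optimization; it assumes a previously constructed pypesto.Problem named problem:

    import pypesto.optimize
    from pypesto.engine import MultiProcessEngine

    # Distribute 20 optimizer starts over up to 4 worker processes.
    engine = MultiProcessEngine(n_procs=4)
    result = pypesto.optimize.minimize(
        problem=problem,  # assumed to exist
        n_starts=20,
        engine=engine,
    )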

class pypesto.engine.MultiThreadEngine(n_threads: Optional[int] = None)[source]

Bases: Engine

Parallelize the task execution using multithreading.

Parameters:

n_threads – The maximum number of threads to use in parallel. Defaults to the number of CPUs available on the system according to os.cpu_count(). The number of threads actually used will be the minimum of n_threads and the number of tasks submitted.

__init__(n_threads: Optional[int] = None)[source]
execute(tasks: List[Task], progress_bar: bool = True)[source]

Deepcopy tasks and distribute work over parallel threads.

Parameters:
  • tasks – List of tasks to execute.

  • progress_bar – Whether to display a progress bar.

class pypesto.engine.SingleCoreEngine[source]

Bases: Engine

Dummy engine for sequential execution on one core.

Note that the objective itself may be multithreaded.

__init__()[source]
execute(tasks: List[Task], progress_bar: bool = True)[source]

Execute all tasks in a simple for loop sequentially.

Parameters:
  • tasks – List of tasks to execute.

  • progress_bar – Whether to display a progress bar.

class pypesto.engine.Task[source]

Bases: ABC

Abstract Task class.

A task is one of a list of independent execution tasks that are submitted to the execution engine to be executed via its execute() method, commonly in parallel.

__init__()[source]
abstract execute()[source]

Execute the task and return its results.
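
A minimal sketch of a custom task with illustrative names; the engine collects the return value of each task's execute():

    from pypesto.engine import SingleCoreEngine, Task

    class SquareTask(Task):
        # Hypothetical task that squares a number.
        def __init__(self, value: float):
            super().__init__()
            self.value = value

        def execute(self) -> float:
            # The actual work performed when the engine runs this task.
            return self.value ** 2

    # The engine returns the list of task results (here [4.0, 9.0]).
    results = SingleCoreEngine().execute(
        [SquareTask(2.0), SquareTask(3.0)], progress_bar=False
    )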

Ensemble

class pypesto.ensemble.Ensemble(x_vectors: ndarray, x_names: Optional[Sequence[str]] = None, vector_tags: Optional[Sequence[Tuple[int, int]]] = None, ensemble_type: Optional[EnsembleType] = None, predictions: Optional[Sequence[EnsemblePrediction]] = None, lower_bound: Optional[ndarray] = None, upper_bound: Optional[ndarray] = None)[source]

Bases: object

An ensemble is a wrapper around a numpy array.

It comes with some convenience functionality: it allows mapping parameter values via identifiers to the correct parameters, computing summaries of the parameter vectors (mean, standard deviation, median, percentiles) more easily, and storing predictions made by pyPESTO, such that the parameter ensemble and the predictions are linked to each other.

__init__(x_vectors: ndarray, x_names: Optional[Sequence[str]] = None, vector_tags: Optional[Sequence[Tuple[int, int]]] = None, ensemble_type: Optional[EnsembleType] = None, predictions: Optional[Sequence[EnsemblePrediction]] = None, lower_bound: Optional[ndarray] = None, upper_bound: Optional[ndarray] = None)[source]

Initialize.

Parameters:
  • x_vectors – Parameter vectors of the ensemble, as an array of shape n_parameters x n_vectors

  • x_names – Names or identifiers of the parameters

  • vector_tags – Additional tag adding information about the parameter vectors, of the form (optimization_run, optimization_step) if the ensemble is created from an optimization result, or (sampling_chain, sampling_step) if it is created from a sampling result.

  • ensemble_type – Type of ensemble: Ensemble (default), sample, or unprocessed_chain. Samples are meant to be representative, ensembles can be any collection of parameters, and unprocessed chains still contain their burn-in.

  • predictions – List of EnsemblePrediction objects

  • lower_bound – array of potential lower bounds for the parameters

  • upper_bound – array of potential upper bounds for the parameters
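
For illustration, a minimal sketch of constructing an ensemble from a plain numpy array (parameter names and bounds are made up):

    import numpy as np
    from pypesto.ensemble import Ensemble

    # 3 parameters, 100 vectors: note the shape n_parameters x n_vectors.
    x_vectors = np.random.rand(3, 100)
    ensemble = Ensemble(
        x_vectors=x_vectors,
        x_names=["p1", "p2", "p3"],
        lower_bound=np.zeros(3),
        upper_bound=np.ones(3),
    )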

check_identifiability() DataFrame[source]

Check identifiability of ensemble.

Use the ensemble mean and standard deviation to assess (in a rudimentary way) whether or not parameters are identifiable. Returns a dataframe with tuples that specify whether or not the lower and the upper bounds are violated.

Returns:

DataFrame indicating parameter identifiability based on mean plus/minus standard deviations and parameter bounds

Return type:

parameter_identifiability

compute_summary(percentiles_list: Sequence[int] = (5, 20, 80, 95))[source]

Compute summary for the parameters of the ensemble.

Summary includes the mean, the median, the standard deviation and possibly percentiles. Those summary results are added as a data member to the Ensemble object.

Parameters:

percentiles_list – List or tuple of percent numbers for the percentiles

Returns:

Dict with mean, std, median, and percentiles of parameter vectors

Return type:

summary

static from_optimization_endpoints(result: Result, rel_cutoff: Optional[float] = None, max_size: int = inf, percentile: Optional[float] = None, **kwargs)[source]

Construct an ensemble from an optimization result.

Parameters:
  • result – A pyPESTO result that contains an optimization result.

  • rel_cutoff – Relative cutoff. Exclude parameter vectors for which the objective value difference to the best vector is greater than the cutoff, i.e., include all vectors such that fval(vector) <= fval(opt_vector) + rel_cutoff.

  • max_size – The maximum size the ensemble should be.

  • percentile – Percentile of a chi^2 distribution. Used to determine the cutoff value.

Return type:

The ensemble.
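
A sketch of typical usage, assuming result is a pypesto.Result from a finished multistart optimization:

    from pypesto.ensemble import Ensemble

    # Keep at most 100 endpoints whose objective value lies within 10
    # of the best vector found.
    ensemble = Ensemble.from_optimization_endpoints(
        result, rel_cutoff=10.0, max_size=100
    )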

static from_optimization_history(result: Result, rel_cutoff: Optional[float] = None, max_size: int = inf, max_per_start: int = inf, distribute: bool = True, percentile: Optional[float] = None, **kwargs)[source]

Construct an ensemble from the history of an optimization.

Parameters:
  • result – A pyPESTO result that contains an optimization result with history recorded.

  • rel_cutoff – Relative cutoff. Exclude parameter vectors for which the objective value difference to the best vector is greater than the cutoff, i.e., include all vectors such that fval(vector) <= fval(opt_vector) + rel_cutoff.

  • max_size – The maximum size the ensemble should be.

  • max_per_start – The maximum number of vectors to be included from a single optimization start.

  • distribute – Boolean flag specifying whether, from each start, the best values should be taken (False) or the indices should be distributed more evenly across the history (True).

  • percentile – Percentile of a chi^2 distribution. Used to determine the cutoff value.

Return type:

The ensemble.

static from_sample(result: Result, remove_burn_in: bool = True, chain_slice: Optional[slice] = None, x_names: Optional[Sequence[str]] = None, lower_bound: Optional[ndarray] = None, upper_bound: Optional[ndarray] = None, **kwargs)[source]

Construct an ensemble from a sample.

Parameters:
  • result – A pyPESTO result that contains a sample result.

  • remove_burn_in – Exclude parameter vectors from the ensemble if they are in the “burn-in”.

  • chain_slice – Subset the chain with a slice. Any “burn-in” removal occurs first.

  • x_names – Names or identifiers of the parameters

  • lower_bound – array of potential lower bounds for the parameters

  • upper_bound – array of potential upper bounds for the parameters

Return type:

The ensemble.

predict(predictor: Callable, prediction_id: Optional[str] = None, sensi_orders: Tuple = (0,), default_value: Optional[float] = None, mode: Literal['mode_fun', 'mode_res'] = 'mode_fun', include_llh_weights: bool = False, include_sigmay: bool = False, engine: Optional[Engine] = None, progress_bar: bool = True) EnsemblePrediction[source]

Run predictions for a full ensemble.

The user hands over a predictor function and settings; all results are then grouped as an EnsemblePrediction for the whole ensemble.

Parameters:
  • predictor – Prediction function, e.g., an AmiciPredictor

  • prediction_id – Identifier for the predictions

  • sensi_orders – Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad

  • default_value – If the mapping requires parameters that are not found in the parameter source, it can make sense in some cases to fill them in with this default value (e.g., np.nan), though this should be used with caution.

  • mode – Whether to compute function values or residuals.

  • include_llh_weights – Whether to include weights in the output of the predictor.

  • include_sigmay – Whether to include standard deviations in the output of the predictor.

  • engine – Parallelization engine. Defaults to sequential execution on a SingleCoreEngine.

  • progress_bar – Whether to display a progress bar.

Return type:

The prediction of the ensemble.
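
A sketch of running predictions for an existing ensemble in parallel; predictor is assumed to be a prediction function such as a pypesto.predict.AmiciPredictor built elsewhere:

    from pypesto.engine import MultiProcessEngine

    ensemble_prediction = ensemble.predict(
        predictor=predictor,  # assumed to exist
        prediction_id="states",  # made-up identifier
        engine=MultiProcessEngine(),
    )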

class pypesto.ensemble.EnsemblePrediction(predictor: Optional[Callable[[Sequence], PredictionResult]] = None, prediction_id: Optional[str] = None, prediction_results: Optional[Sequence[PredictionResult]] = None, lower_bound: Optional[Sequence[ndarray]] = None, upper_bound: Optional[Sequence[ndarray]] = None)[source]

Bases: object

Class of ensemble prediction.

An ensemble prediction consists of an ensemble, i.e., a set of parameter vectors and their identifiers (such as a sample), and a prediction function. It can be attached to an ensemble-type object.

__init__(predictor: Optional[Callable[[Sequence], PredictionResult]] = None, prediction_id: Optional[str] = None, prediction_results: Optional[Sequence[PredictionResult]] = None, lower_bound: Optional[Sequence[ndarray]] = None, upper_bound: Optional[Sequence[ndarray]] = None)[source]

Initialize.

Parameters:
  • predictor – Prediction function, e.g., an AmiciPredictor, which takes a parameter vector as input and outputs a PredictionResult object

  • prediction_id – Identifier for the predictions

  • prediction_results – List of Prediction results

  • lower_bound – Array of potential lower bounds for the predictions. Should have the same shape as the output of the predictions, i.e., a list of numpy arrays (one list entry per condition), each array of shape n_timepoints x n_outputs for its condition.

  • upper_bound – Array of potential upper bounds for the predictions, in the same format as lower_bound.

compute_chi2(amici_objective: AmiciObjective)[source]

Compute the chi^2 error of the weighted mean trajectory.

Parameters:

amici_objective – The objective function of the model from which the parameter ensemble was created.

Return type:

The chi^2 error.

compute_summary(percentiles_list: Sequence[int] = (5, 20, 80, 95), weighting: bool = False, compute_weighted_sigma: bool = False) Dict[source]

Compute summary from the ensemble prediction results.

Summary includes the mean, the median, the standard deviation and possibly percentiles. Those summary results are added as a data member to the EnsemblePrediction object.

Parameters:
  • percentiles_list – List or tuple of percent numbers for the percentiles

  • weighting – Whether weights should be used for the trajectory.

  • compute_weighted_sigma – Whether weighted standard deviation of the ensemble mean trajectory should be computed. Defaults to False.

Returns:

dictionary of prediction results with the keys mean, std, median, percentiles, …

Return type:

summary

condense_to_arrays()[source]

Add prediction results to the EnsemblePrediction object as condensed arrays.

Reshape the prediction results to an array and add them as a member to the EnsemblePrediction object. This is meant to be used only if all conditions of a prediction have the same observables, as is often the case for large-scale data sets taken from online databases or similar.

pypesto.ensemble.get_covariance_matrix_parameters(ens: Ensemble) ndarray[source]

Compute the covariance of ensemble parameters.

Parameters:

ens – Ensemble object containing a set of parameter vectors

Returns:

covariance matrix of ensemble parameters

Return type:

covariance_matrix
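
For example, assuming an ensemble constructed as above:

    from pypesto.ensemble import get_covariance_matrix_parameters

    # Covariance across the ensemble's parameter vectors,
    # shape (n_parameters, n_parameters).
    cov = get_covariance_matrix_parameters(ensemble)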

pypesto.ensemble.get_covariance_matrix_predictions(ens: Union[Ensemble, EnsemblePrediction], prediction_index: int = 0) ndarray[source]

Compute the covariance of ensemble predictions.

Parameters:
  • ens – Ensemble object containing a set of parameter vectors and a set of predictions or EnsemblePrediction object containing only predictions

  • prediction_index – index telling which prediction from the list should be analyzed

Returns:

covariance matrix of ensemble predictions

Return type:

covariance_matrix

pypesto.ensemble.get_pca_representation_parameters(ens: Ensemble, n_components: int = 2, rescale_data: bool = True, rescaler: Optional[Callable] = None) Tuple[source]

PCA of parameter ensemble.

Compute the representation with reduced dimensionality via principal component analysis (with a given number of principal components) of the parameter ensemble.

Parameters:
  • ens – Ensemble object containing a set of parameter vectors

  • n_components – number of components for the dimension reduction

  • rescale_data – flag indicating whether the principal components should be rescaled using a rescaler function (e.g., an arcsinh function)

  • rescaler – callable function to rescale the output of the PCA (defaults to numpy.arcsinh)

Returns:

  • principal_components – principal components of the parameter vector ensemble

  • pca_object – returned fitted pca object from sklearn.decomposition.PCA()
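
A sketch of computing a two-dimensional PCA embedding of an existing ensemble:

    from pypesto.ensemble import get_pca_representation_parameters

    principal_components, pca_object = get_pca_representation_parameters(
        ensemble, n_components=2
    )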

pypesto.ensemble.get_pca_representation_predictions(ens: Union[Ensemble, EnsemblePrediction], prediction_index: int = 0, n_components: int = 2, rescale_data: bool = True, rescaler: Optional[Callable] = None) Tuple[source]

PCA of ensemble prediction.

Compute the representation with reduced dimensionality via principal component analysis (with a given number of principal components) of the ensemble prediction.

Parameters:
  • ens – Ensemble object containing a set of parameter vectors and a set of predictions or EnsemblePrediction object containing only predictions

  • prediction_index – index telling which prediction from the list should be analyzed

  • n_components – number of components for the dimension reduction

  • rescale_data – flag indicating whether the principal components should be rescaled using a rescaler function (e.g., an arcsinh function)

  • rescaler – callable function to rescale the output of the PCA (defaults to numpy.arcsinh)

Returns:

  • principal_components – principal components of the parameter vector ensemble

  • pca_object – returned fitted pca object from sklearn.decomposition.PCA()

pypesto.ensemble.get_percentile_label(percentile: Union[float, int, str]) str[source]

Convert a percentile to a label.

Labels for percentiles are used at different locations (e.g. ensemble prediction code, and visualization code). This method ensures that the same percentile is labeled identically everywhere.

The percentile is rounded to two decimal places in the label representation if it is specified to more decimal places. This is for readability in plotting routines, and to avoid float to string conversion issues related to float precision.

Parameters:

percentile – The percentile value that will be used to generate a label.

Return type:

The label of the (possibly rounded) percentile.

pypesto.ensemble.get_spectral_decomposition_lowlevel(matrix: ndarray, normalize: bool = False, only_separable_directions: bool = False, cutoff_absolute_separable: float = 1e-16, cutoff_relative_separable: float = 1e-16, only_identifiable_directions: bool = False, cutoff_absolute_identifiable: float = 1e-16, cutoff_relative_identifiable: float = 1e-16) Tuple[ndarray, ndarray][source]

Compute the spectral decomposition of ensemble parameters or predictions.

Parameters:
  • matrix – symmetric matrix (typically a covariance matrix) of parameters or predictions

  • normalize – flag indicating whether the returned eigenvalues should be normalized with respect to the largest eigenvalue

  • only_separable_directions – return only separable directions according to cutoff_[absolute/relative]_separable

  • cutoff_absolute_separable – Consider only eigenvalues of the covariance matrix above this cutoff (only applied when only_separable_directions is True)

  • cutoff_relative_separable – Consider only eigenvalues of the covariance matrix above this cutoff, when rescaled with the largest eigenvalue (only applied when only_separable_directions is True)

  • only_identifiable_directions – return only identifiable directions according to cutoff_[absolute/relative]_identifiable

  • cutoff_absolute_identifiable – Consider only low eigenvalues of the covariance matrix with inverses above this cutoff (only applied when only_identifiable_directions is True)

  • cutoff_relative_identifiable – Consider only low eigenvalues of the covariance matrix, when rescaled with the largest eigenvalue, with inverses above this cutoff (only applied when only_identifiable_directions is True)

Returns:

  • eigenvalues – Eigenvalues of the covariance matrix

  • eigenvectors – Eigenvectors of the covariance matrix

pypesto.ensemble.get_spectral_decomposition_parameters(ens: Ensemble, normalize: bool = False, only_separable_directions: bool = False, cutoff_absolute_separable: float = 1e-16, cutoff_relative_separable: float = 1e-16, only_identifiable_directions: bool = False, cutoff_absolute_identifiable: float = 1e-16, cutoff_relative_identifiable: float = 1e-16) Tuple[ndarray, ndarray][source]

Compute the spectral decomposition of ensemble parameters.

Parameters:
  • ens – Ensemble object containing a set of parameter vectors

  • normalize – flag indicating whether the returned eigenvalues should be normalized with respect to the largest eigenvalue

  • only_separable_directions – return only separable directions according to cutoff_[absolute/relative]_separable

  • cutoff_absolute_separable – Consider only eigenvalues of the covariance matrix above this cutoff (only applied when only_separable_directions is True)

  • cutoff_relative_separable – Consider only eigenvalues of the covariance matrix above this cutoff, when rescaled with the largest eigenvalue (only applied when only_separable_directions is True)

  • only_identifiable_directions – return only identifiable directions according to cutoff_[absolute/relative]_identifiable

  • cutoff_absolute_identifiable – Consider only low eigenvalues of the covariance matrix with inverses above this cutoff (only applied when only_identifiable_directions is True)

  • cutoff_relative_identifiable – Consider only low eigenvalues of the covariance matrix, when rescaled with the largest eigenvalue, with inverses above this cutoff (only applied when only_identifiable_directions is True)

Returns:

  • eigenvalues – Eigenvalues of the covariance matrix

  • eigenvectors – Eigenvectors of the covariance matrix
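
For example, with eigenvalues normalized to the largest eigenvalue:

    from pypesto.ensemble import get_spectral_decomposition_parameters

    eigenvalues, eigenvectors = get_spectral_decomposition_parameters(
        ensemble, normalize=True
    )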

pypesto.ensemble.get_spectral_decomposition_predictions(ens: Ensemble, normalize: bool = False, only_separable_directions: bool = False, cutoff_absolute_separable: float = 1e-16, cutoff_relative_separable: float = 1e-16, only_identifiable_directions: bool = False, cutoff_absolute_identifiable: float = 1e-16, cutoff_relative_identifiable: float = 1e-16) Tuple[ndarray, ndarray][source]

Compute the spectral decomposition of ensemble predictions.

Parameters:
  • ens – Ensemble object containing a set of parameter vectors and a set of predictions or EnsemblePrediction object containing only predictions

  • normalize – flag indicating whether the returned eigenvalues should be normalized with respect to the largest eigenvalue

  • only_separable_directions – return only separable directions according to cutoff_[absolute/relative]_separable

  • cutoff_absolute_separable – Consider only eigenvalues of the covariance matrix above this cutoff (only applied when only_separable_directions is True)

  • cutoff_relative_separable – Consider only eigenvalues of the covariance matrix above this cutoff, when rescaled with the largest eigenvalue (only applied when only_separable_directions is True)

  • only_identifiable_directions – return only identifiable directions according to cutoff_[absolute/relative]_identifiable

  • cutoff_absolute_identifiable – Consider only low eigenvalues of the covariance matrix with inverses above this cutoff (only applied when only_identifiable_directions is True)

  • cutoff_relative_identifiable – Consider only low eigenvalues of the covariance matrix, when rescaled with the largest eigenvalue, with inverses above this cutoff (only applied when only_identifiable_directions is True)

Returns:

  • eigenvalues – Eigenvalues of the covariance matrix

  • eigenvectors – Eigenvectors of the covariance matrix

pypesto.ensemble.get_umap_representation_parameters(ens: Ensemble, n_components: int = 2, normalize_data: bool = False, **kwargs) Tuple[source]

UMAP of parameter ensemble.

Compute the representation with reduced dimensionality via UMAP (with a given number of UMAP components) of the parameter ensemble. Allows passing additional keyword arguments to the UMAP routine.

Parameters:
  • ens – Ensemble object containing a set of parameter vectors

  • n_components – number of components for the dimension reduction

  • normalize_data – flag indicating whether the parameter ensemble should be rescaled with mean and standard deviation

Returns:

  • umap_components – first components of the umap embedding

  • umap_object – returned fitted umap object from umap.UMAP()

pypesto.ensemble.get_umap_representation_predictions(ens: Union[Ensemble, EnsemblePrediction], prediction_index: int = 0, n_components: int = 2, normalize_data: bool = False, **kwargs) Tuple[source]

UMAP of ensemble prediction.

Compute the representation with reduced dimensionality via UMAP (with a given number of UMAP components) of the ensemble predictions. Allows passing additional keyword arguments to the UMAP routine.

Parameters:
  • ens – Ensemble object containing a set of parameter vectors and a set of predictions or EnsemblePrediction object containing only predictions

  • prediction_index – index telling which prediction from the list should be analyzed

  • n_components – number of components for the dimension reduction

  • normalize_data – flag indicating whether the parameter ensemble should be rescaled with mean and standard deviation

Returns:

  • umap_components – first components of the umap embedding

  • umap_object – returned fitted umap object from umap.UMAP()

pypesto.ensemble.read_ensemble_prediction_from_h5(predictor: Optional[Callable[[Sequence], PredictionResult]], input_file: str)[source]

Read an ensemble prediction from an HDF5 File.

pypesto.ensemble.read_from_csv(path: str, sep: str = '\t', index_col: int = 0, headline_parser: Optional[Callable] = None, ensemble_type: Optional[EnsembleType] = None, lower_bound: Optional[ndarray] = None, upper_bound: Optional[ndarray] = None)[source]

Create an ensemble from a csv file.

Parameters:
  • path – path to csv file to read in parameter ensemble

  • sep – separator in csv file

  • index_col – index column in csv file

  • headline_parser – A function which reads in the headline of the csv file and converts it into vector_tags (see the constructor of Ensemble for more details)

  • ensemble_type – Ensemble type: representative sample or random ensemble

  • lower_bound – array of potential lower bounds for the parameters

  • upper_bound – array of potential upper bounds for the parameters

Returns:

Ensemble object of parameter vectors

Return type:

result
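
A sketch with a hypothetical tab-separated file:

    from pypesto.ensemble import read_from_csv

    # "parameter_ensemble.tsv" is a made-up example file name.
    ensemble = read_from_csv("parameter_ensemble.tsv", sep="\t")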

pypesto.ensemble.read_from_df(dataframe: DataFrame, headline_parser: Optional[Callable] = None, ensemble_type: Optional[EnsembleType] = None, lower_bound: Optional[ndarray] = None, upper_bound: Optional[ndarray] = None)[source]

Create an ensemble from a pandas DataFrame.

Parameters:
  • dataframe – pandas.DataFrame to read in parameter ensemble

  • headline_parser – A function which reads in the headline of the dataframe and converts it into vector_tags (see the constructor of Ensemble for more details)

  • ensemble_type – Ensemble type: representative sample or random ensemble

  • lower_bound – array of potential lower bounds for the parameters

  • upper_bound – array of potential upper bounds for the parameters

Returns:

Ensemble object of parameter vectors

Return type:

result

pypesto.ensemble.write_ensemble_prediction_to_h5(ensemble_prediction: EnsemblePrediction, output_file: str, base_path: Optional[str] = None)[source]

Write an EnsemblePrediction to hdf5.

Parameters:
  • ensemble_prediction – The prediction to be saved.

  • output_file – The filename of the hdf5 file.

  • base_path – An optional file path where the file should be saved.

History

Objective function call history. The history tracks and stores function evaluations performed by, e.g., the optimizer and other routines, allowing one to, e.g., recover results from failed runs, fill in further details, and evaluate performance.

class pypesto.history.CountHistory(options: Optional[Union[HistoryOptions, Dict]] = None)[source]

Bases: CountHistoryBase

History that only counts the number of evaluations; other functions cannot be invoked.

get_fval_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[float], float][source]

Return function values.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_grad_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Return gradients.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_hess_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Return hessians.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_res_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Residuals.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_sres_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Residual sensitivities.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_time_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[float], float][source]

Cumulative execution times.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_x_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[ndarray], ndarray][source]

Return parameters.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

class pypesto.history.CountHistoryBase(options: Optional[Union[HistoryOptions, Dict]] = None)[source]

Bases: HistoryBase

Abstract class tracking counts of function evaluations.

Needs a separate implementation of trace.

__init__(options: Optional[Union[HistoryOptions, Dict]] = None)[source]
property n_fval: int

Return number of function evaluations.

property n_grad: int

Return number of gradient evaluations.

property n_hess: int

Return number of Hessian evaluations.

property n_res: int

Return number of residual evaluations.

property n_sres: int

Return number of residual sensitivity evaluations.

property start_time: float

Return start time.

update(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], result: Dict[str, Union[float, ndarray]]) None[source]

Update history after a function evaluation.

Parameters:
  • x – The parameter vector.

  • sensi_orders – The sensitivity orders computed.

  • mode – The objective function mode computed (function value or residuals).

  • result – The objective function values for parameters x, sensitivities sensi_orders and mode mode.

class pypesto.history.CsvHistory(file: str, x_names: Optional[Sequence[str]] = None, options: Optional[Union[HistoryOptions, Dict]] = None, load_from_file: bool = False)[source]

Bases: CountHistoryBase

Stores a representation of the history in a CSV file.

Parameters:
  • file – CSV file name.

  • x_names – Parameter names.

  • options – History options.

  • load_from_file – If True, the history will be initialized from data in the specified file.

__init__(file: str, x_names: Optional[Sequence[str]] = None, options: Optional[Union[HistoryOptions, Dict]] = None, load_from_file: bool = False)[source]
finalize(message: Optional[str] = None, exitflag: Optional[str] = None)[source]

See HistoryBase docstring.

get_fval_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return function values.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_grad_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return gradients.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_hess_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return hessians.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_res_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Residuals.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_sres_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Residual sensitivities.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_time_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Cumulative execution times.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_x_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return parameters.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

update(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], result: Dict[str, Union[float, ndarray]]) None[source]

See History docstring.

exception pypesto.history.CsvHistoryTemplateError(storage_file: str)[source]

Bases: ValueError

Error raised when no template is given for CSV history.

__init__(storage_file: str)[source]
class pypesto.history.Hdf5History(id: str, file: str, options: Optional[Union[HistoryOptions, Dict]] = None)[source]

Bases: HistoryBase

Stores a representation of the history in an HDF5 file.

Parameters:
  • id – Id of the history

  • file – HDF5 file name.

  • options – History options.

__init__(id: str, file: str, options: Optional[Union[HistoryOptions, Dict]] = None)[source]
property exitflag
finalize(*args, **kwargs)[source]

Finalize history. Called after a run. Default: Do nothing.

Parameters:
  • message – Optimizer message to be saved.

  • exitflag – Optimizer exitflag to be saved.

get_fval_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return function values.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_grad_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return gradients.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_hess_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return hessians.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_res_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Residuals.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_sres_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Residual sensitivities.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_time_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Cumulative execution times.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_x_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return parameters.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

static load(id: str, file: str, options: Optional[Union[HistoryOptions, Dict]] = None) Hdf5History[source]

Load the History object from an HDF5 file.

property message
property n_fval

Return number of function evaluations.

property n_grad

Return number of gradient evaluations.

property n_hess

Return number of Hessian evaluations.

property n_res

Return number of residual evaluations.

property n_sres

Return number of residual sensitivity evaluations.

recover_options(file: str)[source]

Recover options when loading the HDF5 history from file.

Done by testing which entries were recorded.

property start_time

Return start time.

property trace_save_iter
update(*args, **kwargs)[source]

Update history after a function evaluation.

Parameters:
  • x – The parameter vector.

  • sensi_orders – The sensitivity orders computed.

  • mode – The objective function mode computed (function value or residuals).

  • result – The objective function values for parameters x, sensitivities sensi_orders and mode mode.

class pypesto.history.HistoryBase(options: Optional[HistoryOptions] = None)[source]

Bases: ABC

Abstract base class for histories.

ALL_KEYS = ('x', 'fval', 'grad', 'hess', 'res', 'sres', 'time')
RESULT_KEYS = ('fval', 'grad', 'hess', 'res', 'sres')
__init__(options: Optional[HistoryOptions] = None)[source]
finalize(message: Optional[str] = None, exitflag: Optional[str] = None) None[source]

Finalize history. Called after a run. Default: Do nothing.

Parameters:
  • message – Optimizer message to be saved.

  • exitflag – Optimizer exitflag to be saved.

get_chi2_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[float], float][source]

Chi2 values.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

abstract get_fval_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[float], float][source]

Return function values.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

abstract get_grad_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Return gradients.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

abstract get_hess_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Return hessians.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

abstract get_res_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Residuals.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_schi2_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Chi2 sensitivities.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

abstract get_sres_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Residual sensitivities.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

abstract get_time_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[float], float][source]

Cumulative execution times.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_trimmed_indices() ndarray[source]

Get indices for a monotonically decreasing history.

abstract get_x_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[ndarray], ndarray][source]

Return parameters.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

implements_trace() bool[source]

Check whether the history has a trace that can be queried.

abstract property n_fval: int

Return number of function evaluations.

abstract property n_grad: int

Return number of gradient evaluations.

abstract property n_hess: int

Return number of Hessian evaluations.

abstract property n_res: int

Return number of residual evaluations.

abstract property n_sres: int

Return number of residual sensitivity evaluations.

abstract property start_time: float

Return start time.

abstract update(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], result: Dict[str, Union[float, ndarray]]) None[source]

Update history after a function evaluation.

Parameters:
  • x – The parameter vector.

  • sensi_orders – The sensitivity orders computed.

  • mode – The objective function mode computed (function value or residuals).

  • result – The objective function values for parameters x, sensitivities sensi_orders and mode mode.

class pypesto.history.HistoryOptions(trace_record: bool = False, trace_record_grad: bool = True, trace_record_hess: bool = True, trace_record_res: bool = True, trace_record_sres: bool = True, trace_save_iter: int = 10, storage_file: Optional[str] = None)[source]

Bases: dict

Options for what values to record.

In addition, it implements a factory pattern to generate history objects.

Parameters:
  • trace_record – Flag indicating whether to record the trace of function calls. The trace_record_* flags only become effective if trace_record is True.

  • trace_record_grad – Flag indicating whether to record the gradient in the trace.

  • trace_record_hess – Flag indicating whether to record the Hessian in the trace.

  • trace_record_res – Flag indicating whether to record the residual in the trace.

  • trace_record_sres – Flag indicating whether to record the residual sensitivities in the trace.

  • trace_save_iter – After how many iterations to store the trace.

  • storage_file – File to save the history to. Can be any of None, a “{filename}.csv”, or a “{filename}.hdf5” file. Depending on the values, the create_history method creates the appropriate object. Occurrences of “{id}” in the file name are replaced by the id upon creation of a history, if applicable.
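
A sketch of recording a full trace to per-start HDF5 files during a multistart optimization; problem is an assumed existing pypesto.Problem:

    import pypesto.optimize
    from pypesto.history import HistoryOptions

    history_options = HistoryOptions(
        trace_record=True,
        storage_file="history_{id}.hdf5",  # "{id}" is replaced per start
    )
    result = pypesto.optimize.minimize(
        problem=problem,  # assumed to exist
        n_starts=10,
        history_options=history_options,
    )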

__init__(trace_record: bool = False, trace_record_grad: bool = True, trace_record_hess: bool = True, trace_record_res: bool = True, trace_record_sres: bool = True, trace_save_iter: int = 10, storage_file: Optional[str] = None)[source]
static assert_instance(maybe_options: Union[HistoryOptions, Dict]) HistoryOptions[source]

Return a valid options object.

Parameters:

maybe_options (HistoryOptions or dict)

exception pypesto.history.HistoryTypeError(history_type: str)[source]

Bases: ValueError

Error raised when an unsupported history type is requested.

__init__(history_type: str)[source]
class pypesto.history.MemoryHistory(options: Optional[Union[HistoryOptions, Dict]] = None)[source]

Bases: CountHistoryBase

Class for optimization history stored in memory.

Tracks the number of function evaluations and keeps an in-memory trace of them.

Parameters:

options (pypesto.history.options.HistoryOptions) – History options.

__init__(options: Optional[Union[HistoryOptions, Dict]] = None)[source]
get_fval_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return function values.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_grad_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return gradients.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_hess_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return hessians.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_res_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Residuals.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_sres_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Residual sensitivities.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_time_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Cumulative execution times.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_x_trace(ix: Optional[Union[Sequence[int], int]] = None, trim: bool = False) Union[Sequence[Union[float, ndarray, np.nan]], float, ndarray, np.nan]

Return parameters.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

update(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], result: Dict[str, Union[float, ndarray]]) None[source]

See History docstring.

class pypesto.history.NoHistory(options: Optional[HistoryOptions] = None)[source]

Bases: HistoryBase

Dummy history that does not do anything.

Can be used whenever a history object is needed, but no history is desired. Can be created, but not queried.

get_fval_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[float], float][source]

Return function values.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_grad_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Return gradients.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_hess_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Return hessians.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_res_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Residuals.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_sres_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[Union[ndarray, np.nan]], ndarray, np.nan][source]

Residual sensitivities.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_time_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[float], float][source]

Cumulative execution times.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

get_x_trace(ix: Optional[Union[int, Sequence[int]]] = None, trim: bool = False) Union[Sequence[ndarray], ndarray][source]

Return parameters.

Takes as parameter an index or indices and returns corresponding trace values. If only a single value is requested, the list is flattened.

property n_fval: int

Return number of function evaluations.

property n_grad: int

Return number of gradient evaluations.

property n_hess: int

Return number of Hessian evaluations.

property n_res: int

Return number of residual evaluations.

property n_sres: int

Return number of residual sensitivity evaluations.

property start_time: float

Return start time.

update(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], result: Dict[str, Union[float, ndarray]]) None[source]

Update history after a function evaluation.

Parameters:
  • x – The parameter vector.

  • sensi_orders – The sensitivity orders computed.

  • mode – The objective function mode computed (function value or residuals).

  • result – The objective function values for parameters x, sensitivities sensi_orders and mode mode.

class pypesto.history.OptimizerHistory(history: HistoryBase, x0: ndarray, lb: ndarray, ub: ndarray, generate_from_history: bool = False)[source]

Bases: object

Optimizer objective call history.

Container around a History object, additionally keeping track of optimal values.

fval0, fval_min

Initial and best function value found.

x0, x_min

Initial and best parameters found.

grad_min

gradient for best parameters

hess_min

hessian (approximation) for best parameters

res_min

residuals for best parameters

sres_min

residual sensitivities for best parameters

Parameters:
  • history – History object to attach to this container. This history object implements the storage of the actual history.

  • x0 – Initial values for optimization.

  • lb – Lower bound. Used for checking validity of optimal points.

  • ub – Upper bound. Used for checking validity of optimal points.

  • generate_from_history – If set to True, try to fill the attributes of this container based on the provided history.

MIN_KEYS = ('x', 'fval', 'grad', 'hess', 'res', 'sres')
__init__(history: HistoryBase, x0: ndarray, lb: ndarray, ub: ndarray, generate_from_history: bool = False) None[source]
finalize(message: Optional[str] = None, exitflag: Optional[int] = None)[source]

Finalize history.

Parameters:
  • message – Optimizer message to be saved.

  • exitflag – Optimizer exitflag to be saved.

update(x: ndarray, sensi_orders: Tuple[int], mode: Literal['mode_fun', 'mode_res'], result: Dict[str, Union[float, ndarray]]) None[source]

Update history and best found value.

pypesto.history.create_history(id: str, x_names: Sequence[str], options: HistoryOptions) HistoryBase[source]

Create a HistoryBase object; factory method.

Parameters:
  • id – Identifier for the history.

  • x_names – Parameter names.

  • options – History options.

Returns:

A history object corresponding to the inputs.

Return type:

history
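
A sketch of the factory in action, with a made-up id and parameter names; the storage_file suffix determines the history type created:

    from pypesto.history import HistoryOptions, create_history

    options = HistoryOptions(trace_record=True, storage_file="history_{id}.csv")
    history = create_history(id="0", x_names=["p1", "p2"], options=options)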

Logging

Logging convenience functions.

pypesto.logging.log(name: str = 'pypesto', level: int = 20, console: bool = True, filename: str = '')[source]

Log messages from name with level to any combination of console/file.

Parameters:
  • name – The name of the logger.

  • level – The output level to use.

  • console – If True, messages are logged to console.

  • filename – If specified, messages are logged to a file with this name.
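
For example, to log debug-level messages both to the console and to a file:

    import logging
    import pypesto.logging

    pypesto.logging.log(
        name="pypesto",
        level=logging.DEBUG,
        console=True,
        filename="pypesto.log",
    )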

pypesto.logging.log_level_active(logger: Logger, level: int) bool[source]

Check whether the requested log level is active in any handler.

This is useful in case log expressions are costly.

Parameters:
  • logger – The logger.

  • level – The requested log level.

Returns:

Whether there is a handler registered that handles events of importance at least level and higher.

Return type:

active
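
A sketch of guarding a costly log statement; compute_expensive_summary is a made-up placeholder:

    import logging
    import pypesto.logging

    def compute_expensive_summary() -> str:
        # Placeholder for a costly diagnostic computation.
        return "summary"

    logger = logging.getLogger("pypesto")
    if pypesto.logging.log_level_active(logger, logging.DEBUG):
        # Only pay for the computation if some handler will emit it.
        logger.debug("diagnostics: %s", compute_expensive_summary())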

pypesto.logging.log_to_console(level: int = 20)[source]

Log to console.

Parameters:

See the log method.

pypesto.logging.log_to_file(level: int = 20, filename: str = '.pypesto_logging.log')[source]

Log to file.

Parameters:

See the log method.

Objective

class pypesto.objective.AggregatedObjective(objectives: Sequence[ObjectiveBase], x_names: Optional[Sequence[str]] = None)[source]

Bases: ObjectiveBase

Aggregates multiple objectives into one objective.

__init__(objectives: Sequence[ObjectiveBase], x_names: Optional[Sequence[str]] = None)[source]

Initialize objective.

Parameters:
  • objectives – Sequence of pypesto.ObjectiveBase instances

  • x_names – Sequence of names of the (optimized) parameters (for details, see the documentation of x_names in pypesto.ObjectiveBase).
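
A sketch of aggregating two toy objectives; the aggregated objective evaluates their sum:

    import numpy as np
    import pypesto
    from pypesto.objective import AggregatedObjective

    obj1 = pypesto.Objective(fun=lambda x: np.sum((x - 1.0) ** 2))
    obj2 = pypesto.Objective(fun=lambda x: np.sum(x ** 2))
    aggregated = AggregatedObjective([obj1, obj2])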

call_unprocessed(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], **kwargs) Dict[str, Union[float, ndarray, Dict]][source]

See ObjectiveBase for more documentation.

Main method to overwrite from the base class. It handles and delegates the actual objective evaluation.

check_mode(mode: Literal['mode_fun', 'mode_res']) bool[source]

See ObjectiveBase documentation.

check_sensi_orders(sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res']) bool[source]

See ObjectiveBase documentation.

get_config() dict[source]

Return basic information of the objective configuration.

initialize()[source]

See ObjectiveBase documentation.

class pypesto.objective.AmiciObjective(amici_model: Union[amici.Model, amici.ModelPtr], amici_solver: Union[amici.Solver, amici.SolverPtr], edatas: Union[Sequence[amici.ExpData], amici.ExpData], max_sensi_order: Optional[int] = None, x_ids: Optional[Sequence[str]] = None, x_names: Optional[Sequence[str]] = None, parameter_mapping: Optional[ParameterMapping] = None, guess_steadystate: Optional[bool] = None, n_threads: Optional[int] = 1, fim_for_hess: Optional[bool] = True, amici_object_builder: Optional[AmiciObjectBuilder] = None, calculator: Optional[AmiciCalculator] = None, amici_reporting: Optional[amici.RDataReporting] = None)[source]

Bases: ObjectiveBase

Allows creating an objective directly from an AMICI model.

__call__(x: ndarray, sensi_orders: Tuple[int, ...] = (0,), mode: Literal['mode_fun', 'mode_res'] = 'mode_fun', return_dict: bool = False, **kwargs) Union[float, ndarray, Tuple, Dict[str, Union[float, ndarray, Dict]]][source]

See ObjectiveBase documentation.
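
A sketch of evaluating an objective; objective and a parameter vector x are assumed given:

    # Function value only (the default sensi_orders=(0,)).
    fval = objective(x)

    # Function value and gradient.
    fval, grad = objective(x, sensi_orders=(0, 1))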

__init__(amici_model: Union[amici.Model, amici.ModelPtr], amici_solver: Union[amici.Solver, amici.SolverPtr], edatas: Union[Sequence[amici.ExpData], amici.ExpData], max_sensi_order: Optional[int] = None, x_ids: Optional[Sequence[str]] = None, x_names: Optional[Sequence[str]] = None, parameter_mapping: Optional[ParameterMapping] = None, guess_steadystate: Optional[bool] = None, n_threads: Optional[int] = 1, fim_for_hess: Optional[bool] = True, amici_object_builder: Optional[AmiciObjectBuilder] = None, calculator: Optional[AmiciCalculator] = None, amici_reporting: Optional[amici.RDataReporting] = None)[source]

Initialize objective.

Parameters:
  • amici_model – The amici model.

  • amici_solver – The solver to use for the numeric integration of the model.

  • edatas – The experimental data. If a list is passed, its entries correspond to multiple experimental conditions.

  • max_sensi_order – Maximum sensitivity order supported by the model. Defaults to 2 if the model was compiled with o2mode, otherwise 1.

  • x_ids – Ids of optimization parameters. In the simplest case, this will be the AMICI model parameters (default).

  • x_names – Names of optimization parameters.

  • parameter_mapping – Mapping of optimization parameters to model parameters. Format as created by amici.petab_objective.create_parameter_mapping. The default is just to assume that optimization and simulation parameters coincide.

  • guess_steadystate – Whether to guess steadystates based on previous steadystates and respective derivatives. This option may lead to unexpected results for models with conservation laws and should accordingly be deactivated for those models.

  • n_threads – Number of threads that are used for parallelization over experimental conditions. If AMICI was not installed with OpenMP support, this option has no effect.

  • fim_for_hess – Whether to use the FIM whenever the Hessian is requested. This only applies with forward sensitivities. With adjoint sensitivities, the true Hessian will be used, if available. FIM or Hessian will only be exposed if max_sensi_order>1.

  • amici_object_builder – AMICI object builder. Allows recreating the objective for pickling, required in some parallelization schemes.

  • calculator – Performs the actual calculation of the function values and derivatives.

  • amici_reporting – Determines which quantities will be computed by AMICI, see amici.Solver.setReturnDataReportingMode. Set to None to compute only the minimum required information.

apply_custom_timepoints() None[source]

Apply custom timepoints, if applicable.

See the set_custom_timepoints method for more information.

apply_steadystate_guess(condition_ix: int, x_dct: Dict) None[source]

Apply steady state guess to edatas[condition_ix].x0.

Use the stored steadystate as well as the respective sensitivity (if available) and parameter value to approximate the steadystate at the current parameters using a zeroth or first order Taylor approximation: x_ss(x') = x_ss(x) [+ dx_ss/dx(x)*(x'-x)].

call_unprocessed(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], edatas: Sequence[amici.ExpData] = None, parameter_mapping: ParameterMapping = None, amici_reporting: Optional[amici.RDataReporting] = None)[source]

Call objective function without pre- or post-processing and formatting.

Returns:

A dict containing the results.

Return type:

result

check_gradients_match_finite_differences(x: Optional[ndarray] = None, *args, **kwargs) bool[source]

Check if gradients match finite differences (FDs).

Parameters:

x – The parameters for which to evaluate the gradient.

Returns:

Indicates whether gradients match FDs (True) or not (False).

Return type:

bool

check_mode(mode: Literal['mode_fun', 'mode_res']) bool[source]

See ObjectiveBase documentation.

check_sensi_orders(sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res']) bool[source]

See ObjectiveBase documentation.

get_config() dict[source]

Return basic information of the objective configuration.

initialize()[source]

See ObjectiveBase documentation.

par_arr_to_dct(x: Sequence[float]) Dict[str, float][source]

Create dict from parameter vector.

reset_steadystate_guesses() None[source]

Reset all steadystate guess data.

set_custom_timepoints(timepoints: Optional[Sequence[Sequence[Union[float, int]]]] = None, timepoints_global: Optional[Sequence[Union[float, int]]] = None) AmiciObjective[source]

Create a copy of this objective that is evaluated at custom timepoints.

The intended use is to aid in predictions at unmeasured timepoints.

Parameters:
  • timepoints – The outer sequence should contain a sequence of timepoints for each experimental condition.

  • timepoints_global – A sequence of timepoints that will be used for all experimental conditions.

Return type:

The customized copy of this objective.
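
A usage sketch (objective is assumed to be an existing AmiciObjective; the timepoint grid is illustrative):

>>> import numpy as np
>>> objective_pred = objective.set_custom_timepoints(
...     timepoints_global=np.linspace(0, 10, 101)
... )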

store_steadystate_guess(condition_ix: int, x_dct: Dict, rdata: amici.ReturnData) None[source]

Store condition parameter, steadystate and steadystate sensitivity.

Stored in steadystate_guesses if steadystate guesses are enabled for this condition.

class pypesto.objective.FD(obj: ObjectiveBase, grad: Optional[bool] = None, hess: Optional[bool] = None, sres: Optional[bool] = None, hess_via_fval: bool = True, delta_fun: Union[FDDelta, ndarray, float, str] = 1e-06, delta_grad: Union[FDDelta, ndarray, float, str] = 1e-06, delta_res: Union[FDDelta, float, ndarray, str] = 1e-06, method: str = 'central', x_names: Optional[List[str]] = None)[source]

Bases: ObjectiveBase

Finite differences (FDs) for derivatives.

Given an objective that provides function values and/or residuals, this class allows all derivatives to be obtained flexibly via FDs.

For the parameters grad, hess, sres, a value of None means that the objective derivative is used if available, otherwise resorting to FDs. True means that FDs are used in any case, False means that the derivative is not exported.

Note that the step sizes should be carefully chosen. They should be small enough to provide an accurate linear approximation, but large enough to be robust against numerical inaccuracies, in particular if the objective relies on numerical approximations, such as an ODE.

Parameters:
  • grad – Derivative method for the gradient (see above).

  • hess – Derivative method for the Hessian (see above).

  • sres – Derivative method for the residual sensitivities (see above).

  • hess_via_fval – If the Hessian is to be calculated via finite differences: whether to employ 2nd order FDs via fval even if the objective can provide a gradient.

  • delta_fun – FD step sizes for function values. Can be either a float, or a np.ndarray of shape (n_par,) for different step sizes for different coordinates.

  • delta_grad – FD step sizes for gradients, if the Hessian is calculated via 1st order sensitivities from the gradients. Similar to delta_fun.

  • delta_res – FD step sizes for residuals. Similar to delta_fun.

  • method – Method to calculate FDs. Can be any of FD.METHODS: central, forward or backward differences. The latter two require only roughly half as many function evaluations as central, but are less accurate: their error is first order in the step size, compared to second order for central (O(delta) vs O(delta**2)).

  • x_names – Parameter names that can be optionally used in, e.g., history or gradient checks.

Examples

Define residuals and objective function, and obtain all derivatives via FDs:

>>> from pypesto import Objective, FD
>>> import numpy as np
>>> x_obs = np.array([11, 12, 13])
>>> res = lambda x: x - x_obs
>>> fun = lambda x: 0.5 * sum(res(x)**2)
>>> obj = FD(Objective(fun=fun, res=res))
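
The wrapped objective can then be queried for values and FD-based derivatives, continuing the example above:

>>> fval, grad = obj(np.array([10.0, 11.0, 12.0]), sensi_orders=(0, 1))
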
BACKWARD = 'backward'
CENTRAL = 'central'
FORWARD = 'forward'
METHODS = ['central', 'forward', 'backward']
__init__(obj: ObjectiveBase, grad: Optional[bool] = None, hess: Optional[bool] = None, sres: Optional[bool] = None, hess_via_fval: bool = True, delta_fun: Union[FDDelta, ndarray, float, str] = 1e-06, delta_grad: Union[FDDelta, ndarray, float, str] = 1e-06, delta_res: Union[FDDelta, float, ndarray, str] = 1e-06, method: str = 'central', x_names: Optional[List[str]] = None)[source]
call_unprocessed(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], **kwargs) Dict[str, Union[float, ndarray, Dict]][source]

See ObjectiveBase for more documentation.

Main method to overwrite from the base class. It handles and delegates the actual objective evaluation.

property has_fun: bool

Check whether function is defined.

property has_grad: bool

Check whether gradient is defined.

property has_hess: bool

Check whether Hessian is defined.

property has_res: bool

Check whether residuals are defined.

property has_sres: bool

Check whether residual sensitivities are defined.

class pypesto.objective.FDDelta(delta: Optional[Union[ndarray, float]] = None, test_deltas: Optional[ndarray] = None, update_condition: str = 'constant', max_distance: float = 0.5, max_steps: int = 30)[source]

Bases: object

Finite difference step size with automatic updating.

Reference implementation: https://github.com/ICB-DCM/PESTO/blob/master/private/getStepSizeFD.m

Parameters:
  • delta – (Initial) step size, either a float, or a vector of size (n_par,). If not None, this is used as initial step size.

  • test_deltas – Step sizes to try out in step size selection. If None, a range [1e-1, 1e-2, …, 1e-8] is considered.

  • update_condition – A “good” step size may be a local property. Thus, this class allows updating the step size if certain criteria are met, in the pypesto.objective.finite_difference.FDDelta.update() function. FDDelta.CONSTANT means that the step size is only initially selected. FDDelta.DISTANCE means that the step size is updated if the current evaluation point is sufficiently far away from the last training point. FDDelta.STEPS means that the step size is updated max_steps evaluations after the last update. FDDelta.ALWAYS means that the step size is selected in every call.

  • max_distance – Coefficient on the distance between current and reference point beyond which to update, in the FDDelta.DISTANCE update condition.

  • max_steps – Number of steps after which to update in the FDDelta.STEPS update condition.
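
A usage sketch combining an adaptive step size with FD (obj is assumed to be an existing ObjectiveBase):

>>> from pypesto.objective import FD, FDDelta
>>> delta = FDDelta(update_condition='distance')
>>> obj_fd = FD(obj, delta_fun=delta)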

ALWAYS = 'always'
CONSTANT = 'constant'
DISTANCE = 'distance'
STEPS = 'steps'
UPDATE_CONDITIONS = ['constant', 'distance', 'steps', 'always']
__init__(delta: Optional[Union[ndarray, float]] = None, test_deltas: Optional[ndarray] = None, update_condition: str = 'constant', max_distance: float = 0.5, max_steps: int = 30)[source]
get() ndarray[source]

Get delta vector.

update(x: ndarray, fval: Optional[Union[float, ndarray]], fun: Callable, fd_method: str) None[source]

Update delta if update conditions are met.

Parameters:
  • x – Current parameter vector, shape (n_par,).

  • fval – fun(x), to avoid re-evaluation. Scalar- or vector-valued.

  • fun – Function whose 1st-order derivative to approximate. Scalar- or vector-valued.

  • fd_method – FD method employed by pypesto.objective.finite_difference.FD, see there.

class pypesto.objective.NegLogParameterPriors(prior_list: List[Dict], x_names: Optional[Sequence[str]] = None)[source]

Bases: ObjectiveBase

Implements Negative Log Priors on Parameters.

Contains a list of prior dictionaries for the individual parameters, each of the format

{'index': [int], 'density_fun': [Callable], 'density_dx': [Callable], 'density_ddx': [Callable]}

A prior instance can be combined with an objective that gives the likelihood, e.g. via an AggregatedObjective.

Notes

All callables should correspond to log-densities. That is, they return log-densities and their corresponding derivatives. Internally, values are multiplied by -1, since pyPESTO expects the Objective function to be of a negative log-density type.

__init__(prior_list: List[Dict], x_names: Optional[Sequence[str]] = None)[source]

Initialize.

Parameters:
  • prior_list – List of dicts containing the individual parameter priors. Format see above.

  • x_names – Sequence of parameter names (optional).

call_unprocessed(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], **kwargs) Dict[str, Union[float, ndarray, Dict]][source]

Call objective function without pre- or post-processing and formatting.

Returns:

A dict containing the results.

Return type:

result

check_mode(mode: Literal['mode_fun', 'mode_res']) bool[source]

See ObjectiveBase documentation.

check_sensi_orders(sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res']) bool[source]

See ObjectiveBase documentation.

gradient_neg_log_density(x)[source]

Evaluate the gradient of the negative log-density at x.

hessian_neg_log_density(x)[source]

Evaluate the hessian of the negative log-density at x.

hessian_vp_neg_log_density(x, p)[source]

Compute vector product of the hessian at x with a vector p.

neg_log_density(x)[source]

Evaluate the negative log-density at x.

residual(x)[source]

Evaluate the residual representation of the prior at x.

residual_jacobian(x)[source]

Evaluate residual Jacobian.

Evaluate the Jacobian of the residual representation of the prior for a parameter vector x w.r.t. x, if available.

class pypesto.objective.NegLogPriors(objectives: Sequence[ObjectiveBase], x_names: Optional[Sequence[str]] = None)[source]

Bases: AggregatedObjective

Aggregates different forms of negative log-prior distributions.

Allows to distinguish priors from the likelihood by testing the type of an objective.

Consists basically of a list of individual negative log-priors, given in self.objectives.

class pypesto.objective.Objective(fun: Optional[Callable] = None, grad: Optional[Union[Callable, bool]] = None, hess: Optional[Callable] = None, hessp: Optional[Callable] = None, res: Optional[Callable] = None, sres: Optional[Union[Callable, bool]] = None, x_names: Optional[Sequence[str]] = None)[source]

Bases: ObjectiveBase

Objective class.

The objective class allows the user to explicitly specify functions that compute the function value and/or residuals, as well as the respective derivatives.

Denote dimensions n = parameters, m = residuals.

Parameters:
  • fun

    The objective function to be minimized. If it only computes the objective function value, it should be of the form

    fun(x) -> float

where x is a 1-D array with shape (n,), and n is the parameter space dimension.

  • grad

    Method for computing the gradient vector. If it is a callable, it should be of the form

    grad(x) -> array_like, shape (n,).

    If its value is True, then fun should return the gradient as a second output.

  • hess

    Method for computing the Hessian matrix. If it is a callable, it should be of the form

    hess(x) -> array, shape (n, n).

    If its value is True, then fun should return the gradient as a second, and the Hessian as a third output, and grad should be True as well.

  • hessp

    Method for computing the Hessian vector product, i.e.

    hessp(x, v) -> array_like, shape (n,)

    computes the product H*v of the Hessian of fun at x with v.

  • res

    Method for computing residuals, i.e.

    res(x) -> array_like, shape(m,).

  • sres

    Method for computing residual sensitivities. If it is a callable, it should be of the form

    sres(x) -> array, shape (m, n).

    If its value is True, then res should return the residual sensitivities as a second output.

  • x_names – Parameter names. None if no names provided, otherwise a list of str, length dim_full (as in the Problem class). Can be read by the problem.
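
A construction sketch using SciPy's Rosenbrock function and its analytic derivatives:

>>> import scipy.optimize as so
>>> from pypesto import Objective
>>> obj = Objective(fun=so.rosen, grad=so.rosen_der, hess=so.rosen_hess)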

__init__(fun: Optional[Callable] = None, grad: Optional[Union[Callable, bool]] = None, hess: Optional[Callable] = None, hessp: Optional[Callable] = None, res: Optional[Callable] = None, sres: Optional[Union[Callable, bool]] = None, x_names: Optional[Sequence[str]] = None)[source]
call_unprocessed(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], **kwargs) Dict[str, Union[float, ndarray, Dict]][source]

Call objective function without pre- or post-processing and formatting.

Returns:

A dict containing the results.

Return type:

result

get_config() dict[source]

Return basic information of the objective configuration.

property has_fun: bool

Check whether function is defined.

property has_grad: bool

Check whether gradient is defined.

property has_hess: bool

Check whether Hessian is defined.

property has_hessp: bool

Check whether Hessian vector product is defined.

property has_res: bool

Check whether residuals are defined.

property has_sres: bool

Check whether residual sensitivities are defined.

class pypesto.objective.ObjectiveBase(x_names: Optional[Sequence[str]] = None)[source]

Bases: ABC

Abstract objective class.

The objective class is a simple wrapper around the objective function, giving a standardized way of calling. Apart from that, it manages several things including fixing of parameters and history.

The objective function is assumed to be in the format of a cost function, log-likelihood function, or log-posterior function. These functions are subject to minimization. For profiling and sampling, the sign is internally flipped, all returned and stored values are however given as returned by this objective function. If maximization is to be performed, the sign should be flipped before creating the objective function.

Parameters:

x_names – Parameter names that can be optionally used in, e.g., history or gradient checks.

history

For storing the call history. Initialized by the methods, e.g. the optimizer, in initialize_history().

pre_post_processor

Preprocess input values to and postprocess output values from __call__. Configured in update_from_problem().

__call__(x: ndarray, sensi_orders: Tuple[int, ...] = (0,), mode: Literal['mode_fun', 'mode_res'] = 'mode_fun', return_dict: bool = False, **kwargs) Union[float, ndarray, Tuple, Dict[str, Union[float, ndarray, Dict]]][source]

Obtain arbitrary sensitivities.

This is the central method which is always called, also by the get_* methods.

There are different ways in which an optimizer calls the objective function, and in how the objective function provides information (e.g. derivatives via separate functions or along with the function values). The different calling modes increase efficiency in space and time and make the objective flexible.

Parameters:
  • x – The parameters for which to evaluate the objective function.

  • sensi_orders – Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad.

  • mode – Whether to compute function values or residuals.

  • return_dict – If False (default), the result is a Tuple of the requested values in the requested order. Tuples of length one are flattened. If True, instead a dict is returned which can carry further information.

Returns:

By default, this is a tuple of the requested function values and derivatives in the requested order (if only 1 value, the tuple is flattened). If return_dict, then instead a dict is returned with function values and derivatives indicated by ids.

Return type:

result
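
For example (a sketch, assuming a parameter vector x and an objective obj that supports gradients):

>>> fval = obj(x)                             # function value only
>>> fval, grad = obj(x, sensi_orders=(0, 1))  # function value and gradient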

__init__(x_names: Optional[Sequence[str]] = None)[source]
abstract call_unprocessed(x: ndarray, sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], **kwargs) Dict[str, Union[float, ndarray, Dict]][source]

Call objective function without pre- or post-processing and formatting.

Parameters:
  • x – The parameters for which to evaluate the objective function.

  • sensi_orders – Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad.

  • mode – Whether to compute function values or residuals.

Returns:

A dict containing the results.

Return type:

result

check_grad(x: ndarray, x_indices: Optional[Sequence[int]] = None, eps: float = 1e-05, verbosity: int = 1, mode: Literal['mode_fun', 'mode_res'] = 'mode_fun', order: int = 0, detailed: bool = False) DataFrame[source]

Compare gradient evaluation.

Firstly approximate via finite differences, and secondly use the objective gradient.

Parameters:
  • x – The parameters for which to evaluate the gradient.

  • x_indices – Indices for which to compute gradients. Default: all.

  • eps – Finite differences step size.

  • verbosity – Level of verbosity for function output. 0: no output, 1: summary for all parameters, 2: summary for individual parameters.

  • mode – Residual (MODE_RES) or objective function value (MODE_FUN) computation mode.

  • order – Derivative order, either gradient (0) or Hessian (1).

  • detailed – Toggle whether additional values are returned. Additional values are function values, and the central difference weighted by the difference in output from all methods (standard deviation and mean).

Returns:

gradient, finite difference approximations and error estimates.

Return type:

result
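
A usage sketch, assuming an objective obj that provides an analytic gradient:

>>> import numpy as np
>>> df = obj.check_grad(np.zeros(2), eps=1e-5)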

check_grad_multi_eps(*args, multi_eps: Optional[Iterable] = None, label: str = 'rel_err', **kwargs)[source]

Compare gradient evaluation.

Equivalent to the ObjectiveBase.check_grad method, except multiple finite difference step sizes are tested. The result contains the lowest finite difference for each parameter, and the corresponding finite difference step size.

Parameters:
  • All ObjectiveBase.check_grad method parameters.

  • multi_eps – The finite difference step sizes to be tested.

  • label – The label of the column that will be minimized for each parameter. Valid options are the column labels of the dataframe returned by the ObjectiveBase.check_grad method.

check_gradients_match_finite_differences(*args, x: Optional[ndarray] = None, x_free: Optional[Sequence[int]] = None, rtol: float = 0.01, atol: float = 0.001, mode: Optional[Literal['mode_fun', 'mode_res']] = None, order: int = 0, multi_eps=None, **kwargs) bool[source]

Check if gradients match finite differences (FDs).

Parameters:
  • x – The parameters for which to evaluate the gradient.

  • x_free – Indices for which to compute gradients.

  • rtol – Relative error tolerance.

  • atol – Absolute error tolerance.

  • mode – Whether to compute function values or residuals.

  • order – Derivative order: 0 for gradient, 1 for Hessian.

  • multi_eps – Multiple test step widths for FDs.

Returns:

Indicates whether gradients match FDs (True) or not (False).

Return type:

bool

check_mode(mode: Literal['mode_fun', 'mode_res']) bool[source]

Check if the objective is able to compute in the requested mode.

Either check_mode or the fun_… functions must be overwritten in derived classes.

Parameters:

mode – Whether to compute function values or residuals.

Returns:

Boolean indicating whether mode is supported

Return type:

flag

check_sensi_orders(sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res']) bool[source]

Check if the objective is able to compute the requested sensitivities.

Either check_sensi_orders or the fun_… functions must be overwritten in derived classes.

Parameters:
  • sensi_orders – Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad.

  • mode – Whether to compute function values or residuals.

Returns:

Boolean indicating whether combination of sensi_orders and mode is supported

Return type:

flag

get_config() dict[source]

Get the configuration information of the objective function.

Return it as a dictionary.

get_fval(x: ndarray) float[source]

Get the function value at x.

get_grad(x: ndarray) ndarray[source]

Get the gradient at x.

get_hess(x: ndarray) ndarray[source]

Get the Hessian at x.

get_res(x: ndarray) ndarray[source]

Get the residuals at x.

get_sres(x: ndarray) ndarray[source]

Get the residual sensitivities at x.

property has_fun: bool

Check whether function is defined.

property has_grad: bool

Check whether gradient is defined.

property has_hess: bool

Check whether Hessian is defined.

property has_hessp: bool

Check whether Hessian-vector product is defined.

property has_res: bool

Check whether residuals are defined.

property has_sres: bool

Check whether residual sensitivities are defined.

initialize()[source]

Initialize the objective function.

This function is used at the beginning of an analysis, e.g. optimization, and can e.g. reset the objective memory. By default does nothing.

static output_to_tuple(sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], **kwargs: Union[float, ndarray]) Tuple[source]

Return values as requested by the caller.

Usually only a subset of outputs is demanded. A single output is returned as-is; multiple outputs are returned as a tuple in the order (fval, grad, hess).

update_from_problem(dim_full: int, x_free_indices: Sequence[int], x_fixed_indices: Sequence[int], x_fixed_vals: Sequence[float])[source]

Handle fixed parameters.

Later, the objective will be given parameter vectors x of dimension dim, which have to be filled up with fixed parameter values to form a vector of dimension dim_full >= dim. This vector is then used to compute function value and derivatives. The derivatives must later be reduced again to dimension dim.

This makes the fixing of parameters transparent to the caller.

The preprocess and postprocess methods are overwritten to implement this functionality.

Parameters:
  • dim_full – Dimension of the full vector including fixed parameters.

  • x_free_indices – Vector containing the indices (zero-based) of free parameters (complementary to x_fixed_indices).

  • x_fixed_indices – Vector containing the indices (zero-based) of parameter components that are not to be optimized.

  • x_fixed_vals – Vector of the same length as x_fixed_indices, containing the values of the fixed parameters.

property x_names: Optional[List[str]]

Parameter names.

pypesto.objective.get_parameter_prior_dict(index: int, prior_type: str, prior_parameters: list, parameter_scale: str = 'lin')[source]

Return the prior dict used to define priors for some default priors.

index:

Index of the parameter in x_full.

prior_type:

Prior is defined in LINEAR=untransformed parameter space, unless it starts with “parameterScale”. prior_type can be any of {“uniform”, “normal”, “laplace”, “logNormal”, “parameterScaleUniform”, “parameterScaleNormal”, “parameterScaleLaplace”}

prior_parameters:

Parameters of the priors. Parameters are defined in linear scale.

parameter_scale:

Scale in which the parameter is defined (since a parameter can be log-transformed, while the prior is always defined in the linear space, unless it starts with “parameterScale”).
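
A usage sketch building a standard normal prior on the first parameter:

>>> from pypesto.objective import NegLogParameterPriors, get_parameter_prior_dict
>>> prior_list = [get_parameter_prior_dict(0, 'normal', [0.0, 1.0])]
>>> prior = NegLogParameterPriors(prior_list)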

Julia objective

class pypesto.objective.julia.JuliaObjective(module: str, source_file: Optional[str] = None, fun: Optional[str] = None, grad: Optional[str] = None, hess: Optional[str] = None, res: Optional[str] = None, sres: Optional[str] = None)[source]

Bases: Objective

Wrapper around an objective defined in Julia.

This class provides objective function wrappers around Julia objects. It expects the corresponding Julia objects to be defined in a source_file within a module.

We use the PyJulia package to access Julia from inside Python. It can be installed via pip install pypesto[julia]; however, it requires additional Julia dependencies to be installed via:

>>> python -c "import julia; julia.install()"

For further information, see https://pyjulia.readthedocs.io/en/latest/installation.html.

There are some known problems, e.g. with statically linked Python interpreters, see https://pyjulia.readthedocs.io/en/latest/troubleshooting.html for details. Possible solutions are to pass compiled_modules=False to the Julia constructor early in your code:

>>> from julia.api import Julia
>>> jl = Julia(compiled_modules=False)

This however slows down loading and using Julia packages, especially for large ones. An alternative is to use the python-jl command shipped with PyJulia:

>>> python-jl MY_SCRIPT.py

This basically launches a Python interpreter inside Julia. When using Jupyter notebooks, this wrapper can be installed as an additional kernel via:

>>> python -m ipykernel install --name python-jl [--prefix=/path/to/python/env]

Then change the first argument in /path/to/python/env/share/jupyter/kernels/python-jl/kernel.json to python-jl.

Model simulations are eagerly converted to Python objects (specifically, numpy.ndarray and pandas.DataFrame). This can introduce overhead and could be avoided by an alternative lazy implementation.

Parameters:
  • module – Julia module name.

  • source_file – Julia source file name. Defaults to {module}.jl.

  • fun – Names of callables within the Julia code of the corresponding objective functions and derivatives.

  • grad – Names of callables within the Julia code of the corresponding objective functions and derivatives.

  • hess – Names of callables within the Julia code of the corresponding objective functions and derivatives.

  • res – Names of callables within the Julia code of the corresponding objective functions and derivatives.

  • sres – Names of callables within the Julia code of the corresponding objective functions and derivatives.
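
A construction sketch (module, file, and callable names are purely illustrative):

>>> from pypesto.objective.julia import JuliaObjective
>>> obj = JuliaObjective(
...     module="MyJlModule", source_file="MyJlModule.jl", fun="fun", grad="grad"
... )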

__init__(module: str, source_file: Optional[str] = None, fun: Optional[str] = None, grad: Optional[str] = None, hess: Optional[str] = None, res: Optional[str] = None, sres: Optional[str] = None)[source]
get(name: str, as_array: bool = False) Optional[Callable][source]

Get variable from Julia module.

Use this function to access any variable from the Julia module.

pypesto.objective.julia.display_source_ipython(source_file: str)[source]

Display source code as syntax highlighted HTML within IPython.

Optimize

Multistart optimization with support for various optimizers.

class pypesto.optimize.CESSOptimizer(ess_init_args: List[Dict], max_iter: int, max_walltime_s: float = inf)[source]

Bases: object

Cooperative Enhanced Scatter Search Optimizer (CESS).

A cooperative scatter search algorithm based on [VillaverdeEge2012]. In short, multiple scatter search instances with different hyperparameters run in different threads/processes and exchange information. Some instances focus on diversification while others focus on intensification. Communication happens at fixed time intervals.

Proposed hyperparameter values in [VillaverdeEge2012]:

  • dim_refset: [0.5 n_parameters, 20 n_parameters]

  • local_n2: [0, 100]

  • balance: [0, 0.5]

  • n_diverse: [5 n_par, 20 n_par]

  • max_eval: such that \(\tau = \log_{10}(max\_eval / n\_par)\) is in [2.5, 3.5], with a recommended default value of 2.5.

[VillaverdeEge2012] (1,2)

‘A cooperative strategy for parameter estimation in large scale systems biology models’, Villaverde, A.F., Egea, J.A. & Banga, J.R. BMC Syst Biol 2012, 6, 75. https://doi.org/10.1186/1752-0509-6-75

ess_init_args

List of argument dictionaries passed to ESSOptimizer.__init__(). The length of this list is the number of parallel ESS processes. Resource limits such as max_eval apply to a single CESS iteration, not to the full search.

max_iter

Maximum number of CESS iterations.

max_walltime_s

Maximum walltime in seconds. Will only be checked between local optimizations and other simulations, and thus, may be exceeded by the duration of a local search. Defaults to no limit.

fx_best

The best objective value seen so far.

x_best

Parameter vector corresponding to fx_best.

starttime

Starting time of the most recent optimization.

i_iter

Current iteration number.

__init__(ess_init_args: List[Dict], max_iter: int, max_walltime_s: float = inf)[source]

Construct.

Parameters:
  • ess_init_args – List of argument dictionaries passed to ESSOptimizer.__init__(). The length of this list is the number of parallel ESS processes. Resource limits such as max_eval apply to a single CESS iteration, not to the full search.

  • max_iter – Maximum number of CESS iterations.

  • max_walltime_s – Maximum walltime in seconds. Will only be checked between local optimizations and other simulations, and thus, may be exceeded by the duration of a local search. Defaults to no limit.

minimize(problem: Problem, startpoint_method: StartpointMethod) Result[source]

Minimize the given objective using CESS.

Parameters:
  • problem – Problem to run ESS on.

  • startpoint_method – Method for choosing starting points.

class pypesto.optimize.CmaesOptimizer(par_sigma0: float = 0.25, options: Optional[Dict] = None)[source]

Bases: Optimizer

Global optimization using covariance matrix adaptation evolutionary search.

This optimizer interfaces the cma package (https://github.com/CMA-ES/pycma).

__init__(par_sigma0: float = 0.25, options: Optional[Dict] = None)[source]

Initialize.

Parameters:
  • par_sigma0 – scalar, initial standard deviation in each coordinate. par_sigma0 should be about 1/4th of the search domain width (where the optimum is to be expected)

  • options – Optimizer options that are directly passed on to cma.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
class pypesto.optimize.DlibOptimizer(options: Optional[Dict] = None)[source]

Bases: Optimizer

Use the Dlib toolbox for optimization.

__init__(options: Optional[Dict] = None)[source]

Initialize base class.

get_default_options()[source]

Create default options specific for the optimizer.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
class pypesto.optimize.ESSOptimizer(*, max_iter: int = 10**100, dim_refset: Optional[int] = None, local_n1: int = 1, local_n2: int = 10, balance: float = 0.5, local_optimizer: Optional[Optimizer] = None, max_eval=inf, n_diverse: Optional[int] = None, n_procs=None, n_threads=None, max_walltime_s=None)[source]

Bases: object

Enhanced Scatter Search (ESS) global optimization.

__init__(*, max_iter: int = 10**100, dim_refset: Optional[int] = None, local_n1: int = 1, local_n2: int = 10, balance: float = 0.5, local_optimizer: Optional[Optimizer] = None, max_eval=inf, n_diverse: Optional[int] = None, n_procs=None, n_threads=None, max_walltime_s=None)[source]

Construct new ESS instance.

For plausible values of hyperparameters, see VillaverdeEge2012.

Parameters:
  • dim_refset – Size of the ReferenceSet. Note that in every iteration at least dim_refset**2 - dim_refset function evaluations will occur.

  • max_iter – Maximum number of ESS iterations.

  • local_n1 – Minimum number of iterations before first local search.

  • local_n2 – Minimum number of iterations between consecutive local searches. At most one local search is performed per iteration.

  • local_optimizer – Local optimizer for refinement, or None to skip local searches.

  • n_diverse – Number of samples to choose from to construct the initial RefSet

  • max_eval – Maximum number of objective function evaluations allowed. This criterion is only checked once per iteration, not after every objective evaluation, so the actual number of function evaluations may exceed this value.

  • max_walltime_s – Maximum walltime in seconds. Will only be checked between local optimizations and other simulations, and thus, may be exceeded by the duration of a local search.

  • balance – Quality vs diversity balancing factor [0, 1]; 0 = only quality; 1 = only diversity

  • n_procs – Number of parallel processes to use for parallel function evaluation. Mutually exclusive with n_threads.

  • n_threads – Number of parallel threads to use for parallel function evaluation. Mutually exclusive with n_procs.

minimize(problem: Optional[Problem] = None, startpoint_method: Optional[StartpointMethod] = None, refset: Optional[RefSet] = None) Result[source]

Minimize the given objective.

Parameters:
  • problem – Problem to run ESS on.

  • startpoint_method – Method for choosing starting points.

  • refset – The initial RefSet or None to auto-generate.
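
A usage sketch, assuming an existing problem (pypesto.Problem) and startpoint_method (pypesto.startpoint.StartpointMethod):

>>> from pypesto.optimize import ESSOptimizer
>>> ess = ESSOptimizer(dim_refset=10, max_walltime_s=60)
>>> result = ess.minimize(problem=problem, startpoint_method=startpoint_method)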

class pypesto.optimize.FidesOptimizer(hessian_update: Optional[fides.hessian_approximation.HessianApproximation] = 'default', options: Optional[Dict] = None, verbose: Optional[int] = 20)[source]

Bases: Optimizer

Global/Local optimization using the trust region optimizer fides.

Package Homepage: https://fides-optimizer.readthedocs.io/en/latest

__init__(hessian_update: Optional[fides.hessian_approximation.HessianApproximation] = 'default', options: Optional[Dict] = None, verbose: Optional[int] = 20)[source]

Initialize.

Parameters:
  • options – Optimizer options.

  • hessian_update – Hessian update strategy. If this is None, a hybrid approximation that switches from the Hessian (approximation) provided by problem.objective to a BFGS approximation will be used.
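
A construction sketch, assuming the fides package is installed (fides.BFGS is one of its Hessian approximations):

>>> import fides
>>> from pypesto.optimize import FidesOptimizer
>>> optimizer = FidesOptimizer(hessian_update=fides.BFGS())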

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
class pypesto.optimize.IpoptOptimizer(options: Optional[Dict] = None)[source]

Bases: Optimizer

Use IpOpt (https://pypi.org/project/ipopt/) for optimization.

__init__(options: Optional[Dict] = None)[source]

Initialize.

Parameters:

options – Options are directly passed on to cyipopt.minimize_ipopt.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
class pypesto.optimize.NLoptOptimizer(method=None, local_method=None, options: Optional[Dict] = None, local_options: Optional[Dict] = None)[source]

Bases: Optimizer

Global/Local optimization using NLopt.

Package homepage: https://nlopt.readthedocs.io/en/latest/

__init__(method=None, local_method=None, options: Optional[Dict] = None, local_options: Optional[Dict] = None)[source]

Initialize.

Parameters:
  • method – Local or global Optimizer to use for minimization.

  • local_method – Local method to use in combination with the global optimizer (for the MLSL family of solvers) or to solve a subproblem (for the AUGLAG family of solvers).

  • options – Optimizer options. scipy option maxiter is automatically transformed into maxeval and takes precedence.

  • local_options – Optimizer options for the local method

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
class pypesto.optimize.OptimizeOptions(allow_failed_starts: bool = True, report_sres: bool = True, report_hess: bool = True, history_beats_optimizer: bool = True)[source]

Bases: dict

Options for the multistart optimization.

Parameters:
  • allow_failed_starts – Flag indicating whether we tolerate that exceptions are thrown during the minimization process.

  • report_sres – Flag indicating whether sres will be stored in the results object. Deactivating this option will improve memory consumption for large scale problems.

  • report_hess – Flag indicating whether hess will be stored in the results object. Deactivating this option will improve memory consumption for large scale problems.

  • history_beats_optimizer – Whether the optimal value recorded by pyPESTO in the history has priority over the optimal value reported by the optimizer (True) or not (False).
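
A construction sketch; the resulting object can be passed as options to pypesto.optimize.minimize:

>>> from pypesto.optimize import OptimizeOptions
>>> options = OptimizeOptions(allow_failed_starts=True, report_hess=False)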

__init__(allow_failed_starts: bool = True, report_sres: bool = True, report_hess: bool = True, history_beats_optimizer: bool = True)[source]
static assert_instance(maybe_options: Union[OptimizeOptions, Dict]) OptimizeOptions[source]

Return a valid options object.

Parameters:

maybe_options (OptimizeOptions or dict) –

class pypesto.optimize.Optimizer[source]

Bases: ABC

Optimizer base class, not functional on its own.

An optimizer takes a problem, and possibly a start point, and then performs an optimization. It returns an OptimizerResult.

__init__()[source]

Initialize base class.

get_default_options()[source]

Create default options specific for the optimizer.

abstract is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

abstract minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
class pypesto.optimize.PyswarmOptimizer(options: Optional[Dict] = None)[source]

Bases: Optimizer

Global optimization using pyswarm.

__init__(options: Optional[Dict] = None)[source]

Initialize base class.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
class pypesto.optimize.PyswarmsOptimizer(par_popsize: float = 10, options: Optional[Dict] = None)[source]

Bases: Optimizer

Global optimization using pyswarms.

Package homepage: https://pyswarms.readthedocs.io/en/latest/index.html

Parameters:
  • par_popsize – number of particles in the swarm, default value 10

  • options – Optimizer options that are directly passed on to pyswarms: c1 (cognitive parameter), c2 (social parameter), w (inertia parameter). Default values are (c1, c2, w) = (0.5, 0.3, 0.9).

Examples

Arguments that can be passed to options:

maxiter:

used to calculate the maximal number of function evaluations. Default: 1000

__init__(par_popsize: float = 10, options: Optional[Dict] = None)[source]

Initialize base class.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
class pypesto.optimize.SacessOptimizer(num_workers: Optional[int] = None, ess_init_args: Optional[List[Dict[str, Any]]] = None, max_walltime_s: float = inf, sacess_loglevel: int = 20, ess_loglevel: int = 30)[source]

Bases: object

SACESS optimizer.

A shared-memory-based implementation of the SaCeSS algorithm presented in [PenasGon2017]. Multiple processes (workers) run consecutive ESSs in parallel. After each ESS run, depending on the outcome, there is a chance of exchanging good parameters, and changing ESS hyperparameters to those of the most promising worker.

[PenasGon2017]

‘Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy’, David R. Penas, Patricia González, Jose A. Egea, Ramón Doallo and Julio R. Banga, BMC Bioinformatics 2017, 18, 52. https://doi.org/10.1186/s12859-016-1452-4

__init__(num_workers: Optional[int] = None, ess_init_args: Optional[List[Dict[str, Any]]] = None, max_walltime_s: float = inf, sacess_loglevel: int = 20, ess_loglevel: int = 30)[source]

Construct.

Parameters:
  • ess_init_args – List of argument dictionaries passed to ESSOptimizer.__init__(). Each entry corresponds to one worker process. I.e., the length of this list is the number of ESSs. Ideally, this list contains some more conservative and some more aggressive configurations. Resource limits such as max_eval apply to a single CESS iteration, not to the full search. Mutually exclusive with num_workers.

  • num_workers – Number of workers to be used. If this argument is given, (different) default ESS settings will be used for each worker. Mutually exclusive with ess_init_args.

  • max_walltime_s – Maximum walltime in seconds. Will only be checked between local optimizations and other simulations, and thus, may be exceeded by the duration of a local search. Defaults to no limit.

  • ess_loglevel – Loglevel for ESS runs.

  • sacess_loglevel – Loglevel for SACESS runs.

minimize(problem: Problem, startpoint_method: StartpointMethod)[source]

Solve the given optimization problem.

class pypesto.optimize.ScipyDifferentialEvolutionOptimizer(options: Optional[Dict] = None)[source]

Bases: Optimizer

Global optimization using scipy’s differential evolution optimizer.

Package homepage: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html

Parameters:

options – Optimizer options that are directly passed on to scipy’s optimizer.

Examples

Arguments that can be passed to options:

maxiter:

used to calculate the maximal number of function evaluations via maxfevals = (maxiter + 1) * popsize * len(x). Default: 100

popsize:

population size, default value 15

__init__(options: Optional[Dict] = None)[source]

Initialize base class.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
class pypesto.optimize.ScipyOptimizer(method: str = 'L-BFGS-B', tol: Optional[float] = None, options: Optional[Dict] = None)[source]

Bases: Optimizer

Use the SciPy optimizers.

Find details on the optimizer and configuration options at: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize

__init__(method: str = 'L-BFGS-B', tol: Optional[float] = None, options: Optional[Dict] = None)[source]

Initialize base class.

get_default_options()[source]

Create default options specific for the optimizer.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: Optional[HistoryOptions] = None, optimize_options: Optional[OptimizeOptions] = None)[source]
pypesto.optimize.fill_result_from_history(result: OptimizerResult, optimizer_history: OptimizerHistory, optimize_options: Optional[OptimizeOptions] = None) OptimizerResult[source]

Overwrite some values in the result object with values in the history.

Parameters:
  • result – Result as reported from the used optimizer.

  • optimizer_history – History of function values recorded by the objective.

  • optimize_options – Options on e.g. how to override.

Returns:

The in-place modified result.

Return type:

result

pypesto.optimize.minimize(problem: Problem, optimizer: Optional[Optimizer] = None, n_starts: int = 100, ids: Optional[Iterable[str]] = None, startpoint_method: Optional[Union[StartpointMethod, Callable, bool]] = None, result: Optional[Result] = None, engine: Optional[Engine] = None, progress_bar: bool = True, options: Optional[OptimizeOptions] = None, history_options: Optional[HistoryOptions] = None, filename: Optional[Union[str, Callable]] = None, overwrite: bool = False) Result[source]

Do multistart optimization.

Parameters:
  • problem – The problem to be solved.

  • optimizer – The optimizer to be used n_starts times.

  • n_starts – Number of starts of the optimizer.

  • ids – Ids assigned to the startpoints.

  • startpoint_method – Method for how to choose start points. False means the optimizer does not require start points, e.g. for the ‘PyswarmOptimizer’.

  • result – A result object to append the optimization results to. For example, one might append more runs to a previous optimization. If None, a new object is created.

  • engine – Parallelization engine. Defaults to sequential execution on a SingleCoreEngine.

  • progress_bar – Whether to display a progress bar.

  • options – Various options applied to the multistart optimization.

  • history_options – Optimizer history options.

  • filename – Name of the hdf5 file, where the result will be saved. Default is None, which deactivates automatic saving. If set to “Auto” it will automatically generate a file named year_month_day_profiling_result.hdf5. Optionally a method, see docs for pypesto.store.auto.autosave.

  • overwrite – Whether to overwrite result/optimization in the autosave file if it already exists.

Returns:

Result object containing the results of all multistarts in result.optimize_result.

Return type:

result
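
A minimal multistart sketch, assuming an existing pypesto.Problem problem:

>>> import pypesto.optimize as optimize
>>> optimizer = optimize.ScipyOptimizer(method="L-BFGS-B")
>>> result = optimize.minimize(problem=problem, optimizer=optimizer, n_starts=10)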

pypesto.optimize.optimization_result_from_history(filename: str, problem: Problem) Result[source]

Convert a saved hdf5 History to an optimization result.

Used for interrupted optimization runs.

Parameters:
  • filename – The name of the file in which the information are stored.

  • problem – Problem, needed to identify what parameters to accept.

Returns:

A result object in which the optimization result is constructed from history, but missing the Time, Message and Exitflag keys.

pypesto.optimize.read_result_from_file(problem: Optional[Problem], history_options: HistoryOptions, identifier: str) OptimizerResult[source]

Fill an OptimizerResult from history.

Parameters:
  • problem – The problem to find optimal parameters for. If None, bounds will be assumed to be [-inf, inf] for checking for admissible points.

  • identifier – Multistart id.

  • history_options – Optimizer history options.

pypesto.optimize.read_results_from_file(problem: Problem, history_options: HistoryOptions, n_starts: int) Result[source]

Fill a Result from a set of histories.

Parameters:
  • problem – The problem to find optimal parameters for.

  • n_starts – Number of performed multistarts.

  • history_options – Optimizer history options.

PEtab

pyPESTO support for the PEtab data format.

class pypesto.petab.PetabImporter(petab_problem: Problem, output_folder: Optional[str] = None, model_name: Optional[str] = None, validate_petab: bool = True, validate_petab_hierarchical: bool = True, hierarchical: bool = False)[source]

Bases: AmiciObjectBuilder

Importer for PEtab files.

Create an amici.Model, an objective.AmiciObjective or a pypesto.Problem from PEtab files.

MODEL_BASE_DIR = 'amici_models'
__init__(petab_problem: Problem, output_folder: Optional[str] = None, model_name: Optional[str] = None, validate_petab: bool = True, validate_petab_hierarchical: bool = True, hierarchical: bool = False)[source]

Initialize importer.

Parameters:
  • petab_problem – Managing access to the model and data.

  • output_folder – Folder to contain the amici model. Defaults to ‘./amici_models/{model_name}’.

  • model_name – Name of the model, which will in particular be the name of the compiled model python module.

  • validate_petab – Flag indicating if the PEtab problem shall be validated.

  • validate_petab_hierarchical – Flag indicating if the PEtab problem shall be validated in terms of pyPESTO’s hierarchical optimization implementation.

  • hierarchical – Whether to use hierarchical optimization or not, in case the underlying PEtab problem has parameters marked for hierarchical optimization (non-empty parameterType column in the PEtab parameter table).

check_gradients(*args, rtol: float = 0.01, atol: float = 0.001, mode: Optional[Union[str, List[str]]] = None, multi_eps=None, **kwargs) bool[source]

Check if gradients match finite differences (FDs).

Parameters:
  • rtol – Relative error tolerance.

  • atol – Absolute error tolerance.

  • mode – Whether to check function values or residuals.

  • objAbsoluteTolerance – Absolute tolerance in sensitivity calculation.

  • objRelativeTolerance – Relative tolerance in sensitivity calculation.

  • multi_eps – Multiple test step widths for FDs.

Returns:

Whether gradients match FDs (True) or not (False).

Return type:

match

compile_model(**kwargs)[source]

Compile the model.

If the output folder exists already, it is first deleted.

Parameters:

kwargs – Extra arguments passed to amici.SbmlImporter.sbml2amici.

create_edatas(model: Optional[amici.Model] = None, simulation_conditions=None) List[amici.ExpData][source]

Create list of amici.ExpData objects.

create_model(force_compile: bool = False, **kwargs) amici.Model[source]

Import amici model.

Parameters:
  • force_compile

    If False, the model is compiled only if the output folder does not exist yet. If True, the output folder is deleted and the model (re-)compiled in either case.

    Warning

    If force_compile, then an existing folder of that name will be deleted.

  • kwargs – Extra arguments passed to amici.SbmlImporter.sbml2amici.

create_objective(model: Optional[amici.Model] = None, solver: Optional[amici.Solver] = None, edatas: Optional[Sequence[amici.ExpData]] = None, force_compile: bool = False, **kwargs) AmiciObjective[source]

Create a pypesto.AmiciObjective.

Parameters:
  • model – The AMICI model.

  • solver – The AMICI solver.

  • edatas – The experimental data in AMICI format.

  • force_compile – Whether to force-compile the model if not passed.

  • **kwargs – Additional arguments passed on to the objective.

Returns:

A pypesto.AmiciObjective for the model and the data.

Return type:

objective

create_predictor(objective: Optional[AmiciObjective] = None, amici_output_fields: Optional[Sequence[str]] = None, post_processor: Optional[Callable] = None, post_processor_sensi: Optional[Callable] = None, post_processor_time: Optional[Callable] = None, max_chunk_size: Optional[int] = None, output_ids: Optional[Sequence[str]] = None, condition_ids: Optional[Sequence[str]] = None) AmiciPredictor[source]

Create a pypesto.predict.AmiciPredictor.

The AmiciPredictor facilitates generation of predictions from parameter vectors.

Parameters:
  • objective – An objective object, which will be used to get model simulations

  • amici_output_fields – keys that exist in the return data object from AMICI, which should be available for the post-processors

  • post_processor – A callable function which applies postprocessing to the simulation results. Default are the observables of the AMICI model. This method takes a list of ndarrays (as returned in the field [‘y’] of amici ReturnData objects) as input.

  • post_processor_sensi – A callable function which applies postprocessing to the sensitivities of the simulation results. Default are the observable sensitivities of the AMICI model. This method takes two lists of ndarrays (as returned in the fields [‘y’] and [‘sy’] of amici ReturnData objects) as input.

  • post_processor_time – A callable function which applies postprocessing to the timepoints of the simulations. Default are the timepoints of the amici model. This method takes a list of ndarrays (as returned in the field [‘t’] of amici ReturnData objects) as input.

  • max_chunk_size – In some cases, we don’t want to compute all predictions at once when calling the prediction function, as this might not fit into memory for large datasets and models. Here, the user can specify a maximum number of conditions that should be simulated at a time. Default is 0, meaning that all conditions will be simulated. Other values are only applicable if an output file is specified.

  • output_ids – IDs of outputs, if post-processing is used

  • condition_ids – IDs of conditions, if post-processing is used

Returns:

A pypesto.predict.AmiciPredictor for the model, using the outputs of the AMICI model and the timepoints from the PEtab data

Return type:

predictor
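
A usage sketch, assuming an existing PetabImporter importer:

>>> objective = importer.create_objective()
>>> predictor = importer.create_predictor(objective)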

create_prior() Optional[NegLogParameterPriors][source]

Create a prior from the parameter table.

Returns None, if no priors are defined.

create_problem(objective: Optional[AmiciObjective] = None, x_guesses: Optional[Iterable[float]] = None, problem_kwargs: Optional[Dict[str, Any]] = None, **kwargs) Problem[source]

Create a pypesto.Problem.

Parameters:
  • objective – Objective as created by create_objective.

  • x_guesses – Guesses for the parameter values, shape (g, dim), where g denotes the number of guesses. These are used as start points in the optimization.

  • problem_kwargs – Passed to the pypesto.Problem constructor.

  • **kwargs – Additional key word arguments passed on to the objective, if not provided.

Returns:

A pypesto.Problem for the objective.

Return type:

problem

create_solver(model: Optional[amici.Model] = None) amici.Solver[source]

Return model solver.

create_startpoint_method(**kwargs) StartpointMethod[source]

Create a startpoint method.

Parameters:

**kwargs – Additional keyword arguments passed on to pypesto.startpoint.FunctionStartpoints.__init__().

static from_yaml(yaml_config: Union[dict, str], output_folder: Optional[str] = None, model_name: Optional[str] = None) PetabImporter[source]

Simplified constructor using a PEtab YAML file.
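
A typical import sketch (the YAML path is illustrative):

>>> from pypesto.petab import PetabImporter
>>> importer = PetabImporter.from_yaml("model_dir/problem.yaml")
>>> problem = importer.create_problem()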

prediction_to_petab_measurement_df(prediction: PredictionResult, predictor: Optional[AmiciPredictor] = None) DataFrame[source]

Cast prediction into a dataframe.

If a PEtab problem is simulated without post-processing, then the result can be cast into a PEtab measurement or simulation dataframe.

Parameters:
  • prediction – A prediction result as produced by an AmiciPredictor

  • predictor – The AmiciPredictor function

Returns:

A dataframe built from the rdatas in the format as in self.petab_problem.measurement_df.

Return type:

measurement_df

prediction_to_petab_simulation_df(prediction: PredictionResult, predictor: Optional[AmiciPredictor] = None) DataFrame[source]

See prediction_to_petab_measurement_df.

Except that a PEtab simulation dataframe is created, i.e. the measurement column label is adjusted.

rdatas_to_measurement_df(rdatas: Sequence[amici.ReturnData], model: Optional[amici.Model] = None) DataFrame[source]

Create a measurement dataframe in the petab format.

Parameters:
  • rdatas – A list of rdatas as produced by pypesto.AmiciObjective.__call__(x, return_dict=True)[‘rdatas’].

  • model – The amici model.

Returns:

A dataframe built from the rdatas in the format as in self.petab_problem.measurement_df.

Return type:

measurement_df

rdatas_to_simulation_df(rdatas: Sequence[amici.ReturnData], model: Optional[amici.Model] = None) DataFrame[source]

See rdatas_to_measurement_df.

Except that a PEtab simulation dataframe is created, i.e. the measurement column label is adjusted.

class pypesto.petab.PetabImporterPysb(petab_problem: amici.petab_import_pysb.PysbPetabProblem, validate_petab: bool = False, **kwargs)[source]

Bases: PetabImporter

Import for experimental PySB-based PEtab problems.

__init__(petab_problem: amici.petab_import_pysb.PysbPetabProblem, validate_petab: bool = False, **kwargs)[source]

Initialize importer.

Parameters:
  • petab_problem – Managing access to the model and data.

  • validate_petab – Flag indicating if the PEtab problem shall be validated.

  • kwargs – Passed to PetabImporter.__init__.

compile_model(**kwargs)[source]

Compile the model.

If the output folder exists already, it is first deleted.

Parameters:

kwargs – Extra arguments passed to amici.SbmlImporter.sbml2amici.

Prediction

Generate predictions from simulations with specified parameter vectors, with optional post-processing.

class pypesto.predict.AmiciPredictor(amici_objective: AmiciObjective, amici_output_fields: Union[Sequence[str], None] = None, post_processor: Union[Callable, None] = None, post_processor_sensi: Union[Callable, None] = None, post_processor_time: Union[Callable, None] = None, max_chunk_size: Union[int, None] = None, output_ids: Union[Sequence[str], None] = None, condition_ids: Union[Sequence[str], None] = None)[source]

Bases: object

Do forward simulations/predictions for an AMICI model.

The user may supply post-processing methods. If post-processing methods are supplied, and a gradient of the prediction is requested, then the sensitivities of the AMICI model must also be post-processed. There are no checks here to ensure that the sensitivities are correctly post-processed, this is explicitly left to the user. There are also no safeguards if the postprocessor routines fail. This may happen if, e.g., a call to Amici fails, and no timepoints, states or observables are returned. As the AmiciPredictor is agnostic about the dimension of the postprocessor and also the dimension of the postprocessed output, these checks are also left to the user. An example for such a check is provided in the default output (see _default_output()).

__call__(x: ndarray, sensi_orders: Tuple[int, ...] = (0,), mode: Literal['mode_fun', 'mode_res'] = 'mode_fun', output_file: str = '', output_format: str = 'csv', include_llh_weights: bool = False, include_sigmay: bool = False) PredictionResult[source]

Call the predictor.

Simulate a model for a certain prediction function. This method relies on the underlying AmiciObjective, but allows the user to apply any post-processing of the results, the sensitivities, and the timepoints.

Parameters:
  • x – The parameters for which to evaluate the prediction function.

  • sensi_orders – Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad.

  • mode – Whether to compute function values or residuals.

  • output_file – Path to an output file.

  • output_format – Either 'csv' or 'h5'. If an output file is specified, this routine will write a csv file, created from a DataFrame, or an h5 file, created from a dict.

  • include_llh_weights – Whether log-likelihood weights should be included in the prediction. Necessary for weighted means of ensembles.

  • include_sigmay – Whether standard deviations should be included in the prediction output. Necessary for the evaluation of weighted means of ensembles.

Returns:

PredictionResult object containing timepoints, outputs, and output_sensitivities if requested

Return type:

results

__init__(amici_objective: AmiciObjective, amici_output_fields: Union[Sequence[str], None] = None, post_processor: Union[Callable, None] = None, post_processor_sensi: Union[Callable, None] = None, post_processor_time: Union[Callable, None] = None, max_chunk_size: Union[int, None] = None, output_ids: Union[Sequence[str], None] = None, condition_ids: Union[Sequence[str], None] = None)[source]

Initialize predictor.

Parameters:
  • amici_objective – An objective object, which will be used to get model simulations

  • amici_output_fields – keys that exist in the return data object from AMICI, which should be available for the post-processors

  • post_processor – A callable function which applies postprocessing to the simulation results and possibly defines different outputs than those of the amici model. Default are the observables (pypesto.C.AMICI_Y) of the AMICI model. This method takes a list of dicts (with the returned fields pypesto.C.AMICI_T, pypesto.C.AMICI_X, and pypesto.C.AMICI_Y of the AMICI ReturnData objects) as input. Safeguards for, e.g., failure of AMICI are left to the user.

  • post_processor_sensi – A callable function which applies postprocessing to the sensitivities of the simulation results. Defaults to the observable sensitivities of the AMICI model. This method takes a list of dicts (with the returned fields pypesto.C.AMICI_T, pypesto.C.AMICI_X, pypesto.C.AMICI_Y, pypesto.C.AMICI_SX, and pypesto.C.AMICI_SY of the AMICI ReturnData objects) as input. Safeguards for, e.g., failure of AMICI are left to the user.

  • post_processor_time – A callable function which applies postprocessing to the timepoints of the simulations. Defaults to the timepoints of the amici model. This method takes a list of dicts (with the returned field pypesto.C.AMICI_T of the amici ReturnData objects) as input. Safeguards for, e.g., failure of AMICI are left to the user.

  • max_chunk_size – In some cases, we do not want to compute all predictions at once when calling the prediction function, as this might not fit into memory for large datasets and models. Here, the user can specify the maximum number of conditions that should be simulated at a time. Defaults to None, meaning that all conditions are simulated at once.

  • output_ids – IDs of outputs, as post-processing allows the creation of customizable outputs, which may not coincide with those from the AMICI model (defaults to AMICI observables).

  • condition_ids – List of identifiers for the conditions of the edata objects of the amici objective, will be passed to the PredictionResult at call.
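For illustration, a minimal sketch of constructing a predictor with a custom post-processor; amici_objective, n_states, and x are assumed to exist and are not part of this API:

import pypesto
from pypesto import C
from pypesto.predict import AmiciPredictor

# Post-processor returning state trajectories instead of observables.
# Safeguards against failed simulations are omitted for brevity.
def post_processor(amici_outputs):
    return [output[C.AMICI_X] for output in amici_outputs]

predictor = AmiciPredictor(
    amici_objective,  # an existing pypesto.AmiciObjective (assumed)
    post_processor=post_processor,
    output_ids=[f"x_{i}" for i in range(n_states)],  # n_states: assumed known
)
prediction = predictor(x, sensi_orders=(0,))  # x: a full parameter vector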

class pypesto.predict.PredictorTask(predictor: pypesto.predict.Predictor, x: Sequence[float], sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], id: str)[source]

Bases: Task

Perform a single prediction with pypesto.engine.Task.

Designed for use with pypesto.ensemble.Ensemble.

predictor

The predictor to use.

x

The parameter vector to compute predictions with.

sensi_orders

Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad.

mode

Whether to compute function values or residuals.

id

The input ID.

__init__(predictor: pypesto.predict.Predictor, x: Sequence[float], sensi_orders: Tuple[int, ...], mode: Literal['mode_fun', 'mode_res'], id: str)[source]
execute() pypesto.predict.PredictionResult[source]

Execute and return the prediction.

Problem

A problem contains the objective as well as all additional information, such as priors, that describes the problem to be solved.

class pypesto.problem.Problem(objective: ObjectiveBase, lb: Union[ndarray, List[float]], ub: Union[ndarray, List[float]], dim_full: Optional[int] = None, x_fixed_indices: Optional[Union[Iterable[SupportsInt], SupportsInt]] = None, x_fixed_vals: Optional[Union[Iterable[SupportsFloat], SupportsFloat]] = None, x_guesses: Optional[Iterable[float]] = None, x_names: Optional[Iterable[str]] = None, x_scales: Optional[Iterable[str]] = None, x_priors_defs: Optional[NegLogParameterPriors] = None, lb_init: Optional[Union[ndarray, List[float]]] = None, ub_init: Optional[Union[ndarray, List[float]]] = None, copy_objective: bool = True)[source]

Bases: object

The problem formulation.

A problem specifies the objective function, boundaries and constraints, parameter guesses as well as the parameters which are to be optimized.

Parameters:
  • objective – The objective function for minimization. Note that a shallow copy is created.

  • lb – The lower bounds for optimization. For unbounded directions set to -inf.

  • ub – The upper bounds for optimization. For unbounded directions set to +inf.

  • lb_init – The lower bounds for initialization, typically for defining search start points. If not set, lb is used.

  • ub_init – The upper bounds for initialization, typically for defining search start points. If not set, ub is used.

  • dim_full – The full dimension of the problem, including fixed parameters.

  • x_fixed_indices – Vector containing the indices (zero-based) of parameter components that are not to be optimized.

  • x_fixed_vals – Vector of the same length as x_fixed_indices, containing the values of the fixed parameters.

  • x_guesses – Guesses for the parameter values, shape (g, dim), where g denotes the number of guesses. These are used as start points in the optimization.

  • x_names – Parameter names that can be optionally used e.g. in visualizations. If objective.get_x_names() is not None, those values are used, else the values specified here are used if not None, otherwise the variable names are set to [‘x0’, … ‘x{dim_full}’]. The list must always be of length dim_full.

  • x_scales – Parameter scales can be optionally given and are used e.g. in visualisation and prior generation. Currently, the scales 'lin', 'log', and 'log10' are supported.

  • x_priors_defs – Definitions of priors for parameters. Types of priors, and their required and optional parameters, are described in the Prior class.

  • copy_objective – Whether to generate a deep copy of the objective function before potential modifications the problem class performs on it.

Notes

On the fixing of parameter values:

The number of parameters dim_full the objective takes as input must be known, so either lb must be a vector of that size, or dim_full must be specified as a parameter.

All vectors are mapped to the reduced space of dimension dim in __init__, regardless of whether they were in dimension dim or dim_full before. If the full representation is needed, the methods get_full_vector() and get_full_matrix() can be used.
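As a minimal sketch of the above, using SciPy's Rosenbrock function as objective:

import numpy as np
import pypesto
from scipy.optimize import rosen, rosen_der

objective = pypesto.Objective(fun=rosen, grad=rosen_der)

# A 3-dimensional problem in which the last parameter is fixed to 1.0.
problem = pypesto.Problem(
    objective=objective,
    lb=-5 * np.ones(3),
    ub=5 * np.ones(3),
    x_fixed_indices=[2],
    x_fixed_vals=[1.0],
)

print(problem.dim_full)  # 3, including the fixed parameter
print(problem.dim)       # 2, free parameters only

# Map a reduced vector back to the full space, filling in fixed values.
x_full = problem.get_full_vector(np.zeros(problem.dim), problem.x_fixed_vals)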

__init__(objective: ObjectiveBase, lb: Union[ndarray, List[float]], ub: Union[ndarray, List[float]], dim_full: Optional[int] = None, x_fixed_indices: Optional[Union[Iterable[SupportsInt], SupportsInt]] = None, x_fixed_vals: Optional[Union[Iterable[SupportsFloat], SupportsFloat]] = None, x_guesses: Optional[Iterable[float]] = None, x_names: Optional[Iterable[str]] = None, x_scales: Optional[Iterable[str]] = None, x_priors_defs: Optional[NegLogParameterPriors] = None, lb_init: Optional[Union[ndarray, List[float]]] = None, ub_init: Optional[Union[ndarray, List[float]]] = None, copy_objective: bool = True)[source]
property dim: int

Return dimension only considering non fixed parameters.

fix_parameters(parameter_indices: Union[Iterable[SupportsInt], SupportsInt], parameter_vals: Union[Iterable[SupportsFloat], SupportsFloat]) None[source]

Fix specified parameters to specified values.

full_index_to_free_index(full_index: int)[source]

Calculate index in reduced vector from index in full vector.

Parameters:

full_index – The index in the full vector.

Returns:

The index in the free vector.

Return type:

free_index

get_full_matrix(x: Optional[ndarray]) Optional[ndarray][source]

Map matrix from dim to dim_full. Usually used for hessian.

Parameters:

x (array_like, shape=(dim, dim)) – The matrix in dimension dim.

get_full_vector(x: Optional[ndarray], x_fixed_vals: Optional[Iterable[float]] = None) Optional[ndarray][source]

Map vector from dim to dim_full. Usually used for x, grad.

Parameters:
  • x (array_like, shape=(dim,)) – The vector in dimension dim.

  • x_fixed_vals (array_like, ndim=1, optional) – The values to be used for the fixed indices. If None, then nans are inserted. Usually, None will be used for grad and problem.x_fixed_vals for x.

get_reduced_matrix(x_full: Optional[ndarray]) Optional[ndarray][source]

Map matrix from dim_full to dim, i.e. delete fixed indices.

Parameters:

x_full (array_like, ndim=2) – The matrix in dimension dim_full.

get_reduced_vector(x_full: Optional[ndarray], x_indices: Optional[List[int]] = None) Optional[ndarray][source]

Keep only those elements whose indices are specified in x_indices.

If x_indices is not provided, delete fixed indices.

Parameters:
  • x_full (array_like, ndim=1) – The vector in dimension dim_full.

  • x_indices – indices of x_full that should remain

property lb: ndarray

Return lower bounds of free parameters.

property lb_init: ndarray

Return initial lower bounds of free parameters.

normalize() None[source]

Process vectors.

Reduce all vectors to dimension dim and have the objective accept vectors of dimension dim.

print_parameter_summary() None[source]

Print a summary of parameters.

Include what parameters are being optimized and parameter boundaries.

set_x_guesses(x_guesses: Iterable[float])[source]

Set the x_guesses of a problem.

Parameters:

x_guesses – The new parameter guesses.

property ub: ndarray

Return upper bounds of free parameters.

property ub_init: ndarray

Return initial upper bounds of free parameters.

unfix_parameters(parameter_indices: Union[Iterable[SupportsInt], SupportsInt]) None[source]

Free specified parameters.

property x_free_indices: List[int]

Return the indices of the non-fixed parameters.

property x_guesses: ndarray

Return guesses of the free parameter values.

Profile

class pypesto.profile.ProfileOptions(default_step_size: float = 0.01, min_step_size: float = 0.001, max_step_size: float = 1.0, step_size_factor: float = 1.25, delta_ratio_max: float = 0.1, ratio_min: float = 0.145, reg_points: int = 10, reg_order: int = 4, magic_factor_obj_value: float = 0.5, whole_path: bool = False)[source]

Bases: dict

Options for optimization based profiling.

Parameters:
  • default_step_size – Default step size of the profiling routine along the profile path (adaptive step lengths algorithms will only use this as a first guess and then refine the update).

  • min_step_size – Lower bound for the step size in adaptive methods.

  • max_step_size – Upper bound for the step size in adaptive methods.

  • step_size_factor – Adaptive methods recompute the likelihood at the predicted point and try to find a good step length by a sort of line search algorithm. This factor controls step handling in this line search.

  • delta_ratio_max – Maximum allowed drop of the posterior ratio between two profile steps.

  • ratio_min – Lower bound for likelihood ratio of the profile, based on inverse chi2-distribution. The default 0.145 is slightly lower than the 95% quantile 0.1465 of a chi2 distribution with one degree of freedom.

  • reg_points – Number of profile points used for regression in regression based adaptive profile points proposal.

  • reg_order – Maximum degree of regression polynomial used in regression based adaptive profile points proposal.

  • magic_factor_obj_value – A magic factor, inherited from the original profiling code, which slows down profiling at small ratios (must be >= 0 and < 1).

  • whole_path – Whether to profile over the whole parameter bounds or only until the likelihood ratio drops below ratio_min.

__init__(default_step_size: float = 0.01, min_step_size: float = 0.001, max_step_size: float = 1.0, step_size_factor: float = 1.25, delta_ratio_max: float = 0.1, ratio_min: float = 0.145, reg_points: int = 10, reg_order: int = 4, magic_factor_obj_value: float = 0.5, whole_path: bool = False)[source]
static create_instance(maybe_options: Union[ProfileOptions, Dict]) ProfileOptions[source]

Return a valid options object.

Parameters:

maybe_options – A ProfileOptions object or a dict.

pypesto.profile.approximate_parameter_profile(problem: Problem, result: Result, profile_index: Optional[Iterable[int]] = None, profile_list: Optional[int] = None, result_index: int = 0, n_steps: int = 100) Result[source]

Calculate profile approximation.

Based on an approximation via a normal likelihood centered at the chosen optimal parameter value, with the covariance matrix being the Hessian or FIM.

Parameters:
  • problem – The problem to be solved.

  • result – A result object to initialize profiling and to append the profiling results to. For example, one might append more profiling runs to a previous profile, in order to merge these. The existence of an optimization result is obligatory.

  • profile_index – List with the profile indices to be computed (by default all of the free parameters).

  • profile_list – Integer which specifies whether a call to the profiler should create a new list of profiles (default) or should be added to a specific profile list.

  • result_index – Index from which optimization result profiling should be started (default: global optimum, i.e., index = 0).

  • n_steps – Number of profile steps in each dimension.

Returns:

The profile results are filled into result.profile_result.

Return type:

result

pypesto.profile.calculate_approximate_ci(xs: ndarray, ratios: ndarray, confidence_ratio: float) Tuple[float, float][source]

Calculate approximate confidence interval based on profile.

Interval bounds are linearly interpolated.

Parameters:
  • xs – The ordered parameter values along the profile for the coordinate of interest.

  • ratios – The likelihood ratios corresponding to the parameter values.

  • confidence_ratio – Minimum confidence ratio to base the confidence interval upon, as obtained via pypesto.profile.chi2_quantile_to_ratio.

Returns:

Bounds of the approximate confidence interval.

Return type:

lb, ub

pypesto.profile.chi2_quantile_to_ratio(alpha: float = 0.95, df: int = 1)[source]

Compute profile likelihood threshold.

Transform lower tail probability alpha for a chi2 distribution with df degrees of freedom to a profile likelihood ratio threshold.

Parameters:
  • alpha – Lower tail probability, defaults to 95% interval.

  • df – Degrees of freedom.

Returns:

Corresponds to a likelihood ratio.

Return type:

ratio
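Combined, the two helpers above yield an approximate confidence interval from an existing profile; a sketch, assuming result contains a computed profile for parameter 0:

import pypesto.profile as profile

# Likelihood-ratio threshold for a 95% interval (chi2, 1 degree of freedom).
ratio = profile.chi2_quantile_to_ratio(alpha=0.95, df=1)

# ProfilerResult objects can be used like dicts; x_path has shape
# (n_par, n_profile_points), so row 0 holds parameter 0 along its profile.
profiler_result = result.profile_result.list[0][0]
lb, ub = profile.calculate_approximate_ci(
    xs=profiler_result["x_path"][0],
    ratios=profiler_result["ratio_path"],
    confidence_ratio=ratio,
)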

pypesto.profile.parameter_profile(problem: Problem, result: Result, optimizer: Optimizer, engine: Optional[Engine] = None, profile_index: Optional[Iterable[int]] = None, profile_list: Optional[int] = None, result_index: int = 0, next_guess_method: Union[Callable, str] = 'adaptive_step_regression', profile_options: Optional[ProfileOptions] = None, progress_bar: bool = True, filename: Optional[Union[str, Callable]] = None, overwrite: bool = False) Result[source]

Call to do parameter profiling.

Parameters:
  • problem – The problem to be solved.

  • result – A result object to initialize profiling and to append the profiling results to. For example, one might append more profiling runs to a previous profile, in order to merge these. The existence of an optimization result is obligatory.

  • optimizer – The optimizer to be used along each profile.

  • engine – The engine to be used.

  • profile_index – List with the parameter indices to be profiled (by default all free indices).

  • profile_list – Integer which specifies whether a call to the profiler should create a new list of profiles (default) or should be added to a specific profile list.

  • result_index – Index from which optimization result profiling should be started (default: global optimum, i.e., index = 0).

  • next_guess_method – Function handle to a method that creates the next starting point for optimization in profiling.

  • profile_options – Various options applied to the profile optimization.

  • progress_bar – Whether to display a progress bar.

  • filename – Name of the hdf5 file, where the result will be saved. Default is None, which deactivates automatic saving. If set to “Auto” it will automatically generate a file named year_month_day_profiling_result.hdf5. Optionally a method, see docs for pypesto.store.auto.autosave.

  • overwrite – Whether to overwrite result/profiling in the autosave file if it already exists.

Returns:

The profile results are filled into result.profile_result.

Return type:

result
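A minimal sketch of a profiling run, assuming problem is an existing pypesto.Problem:

import pypesto.optimize as optimize
import pypesto.profile as profile

optimizer = optimize.ScipyOptimizer()
result = optimize.minimize(problem=problem, optimizer=optimizer, n_starts=20)

options = profile.ProfileOptions(min_step_size=1e-3, whole_path=False)
result = profile.parameter_profile(
    problem=problem,
    result=result,
    optimizer=optimizer,
    profile_index=[0, 1],  # profile the first two free parameters
    profile_options=options,
)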

pypesto.profile.validation_profile_significance(problem_full_data: Problem, result_training_data: Result, result_full_data: Optional[Result] = None, n_starts: Optional[int] = 1, optimizer: Optional[Optimizer] = None, engine: Optional[Engine] = None, lsq_objective: bool = False, return_significance: bool = True) float[source]

Compute significance of Validation Interval.

It is a confidence region/interval for a new validation experiment [1]. (By default, this method returns the significance = 1 - alpha.)

The reasoning behind the approach is that a validation data set lies outside the validation interval if fitting the full data set would lead to a fit $\theta_{new}$ that does not contain the old fit $\theta_{train}$ in its (profile-likelihood based) parameter confidence intervals. (I.e., the old fit would be rejected by the fit of the full data.)

This method returns the significance of the validation data set (where result_full_data is the objective function for fitting both data sets), i.e., the largest alpha such that there is a validation region/interval outside of which the validation data set lies with probability alpha. (If one is interested in the opposite, set return_significance=False.)

Parameters:
  • problem_full_data – pypesto.problem, such that the objective is the negative-log-likelihood of the training and validation data set.

  • result_training_data – result object from the fitting of the training data set only.

  • result_full_data – pypesto.result object that contains the result of fitting training and validation data combined.

  • n_starts – number of starts for fitting the full data set (if result_full_data is not provided).

  • optimizer – optimizer used for refitting the data (if result_full_data is not provided).

  • engine – engine for refitting (if result_full_data is not provided).

  • lsq_objective – Indicates whether the objective of problem_full_data corresponds to a nllh (False) or a chi^2 value (True).

  • return_significance – Indicates whether the function should return the significance (True), i.e., the probability that the new data set lies outside the confidence interval for the validation experiment, or the largest alpha such that the validation experiment still lies within the confidence interval (False), i.e., alpha = 1 - significance.

Result

The pypesto.Result object contains all results generated by the pypesto components. It contains sub-results for optimization, profiling, sampling.

class pypesto.result.McmcPtResult(trace_x: ndarray, trace_neglogpost: ndarray, trace_neglogprior: ndarray, betas: Iterable[float], burn_in: Optional[int] = None, time: float = 0.0, auto_correlation: Optional[float] = None, effective_sample_size: Optional[float] = None, message: Optional[str] = None)[source]

Bases: dict

The result of a sampler run using Markov-chain Monte Carlo.

Currently the result object of all supported samplers. Can be used like a dict.

Parameters:
  • trace_x ([n_chain, n_iter, n_par]) – Parameters.

  • trace_neglogpost ([n_chain, n_iter]) – Negative log posterior values.

  • trace_neglogprior ([n_chain, n_iter]) – Negative log prior values.

  • betas ([n_chain]) – The associated inverse temperatures.

  • burn_in ([n_chain]) – The burn in index.

  • time ([n_chain]) – The computation time.

  • auto_correlation ([n_chain]) – The estimated chain autocorrelation.

  • effective_sample_size ([n_chain]) – The estimated effective sample size.

  • message (str) – Textual comment on the profile result.

Here, n_chain denotes the number of chains, n_iter the number of iterations (i.e., the chain length), and n_par the number of parameters.

__init__(trace_x: ndarray, trace_neglogpost: ndarray, trace_neglogprior: ndarray, betas: Iterable[float], burn_in: Optional[int] = None, time: float = 0.0, auto_correlation: Optional[float] = None, effective_sample_size: Optional[float] = None, message: Optional[str] = None)[source]
class pypesto.result.OptimizeResult[source]

Bases: object

Result of the pypesto.optimize.minimize() function.

__init__()[source]
append(optimize_result: Union[OptimizerResult, OptimizeResult], sort: bool = True, prefix: str = '')[source]

Append an OptimizerResult or an OptimizeResult to the result object.

Parameters:
  • optimize_result – The result of one or more (local) optimizer runs.

  • sort – Whether to sort the results after appending; used so that sorting happens only once when appending an OptimizeResult.

  • prefix – The IDs for all appended results will be prefixed with this.

as_dataframe(keys=None) DataFrame[source]

Get as pandas DataFrame.

If keys is a list, return only the specified values, otherwise all.

as_list(keys=None) Sequence[source]

Get as list.

If keys is a list, return only the specified values.

Parameters:

keys (list(str), optional) – Labels of the field to extract.

get_for_key(key) list[source]

Extract the list of values for the specified key as a list.

sort()[source]

Sort the optimizer results by function value fval (ascending).

summary(disp_best: bool = True, disp_worst: bool = False) str[source]

Get summary of the object.

Parameters:
  • disp_best – Whether to display a detailed summary of the best run.

  • disp_worst – Whether to display a detailed summary of the worst run.
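For example, after a multi-start optimization one might inspect the collected runs as follows (result is assumed to stem from pypesto.optimize.minimize):

# Textual summary, including details of the best run.
print(result.optimize_result.summary(disp_best=True))

# Tabular access to selected fields of all optimizer runs.
df = result.optimize_result.as_dataframe(keys=["id", "fval", "time"])

# All function values as a plain list, sorted ascendingly.
fvals = result.optimize_result.get_for_key("fval")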

class pypesto.result.OptimizerResult(id: Optional[str] = None, x: Optional[ndarray] = None, fval: Optional[float] = None, grad: Optional[ndarray] = None, hess: Optional[ndarray] = None, res: Optional[ndarray] = None, sres: Optional[ndarray] = None, n_fval: Optional[int] = None, n_grad: Optional[int] = None, n_hess: Optional[int] = None, n_res: Optional[int] = None, n_sres: Optional[int] = None, x0: Optional[ndarray] = None, fval0: Optional[float] = None, history: Optional[HistoryBase] = None, exitflag: Optional[int] = None, time: Optional[float] = None, message: Optional[str] = None, optimizer: Optional[str] = None)[source]

Bases: dict

The result of an optimizer run.

Used as a standardized return value to map from the individual result objects returned by the employed optimizers to the format understood by pypesto.

Can be used like a dict.

id

Id of the optimizer run. Usually the start index.

x

The best found parameters.

fval

The best found function value, fun(x).

grad

The gradient at x.

hess

The Hessian at x.

res

The residuals at x.

sres

The residual sensitivities at x.

n_fval

Number of function evaluations.

n_grad

Number of gradient evaluations.

n_hess

Number of Hessian evaluations.

n_res

Number of residual evaluations.

n_sres

Number of residual sensitivity evaluations.

x0

The starting parameters.

fval0

The starting function value, fun(x0).

history

Objective history.

exitflag

The exitflag of the optimizer.

time

Execution time.

message

Textual comment on the optimization result.

Type:

str

optimizer

The optimizer used for optimization.

Type:

str

Notes

Any field not supported by the optimizer is filled with None.

__init__(id: Optional[str] = None, x: Optional[ndarray] = None, fval: Optional[float] = None, grad: Optional[ndarray] = None, hess: Optional[ndarray] = None, res: Optional[ndarray] = None, sres: Optional[ndarray] = None, n_fval: Optional[int] = None, n_grad: Optional[int] = None, n_hess: Optional[int] = None, n_res: Optional[int] = None, n_sres: Optional[int] = None, x0: Optional[ndarray] = None, fval0: Optional[float] = None, history: Optional[HistoryBase] = None, exitflag: Optional[int] = None, time: Optional[float] = None, message: Optional[str] = None, optimizer: Optional[str] = None)[source]
summary()[source]

Get summary of the object.

update_to_full(problem: Problem) None[source]

Update values to full vectors/matrices.

Parameters:

problem – The problem, which contains information on how to convert to full vectors and matrices.

class pypesto.result.PredictionConditionResult(timepoints: ndarray, output_ids: Sequence[str], output: Optional[ndarray] = None, output_sensi: Optional[ndarray] = None, output_weight: Optional[float] = None, output_sigmay: Optional[ndarray] = None, x_names: Optional[Sequence[str]] = None)[source]

Bases: object

Light-weight wrapper for the prediction of one simulation condition.

It provides a common API for how AMICI predictions should look in pyPESTO.

__init__(timepoints: ndarray, output_ids: Sequence[str], output: Optional[ndarray] = None, output_sensi: Optional[ndarray] = None, output_weight: Optional[float] = None, output_sigmay: Optional[ndarray] = None, x_names: Optional[Sequence[str]] = None)[source]

Initialize PredictionConditionResult.

Parameters:
  • timepoints – Output timepoints for this simulation condition

  • output_ids – IDs of outputs for this simulation condition

  • output – Postprocessed outputs (ndarray)

  • output_sensi – Sensitivities of postprocessed outputs (ndarray)

  • output_weight – LLH of the simulation

  • output_sigmay – Standard deviations of postprocessed observables

  • x_names – IDs of the model parameters with respect to which the sensitivities were computed

class pypesto.result.PredictionResult(conditions: Sequence[Union[PredictionConditionResult, Dict]], condition_ids: Optional[Sequence[str]] = None, comment: Optional[str] = None)[source]

Bases: object

Light-weight wrapper around prediction from pyPESTO made by an AMICI model.

Its only purpose is to provide a fixed format/API for how prediction results should be stored, read, and handled: as predictions are a very flexible format anyway, they should at least have a common definition that allows working with them in a reasonable way.

__init__(conditions: Sequence[Union[PredictionConditionResult, Dict]], condition_ids: Optional[Sequence[str]] = None, comment: Optional[str] = None)[source]

Initialize PredictionResult.

Parameters:
  • conditions – A list of PredictionConditionResult objects or dicts

  • condition_ids – IDs or names of the simulation conditions, which belong to this prediction (e.g., PEtab uses tuples of preequilibration condition and simulation conditions)

  • comment – An additional note, which can be attached to this prediction

write_to_csv(output_file: str)[source]

Save predictions to a csv file.

Parameters:

output_file – path to file/folder to which results will be written

write_to_h5(output_file: str, base_path: Optional[str] = None)[source]

Save predictions to an h5 file.

It appends to the file if the file already exists.

Parameters:
  • output_file – path to file/folder to which results will be written

  • base_path – base path in the h5 file

class pypesto.result.ProfileResult[source]

Bases: object

Result of the profile() function.

It holds a list of profile lists. Each profile list consists of a list of ProfilerResult objects, one for each parameter.

__init__()[source]
append_empty_profile_list() int[source]

Append an empty profile list to the list of profile lists.

Returns:

The index of the created profile list.

Return type:

index

append_profiler_result(profiler_result: Optional[ProfilerResult] = None, profile_list: Optional[int] = None) None[source]

Append the profiler result to the profile list.

Parameters:
  • profiler_result – The result of one profiler run for a parameter, or None if to be left empty.

  • profile_list – Index specifying the profile list to which we want to append. Defaults to the last list.

get_profiler_result(i_par: int, profile_list: Optional[int] = None)[source]

Get the profiler result at parameter index i_par of profile_list.

Parameters:
  • i_par – Integer specifying the parameter index.

  • profile_list – Index specifying the profile list. Defaults to the last list.

set_profiler_result(profiler_result: ProfilerResult, i_par: int, profile_list: Optional[int] = None) None[source]

Write a profiler result to the result object.

Parameters:
  • profiler_result – The result of one (local) profiler run.

  • i_par – Integer specifying the parameter index where to put profiler_result.

  • profile_list – Index specifying the profile list. Defaults to the last list.

class pypesto.result.ProfilerResult(x_path: ndarray, fval_path: ndarray, ratio_path: ndarray, gradnorm_path: ndarray = nan, exitflag_path: ndarray = nan, time_path: ndarray = nan, time_total: float = 0.0, n_fval: int = 0, n_grad: int = 0, n_hess: int = 0, message: Optional[str] = None)[source]

Bases: dict

The result of a profiler run.

The standardized return value from pypesto.profile, which can either be initialized from an OptimizerResult or from an existing ProfilerResult (in order to extend the computation).

Can be used like a dict.

x_path

The path of the best found parameters along the profile (Dimension: n_par x n_profile_points)

fval_path

The function values, fun(x), along the profile.

ratio_path

The ratio of the posterior function along the profile.

gradnorm_path

The gradient norm along the profile.

exitflag_path

The exitflags of the optimizer along the profile.

time_path

The computation time of the optimizer runs along the profile.

time_total

The total computation time for the profile.

n_fval

Number of function evaluations.

n_grad

Number of gradient evaluations.

n_hess

Number of Hessian evaluations.

message

Textual comment on the profile result.

Notes

Any field not supported by the profiler or the profiling optimizer is filled with None. Some fields are filled by pypesto itself.

__init__(x_path: ndarray, fval_path: ndarray, ratio_path: ndarray, gradnorm_path: ndarray = nan, exitflag_path: ndarray = nan, time_path: ndarray = nan, time_total: float = 0.0, n_fval: int = 0, n_grad: int = 0, n_hess: int = 0, message: Optional[str] = None)[source]
append_profile_point(x: ndarray, fval: float, ratio: float, gradnorm: float = nan, time: float = nan, exitflag: float = nan, n_fval: int = 0, n_grad: int = 0, n_hess: int = 0) None[source]

Append a new point to the profile path.

Parameters:
  • x – The parameter values.

  • fval – The function value at x.

  • ratio – The ratio of the function value at x to the optimal function value.

  • gradnorm – The gradient norm at x.

  • time – The computation time to find x.

  • exitflag – The exitflag of the optimizer (useful if an optimization was performed to find x).

  • n_fval – Number of function evaluations performed to find x.

  • n_grad – Number of gradient evaluations performed to find x.

  • n_hess – Number of Hessian evaluations performed to find x.

flip_profile() None[source]

Flip the profiling direction (left-right).

Profiling direction needs to be changed once (if the profile is new), or twice if we append to an existing profile. All profiling paths are flipped in-place.

class pypesto.result.Result(problem=None, optimize_result: Optional[OptimizeResult] = None, profile_result: Optional[ProfileResult] = None, sample_result: Optional[SampleResult] = None)[source]

Bases: object

Universal result object for pypesto.

The algorithms like optimize, profile, sample fill different parts of it.

problem

The problem underlying the results.

Type:

pypesto.Problem

optimize_result

The results of the optimizer runs.

profile_result

The results of the profiler run.

sample_result

The results of the sampler run.

__init__(problem=None, optimize_result: Optional[OptimizeResult] = None, profile_result: Optional[ProfileResult] = None, sample_result: Optional[SampleResult] = None)[source]
summary() str[source]

Get summary of the object.

class pypesto.result.SampleResult[source]

Bases: object

Result of the sample() function.

__init__()[source]

Sample

Draw samples from the distribution, with support for various samplers.

class pypesto.sample.AdaptiveMetropolisSampler(options: Optional[Dict] = None)[source]

Bases: MetropolisSampler

Metropolis-Hastings sampler with adaptive proposal covariance.

__init__(options: Optional[Dict] = None)[source]
classmethod default_options()[source]

Return the default options for the sampler.

initialize(problem: Problem, x0: ndarray)[source]

Initialize the sampler.

class pypesto.sample.AdaptiveParallelTemperingSampler(internal_sampler: InternalSampler, betas: Optional[Sequence[float]] = None, n_chains: Optional[int] = None, options: Optional[Dict] = None)[source]

Bases: ParallelTemperingSampler

Parallel tempering sampler with adaptive temperature adjustment.

adjust_betas(i_sample: int, swapped: Sequence[bool])[source]

Update temperatures as in Vousden2016.

classmethod default_options() Dict[source]

Get default options for sampler.

class pypesto.sample.DynestySampler(sampler_args: Optional[dict] = None, run_args: Optional[dict] = None, dynamic: bool = True)[source]

Bases: Sampler

Use dynesty for sampling.

NB: get_samples returns MCMC-like samples, by resampling original dynesty samples according to their importance weights. This is because the original samples contain many low-likelihood samples. To work with the original samples, modify the results object with pypesto_result.sample_result = sampler.get_original_samples(), where sampler is an instance of pypesto.sample.DynestySampler. The original dynesty results object is available at sampler.results.

Wrapper around https://dynesty.readthedocs.io/en/stable/, see there for details.

__init__(sampler_args: Optional[dict] = None, run_args: Optional[dict] = None, dynamic: bool = True)[source]

Initialize sampler.

Parameters:
  • sampler_args – Further keyword arguments that are passed on to the __init__ method of the dynesty sampler.

  • run_args – Further keyword arguments that are passed on to the run_nested method of the dynesty sampler.

  • dynamic – Whether to use dynamic or static nested sampling.

get_original_samples() McmcPtResult[source]

Get the samples into the fitting pypesto format.

Return type:

The pyPESTO sample result.

get_samples() McmcPtResult[source]

Get MCMC-like samples into the fitting pypesto format.

Return type:

The pyPESTO sample result.

initialize(problem: Problem, x0: Union[ndarray, List[ndarray]]) None[source]

Initialize the sampler.

prior_transform(prior_sample: ndarray) ndarray[source]

Transform prior sample from unit cube to pyPESTO prior.

TODO: support priors that are not uniform. For now, a warning is raised in self.initialize.

Parameters:
  • prior_sample – The prior sample, provided by dynesty.

  • problem – The pyPESTO problem.

Return type:

The transformed prior sample.

restore_internal_sampler(filename: str) None[source]

Restore the state of the internal dynesty sampler.

Parameters:

filename – The file from which the internal sampler will be restored.

sample(n_samples: int, beta: Optional[float] = None) None[source]

Return the most recent sample state.

save_internal_sampler(filename: str) None[source]

Save the state of the internal dynesty sampler.

This makes it easier to analyze the original dynesty samples after sampling, with restore_internal_sampler.

Parameters:

filename – The internal sampler will be saved here.
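A short sketch of the workflow described above, assuming problem is an existing pypesto.Problem; further arguments are passed through to dynesty:

import pypesto.sample as sample

sampler = sample.DynestySampler(dynamic=False)  # static nested sampling
result = sample.sample(problem=problem, n_samples=None, sampler=sampler)

# By default, result.sample_result holds MCMC-like (importance-resampled)
# samples. To work with the raw nested-sampling output instead:
result.sample_result = sampler.get_original_samples()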

class pypesto.sample.EmceeSampler(nwalkers: int = 1, sampler_args: Optional[dict] = None, run_args: Optional[dict] = None)[source]

Bases: Sampler

Use emcee for sampling.

Wrapper around https://emcee.readthedocs.io/en/stable/, see there for details.

__init__(nwalkers: int = 1, sampler_args: Optional[dict] = None, run_args: Optional[dict] = None)[source]

Initialize sampler.

Parameters:
  • nwalkers – The number of walkers in the ensemble.

  • sampler_args – Further keyword arguments that are passed on to emcee.EnsembleSampler.__init__.

  • run_args – Further keyword arguments that are passed on to emcee.EnsembleSampler.run_mcmc.

get_epsilon_ball_initial_state(center: ndarray, problem: Problem, epsilon: float = 0.001)[source]

Get walker initial positions as samples from an epsilon ball.

The ball is scaled in each direction according to the magnitude of the center in that direction.

It is assumed that, because vectors are generated near a good point, all generated vectors are evaluable, so evaluability is not checked.

Points that are generated outside the problem bounds will get shifted to lie on the edge of the problem bounds.

Parameters:
  • center – The center of the epsilon ball. The dimension should match the full dimension of the pyPESTO problem. This will be returned as the first position.

  • problem – The pyPESTO problem.

  • epsilon – The relative radius of the ball. e.g., if epsilon=0.5 and the center of the first dimension is at 100, then the upper and lower bounds of the epsilon ball in the first dimension will be 150 and 50, respectively.

get_samples() McmcPtResult[source]

Get the samples into the fitting pypesto format.

initialize(problem: Problem, x0: Union[ndarray, List[ndarray]]) None[source]

Initialize the sampler.

It is recommended to initialize the walkers in a small ball around the a priori preferred position (see x0 below).

Parameters:

x0 – The “a priori preferred position”, e.g., an optimized parameter vector (see https://emcee.readthedocs.io/en/stable/user/faq/). The position of the first walker will be this vector; the remaining walkers will be assigned positions uniformly in a smaller ball around it. Alternatively, a set of vectors can be provided, which will be used to initialize walkers. In this case, any remaining walkers will be initialized at points sampled uniformly within the problem bounds.

sample(n_samples: int, beta: float = 1.0) None[source]

Return the most recent sample state.

class pypesto.sample.InternalSampler(options: Optional[Dict] = None)[source]

Bases: Sampler

Sampler to be used inside a parallel tempering sampler.

The last sample can be obtained via get_last_sample and set via set_last_sample.

abstract get_last_sample() InternalSample[source]

Get the last sample in the chain.

Returns:

The last sample in the chain in the exchange format.

Return type:

internal_sample

make_internal(temper_lpost: bool)[source]

Allow the sampler to be used as an inner sampler.

Can be called by parallel tempering samplers during initialization. Default: Do nothing.

Parameters:

temper_lpost – Whether to temper the posterior or only the likelihood.

abstract set_last_sample(sample: InternalSample)[source]

Set the last sample in the chain to the passed value.

Parameters:

sample – The sample that will replace the last sample in the chain.

class pypesto.sample.MetropolisSampler(options: Optional[Dict] = None)[source]

Bases: InternalSampler

Simple Metropolis-Hastings sampler with fixed proposal variance.

__init__(options: Optional[Dict] = None)[source]
classmethod default_options()[source]

Return the default options for the sampler.

get_last_sample() InternalSample[source]

Get the last sample in the chain.

Returns:

The last sample in the chain in the exchange format.

Return type:

internal_sample

get_samples() McmcPtResult[source]

Get the samples into the fitting pypesto format.

initialize(problem: Problem, x0: ndarray)[source]

Initialize the sampler.

make_internal(temper_lpost: bool)[source]

Allow the sampler to be used as an inner sampler.

Can be called by parallel tempering samplers during initialization. Default: Do nothing.

Parameters:

temper_lpost – Whether to temper the posterior or only the likelihood.

sample(n_samples: int, beta: float = 1.0)[source]

Perform sampling, starting from the last recorded particle.

set_last_sample(sample: InternalSample)[source]

Set the last sample in the chain to the passed value.

Parameters:

sample – The sample that will replace the last sample in the chain.

class pypesto.sample.ParallelTemperingSampler(internal_sampler: InternalSampler, betas: Optional[Sequence[float]] = None, n_chains: Optional[int] = None, options: Optional[Dict] = None)[source]

Bases: Sampler

Simple parallel tempering sampler.

__init__(internal_sampler: InternalSampler, betas: Optional[Sequence[float]] = None, n_chains: Optional[int] = None, options: Optional[Dict] = None)[source]
adjust_betas(i_sample: int, swapped: Sequence[bool])[source]

Adjust temperature values. Default: Do nothing.

classmethod default_options() Dict[source]

Return the default options for the sampler.

get_samples() McmcPtResult[source]

Concatenate all chains.

initialize(problem: Problem, x0: Union[ndarray, List[ndarray]])[source]

Initialize all samplers.

sample(n_samples: int, beta: float = 1.0)[source]

Sample and swap in between samplers.

swap_samples() Sequence[bool][source]

Swap samples as in Vousden2016.
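For illustration, a parallel tempering sampler is composed of an internal sampler per chain; a sketch, assuming problem is an existing pypesto.Problem:

import pypesto.sample as sample

# Adaptive Metropolis within each of four tempered chains; inverse
# temperatures (betas) are chosen automatically if only n_chains is given.
sampler = sample.AdaptiveParallelTemperingSampler(
    internal_sampler=sample.AdaptiveMetropolisSampler(),
    n_chains=4,
)
result = sample.sample(problem=problem, n_samples=10000, sampler=sampler)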

class pypesto.sample.Sampler(options: Optional[Dict] = None)[source]

Bases: ABC

Sampler base class, not functional on its own.

The sampler maintains an internal chain, which is initialized in initialize, and updated in sample.

__init__(options: Optional[Dict] = None)[source]
classmethod default_options() Dict[source]

Get the default options.

Returns:

Default sampler options.

Return type:

default_options

abstract get_samples() McmcPtResult[source]

Get the generated samples.

abstract initialize(problem: Problem, x0: Union[ndarray, List[ndarray]])[source]

Initialize the sampler.

Parameters:
  • problem – The problem for which to sample.

  • x0 – Should, but is not required to, be used as initial parameter.

abstract sample(n_samples: int, beta: float = 1.0)[source]

Perform sampling.

Parameters:
  • n_samples – Number of samples to generate.

  • beta – Inverse of the temperature to which the system is elevated.

classmethod translate_options(options)[source]

Translate options and fill in defaults.

Parameters:

options – Options configuring the sampler.

pypesto.sample.auto_correlation(result: Result) float[source]

Calculate the autocorrelation of the MCMC chains.

Parameters:

result – The pyPESTO result object with filled sample result.

Returns:

Estimate of the integrated autocorrelation time of the MCMC chains.

Return type:

auto_correlation

pypesto.sample.calculate_ci_mcmc_sample(result: Result, ci_level: float = 0.95, exclude_burn_in: bool = True) Tuple[ndarray, ndarray][source]

Calculate parameter credibility intervals based on MCMC samples.

Parameters:
  • result – The pyPESTO result object with filled sample result.

  • ci_level – Lower tail probability, defaults to 95% interval.

Returns:

Bounds of the MCMC percentile-based confidence interval.

Return type:

lb, ub

pypesto.sample.calculate_ci_mcmc_sample_prediction(simulated_values: ndarray, ci_level: float = 0.95) Tuple[ndarray, ndarray][source]

Calculate prediction credibility intervals based on MCMC samples.

Parameters:
  • simulated_values – Simulated model states or model observables.

  • ci_level – Lower tail probability, defaults to 95% interval.

Returns:

Bounds of the MCMC-based prediction confidence interval.

Return type:

lb, ub

pypesto.sample.effective_sample_size(result: Result) float[source]

Calculate the effective sample size of the MCMC chains.

Parameters:

result – The pyPESTO result object with filled sample result.

Returns:

Estimate of the effective sample size of the MCMC chains.

Return type:

ess

pypesto.sample.geweke_test(result: Result, zscore: float = 2.0) int[source]

Calculate the burn-in of MCMC chains.

Parameters:
  • result – The pyPESTO result object with filled sample result.

  • zscore – The Geweke test threshold.

Returns:

Iteration after which the first and the last fraction of the chain do not differ significantly according to the Geweke test, i.e., the burn-in index.

Return type:

burn_in

pypesto.sample.sample(problem: Problem, n_samples: Optional[int], sampler: Optional[Sampler] = None, x0: Optional[Union[ndarray, List[ndarray]]] = None, result: Optional[Result] = None, filename: Optional[Union[str, Callable]] = None, overwrite: bool = False) Result[source]

Call to do parameter sampling.

Parameters:
  • problem – The problem to be solved.

  • n_samples – Number of samples to generate. None can be used if the sampler does not use n_samples.

  • sampler – The sampler to perform the actual sampling. If None is provided, a pypesto.sample.AdaptiveMetropolisSampler is used.

  • x0 – Initial parameter for the Markov chain. If None, the best parameter found in optimization is used. Note that some samplers require an initial parameter, some may ignore it. x0 can also be a list, to have separate starting points for parallel tempering chains.

  • result – A result to write to. If None provided, one is created from the problem.

  • filename – Name of the hdf5 file, where the result will be saved. Default is None, which deactivates automatic saving. If set to “Auto” it will automatically generate a file named year_month_day_sampling_result.hdf5. Optionally a method, see docs for pypesto.store.auto.autosave.

  • overwrite – Whether to overwrite result/sampling in the autosave file if it already exists.

Returns:

A result with a filled-in sample_result part.

Return type:

result
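A minimal sampling run with subsequent diagnostics, assuming problem is an existing pypesto.Problem:

import pypesto.sample as sample

result = sample.sample(
    problem=problem,
    n_samples=10000,
    sampler=sample.AdaptiveMetropolisSampler(),
)

# Diagnostics: burn-in via Geweke test, then effective sample size.
burn_in = sample.geweke_test(result)
ess = sample.effective_sample_size(result)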

Model Selection

Perform model selection with a PEtab Select problem.

class pypesto.select.Problem(petab_select_problem: Problem, model_postprocessor: Optional[Callable[[ModelProblem], None]] = None)[source]

Bases: object

Handles use of a model selection algorithm.

Handles model selection. Usage involves initialisation with a model specifications file, and then calling the select() method to perform model selection with a specified algorithm and criterion.

calibrated_models

Storage for all calibrated models. A dictionary, where keys are model hashes, and values are petab_select.Model objects.

newly_calibrated_models

Storage for models that were calibrated in the previous iteration of model selection. Same type as calibrated_models.

method_caller

A MethodCaller, used to run a single iteration of a model selection method.

model_postprocessor

A method that is applied to each model after calibration.

petab_select_problem

A PEtab Select problem.
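A sketch of the usage described above; the YAML path is hypothetical, and the method/criterion keyword arguments are assumptions passed through to the MethodCaller constructor:

import petab_select
import pypesto.select

petab_select_problem = petab_select.Problem.from_yaml(
    "petab_select_problem.yaml"  # hypothetical path
)
select_problem = pypesto.select.Problem(
    petab_select_problem=petab_select_problem,
)

# One iteration of, e.g., a forward selection with AIC (assumed kwargs).
best_model, candidates, all_models = select_problem.select(
    method="forward", criterion="AIC",
)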

__init__(petab_select_problem: Problem, model_postprocessor: Optional[Callable[[ModelProblem], None]] = None)[source]
create_method_caller(**kwargs) MethodCaller[source]

Create a method caller.

kwargs are passed to the MethodCaller constructor.

Returns:

A MethodCaller instance.

Return type:

MethodCaller

handle_select_kwargs(kwargs: Dict[str, Any])[source]

Check keyword arguments to select calls.

multistart_select(predecessor_models: Optional[Iterable[Model]] = None, **kwargs) Tuple[Model, List[Model]][source]

Run an algorithm multiple times, with different predecessor models.

Note that the same method caller is currently shared between all calls. This may change when parallelization is implemented, but for now ensures that the same model isn’t calibrated twice. Could also be managed by sharing the same “calibrated_models” object (but then the same model could be repeatedly calibrated, if the calibrations start before any have stopped).

kwargs are passed to the MethodCaller constructor.

Parameters:

predecessor_models – The models that will be used as initial models. One “model selection iteration” will be run for each predecessor model.

Returns:

A 2-tuple, with the following values:

  1. the best model; and

  2. the best models (the best model at each iteration).

Return type:

tuple

select(**kwargs) Tuple[Model, Dict[str, Model], Dict[str, Model]][source]

Run a single iteration of a model selection algorithm.

The result is the selected model for the current run, independent of previously selected models.

kwargs are passed to the MethodCaller constructor.

Returns:

A 3-tuple, with the following values:

  1. the best model;

  2. all candidate models in this iteration, as a dict with model hashes as keys and models as values; and

  3. all candidate models from all iterations, as a dict with model hashes as keys and models as values.

Return type:

tuple

select_to_completion(**kwargs) List[Model][source]

Run an algorithm until an exception StopIteration is raised.

kwargs are passed to the MethodCaller constructor.

An exception StopIteration is raised by pypesto.select.method.MethodCaller.__call__ when no candidate models are found.

Returns:

The best models (the best model at each iteration).

Return type:

list

set_state(calibrated_models: Dict[str, Model], newly_calibrated_models: Dict[str, Model]) None[source]

Set the state of the problem.

See Problem attributes for argument documentation.

update_with_newly_calibrated_models(newly_calibrated_models: Optional[Dict[str, Model]] = None) None[source]

Update the state of the problem with newly calibrated models.

Parameters:

newly_calibrated_models – See attributes of Problem.

pypesto.select.model_to_pypesto_problem(model: Model, objective: Optional[Objective] = None, x_guesses: Optional[Iterable[Dict[str, float]]] = None) Problem[source]

Create a pyPESTO problem from a PEtab Select model.

Parameters:
  • model – The model.

  • objective – The pyPESTO objective.

  • x_guesses – Startpoints to be used in the multi-start optimization. For example, this could be the maximum likelihood estimate from another model. Each dictionary has parameter IDs as keys and parameter values as values. Values in x_guesses for parameters that are not estimated will be ignored and replaced with their value from the PEtab Select model, if defined, else their nominal value in the PEtab parameters table.

Returns:

The pyPESTO select problem.

Return type:

Problem

Startpoint

Methods for selecting points that can be used as startpoints for multi-start optimization. Startpoint methods can be implemented by deriving from pypesto.startpoint.StartpointMethod.

class pypesto.startpoint.CheckedStartpoints(use_guesses: bool = True, check_fval: bool = False, check_grad: bool = False)[source]

Bases: StartpointMethod, ABC

Startpoints checked for function value and/or gradient finiteness.

__call__(n_starts: int, problem: Problem) ndarray[source]

Generate checked startpoints.

__init__(use_guesses: bool = True, check_fval: bool = False, check_grad: bool = False)[source]

Initialize.

Parameters:
  • use_guesses – Whether to use guesses provided in the problem.

  • check_fval – Whether to check function values at the startpoint, and resample if not finite.

  • check_grad – Whether to check gradients at the startpoint, and resample if not finite.

check_and_resample(xs: ndarray, lb: ndarray, ub: ndarray, objective: ObjectiveBase) ndarray[source]

Check sampled points for fval, grad, and potentially resample ones.

Parameters:
  • xs – Startpoint candidates, shape (n_starts, n_par).

  • lb – Lower parameter bound.

  • ub – Upper parameter bound.

  • objective – Objective function, for evaluation.

Returns:

Checked and potentially partially resampled startpoints, shape (n_starts, n_par).

Return type:

xs

abstract sample(n_starts: int, lb: ndarray, ub: ndarray) ndarray[source]

Actually sample startpoints.

While in this implementation, __call__ handles the checking of guesses and resampling, this method defines the actual sampling.

Parameters:
  • n_starts – Number of startpoints to generate.

  • lb – Lower parameter bound.

  • ub – Upper parameter bound.

Returns:

Startpoints, shape (n_starts, n_par).

Return type:

xs

class pypesto.startpoint.FunctionStartpoints(function: Callable, use_guesses: bool = True, check_fval: bool = False, check_grad: bool = False)[source]

Bases: CheckedStartpoints

Define startpoints via callable.

The callable should take the same arguments as the __call__ method.

__init__(function: Callable, use_guesses: bool = True, check_fval: bool = False, check_grad: bool = False)[source]

Initialize.

Parameters:
  • function – The callable sampling startpoints.

  • use_guesses – As in CheckedStartpoints.

  • check_fval – As in CheckedStartpoints.

  • check_grad – As in CheckedStartpoints.

sample(n_starts: int, lb: ndarray, ub: ndarray) ndarray[source]

Call function.

class pypesto.startpoint.LatinHypercubeStartpoints(use_guesses: bool = True, check_fval: bool = False, check_grad: bool = False, smooth: bool = True)[source]

Bases: CheckedStartpoints

Generate latin hypercube-sampled startpoints.

See e.g. https://en.wikipedia.org/wiki/Latin_hypercube_sampling.

__init__(use_guesses: bool = True, check_fval: bool = False, check_grad: bool = False, smooth: bool = True)[source]

Initialize.

Parameters:
  • use_guesses – As in CheckedStartpoints.

  • check_fval – As in CheckedStartpoints.

  • check_grad – As in CheckedStartpoints.

  • smooth – Whether a (uniformly chosen) random starting point within the hypercube [i/n_starts, (i+1)/n_starts] should be chosen (True) or the midpoint of the interval (False).

sample(n_starts: int, lb: ndarray, ub: ndarray) ndarray[source]

Call function.

class pypesto.startpoint.NoStartpoints[source]

Bases: StartpointMethod

Dummy class generating nan points. Useful if no startpoints are needed.

__call__(n_starts: int, problem: Problem) ndarray[source]

Generate a (n_starts, dim) nan matrix.

class pypesto.startpoint.StartpointMethod[source]

Bases: ABC

Startpoint generation, in particular for multi-start optimization.

Abstract base class, specific sampling method needs to be defined in sub-classes.

abstract __call__(n_starts: int, problem: Problem) ndarray[source]

Generate startpoints.

Parameters:
  • n_starts – Number of starts.

  • problem – Problem specifying e.g. dimensions, bounds, and guesses.

Returns:

Startpoints, shape (n_starts, n_par).

Return type:

xs

class pypesto.startpoint.UniformStartpoints(use_guesses: bool = True, check_fval: bool = False, check_grad: bool = False)[source]

Bases: FunctionStartpoints

Generate uniformly sampled startpoints.

__init__(use_guesses: bool = True, check_fval: bool = False, check_grad: bool = False)[source]

Initialize.

Parameters:
  • use_guesses – As in CheckedStartpoints.

  • check_fval – As in CheckedStartpoints.

  • check_grad – As in CheckedStartpoints.

pypesto.startpoint.latin_hypercube(n_starts: int, lb: ndarray, ub: ndarray, smooth: bool = True) ndarray[source]

Generate latin hypercube points.

Parameters:
  • n_starts – Number of points.

  • lb – Lower bound.

  • ub – Upper bound.

  • smooth – Whether a (uniformly chosen) random starting point within the hypercube [i/n_starts, (i+1)/n_starts] should be chosen (True) or the midpoint of the interval (False).

Returns:

Latin hypercube points, shape (n_starts, n_x).

Return type:

xs
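Both the plain function and the class-based interface can be used directly; a sketch, where problem is assumed to be an existing pypesto.Problem:

import numpy as np
import pypesto.startpoint

lb, ub = -5 * np.ones(2), 5 * np.ones(2)

# Plain latin hypercube points within the bounds.
xs = pypesto.startpoint.latin_hypercube(n_starts=20, lb=lb, ub=ub, smooth=True)

# A startpoint method that resamples points with non-finite function values.
method = pypesto.startpoint.UniformStartpoints(check_fval=True)
xs = method(n_starts=20, problem=problem)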

pypesto.startpoint.to_startpoint_method(maybe_startpoint_method: Union[StartpointMethod, Callable, bool]) StartpointMethod[source]

Create StartpointMethod instance if possible, otherwise raise.

Parameters:

maybe_startpoint_method – A StartpointMethod instance, a Callable as expected by FunctionStartpoints, or a bool.

Returns:

A StartpointMethod instance.

Return type:

startpoint_method

Raises:

TypeError if arguments cannot be converted to a StartpointMethod.

pypesto.startpoint.uniform(n_starts: int, lb: ndarray, ub: ndarray) ndarray[source]

Generate uniform points.

Parameters:
  • n_starts – Number of starts.

  • lb – Lower bound.

  • ub – Upper bound.

Returns:

Uniformly sampled points in [lb, ub], shape (n_starts, n_x).

Return type:

xs

Storage

Saving and loading traces and results objects.

class pypesto.store.OptimizationResultHDF5Reader(storage_filename: str)[source]

Bases: object

Reader of the HDF5 result files written by OptimizationResultHDF5Writer.

storage_filename

HDF5 result file name

__init__(storage_filename: str)[source]

Initialize reader.

Parameters:

storage_filename (str) – HDF5 result file name

read() Result[source]

Read HDF5 result file and return pyPESTO result object.

class pypesto.store.OptimizationResultHDF5Writer(storage_filename: str)[source]

Bases: object

Writer of the HDF5 result files.

storage_filename

HDF5 result file name

__init__(storage_filename: str)[source]

Initialize Writer.

Parameters:

storage_filename (str) – HDF5 result file name

write(result: Result, overwrite=False)[source]

Write HDF5 result file from pyPESTO result object.
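
A minimal round-trip sketch, assuming result is an existing pypesto.Result and the file name is hypothetical:

    from pypesto.store import (
        OptimizationResultHDF5Reader, OptimizationResultHDF5Writer)

    OptimizationResultHDF5Writer('result.hdf5').write(result, overwrite=True)
    result2 = OptimizationResultHDF5Reader('result.hdf5').read()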

class pypesto.store.ProblemHDF5Reader(storage_filename: str)[source]

Bases: object

Reader of the HDF5 problem files written by ProblemHDF5Writer.

storage_filename

HDF5 problem file name

__init__(storage_filename: str)[source]

Initialize reader.

Parameters:

storage_filename (str) – HDF5 problem file name

read(objective: Optional[ObjectiveBase] = None) Problem[source]

Read HDF5 problem file and return pyPESTO problem object.

Parameters:

objective – Objective function which is currently not saved to storage.

Returns:

A problem instance with all attributes read in.

Return type:

problem

class pypesto.store.ProblemHDF5Writer(storage_filename: str)[source]

Bases: object

Writer of the HDF5 problem files.

storage_filename

HDF5 problem file name

__init__(storage_filename: str)[source]

Initialize writer.

Parameters:

storage_filename (str) – HDF5 problem file name

write(problem, overwrite: bool = False)[source]

Write HDF5 problem file from pyPESTO problem object.
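
A minimal round-trip sketch, assuming problem is an existing pypesto.Problem and a hypothetical file name; since the objective is not stored, it is passed again when reading:

    from pypesto.store import ProblemHDF5Reader, ProblemHDF5Writer

    ProblemHDF5Writer('problem.hdf5').write(problem, overwrite=True)
    # the objective is not serialized, so supply it on read
    problem2 = ProblemHDF5Reader('problem.hdf5').read(objective=problem.objective)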

class pypesto.store.ProfileResultHDF5Reader(storage_filename: str)[source]

Bases: object

Reader of the HDF5 result files written by ProfileResultHDF5Writer.

storage_filename

HDF5 result file name

__init__(storage_filename: str)[source]

Initialize reader.

Parameters:

storage_filename – HDF5 result file name

read() Result[source]

Read HDF5 result file and return pyPESTO result object.

class pypesto.store.ProfileResultHDF5Writer(storage_filename: str)[source]

Bases: object

Writer of the HDF5 result files.

storage_filename

HDF5 result file name

__init__(storage_filename: str)[source]

Initialize Writer.

Parameters:

storage_filename (str) – HDF5 result file name

write(result: Result, overwrite: bool = False)[source]

Write HDF5 result file from pyPESTO result object.

class pypesto.store.SamplingResultHDF5Reader(storage_filename: str)[source]

Bases: object

Reader of the HDF5 result files written by SamplingResultHDF5Writer.

storage_filename

HDF5 result file name

__init__(storage_filename: str)[source]

Initialize reader.

Parameters:

storage_filename (str) – HDF5 result file name

read() Result[source]

Read HDF5 result file and return pyPESTO result object.

class pypesto.store.SamplingResultHDF5Writer(storage_filename: str)[source]

Bases: object

Writer of the HDF5 sampling files.

storage_filename

HDF5 result file name

__init__(storage_filename: str)[source]

Initialize Writer.

Parameters:

storage_filename (str) – HDF5 result file name

write(result: Result, overwrite: bool = False)[source]

Write HDF5 sampling file from pyPESTO result object.

pypesto.store.autosave(filename: Optional[Union[str, Callable]], result: Result, store_type: str, overwrite: bool = False)[source]

Save the result of optimization, profiling or sampling automatically.

Parameters:
  • filename – Either the filename to save to, or “Auto”, in which case a file named year_month_day_{type}_result.hdf5 is generated automatically. A callable can also be provided; it receives all arguments passed to autosave and must return the filename (str).

  • result – The result to be saved.

  • store_type – Either optimize, sample or profile, depending on the method from which the function is called.

  • overwrite – Whether to overwrite the currently existing results.
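
A minimal sketch, assuming result is a pypesto.Result obtained from optimization; with filename="Auto" the file name is generated automatically:

    from pypesto.store import autosave

    autosave(filename='Auto', result=result, store_type='optimize')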

pypesto.store.load_objective_config(filename: str)[source]

Load the objective information stored in filename.

Parameters:

filename – The name of the file in which the information is stored.

Returns:

A dictionary of the information, stored instead of the actual objective in problem.objective.

pypesto.store.read_result(filename: str, problem: bool = True, optimize: bool = False, profile: bool = False, sample: bool = False) Result[source]

Read the whole pypesto.Result object from an HDF5 file.

By default, everything is loaded. If any of optimize, profile, or sample is explicitly set to True, only those results are loaded.

Parameters:
  • filename – The HDF5 filename.

  • problem – Read the problem.

  • optimize – Read the optimize result.

  • profile – Read the profile result.

  • sample – Read the sample result.

Returns:

Result object containing the results stored in HDF5 file.

Return type:

result

pypesto.store.write_array(f: Group, path: str, values: Collection) None[source]

Write array to hdf5.

Parameters:
  • f – h5py.Group where dataset should be created

  • path – path of the dataset to create

  • values – array to write

pypesto.store.write_result(result: Result, filename: str, overwrite: bool = False, problem: bool = True, optimize: bool = False, profile: bool = False, sample: bool = False)[source]

Save whole pypesto.Result to hdf5 file.

Boolean indicators allow specifying what to save.

Parameters:
  • result – The pypesto.Result object to be saved.

  • filename – The HDF5 filename.

  • overwrite – Boolean, whether already existing results should be overwritten.

  • problem – Write the problem.

  • optimize – Write the optimize result.

  • profile – Write the profile result.

  • sample – Write the sample result.
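
A minimal round-trip sketch using the convenience functions, assuming result is an existing pypesto.Result and a hypothetical file name:

    from pypesto.store import read_result, write_result

    write_result(result, 'result.hdf5', optimize=True)
    result2 = read_result('result.hdf5', optimize=True)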

Visualize

pypesto comes with various visualization routines. To use these, import pypesto.visualize.

class pypesto.visualize.ReferencePoint(reference=None, x=None, fval=None, color=None, legend=None)[source]

Bases: dict

Reference point for plotting.

Should contain a parameter value and an objective function value, may also contain a color and a legend.

Can be used like a dict.

x

Reference parameters.

Type:

ndarray

fval

Function value, fun(x), for reference parameters.

Type:

float

color

Color which should be used for reference point.

Type:

RGBA, optional

auto_color

Flag indicating whether the color for this reference point should be assigned automatically or was assigned by the user.

Type:

boolean

legend

Legend text for the reference point.

Type:

str

__init__(reference=None, x=None, fval=None, color=None, legend=None)[source]

pypesto.visualize.assign_clustered_colors(vals, balance_alpha=True, highlight_global=True)[source]

Cluster and assign colors.

Parameters:
  • vals (numeric list or array) – List to be clustered and assigned colors.

  • balance_alpha (bool (optional)) – Flag indicating whether alpha for large clusters should be reduced to avoid overplotting (default: True)

  • highlight_global (bool (optional)) – flag indicating whether global optimum should be highlighted

Returns:

colors – One for each element in ‘vals’.

Return type:

list of RGBA

pypesto.visualize.assign_clusters(vals)[source]

Find clustering.

Parameters:

vals (numeric list or array) – List to be clustered.

Returns:

  • clust (numeric list) – Indicating the corresponding cluster of each element from ‘vals’.

  • clustsize (numeric list) – Size of clusters, length equals number of clusters.

pypesto.visualize.assign_colors(vals, colors=None, balance_alpha=True, highlight_global=True)[source]

Assign colors or format user specified colors.

Parameters:
  • vals (numeric list or array) – List to be clustered and assigned colors.

  • colors (list, or RGBA, optional) – list of colors, or single color

  • balance_alpha (bool (optional)) – Flag indicating whether alpha for large clusters should be reduced to avoid overplotting (default: True)

  • highlight_global (bool (optional)) – flag indicating whether global optimum should be highlighted

Returns:

colors – One for each element in ‘vals’.

Return type:

list of RGBA

pypesto.visualize.create_references(references=None, x=None, fval=None, color=None, legend=None) List[ReferencePoint][source]

Create a list of reference point objects from user inputs.

Parameters:
  • references (ReferencePoint or dict or list, optional) – Will be converted into a list of RefPoints

  • x (ndarray, optional) – Parameter vector which should be used for reference point

  • fval (float, optional) – Objective function value which should be used for reference point

  • color (RGBA, optional) – Color which should be used for reference point.

  • legend (str) – legend text for reference point

Returns:

List of reference points.

Return type:

list of ReferencePoint
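
A minimal sketch of constructing a reference point from a hypothetical parameter vector and objective value:

    import numpy as np
    from pypesto.visualize import create_references

    refs = create_references(
        x=np.array([1.0, 2.0]), fval=0.5, legend='reference')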

pypesto.visualize.delete_nan_inf(fvals: ndarray, x: Optional[ndarray] = None, xdim: Optional[int] = 1, magnitude_bound: Optional[float] = inf) Tuple[ndarray, ndarray][source]

Delete nan and inf values in fvals.

If parameters ‘x’ are passed, the corresponding entries are deleted as well.

Parameters:
  • x – array of parameters

  • fvals – array of fval

  • xdim – dimension of x, in case x dimension cannot be inferred

  • magnitude_bound – any values with a magnitude (absolute value) larger than the magnitude_bound are also deleted

Returns:

  • x – array of parameters without nan or inf

  • fvals – array of fval without nan or inf
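
A minimal sketch with hypothetical data; per the documented return order, the cleaned parameters are returned first:

    import numpy as np
    from pypesto.visualize import delete_nan_inf

    fvals = np.array([1.0, np.nan, np.inf, 2.0])
    x = np.random.rand(4, 3)  # hypothetical parameter vectors
    x_clean, fvals_clean = delete_nan_inf(fvals, x)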

pypesto.visualize.ensemble_crosstab_scatter_lowlevel(dataset: ndarray, component_labels: Optional[Sequence[str]] = None, **kwargs)[source]

Plot cross-classification table of scatter plots for different coordinates.

Lowlevel routine for multiple UMAP and PCA plots, but can also be used to visualize, e.g., parameter traces across optimizer runs.

Parameters:
  • dataset – array of data points to be shown as scatter plot

  • component_labels – labels for the x-axes and the y-axes

Returns:

A dictionary of plot axes.

Return type:

axs

pypesto.visualize.ensemble_identifiability(ensemble: Ensemble, ax: Optional[Axes] = None, size: Optional[Tuple[float]] = (12, 6))[source]

Visualize identifiability of a parameter ensemble.

Plot an overview of how many parameters hit the parameter bounds, based on an ensemble of parameters. Confidence intervals/credible ranges are computed via the ensemble mean plus/minus 1 standard deviation. This highlevel routine expects an ensemble object as input.

Parameters:
  • ensemble – ensemble of parameter vectors (from pypesto.ensemble)

  • ax – Axes object to use.

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

Returns:

ax – The plot axes.

Return type:

matplotlib.Axes

pypesto.visualize.ensemble_scatter_lowlevel(dataset, ax: Optional[Axes] = None, size: Optional[Tuple[float]] = (12, 6), x_label: str = 'component 1', y_label: str = 'component 2', color_by: Optional[Sequence[float]] = None, color_map: str = 'viridis', background_color: Tuple[float, float, float, float] = (0.0, 0.0, 0.0, 1.0), marker_type: str = '.', scatter_size: float = 0.5, invert_scatter_order: bool = False)[source]

Create a scatter plot.

Parameters:
  • dataset – array of data points in reduced dimension

  • ax – Axes object to use.

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • x_label – The x-axis label

  • y_label – The y-axis label

  • color_by – A sequence/list of floats, which specify the color in the colormap

  • color_map – A colormap name known to pyplot

  • background_color – Background color of the axes object (defaults to black)

  • marker_type – Type of plotted markers

  • scatter_size – Size of plotted markers

  • invert_scatter_order – Specifies the order of plotting the scatter points, can be important in case of overplotting

Returns:

ax – The plot axes.

Return type:

matplotlib.Axes

pypesto.visualize.optimization_run_properties_one_plot(results: Result, properties_to_plot: Optional[List[str]] = None, size: Tuple[float, float] = (18.5, 10.5), start_indices: Optional[Union[int, Iterable[int]]] = None, colors: Optional[Union[List[float], List[List[float]]]] = None, legends: Optional[Union[str, List[str]]] = None, plot_type: str = 'line') Axes[source]

Plot stats for all properties specified in properties_to_plot in one plot.

Parameters:
  • results – Optimization result obtained by ‘optimize.py’ or list of those

  • properties_to_plot – Optimization run properties that should be plotted

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • start_indices – List of integers specifying the multistarts to be plotted or int specifying up to which start index should be plotted

  • colors – List of RGBA colors (one color per property in properties_to_plot), or single RGBA color. If not set and one result, clustering is done and colors are assigned automatically

  • legends – Labels, one label per optimization property

  • plot_type – Specifies plot type. Possible values: ‘line’ and ‘hist’

Returns:

The plot axes.

Return type:

ax

Examples

    optimization_run_properties_one_plot(
        result1, properties_to_plot=['time'], colors=[.5, .9, .9, .3])

    optimization_run_properties_one_plot(
        result1, properties_to_plot=['time', 'n_grad'],
        colors=[[.5, .9, .9, .3], [.2, .1, .9, .5]])

pypesto.visualize.optimization_run_properties_per_multistart(results: Union[Result, Sequence[Result]], properties_to_plot: Optional[List[str]] = None, size: Tuple[float, float] = (18.5, 10.5), start_indices: Optional[Union[int, Iterable[int]]] = None, colors: Optional[Union[List[float], List[List[float]]]] = None, legends: Optional[Union[str, List[str]]] = None, plot_type: str = 'line') Dict[str, Axes][source]

One plot per optimization property in properties_to_plot.

Parameters:
  • results – Optimization result obtained by ‘optimize.py’ or list of those

  • properties_to_plot – Optimization run properties that should be plotted

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • start_indices – List of integers specifying the multistarts to be plotted or int specifying up to which start index should be plotted

  • colors – List of RGBA colors (one color per result in results), or single RGBA color. If not set and one result, clustering is done and colors are assigned automatically

  • legends – Labels for line plots, one label per result object

  • plot_type – Specifies plot type. Possible values: ‘line’ and ‘hist’

Returns:

ax – The plot axes.

Examples

    optimization_run_properties_per_multistart(
        result1, properties_to_plot=['time'], colors=[.5, .9, .9, .3])

    optimization_run_properties_per_multistart(
        [result1, result2], properties_to_plot=['time'],
        colors=[[.5, .9, .9, .3], [.2, .1, .9, .5]])

    optimization_run_properties_per_multistart(
        result1, properties_to_plot=['time', 'n_grad'], colors=[.5, .9, .9, .3])

    optimization_run_properties_per_multistart(
        [result1, result2], properties_to_plot=['time', 'n_fval'],
        colors=[[.5, .9, .9, .3], [.2, .1, .9, .5]])

pypesto.visualize.optimization_run_property_per_multistart(results: Union[Result, Sequence[Result]], opt_run_property: str, axes: Optional[Axes] = None, size: Tuple[float, float] = (18.5, 10.5), start_indices: Optional[Union[int, Iterable[int]]] = None, colors: Optional[Union[List[float], List[List[float]]]] = None, legends: Optional[Union[str, List[str]]] = None, plot_type: str = 'line') Axes[source]

Plot stats for an optimization run property specified by opt_run_property.

It is possible to plot a histogram or a line plot. In a line plot, the x axis shows the multistart indices, ordered with respect to the function value; the y axis shows the value of the corresponding property for each multistart.

Parameters:
  • opt_run_property – optimization run property to plot. One of the ‘time’, ‘n_fval’, ‘n_grad’, ‘n_hess’, ‘n_res’, ‘n_sres’

  • results – Optimization result obtained by ‘optimize.py’ or list of those

  • axes – Axes object to use

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • start_indices – List of integers specifying the multistarts to be plotted or int specifying up to which start index should be plotted

  • colors – List of RGBA colors (one color per result in results), or single RGBA color. If not set and one result, clustering is done and colors are assigned automatically

  • legends – Labels for line plots, one label per result object

  • plot_type – Specifies plot type. Possible values: ‘line’, ‘hist’, ‘both’

Returns:

The plot axes.

Return type:

axes

pypesto.visualize.optimization_scatter(result: Result, parameter_indices: Union[str, Sequence[int]] = 'free_only', start_indices: Optional[Union[int, Iterable[int]]] = None, diag_kind: str = 'kde', suptitle: Optional[str] = None, size: Optional[Tuple[float, float]] = None, show_bounds: bool = False)[source]

Plot a scatter plot of all pairs of parameters for the given starts.

Parameters:
  • result – Optimization result obtained by ‘optimize.py’.

  • parameter_indices – List of integers specifying the parameters to be considered.

  • start_indices – List of integers specifying the multistarts to be plotted or int specifying up to which start index should be plotted.

  • diag_kind – Visualization mode for marginal densities {‘auto’, ‘hist’, ‘kde’, None}.

  • suptitle – Title of the plot.

  • size – Size of the plot.

  • show_bounds – Whether to show the parameter bounds.

Returns:

The plot axis.

Return type:

ax

pypesto.visualize.optimizer_convergence(result: Result, ax: Optional[Axes] = None, xscale: str = 'symlog', yscale: str = 'log', size: Tuple[float] = (18.5, 10.5)) Axes[source]

Visualize optimizer convergence to help spot convergence issues.

Scatter plot of function values and gradient values at the end of optimization. The optimizer exit message is encoded by color. Can help identify convergence issues in optimization and guide tolerance refinement, etc.

Parameters:
  • result – Optimization result obtained by ‘optimize.py’

  • ax – Axes object to use.

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • xscale – Scale for x-axis

  • yscale – Scale for y-axis

Returns:

ax – The plot axes.

Return type:

matplotlib.Axes

pypesto.visualize.optimizer_history(results: Union[Result, List[Result]], ax: Optional[Axes] = None, size: Tuple = (18.5, 10.5), trace_x: str = 'steps', trace_y: str = 'fval', scale_y: str = 'log10', offset_y: Optional[float] = None, colors: Optional[Union[Tuple[float, float, float, float], List[Tuple[float, float, float, float]]]] = None, y_limits: Optional[Union[float, List[float], ndarray]] = None, start_indices: Optional[Union[int, List[int]]] = None, reference: Optional[Union[ReferencePoint, dict, List[ReferencePoint], List[dict]]] = None, legends: Optional[Union[str, List[str]]] = None) Axes[source]

Plot history of optimizer.

Can plot either the history of the cost function or of the gradient norm, over either the optimizer steps or the computation time.

Parameters:
  • results – Optimization result obtained by ‘optimize.py’ or list of those

  • ax – Axes object to use.

  • size (tuple, optional) – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • trace_x – What should be plotted on the x-axis? Possibilities: TRACE_X. Default: TRACE_X_STEPS.

  • trace_y – What should be plotted on the y-axis? Possibilities: TRACE_Y_FVAL, TRACE_Y_GRADNORM. Default: TRACE_Y_FVAL.

  • scale_y – May be logarithmic or linear (‘log10’ or ‘lin’)

  • offset_y – Offset for the y-axis values, as these are plotted on a log10-scale. Will be computed automatically if necessary.

  • colors (list, or RGBA, optional) – List of colors, or a single color, for plotting. If not set, clustering is done and colors are assigned automatically.

  • y_limits – maximum value to be plotted on the y-axis, or y-limits

  • start_indices – list of integers specifying the multistart to be plotted or int specifying up to which start index should be plotted

  • reference – List of reference points for optimization results, containing at least a function value fval

  • legends – Labels for line plots, one label per result object

Returns:

The plot axes.

Return type:

ax
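
A minimal sketch, assuming result is a pypesto.Result with recorded optimizer history:

    import pypesto.visualize as vis

    ax = vis.optimizer_history(result, trace_y='fval', scale_y='log10')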

pypesto.visualize.optimizer_history_lowlevel(vals: List[ndarray], scale_y: str = 'log10', colors: Optional[Union[Tuple[float, float, float, float], List[Tuple[float, float, float, float]]]] = None, ax: Optional[Axes] = None, size: Tuple = (18.5, 10.5), x_label: str = 'Optimizer steps', y_label: str = 'Objective value', legend_text: Optional[str] = None) Axes[source]

Plot optimizer history using list of numpy arrays.

Parameters:
  • vals – list of 2xn-arrays (x_values and y_values of the trace)

  • scale_y – May be logarithmic or linear (‘log10’ or ‘lin’)

  • colors (list, or RGBA, optional) – List of colors, or a single color, for plotting. If not set, clustering is done and colors are assigned automatically.

  • ax – Axes object to use.

  • size – see waterfall

  • x_label – label for x-axis

  • y_label – label for y-axis

  • legend_text – Label for line plots

Returns:

The plot axes.

Return type:

ax

pypesto.visualize.parameter_hist(result: Result, parameter_name: str, bins: Union[int, str] = 'auto', ax: Optional[matplotlib.Axes] = None, size: Optional[Tuple[float]] = (18.5, 10.5), color: Optional[List[float]] = None, start_indices: Optional[Union[int, List[int]]] = None)[source]

Plot parameter values as a histogram.

Parameters:
  • result – Optimization result obtained by ‘optimize.py’

  • parameter_name – The name of the parameter that should be plotted

  • bins – Specifies bins of the histogram

  • ax – Axes object to use

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • color – RGBA color.

  • start_indices – List of integers specifying the multistarts to be plotted or int specifying up to which start index should be plotted

Returns:

The plot axes.

Return type:

ax

pypesto.visualize.parameters(results: Union[Result, Sequence[Result]], ax: Optional[Axes] = None, parameter_indices: Union[str, Sequence[int]] = 'free_only', lb: Optional[Union[ndarray, List[float]]] = None, ub: Optional[Union[ndarray, List[float]]] = None, size: Optional[Tuple[float, float]] = None, reference: Optional[List[ReferencePoint]] = None, colors: Optional[Union[Tuple[float, float, float, float], List[Tuple[float, float, float, float]]]] = None, legends: Optional[Union[str, List[str]]] = None, balance_alpha: bool = True, start_indices: Optional[Union[int, Iterable[int]]] = None, scale_to_interval: Optional[Tuple[float, float]] = None) Axes[source]

Plot parameter values.

Parameters:
  • results – Optimization result obtained by ‘optimize.py’ or list of those

  • ax – Axes object to use.

  • parameter_indices – Specifies which parameters should be plotted. Allowed string values are ‘all’ (both fixed and free parameters will be plotted) and ‘free_only’ (only free parameters will be plotted)

  • lb – If not None, override result.problem.lb. Dimension either result.problem.dim or result.problem.dim_full.

  • ub – If not None, override result.problem.ub. Dimension either result.problem.dim or result.problem.dim_full.

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • reference – List of reference points for optimization results, containing at least a function value fval

  • colors – list of RGBA colors, or single RGBA color If not set, clustering is done and colors are assigned automatically

  • legends – Labels for line plots, one label per result object

  • balance_alpha – Flag indicating whether alpha for large clusters should be reduced to avoid overplotting (default: True)

  • start_indices – list of integers specifying the multistarts to be plotted or int specifying up to which start index should be plotted

  • scale_to_interval – Tuple of bounds to which to scale all parameter values and bounds, or None to use bounds as determined by lb, ub.

Returns:

The plot axes.

Return type:

ax
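
A minimal sketch, assuming result is a pypesto.Result from a multi-start optimization:

    import pypesto.visualize as vis

    # plot free parameters of the 20 best starts
    ax = vis.parameters(result, start_indices=20)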

pypesto.visualize.parameters_lowlevel(xs: Sequence[Union[ndarray, List[float]]], fvals: Union[ndarray, List[float]], lb: Optional[Union[ndarray, List[float]]] = None, ub: Optional[Union[ndarray, List[float]]] = None, x_labels: Optional[Iterable[str]] = None, ax: Optional[Axes] = None, size: Optional[Tuple[float, float]] = None, colors: Optional[Sequence[Union[ndarray, List[float]]]] = None, linestyle: str = '-', legend_text: Optional[str] = None, balance_alpha: bool = True) Axes[source]

Plot parameters plot using list of parameters.

Parameters:
  • xs – Including optimized parameters for each startpoint. Shape: (n_starts, dim).

  • fvals – Function values. Needed to assign cluster colors.

  • lb – The lower bounds.

  • ub – The upper bounds.

  • x_labels – Labels to be used for the parameters.

  • ax – Axes object to use.

  • size – see parameters

  • colors – One for each element in ‘fvals’.

  • linestyle – linestyle argument for parameter plot

  • legend_text – Label for line plots

  • balance_alpha – Flag indicating whether alpha for large clusters should be reduced to avoid overplotting (default: True)

Returns:

The plot axes.

Return type:

ax

pypesto.visualize.process_offset_y(offset_y: Optional[float], scale_y: str, min_val: float) float[source]

Compute offset for the y-axis, depending on user settings.

Parameters:
  • offset_y – value for offsetting the later plotted values, in order to ensure positivity if a semilog-plot is used

  • scale_y – Can be ‘lin’ or ‘log10’, specifying whether values should be plotted on linear or on log10-scale

  • min_val – Smallest value to be plotted

Returns:

offset_y – value for offsetting the later plotted values, in order to ensure positivity if a semilog-plot is used

Return type:

float

pypesto.visualize.process_result_list(results: Union[Result, List[Result]], colors=None, legends=None)[source]

Assign colors and legends to a list of results, check user provided lists.

Parameters:
  • results (list or pypesto.Result) – list of pypesto.Result objects or a single pypesto.Result

  • colors (list, optional) – list of RGBA colors

  • legends (str or list) – labels for line plots

Returns:

  • results (list of pypesto.Result) – list of pypesto.Result objects

  • colors (list of RGBA) – One for each element in ‘results’.

  • legends (list of str) – labels for line plots

pypesto.visualize.process_y_limits(ax, y_limits)[source]

Apply user specified limits of y-axis.

Parameters:
  • ax (matplotlib.Axes, optional) – Axes object to use.

  • y_limits (ndarray) – y_limits, minimum and maximum, for current axes object

Returns:

ax – Axes object to use.

Return type:

matplotlib.Axes, optional

pypesto.visualize.profile_cis(result: Result, confidence_level: float = 0.95, profile_indices: Optional[Sequence[int]] = None, profile_list: int = 0, color: Union[str, tuple] = 'C0', show_bounds: bool = False, ax: Optional[Axes] = None) Axes[source]

Plot approximate confidence intervals based on profiles.

Parameters:
  • result – The result object after profiling.

  • confidence_level – The confidence level in (0,1), which is translated to an approximate threshold assuming a chi2 distribution, using pypesto.profile.chi2_quantile_to_ratio.

  • profile_indices – List of integer values specifying which profiles should be plotted. Defaults to the indices for which profiles were generated in profile list profile_list.

  • profile_list – Index of the profile list to be used.

  • color – Main plot color.

  • show_bounds – Whether to show, and extend the plot to, the lower and upper bounds.

  • ax – Axes object to use. Default: Create a new one.

pypesto.visualize.profile_lowlevel(fvals, ax=None, size: Tuple[float, float] = (18.5, 6.5), color=None, legend_text: Optional[str] = None, show_bounds: bool = False, lb: Optional[float] = None, ub: Optional[float] = None)[source]

Lowlevel routine for plotting one profile, working with a numpy array only.

Parameters:
  • fvals (numeric list or array) – Values to plot.

  • ax (matplotlib.Axes, optional) – Axes object to use.

  • size (tuple, optional) – Figure size (width, height) in inches. Is only applied when no ax object is specified.

  • color (RGBA, optional) – Color for profiles in plot.

  • legend_text (str) – Label for line plots.

  • show_bounds – Whether to show, and extend the plot to, the lower and upper bounds.

  • lb – Lower bound.

  • ub – Upper bound.

Returns:

ax – The plot axes.

Return type:

matplotlib.Axes

pypesto.visualize.profiles(results: Union[Result, Sequence[Result]], ax=None, profile_indices: Optional[Sequence[int]] = None, size: Sequence[float] = (18.5, 6.5), reference: Optional[Union[ReferencePoint, Sequence[ReferencePoint]]] = None, colors=None, legends: Optional[Sequence[str]] = None, x_labels: Optional[Sequence[str]] = None, profile_list_ids: Union[int, Sequence[int]] = 0, ratio_min: float = 0.0, show_bounds: bool = False)[source]

Plot classical 1D profile plot.

Using the posterior, e.g. a Gaussian-like profile.

Parameters:
  • results (list or pypesto.Result) – List of or single pypesto.Result after profiling.

  • ax (list of matplotlib.Axes, optional) – List of axes objects to use.

  • profile_indices (list of integer values) – List of integer values specifying which profiles should be plotted.

  • size (tuple, optional) – Figure size (width, height) in inches. Is only applied when no ax object is specified.

  • reference (list, optional) – List of reference points for optimization results, containing at least a function value fval.

  • colors (list, or RGBA, optional) – List of colors, or single color.

  • legends (list or str, optional) – Labels for line plots, one label per result object.

  • x_labels (list of str) – Labels for parameter value axes (e.g. parameter names).

  • profile_list_ids (int or list of ints, optional) – Index or list of indices of the profile lists to be used for profiling.

  • ratio_min – Minimum ratio below which to cut off.

  • show_bounds – Whether to show, and extend the plot to, the lower and upper bounds.

Returns:

ax – The plot axes.

Return type:

matplotlib.Axes

pypesto.visualize.profiles_lowlevel(fvals, ax=None, size: Tuple[float, float] = (18.5, 6.5), color=None, legend_text: Optional[str] = None, x_labels=None, show_bounds: bool = False, lb_full=None, ub_full=None)[source]

Lowlevel routine for profile plotting.

Works with a list of arrays only, opening multiple axes objects if needed.

Parameters:
  • fvals (numeric list or array) – Values to plot.

  • ax (list of matplotlib.Axes, optional) – List of axes object to use.

  • size (tuple, optional) – Figure size (width, height) in inches. Is only applied when no ax object is specified.

  • color (RGBA, optional) – Color for profiles in plot.

  • legend_text (List[str]) – Label for line plots.

  • show_bounds – Whether to show, and extend the plot to, the lower and upper bounds.

  • lb_full – Lower bound.

  • ub_full – Upper bound.

Returns:

ax – The plot axes.

Return type:

matplotlib.Axes

pypesto.visualize.projection_scatter_pca(pca_coordinates: ndarray, components: Sequence[int] = (0, 1), **kwargs)[source]

Plot a scatter plot for PCA coordinates.

Creates either one or multiple scatter plots, depending on the number of coordinates passed to it.

Parameters:
  • pca_coordinates – array of pca coordinates (returned as first output by the routine get_pca_representation) to be shown as scatter plot

  • components – Components to be plotted (corresponds to columns of pca_coordinates)

Returns:

Either one axes object, or a dictionary of plot axes (depending on the number of coordinates passed)

Return type:

axs

pypesto.visualize.projection_scatter_umap(umap_coordinates: ndarray, components: Sequence[int] = (0, 1), **kwargs)[source]

Plot scatter plots for UMAP coordinates.

Creates either one or multiple scatter plots, depending on the number of coordinates passed to it.

Parameters:
  • umap_coordinates – array of umap coordinates (returned as first output by the routine get_umap_representation) to be shown as scatter plot

  • components – Components to be plotted (corresponds to columns of umap_coordinates)

Returns:

Either one axes object, or a dictionary of plot axes (depending on the number of coordinates passed)

Return type:

axs

pypesto.visualize.projection_scatter_umap_original(umap_object: UmapTypeObject, color_by: Sequence[float] = None, components: Sequence[int] = (0, 1), **kwargs)[source]

See projection_scatter_umap for more documentation.

Wrapper around umap.plot.points. Similar to projection_scatter_umap, but uses the original plotting routine from umap.plot.

Parameters:
  • umap_object – umap object (returned as second output by get_umap_representation) to be shown as scatter plot

  • color_by – A sequence/list of floats, which specify the color in the colormap

  • components – Components to be plotted (corresponds to columns of umap_coordinates)

Returns:

ax – The plot axes.

Return type:

matplotlib.Axes

pypesto.visualize.sampling_1d_marginals(result: Result, i_chain: int = 0, par_indices: Optional[Sequence[int]] = None, stepsize: int = 1, plot_type: str = 'both', bw_method: str = 'scott', suptitle: Optional[str] = None, size: Optional[Tuple[float, float]] = None)[source]

Plot marginals.

Parameters:
  • result – The pyPESTO result object with filled sample result.

  • i_chain – Which chain to plot. Default: First chain.

  • par_indices (list of integer values) – List of integer values specifying which parameters to plot. Default: All parameters are shown.

  • stepsize – Only one in stepsize values is plotted.

  • plot_type ({'hist'|'kde'|'both'}) – Specify whether to plot a histogram (‘hist’), a kernel density estimate (‘kde’), or both (‘both’).

  • bw_method ({'scott', 'silverman' | scalar | pair of scalars}) – Kernel bandwidth method.

  • suptitle – Figure super title.

  • size – Figure size in inches.

Returns:

The plot axes.

Return type:

ax

pypesto.visualize.sampling_fval_traces(result: Result, i_chain: int = 0, full_trace: bool = False, stepsize: int = 1, title: Optional[str] = None, size: Optional[Tuple[float, float]] = None, ax: Optional[Axes] = None)[source]

Plot log-posterior (=function value) over iterations.

Parameters:
  • result – The pyPESTO result object with filled sample result.

  • i_chain – Which chain to plot. Default: First chain.

  • full_trace – Plot the full trace including warm up. Default: False.

  • stepsize – Only one in stepsize values is plotted.

  • title – Axes title.

  • size (ndarray) – Figure size in inches.

  • ax – Axes object to use.

Returns:

The plot axes.

Return type:

ax

pypesto.visualize.sampling_parameter_cis(result: Result, alpha: Optional[Sequence[int]] = None, step: float = 0.05, show_median: bool = True, title: Optional[str] = None, size: Optional[Tuple[float, float]] = None, ax: Optional[Axes] = None) Axes[source]

Plot MCMC-based parameter credibility intervals.

Parameters:
  • result – The pyPESTO result object with filled sample result.

  • alpha – List of lower tail probabilities, defaults to 95% interval.

  • step – Height of boxes for projectile plot, defaults to 0.05.

  • show_median – Plot the median of the MCMC chain. Default: True.

  • title – Axes title.

  • size (ndarray) – Figure size in inches.

  • ax – Axes object to use.

Returns:

The plot axes.

Return type:

ax

pypesto.visualize.sampling_parameter_traces(result: Result, i_chain: int = 0, par_indices: Optional[Sequence[int]] = None, full_trace: bool = False, stepsize: int = 1, use_problem_bounds: bool = True, suptitle: Optional[str] = None, size: Optional[Tuple[float, float]] = None, ax: Optional[Axes] = None)[source]

Plot parameter values over iterations.

Parameters:
  • result – The pyPESTO result object with filled sample result.

  • i_chain – Which chain to plot. Default: First chain.

  • par_indices (list of integer values) – List of integer values specifying which parameters to plot. Default: All parameters are shown.

  • full_trace – Plot the full trace including warm up. Default: False.

  • stepsize – Only one in stepsize values is plotted.

  • use_problem_bounds – Defines if the y-limits shall be the lower and upper bounds of parameter estimation problem.

  • suptitle – Figure suptitle.

  • size – Figure size in inches.

  • ax – Axes object to use.

Returns:

The plot axes.

Return type:

ax

pypesto.visualize.sampling_prediction_trajectories(ensemble_prediction: EnsemblePrediction, levels: Union[float, Sequence[float]], title: Optional[str] = None, size: Optional[Tuple[float, float]] = None, axes: Optional[Axes] = None, labels: Optional[Dict[str, str]] = None, axis_label_padding: int = 50, groupby: str = 'condition', condition_gap: float = 0.01, condition_ids: Optional[Sequence[str]] = None, output_ids: Optional[Sequence[str]] = None, weighting: bool = False, reverse_opacities: bool = False, average: str = 'median', add_sd: bool = False, measurement_df: Optional[DataFrame] = None) Axes[source]

Visualize prediction trajectory of an EnsemblePrediction.

Plot MCMC-based prediction credibility intervals for the model states or outputs. One or various credibility levels can be depicted. Plots are grouped by condition.

Parameters:
  • ensemble_prediction – The ensemble prediction.

  • levels – Credibility levels, e.g. [95] for a 95% credibility interval. See the _get_level_percentiles() method for a description of how these levels are handled, and current limitations.

  • title – Axes title.

  • size (ndarray) – Figure size in inches.

  • axes – Axes object to use.

  • labels – Keys should be ensemble output IDs, values should be the desired label for that output. Defaults to output IDs.

  • axis_label_padding – Pixels between axis labels and plots.

  • groupby – Group plots by pypesto.C.OUTPUT or pypesto.C.CONDITION.

  • condition_gap – Gap between conditions when groupby == pypesto.C.CONDITION.

  • condition_ids – If provided, only data for the provided condition IDs will be plotted.

  • output_ids – If provided, only data for the provided output IDs will be plotted.

  • weighting – Whether weights should be used for trajectory.

  • reverse_opacities – Whether to reverse the opacities that are assigned to different levels.

  • average – The ID of the statistic that will be plotted as the average (e.g., MEDIAN or MEAN).

  • add_sd – Whether to add the standard deviation of the predictions to the plot.

  • measurement_df – Plot measurement data. NB: This should take the form of a PEtab measurements table, and the observableId column should correspond to the output IDs in the ensemble prediction.

Returns:

The plot axes.

Return type:

axes

pypesto.visualize.sampling_scatter(result: Result, i_chain: int = 0, stepsize: int = 1, suptitle: Optional[str] = None, diag_kind: str = 'kde', size: Optional[Tuple[float, float]] = None, show_bounds: bool = True)[source]

Parameter scatter plot.

Parameters:
  • result – The pyPESTO result object with filled sample result.

  • i_chain – Which chain to plot. Default: First chain.

  • stepsize – Only one in stepsize values is plotted.

  • suptitle – Figure super title.

  • diag_kind – Visualization mode for marginal densities {‘auto’, ‘hist’, ‘kde’, None}

  • size – Figure size in inches.

  • show_bounds – Whether to show, and extend the plot to, the lower and upper bounds.

Returns:

The plot axes.

Return type:

ax
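
A minimal sketch, assuming result holds a filled sample result:

    import pypesto.visualize as vis

    ax = vis.sampling_scatter(result, stepsize=10, diag_kind='kde')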

pypesto.visualize.waterfall(results: Union[Result, Sequence[Result]], ax: Optional[Axes] = None, size: Optional[Tuple[float]] = (18.5, 10.5), y_limits: Optional[Tuple[float]] = None, scale_y: Optional[str] = 'log10', offset_y: Optional[float] = None, start_indices: Optional[Union[Sequence[int], int]] = None, n_starts_to_zoom: int = 0, reference: Optional[Sequence[ReferencePoint]] = None, colors: Optional[Union[Tuple[float, float, float, float], Sequence[Tuple[float, float, float, float]]]] = None, legends: Optional[Union[Sequence[str], str]] = None, order_by_id: bool = False)[source]

Plot waterfall plot.

Parameters:
  • results – Optimization result obtained by ‘optimize.py’ or list of those

  • ax (matplotlib.Axes, optional) – Axes object to use.

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • y_limits (float or ndarray, optional) – Maximum value to be plotted on the y-axis, or y-limits

  • scale_y – May be logarithmic or linear (‘log10’ or ‘lin’)

  • offset_y – Offset for the y-axis, if it is supposed to be in log10-scale

  • start_indices – Integers specifying the multistart to be plotted or int specifying up to which start index should be plotted

  • n_starts_to_zoom – Number of best multistarts that should be zoomed in. Should be smaller than the total number of multistarts.

  • reference – Reference points for optimization results, containing at least a function value fval

  • colors – Colors or single color for plotting. If not set, clustering is done and colors are assigned automatically

  • legends – Labels for line plots, one label per result object

  • order_by_id – Function values corresponding to the same start ID will be located at the same x-axis position. Only applicable when a list of result objects are provided. Default behavior is to sort the function values of each result independently of other results.

Returns:

ax – The plot axes.

Return type:

matplotlib.Axes
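
A minimal sketch, assuming result is a pypesto.Result from a multi-start optimization:

    import pypesto.visualize as vis

    ax = vis.waterfall(result, scale_y='log10', n_starts_to_zoom=5)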

pypesto.visualize.waterfall_lowlevel(fvals, ax: Optional[Axes] = None, size: Optional[Tuple[float]] = (18.5, 10.5), scale_y: str = 'log10', offset_y: float = 0.0, colors: Optional[Union[Tuple[float, float, float, float], Sequence[Tuple[float, float, float, float]]]] = None, legend_text: Optional[str] = None)[source]

Plot waterfall plot using list of function values.

Parameters:
  • fvals (numeric list or array) – Function values to be plotted. None values indicate that the corresponding start index should be skipped.

  • ax (matplotlib.Axes) – Axes object to use.

  • size – Figure size (width, height) in inches. Is only applied when no ax object is specified

  • scale_y (str, optional) – May be logarithmic or linear (‘log10’ or ‘lin’)

  • offset_y – Offset for the y-axis, if it is supposed to be in log10-scale.

  • colors (list, or RGBA, optional) – List of colors, or a single color, for plotting. If not set, clustering is done and colors are assigned automatically.

  • legend_text – Label for line plots

Returns:

ax – The plot axes.

Return type:

matplotlib.Axes

Visualization of the model fit after optimization.

Currently only for PEtab problems.

pypesto.visualize.model_fit.time_trajectory_model(result: Union[Result, Sequence[Result]], problem: Optional[Problem] = None, timepoints: Optional[Union[ndarray, Sequence[ndarray]]] = None, n_timepoints: int = 1000, start_index: int = 0, state_ids: Optional[Union[str, Sequence[str]]] = None, state_names: Optional[Union[str, Sequence[str]]] = None, observable_ids: Optional[Union[str, Sequence[str]]] = None) Optional[Axes][source]

Visualize the time trajectory of the model with given timepoints.

It does this by calling the amici plotting routines.

Parameters:
  • result – The result object from optimization.

  • problem – A pypesto problem. Default is ‘None’ in which case result.problem is used. Needed in case the result is loaded from hdf5.

  • timepoints – Array of timepoints, at which the trajectory will be plotted.

  • n_timepoints – Number of timepoints to be plotted between 0 and the last measurement of the model. Only used if timepoints is None.

  • start_index – Index of the optimization run to be plotted. Defaults to the best start.

  • state_ids – Ids of the states to be plotted.

  • state_names – Names of the states to be plotted.

  • observable_ids – Ids of the observables to be plotted.

Returns:

matplotlib.axes.Axes object of the plot.

Return type:

axes

pypesto.visualize.model_fit.visualize_optimized_model_fit(petab_problem: Problem, result: Union[Result, Sequence[Result]], pypesto_problem: Problem, start_index: int = 0, return_dict: bool = False, unflattened_petab_problem: Optional[Problem] = None, **kwargs) Optional[Axes][source]

Visualize the optimized model fit of a PEtab problem.

The function uses the PEtab visualization specification of the petab_problem and visualizes the fit of the optimized parameters. A common additional argument is subplot_dir, which specifies the directory each subplot is saved to. Further keyword arguments are delegated to petab.visualize.plot_with_vis_spec(), see there for more information.

Parameters:
  • petab_problem – The petab.Problem that was optimized.

  • result – The result object from optimization.

  • start_index – The index of the optimization run in result.optimize_result.list. Ignored if problem_parameters is provided.

  • pypesto_problem – The pyPESTO problem.

  • return_dict – Return plot and simulation results as a dictionary.

  • unflattened_petab_problem – If the original PEtab problem is flattened, this can be passed to plot with the original unflattened problem.

  • kwargs – Passed to petab.visualize.plot_problem().

Returns:

  • axes – matplotlib.axes.Axes object of the created plot.

  • None – In case subplots are saved to file.