pypesto.optimize

Optimize

Multistart optimization with support for various optimizers.

class pypesto.optimize.CESSOptimizer(ess_init_args: List[Dict], max_iter: int, max_walltime_s: float = inf)[source]

Bases: object

Cooperative Enhanced Scatter Search Optimizer (CESS).

A cooperative scatter search algorithm based on [VillaverdeEge2012]. In short, multiple scatter search instances with different hyperparameters run in different threads/processes and exchange information. Some instances focus on diversification while others focus on intensification. Communication happens at fixed time intervals.

Proposed hyperparameter values in [VillaverdeEge2012]:

  • dim_refset: [0.5 n_parameters, 20 n_parameters]

  • local_n2: [0, 100]

  • balance: [0, 0.5]

  • n_diverse: [5 n_par, 20 n_par]

  • max_eval: such that \(\tau = \log_{10}(max\_eval / n\_par)\) is in [2.5, 3.5], with a recommended default value of 2.5 (e.g., for n_par = 10 and \(\tau = 2.5\), max_eval ≈ 3162).

[VillaverdeEge2012]

‘A cooperative strategy for parameter estimation in large scale systems biology models’, Villaverde, A.F., Egea, J.A. & Banga, J.R. BMC Syst Biol 2012, 6, 75. https://doi.org/10.1186/1752-0509-6-75

ess_init_args

List of argument dictionaries passed to ESSOptimizer.__init__(). The length of this list is the number of parallel ESS processes. Resource limits such as max_eval apply to a single CESS iteration, not to the full search.

max_iter

Maximum number of CESS iterations.

max_walltime_s

Maximum walltime in seconds. Will only be checked between local optimizations and other simulations, and thus may be exceeded by the duration of a local search. Defaults to no limit.

fx_best

The best objective value seen so far.

x_best

Parameter vector corresponding to fx_best.

starttime

Starting time of the most recent optimization.

i_iter

Current iteration number.

__init__(ess_init_args: List[Dict], max_iter: int, max_walltime_s: float = inf)[source]

Construct.

Parameters:
  • ess_init_args – List of argument dictionaries passed to ESSOptimizer.__init__(). The length of this list is the number of parallel ESS processes. Resource limits such as max_eval apply to a single CESS iteration, not to the full search.

  • max_iter – Maximum number of CESS iterations.

  • max_walltime_s – Maximum walltime in seconds. Will only be checked between local optimizations and other simulations, and thus may be exceeded by the duration of a local search. Defaults to no limit.

minimize(problem: Problem, startpoint_method: StartpointMethod) Result[source]

Minimize the given objective using CESS.

Parameters:
  • problem – Problem to run CESS on.

  • startpoint_method – Method for choosing starting points.
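Example

A minimal construction sketch; the hyperparameter values are illustrative choices within the [VillaverdeEge2012] ranges listed above, and problem and startpoint_method are assumed to be defined as for pypesto.optimize.minimize():

    from pypesto.optimize import CESSOptimizer

    # Three cooperating ESS instances: some favor diversification
    # (higher balance), others intensification (lower balance).
    ess_init_args = [
        {"dim_refset": 10, "local_n2": 10, "balance": 0.0},
        {"dim_refset": 15, "local_n2": 50, "balance": 0.25},
        {"dim_refset": 20, "local_n2": 100, "balance": 0.5},
    ]
    cess = CESSOptimizer(ess_init_args=ess_init_args, max_iter=5, max_walltime_s=3600)
    # result = cess.minimize(problem=problem, startpoint_method=startpoint_method)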

class pypesto.optimize.CmaesOptimizer(par_sigma0: float = 0.25, options: Dict | None = None)[source]

Bases: Optimizer

Global optimization using covariance matrix adaptation evolutionary search.

This optimizer interfaces the cma package (https://github.com/CMA-ES/pycma).

__init__(par_sigma0: float = 0.25, options: Dict | None = None)[source]

Initialize.

Parameters:
  • par_sigma0 – Scalar initial standard deviation in each coordinate; should be about 1/4 of the width of the search domain in which the optimum is expected.

  • options – Optimizer options that are directly passed on to cma.
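Example

A sketch of choosing par_sigma0 from the search-domain width, as recommended above; the bounds and option values are illustrative:

    import numpy as np
    from pypesto.optimize import CmaesOptimizer

    lb, ub = np.full(5, -2.0), np.full(5, 2.0)
    # ~1/4 of the (average) search domain width:
    par_sigma0 = 0.25 * float(np.mean(ub - lb))

    optimizer = CmaesOptimizer(
        par_sigma0=par_sigma0,
        options={"maxfevals": 10_000},  # forwarded to cma
    )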

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

class pypesto.optimize.DlibOptimizer(options: Dict | None = None)[source]

Bases: Optimizer

Use the Dlib toolbox for optimization.

__init__(options: Dict | None = None)[source]

Initialize base class.

check_x0_support(x_guesses: ndarray | None = None) bool[source]

Check whether optimizer supports x0.

get_default_options()[source]

Create default options specific for the optimizer.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

class pypesto.optimize.ESSOptimizer(*, max_iter: int = 10**100, dim_refset: int | None = None, local_n1: int = 1, local_n2: int = 10, balance: float = 0.5, local_optimizer: Optimizer | None = None, max_eval=inf, n_diverse: int | None = None, n_procs=None, n_threads=None, max_walltime_s=None)[source]

Bases: object

Enhanced Scatter Search (ESS) global optimization.

__init__(*, max_iter: int = 10**100, dim_refset: int | None = None, local_n1: int = 1, local_n2: int = 10, balance: float = 0.5, local_optimizer: Optimizer | None = None, max_eval=inf, n_diverse: int | None = None, n_procs=None, n_threads=None, max_walltime_s=None)[source]

Construct new ESS instance.

For plausible values of hyperparameters, see [VillaverdeEge2012].

Parameters:
  • dim_refset – Size of the ReferenceSet. Note that in every iteration at least dim_refset**2 - dim_refset function evaluations will occur.

  • max_iter – Maximum number of ESS iterations.

  • local_n1 – Minimum number of iterations before first local search.

  • local_n2 – Minimum number of iterations between consecutive local searches. At most one local search is performed per iteration.

  • local_optimizer – Local optimizer for refinement, or None to skip local searches.

  • n_diverse – Number of samples from which to construct the initial RefSet.

  • max_eval – Maximum number of objective function evaluations allowed. This criterion is only checked once per iteration, not after every objective evaluation, so the actual number of function evaluations may exceed this value.

  • max_walltime_s – Maximum walltime in seconds. Will only be checked between local optimizations and other simulations, and thus may be exceeded by the duration of a local search.

  • balance – Quality vs. diversity balancing factor in [0, 1]; 0 = only quality, 1 = only diversity.

  • n_procs – Number of parallel processes to use for parallel function evaluation. Mutually exclusive with n_threads.

  • n_threads – Number of parallel threads to use for parallel function evaluation. Mutually exclusive with n_procs.

minimize(problem: Problem | None = None, startpoint_method: StartpointMethod | None = None, refset: RefSet | None = None) Result[source]

Minimize the given objective.

Parameters:
  • problem – Problem to run ESS on.

  • startpoint_method – Method for choosing starting points.

  • refset – The initial RefSet or None to auto-generate.
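Example

A construction sketch with illustrative hyperparameters; problem and startpoint_method are assumed to be defined as for pypesto.optimize.minimize():

    from pypesto.optimize import ESSOptimizer, FidesOptimizer

    ess = ESSOptimizer(
        dim_refset=10,   # implies at least 10**2 - 10 = 90 evaluations per iteration
        local_n1=1,
        local_n2=10,
        balance=0.5,
        local_optimizer=FidesOptimizer(),  # or None to skip local searches
        max_walltime_s=600,
    )
    # result = ess.minimize(problem=problem, startpoint_method=startpoint_method)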

class pypesto.optimize.FidesOptimizer(hessian_update: fides.hessian_approximation.HessianApproximation | None = 'default', options: Dict | None = None, verbose: int | None = 20)[source]

Bases: Optimizer

Global/Local optimization using the trust region optimizer fides.

Package Homepage: https://fides-optimizer.readthedocs.io/en/latest

__init__(hessian_update: fides.hessian_approximation.HessianApproximation | None = 'default', options: Dict | None = None, verbose: int | None = 20)[source]

Initialize.

Parameters:
  • hessian_update – Hessian update strategy. If this is None, a hybrid approximation that switches from the Hessian (approximation) provided by problem.objective to a BFGS approximation will be used.

  • options – Optimizer options.

  • verbose – Verbosity level passed to the fides logger (default: 20, i.e. logging.INFO).
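Example

A sketch using a plain BFGS update from fides instead of the default hybrid strategy; the "maxiter" option key and the values shown are illustrative:

    import fides
    from pypesto.optimize import FidesOptimizer

    optimizer = FidesOptimizer(
        hessian_update=fides.BFGS(),
        options={"maxiter": 1000},  # forwarded to fides
        verbose=30,                 # logging.WARNING; default is 20 (logging.INFO)
    )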

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

class pypesto.optimize.IpoptOptimizer(options: Dict | None = None)[source]

Bases: Optimizer

Use Ipopt (https://pypi.org/project/ipopt/) for optimization.

__init__(options: Dict | None = None)[source]

Initialize.

Parameters:

options – Options are directly passed on to cyipopt.minimize_ipopt.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

class pypesto.optimize.NLoptOptimizer(method=None, local_method=None, options: Dict | None = None, local_options: Dict | None = None)[source]

Bases: Optimizer

Global/Local optimization using NLopt.

Package homepage: https://nlopt.readthedocs.io/en/latest/

__init__(method=None, local_method=None, options: Dict | None = None, local_options: Dict | None = None)[source]

Initialize.

Parameters:
  • method – Local or global Optimizer to use for minimization.

  • local_method – Local method to use in combination with the global optimizer (for the MLSL family of solvers) or to solve a subproblem (for the AUGLAG family of solvers).

  • options – Optimizer options. scipy option maxiter is automatically transformed into maxeval and takes precedence.

  • local_options – Optimizer options for the local method.
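Example

A sketch combining a global MLSL search with an L-BFGS local method via nlopt's algorithm constants; the option keys shown are illustrative assumptions:

    import nlopt
    from pypesto.optimize import NLoptOptimizer

    optimizer = NLoptOptimizer(
        method=nlopt.GD_MLSL_LDS,         # global multi-level single-linkage
        local_method=nlopt.LD_LBFGS,      # local gradient-based refinement
        options={"maxiter": 1000},        # translated to nlopt's maxeval
        local_options={"ftol_rel": 1e-8},
    )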

check_x0_support(x_guesses: ndarray | None = None) bool[source]

Check whether optimizer supports multiple initial guesses.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

class pypesto.optimize.OptimizeOptions(allow_failed_starts: bool = True, report_sres: bool = True, report_hess: bool = True, history_beats_optimizer: bool = True)[source]

Bases: dict

Options for the multistart optimization.

Parameters:
  • allow_failed_starts – Flag indicating whether exceptions thrown during the minimization process are tolerated.

  • report_sres – Flag indicating whether sres will be stored in the results object. Deactivating this option will reduce memory consumption for large-scale problems.

  • report_hess – Flag indicating whether hess will be stored in the results object. Deactivating this option will reduce memory consumption for large-scale problems.

  • history_beats_optimizer – Whether the optimal value recorded by pyPESTO in the history has priority over the optimal value reported by the optimizer (True) or not (False).
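Example

A sketch for large-scale problems where storing sres/hess is not needed; the resulting object is passed via the options argument of pypesto.optimize.minimize():

    from pypesto.optimize import OptimizeOptions

    options = OptimizeOptions(
        allow_failed_starts=True,  # do not abort the multistart on single failures
        report_sres=False,         # reduce memory footprint
        report_hess=False,
    )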

__init__(allow_failed_starts: bool = True, report_sres: bool = True, report_hess: bool = True, history_beats_optimizer: bool = True)[source]

static assert_instance(maybe_options: OptimizeOptions | Dict) OptimizeOptions[source]

Return a valid options object.

Parameters:

maybe_options (OptimizeOptions or dict) – Options to validate; a plain dict is converted to an OptimizeOptions instance.

class pypesto.optimize.Optimizer[source]

Bases: ABC

Optimizer base class, not functional on its own.

An optimizer takes a problem, and possibly a start point, and then performs an optimization. It returns an OptimizerResult.

__init__()[source]

Initialize base class.

check_x0_support(x_guesses: ndarray | None = None) bool[source]

Check whether optimizer supports x0, return boolean.

get_default_options()[source]

Create default options specific for the optimizer.

abstract is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

abstract minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

class pypesto.optimize.PyswarmOptimizer(options: Dict | None = None)[source]

Bases: Optimizer

Global optimization using pyswarm.

__init__(options: Dict | None = None)[source]

Initialize base class.

check_x0_support(x_guesses: ndarray | None = None) bool[source]

Check whether optimizer supports x0.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

class pypesto.optimize.PyswarmsOptimizer(par_popsize: float = 10, options: Dict | None = None)[source]

Bases: Optimizer

Global optimization using pyswarms.

Package homepage: https://pyswarms.readthedocs.io/en/latest/index.html

Parameters:
  • par_popsize – Number of particles in the swarm (default: 10).

  • options – Optimizer options that are directly passed on to pyswarms: c1 (cognitive parameter), c2 (social parameter), w (inertia parameter). Default values are (c1, c2, w) = (0.5, 0.3, 0.9).

Examples

Arguments that can be passed to options:

maxiter:

used to calculate the maximum number of function evaluations. Default: 1000
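A construction sketch; the coefficient values repeat the defaults stated above, and the particle count is illustrative:

    from pypesto.optimize import PyswarmsOptimizer

    optimizer = PyswarmsOptimizer(
        par_popsize=30,
        options={"c1": 0.5, "c2": 0.3, "w": 0.9, "maxiter": 1000},
    )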

__init__(par_popsize: float = 10, options: Dict | None = None)[source]

Initialize base class.

check_x0_support(x_guesses: ndarray | None = None) bool[source]

Check whether optimizer supports x0.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

class pypesto.optimize.SacessOptimizer(num_workers: int | None = None, ess_init_args: List[Dict[str, Any]] | None = None, max_walltime_s: float = inf, sacess_loglevel: int = 20, ess_loglevel: int = 30)[source]

Bases: object

SACESS optimizer.

A shared-memory-based implementation of the SaCeSS algorithm presented in [PenasGon2017]. Multiple processes (workers) run consecutive ESSs in parallel. After each ESS run, depending on the outcome, there is a chance of exchanging good parameters, and changing ESS hyperparameters to those of the most promising worker.

[PenasGon2017]

‘Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy’, David R. Penas, Patricia González, Jose A. Egea, Ramón Doallo and Julio R. Banga, BMC Bioinformatics 2017, 18, 52. https://doi.org/10.1186/s12859-016-1452-4

__init__(num_workers: int | None = None, ess_init_args: List[Dict[str, Any]] | None = None, max_walltime_s: float = inf, sacess_loglevel: int = 20, ess_loglevel: int = 30)[source]

Construct.

Parameters:
  • ess_init_args – List of argument dictionaries passed to ESSOptimizer.__init__(). Each entry corresponds to one worker process, i.e., the length of this list is the number of ESSs. Ideally, this list contains some more conservative and some more aggressive configurations. Resource limits such as max_eval apply to a single ESS iteration, not to the full search. Mutually exclusive with num_workers.

  • num_workers – Number of workers to be used. If this argument is given, (different) default ESS settings will be used for each worker. Mutually exclusive with ess_init_args.

  • max_walltime_s – Maximum walltime in seconds. Will only be checked between local optimizations and other simulations, and thus may be exceeded by the duration of a local search. Defaults to no limit.

  • ess_loglevel – Loglevel for ESS runs.

  • sacess_loglevel – Loglevel for SACESS runs.
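Example

A minimal sketch letting pypesto derive varied default ESS settings for each worker; problem and startpoint_method are assumed to be defined as for pypesto.optimize.minimize():

    from pypesto.optimize import SacessOptimizer

    sacess = SacessOptimizer(num_workers=4, max_walltime_s=3600)
    # result = sacess.minimize(problem=problem, startpoint_method=startpoint_method)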

minimize(problem: Problem, startpoint_method: StartpointMethod)[source]

Solve the given optimization problem.

class pypesto.optimize.ScipyDifferentialEvolutionOptimizer(options: Dict | None = None)[source]

Bases: Optimizer

Global optimization using scipy’s differential evolution optimizer.

Package homepage: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html

Parameters:

options – Optimizer options that are directly passed on to scipy’s optimizer.

Examples

Arguments that can be passed to options:

maxiter:

used to calculate the maximum number of function evaluations by maxfevals = (maxiter + 1) * popsize * len(x). Default: 100

popsize:

population size. Default: 15
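A construction sketch illustrating the evaluation budget formula above: with maxiter=100, popsize=15, and a 10-parameter problem, at most (100 + 1) * 15 * 10 = 15150 function evaluations occur:

    from pypesto.optimize import ScipyDifferentialEvolutionOptimizer

    optimizer = ScipyDifferentialEvolutionOptimizer(
        options={"maxiter": 100, "popsize": 15},
    )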

__init__(options: Dict | None = None)[source]

Initialize base class.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

class pypesto.optimize.ScipyOptimizer(method: str = 'L-BFGS-B', tol: float | None = None, options: Dict | None = None)[source]

Bases: Optimizer

Use the SciPy optimizers.

Find details on the optimizer and configuration options at: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize
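Example

A construction sketch; tol and the options dict are passed through to scipy.optimize.minimize, and the values shown are illustrative:

    from pypesto.optimize import ScipyOptimizer

    optimizer = ScipyOptimizer(
        method="L-BFGS-B",
        tol=1e-9,
        options={"maxfun": 10_000},  # L-BFGS-B: cap on function evaluations
    )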

__init__(method: str = 'L-BFGS-B', tol: float | None = None, options: Dict | None = None)[source]

Initialize base class.

get_default_options()[source]

Create default options specific for the optimizer.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem: Problem, x0: ndarray, id: str, history_options: HistoryOptions | None = None, optimize_options: OptimizeOptions | None = None)[source]

pypesto.optimize.fill_result_from_history(result: OptimizerResult, optimizer_history: OptimizerHistory, optimize_options: OptimizeOptions | None = None) OptimizerResult[source]

Overwrite some values in the result object with values in the history.

Parameters:
  • result – Result as reported from the used optimizer.

  • optimizer_history – History of function values recorded by the objective.

  • optimize_options – Options specifying, e.g., how to override values.

Returns:

The in-place modified result.

Return type:

OptimizerResult

pypesto.optimize.minimize(problem: Problem, optimizer: Optimizer | None = None, n_starts: int = 100, ids: Iterable[str] | None = None, startpoint_method: StartpointMethod | Callable | bool | None = None, result: Result | None = None, engine: Engine | None = None, progress_bar: bool = True, options: OptimizeOptions | None = None, history_options: HistoryOptions | None = None, filename: str | Callable | None = None, overwrite: bool = False) Result[source]

Do multistart optimization.

Parameters:
  • problem – The problem to be solved.

  • optimizer – The optimizer to be used n_starts times.

  • n_starts – Number of starts of the optimizer.

  • ids – Ids assigned to the startpoints.

  • startpoint_method – Method for choosing start points. False means the optimizer does not require start points, e.g. for the ‘PyswarmOptimizer’.

  • result – A result object to append the optimization results to. For example, one might append more runs to a previous optimization. If None, a new object is created.

  • engine – Parallelization engine. Defaults to sequential execution on a SingleCoreEngine.

  • progress_bar – Whether to display a progress bar.

  • options – Various options applied to the multistart optimization.

  • history_options – Optimizer history options.

  • filename – Name of the hdf5 file where the result will be saved. Default is None, which deactivates automatic saving. If set to “Auto”, a file named year_month_day_optimization_result.hdf5 is generated automatically. Optionally a method, see docs for pypesto.store.auto.autosave.

  • overwrite – Whether to overwrite result/optimization in the autosave file if it already exists.

Returns:

Result object containing the results of all multistarts in result.optimize_result.

Return type:

Result
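Example

A self-contained multistart sketch on the Rosenbrock function; all settings are illustrative, and reading the best start via optimize_result.list is an assumption about the result API:

    import numpy as np
    import pypesto
    import pypesto.optimize as optimize
    from scipy.optimize import rosen, rosen_der

    # Objective with analytic gradient; 5-dimensional box-constrained problem.
    objective = pypesto.Objective(fun=rosen, grad=rosen_der)
    problem = pypesto.Problem(objective=objective, lb=-5 * np.ones(5), ub=5 * np.ones(5))

    result = optimize.minimize(
        problem=problem,
        optimizer=optimize.ScipyOptimizer(method="L-BFGS-B"),
        n_starts=20,
    )
    best = result.optimize_result.list[0]  # starts sorted by objective value
    print(best.fval, best.x)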

pypesto.optimize.optimization_result_from_history(filename: str, problem: Problem) Result[source]

Convert a saved hdf5 History to an optimization result.

Used for interrupted optimization runs.

Parameters:
  • filename – The name of the file in which the information is stored.

  • problem – Problem, needed to identify what parameters to accept.

Returns:

A result object in which the optimization result is constructed from the history, but missing the “Time, Message and Exitflag” keys.

pypesto.optimize.read_result_from_file(problem: Problem | None, history_options: HistoryOptions, identifier: str) OptimizerResult[source]

Fill an OptimizerResult from history.

Parameters:
  • problem – The problem to find optimal parameters for. If None, bounds will be assumed to be [-inf, inf] for checking for admissible points.

  • identifier – Multistart id.

  • history_options – Optimizer history options.

pypesto.optimize.read_results_from_file(problem: Problem, history_options: HistoryOptions, n_starts: int) Result[source]

Fill a Result from a set of histories.

Parameters:
  • problem – The problem to find optimal parameters for.

  • n_starts – Number of performed multistarts.

  • history_options – Optimizer history options.