pypesto.optimize

Optimize

Multistart optimization with support for various optimizers.

class pypesto.optimize.CmaOptimizer[source]

Bases: Optimizer

Global optimization using covariance matrix adaptation evolutionary search.

This optimizer interfaces the cma package (https://github.com/CMA-ES/pycma).

__init__(par_sigma0=0.25, options=None)[source]

Initialize.

Parameters:
  • par_sigma0 (float) – Scalar initial standard deviation in each coordinate; should be about 1/4 of the width of the search domain in which the optimum is expected.

  • options (dict) – Optimizer options that are directly passed on to cma.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization. Parameters: see Optimizer documentation.

Return type:

OptimizerResult

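A minimal usage sketch (a hedged illustration: assumes the cma package is installed; the Rosenbrock objective and bounds are arbitrary test choices, not part of pypesto.optimize):

    import numpy as np
    import scipy.optimize
    import pypesto
    import pypesto.optimize as optimize

    # illustrative 2-D Rosenbrock test problem
    objective = pypesto.Objective(fun=scipy.optimize.rosen)
    problem = pypesto.Problem(objective, lb=-5 * np.ones(2), ub=5 * np.ones(2))

    # initial step size; see the par_sigma0 guidance above
    optimizer = optimize.CmaOptimizer(par_sigma0=0.25)
    result = optimize.minimize(problem=problem, optimizer=optimizer, n_starts=3)
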
class pypesto.optimize.DlibOptimizer[source]

Bases: Optimizer

Use the Dlib toolbox for optimization.

__init__(options=None)[source]

Initialize base class.

Parameters:

options (dict) –

check_x0_support(x_guesses=None)[source]

Check whether optimizer supports x0.

Return type:

bool

Parameters:

x_guesses (ndarray) –

get_default_options()[source]

Create default options specific for the optimizer.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization. Parameters: see Optimizer documentation.

Return type:

OptimizerResult

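A minimal construction sketch (assumes the dlib Python package is installed at optimization time):

    import pypesto.optimize as optimize

    optimizer = optimize.DlibOptimizer()
    # per-optimizer defaults, e.g. as a template for custom options
    print(optimizer.get_default_options())
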
class pypesto.optimize.ESSOptimizer[source]

Bases: object

Enhanced Scatter Search (ESS) global optimization.

See papers on ESS [1][2], CESS [3], and saCeSS [4].

__init__(*, max_iter=None, dim_refset=None, local_n1=1, local_n2=10, balance=0.5, local_optimizer=None, max_eval=None, n_diverse=None, n_procs=None, n_threads=None, max_walltime_s=None, result_includes_refset=False)[source]

Construct new ESS instance.

For plausible values of hyperparameters, see Villaverde et al.[3].

Parameters:
  • dim_refset (int) – Size of the ReferenceSet. Note that in every iteration at least dim_refset**2 - dim_refset function evaluations will occur.

  • max_iter (int) – Maximum number of ESS iterations.

  • local_n1 (int) – Minimum number of iterations before first local search.

  • local_n2 (int) – Minimum number of iterations between consecutive local searches. At most one local search is performed in each iteration.

  • local_optimizer (Union[Optimizer, Callable[..., Optimizer]]) – Local optimizer for refinement, a callable that creates a pypesto.optimize.Optimizer, or None to skip local searches. In case of a callable, it will be called with the keyword arguments max_walltime_s and max_eval, which should be passed to the optimizer (if supported) to honor the overall budget. See SacessFidesFactory for an example.

  • n_diverse (int) – Number of samples to choose from to construct the initial RefSet

  • max_eval – Maximum number of objective function evaluations allowed. This criterion is only checked once per iteration, not after every objective evaluation, so the actual number of function evaluations may exceed this value.

  • max_walltime_s – Maximum walltime in seconds. Will only be checked between local optimizations and other simulations, and thus, may be exceeded by the duration of a local search.

  • balance (float) – Quality vs diversity balancing factor [0, 1]; 0 = only quality; 1 = only diversity

  • n_procs – Number of parallel processes to use for parallel function evaluation. Mutually exclusive with n_threads.

  • n_threads – Number of parallel threads to use for parallel function evaluation. Mutually exclusive with n_procs.

  • history – History of the best values/parameters found so far. (Monotonically decreasing objective values.)

  • result_includes_refset (bool) – Whether the minimize() result should include the final RefSet, or just the local search results and the overall best parameters.

minimize(problem=None, startpoint_method=None, refset=None)[source]

Minimize the given objective.

Parameters:
  • problem (Problem) – Problem to run ESS on.

  • startpoint_method (StartpointMethod) – Method for choosing starting points. Deprecated. Use problem.startpoint_method instead.

  • refset (Optional[RefSet]) – The initial RefSet or None to auto-generate.

Return type:

Result
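
A usage sketch under stated assumptions: the Rosenbrock problem is a toy example, the hyperparameter values are illustrative only (see Villaverde et al. [3] for guidance), and the local optimizer requires the fides package and may be omitted:

    import numpy as np
    import scipy.optimize
    import pypesto
    import pypesto.optimize as optimize

    objective = pypesto.Objective(fun=scipy.optimize.rosen)
    problem = pypesto.Problem(objective, lb=-5 * np.ones(2), ub=5 * np.ones(2))

    ess = optimize.ESSOptimizer(
        max_iter=20,     # stop after at most 20 ESS iterations
        dim_refset=10,   # ~ 10**2 - 10 = 90 function evaluations per iteration
        local_optimizer=optimize.FidesOptimizer(),  # optional local refinement
    )
    result = ess.minimize(problem=problem)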

class pypesto.optimize.FidesOptimizer[source]

Bases: Optimizer

Global/Local optimization using the trust region optimizer fides.

Package Homepage: https://fides-optimizer.readthedocs.io/en/latest

__init__(hessian_update='default', options=None, verbose=20)[source]

Initialize.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization. Parameters: see Optimizer documentation.

Return type:

OptimizerResult

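A construction sketch (assumes the fides package is installed; the 'maxiter' option is an illustrative fides option, consult the fides documentation for the full set):

    import logging

    import pypesto.optimize as optimize

    optimizer = optimize.FidesOptimizer(
        hessian_update="default",   # default Hessian approximation scheme
        options={"maxiter": 1000},  # illustrative fides option (assumption)
        verbose=logging.WARNING,    # logging level for fides output
    )
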
class pypesto.optimize.IpoptOptimizer[source]

Bases: Optimizer

Use IpOpt (https://pypi.org/project/ipopt/) for optimization.

__init__(options=None)[source]

Initialize.

Parameters:

options (dict) – Options are directly passed on to cyipopt.minimize_ipopt, except for the approx_grad option, which is handled separately.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization. Parameters: see Optimizer documentation.

Return type:

OptimizerResult

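A construction sketch (assumes cyipopt is installed; 'max_iter' is Ipopt's iteration-limit option and is used here as an illustrative assumption, consult the Ipopt documentation):

    import pypesto.optimize as optimize

    optimizer = optimize.IpoptOptimizer(options={"max_iter": 200})
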
class pypesto.optimize.NLoptOptimizer[source]

Bases: Optimizer

Global/Local optimization using NLopt.

Package homepage: https://nlopt.readthedocs.io/en/latest/

__init__(method=None, local_method=None, options=None, local_options=None)[source]

Initialize.

Parameters:
  • method – Local or global Optimizer to use for minimization.

  • local_method – Local method to use in combination with the global optimizer (for the MLSL family of solvers) or to solve a subproblem (for the AUGLAG family of solvers).

  • options (dict) – Optimizer options. scipy option maxiter is automatically transformed into maxeval and takes precedence.

  • local_options (dict) – Optimizer options for the local method

check_x0_support(x_guesses=None)[source]

Check whether optimizer supports multiple initial guesses.

Return type:

bool

Parameters:

x_guesses (ndarray) –

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization. Parameters: see Optimizer documentation.

Return type:

OptimizerResult

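A construction sketch (assumes the nlopt package is installed; the algorithm constants are taken from nlopt):

    import nlopt

    import pypesto.optimize as optimize

    # local gradient-based method
    local = optimize.NLoptOptimizer(method=nlopt.LD_LBFGS)

    # global MLSL method with a local method for its subproblems
    global_opt = optimize.NLoptOptimizer(
        method=nlopt.GD_MLSL_LDS,
        local_method=nlopt.LD_LBFGS,
        options={"maxeval": 1000},  # or scipy-style 'maxiter', which is translated
    )
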
class pypesto.optimize.OptimizeOptions[source]

Bases: dict

Options for the multistart optimization.

Parameters:
  • allow_failed_starts (bool) – Flag indicating whether we tolerate that exceptions are thrown during the minimization process.

  • report_sres (bool) – Flag indicating whether sres will be stored in the results object. Deactivating this option will improve memory consumption for large scale problems.

  • report_hess (bool) – Flag indicating whether hess will be stored in the results object. Deactivating this option will improve memory consumption for large scale problems.

  • history_beats_optimizer (bool) – Whether the optimal value recorded by pyPESTO in the history has priority over the optimal value reported by the optimizer (True) or not (False).

__init__(allow_failed_starts=True, report_sres=True, report_hess=True, history_beats_optimizer=True)[source]
Parameters:
  • allow_failed_starts (bool) –

  • report_sres (bool) –

  • report_hess (bool) –

  • history_beats_optimizer (bool) –

static assert_instance(maybe_options)[source]

Return a valid options object.

Parameters:

maybe_options (OptimizeOptions or dict) –

Return type:

OptimizeOptions
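
A usage sketch; plain dicts are validated and converted via assert_instance:

    import pypesto.optimize as optimize

    options = optimize.OptimizeOptions(allow_failed_starts=False, report_hess=False)

    # equivalent, starting from a plain dict
    options = optimize.OptimizeOptions.assert_instance(
        {"allow_failed_starts": False, "report_hess": False}
    )
    # then pass as: optimize.minimize(..., options=options)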

class pypesto.optimize.Optimizer[source]

Bases: ABC

Optimizer base class, not functional on its own.

An optimizer takes a problem, and possibly a start point, and then performs an optimization. It returns an OptimizerResult.

__init__()[source]

Initialize base class.

check_x0_support(x_guesses=None)[source]

Check whether optimizer supports x0, return boolean.

Return type:

bool

Parameters:

x_guesses (ndarray) –

get_default_options()[source]

Create default options specific for the optimizer.

abstract is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

abstract minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization.

Parameters:
  • problem (Problem) – The problem to find optimal parameters for.

  • x0 (ndarray) – The starting parameters.

  • id (str) – Multistart id.

  • history_options (HistoryOptions) – Optimizer history options.

  • optimize_options (OptimizeOptions) – Global optimization options.

Return type:

OptimizerResult
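
A purely illustrative, minimal subclass sketch showing the interface (the class name is hypothetical; a real optimizer would perform an actual search in minimize()):

    import pypesto.optimize as optimize
    from pypesto.result import OptimizerResult

    class EvaluateX0Optimizer(optimize.Optimizer):
        """Hypothetical optimizer that just evaluates the start point."""

        def is_least_squares(self):
            return False

        def minimize(self, problem, x0, id, history_options=None,
                     optimize_options=None):
            # evaluate the objective at the start point only (illustration)
            fval = problem.objective(x0)
            return OptimizerResult(id=id, x=x0, fval=fval)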

class pypesto.optimize.PyswarmOptimizer[source]

Bases: Optimizer

Global optimization using pyswarm.

__init__(options=None)[source]

Initialize base class.

Parameters:

options (dict) –

check_x0_support(x_guesses=None)[source]

Check whether optimizer supports x0.

Return type:

bool

Parameters:

x_guesses (ndarray) –

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization. Parameters: see Optimizer documentation.

Return type:

OptimizerResult

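A construction sketch (assumes the pyswarm package is installed; the option names 'swarmsize' and 'maxiter' follow pyswarm.pso and are illustrative assumptions):

    import pypesto.optimize as optimize

    optimizer = optimize.PyswarmOptimizer(options={"swarmsize": 30, "maxiter": 200})
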
class pypesto.optimize.PyswarmsOptimizer[source]

Bases: Optimizer

Global optimization using pyswarms.

Package homepage: https://pyswarms.readthedocs.io/en/latest/index.html

Parameters:
  • par_popsize (float) – number of particles in the swarm, default value 10

  • options (dict) – Optimizer options that are directly passed on to pyswarms: c1 (cognitive parameter), c2 (social parameter), w (inertia parameter). Default values: (c1, c2, w) = (0.5, 0.3, 0.9).

Examples

Arguments that can be passed to options:

maxiter:

used to calculate the maximal number of function evaluations. Default: 1000

__init__(par_popsize=10, options=None)[source]

Initialize base class.

check_x0_support(x_guesses=None)[source]

Check whether optimizer supports x0.

Return type:

bool

Parameters:

x_guesses (ndarray) –

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization. Parameters: see Optimizer documentation.

Return type:

OptimizerResult

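A construction sketch using the documented default swarm coefficients plus an illustrative maxiter:

    import pypesto.optimize as optimize

    optimizer = optimize.PyswarmsOptimizer(
        par_popsize=50,
        options={"c1": 0.5, "c2": 0.3, "w": 0.9, "maxiter": 200},
    )
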
class pypesto.optimize.SacessFidesFactory[source]

Bases: object

Factory for FidesOptimizer instances for use with SacessOptimizer.

__call__() will forward the walltime limit and function evaluation limit imposed on SacessOptimizer to FidesOptimizer. Besides that, default options are used.

__call__(max_walltime_s, max_eval)[source]

Create a FidesOptimizer instance.

Return type:

FidesOptimizer

Parameters:
  • max_walltime_s (int) –

  • max_eval (int) –

__init__(fides_options=None, fides_kwargs=None)[source]
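
A usage sketch (assumes fides is installed; the 'maxiter' fides option is illustrative). The factory is passed as local_optimizer so that walltime and evaluation limits propagate to each local search:

    import pypesto.optimize as optimize

    factory = optimize.SacessFidesFactory(fides_options={"maxiter": 1000})
    ess = optimize.ESSOptimizer(
        local_optimizer=factory,  # called with max_walltime_s and max_eval
        max_walltime_s=300,
    )
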
class pypesto.optimize.SacessOptimizer[source]

Bases: object

SACESS optimizer.

A shared-memory-based implementation of the SaCeSS algorithm presented in Penas et al.[4]. Multiple processes (workers) run enhanced scatter searches (ESSs) in parallel. After each ESS iteration, depending on the outcome, there is a chance of exchanging good parameters, and changing ESS hyperparameters to those of the most promising worker. See Penas et al.[4] for details.

SacessOptimizer can be used with or without a local optimizer, but it is highly recommended to use one.

histories

List of the histories of the best values/parameters found by each worker. (Monotonically decreasing objective values.) See pypesto.visualize.optimizer_history.sacess_history() for visualization.

__init__(num_workers=None, ess_init_args=None, max_walltime_s=inf, sacess_loglevel=20, ess_loglevel=30, tmpdir=None, mp_start_method='spawn')[source]

Construct.

Parameters:
  • ess_init_args (Optional[list[dict[str, Any]]]) – List of argument dictionaries passed to ESSOptimizer.__init__(). Each entry corresponds to one worker process. I.e., the length of this list is the number of ESSs. Ideally, this list contains some more conservative and some more aggressive configurations. Resource limits such as max_eval apply to a single CESS iteration, not to the full search. Mutually exclusive with num_workers. Recommended default settings can be obtained from get_default_ess_options().

  • num_workers (Optional[int]) – Number of workers to be used. If this argument is given, (different) default ESS settings will be used for each worker. Mutually exclusive with ess_init_args. See get_default_ess_options() for details on the default settings.

  • max_walltime_s (float) – Maximum walltime in seconds. It will only be checked between local optimizations and other simulations, and thus, may be exceeded by the duration of a local search. Defaults to no limit. Note that in order to impose the wall time limit also on the local optimizer, the user has to provide a wrapper function similar to SacessFidesFactory.__call__().

  • ess_loglevel (int) – Loglevel for ESS runs.

  • sacess_loglevel (int) – Loglevel for SACESS runs.

  • tmpdir (Union[Path, str]) – Directory for temporary files. This defaults to a directory in the current working directory named SacessOptimizerTemp-{random suffix}. When setting this option, make sure any optimizers running in parallel have a unique tmpdir.

  • mp_start_method (str) – The start method for the multiprocessing context. See multiprocessing for details.

minimize(problem, startpoint_method=None)[source]

Solve the given optimization problem.

Note that if this function is called from a multithreaded program (multiple threads running at the time of calling this function) and the multiprocessing start method is set to fork, there is a good chance of deadlocks. Postpone spawning threads until after calling minimize, or change the start method to spawn.

Parameters:
  • problem (Problem) – Minimization problem. Problem.startpoint_method() will be used to sample random points. SacessOptimizer will deal with non-evaluable points. Therefore, using pypesto.startpoint.CheckedStartpoints with check_fval=True or check_grad=True is not recommended since it would create significant overhead.

  • startpoint_method (StartpointMethod) – Method for choosing starting points. Deprecated. Use problem.startpoint_method instead.

Return type:

Result

Returns:

Result object with optimized parameters in pypesto.Result.optimize_result. Results are sorted by objective value. At least the best parameters are included; additional results may be included, but this is subject to change.
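
A usage sketch (the Rosenbrock problem, worker count, and walltime are illustrative). With the default spawn start method, the call should live under a __main__ guard:

    import numpy as np
    import scipy.optimize
    import pypesto
    import pypesto.optimize as optimize

    if __name__ == "__main__":
        objective = pypesto.Objective(fun=scipy.optimize.rosen)
        problem = pypesto.Problem(objective, lb=-5 * np.ones(10), ub=5 * np.ones(10))

        sacess = optimize.SacessOptimizer(num_workers=4, max_walltime_s=60)
        result = sacess.minimize(problem)
        print(result.optimize_result[0].fval)  # best objective value found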

class pypesto.optimize.ScipyDifferentialEvolutionOptimizer[source]

Bases: Optimizer

Global optimization using scipy’s differential evolution optimizer.

See: scipy.optimize.differential_evolution().

Parameters:

options (dict) – Optimizer options that are directly passed on to scipy’s optimizer.

Examples

Arguments that can be passed to options:

maxiter:

used to calculate the maximal number of function evaluations: maxfevals = (maxiter + 1) * popsize * len(x). Default: 100

popsize:

population size, default value 15

__init__(options=None)[source]

Initialize base class.

Parameters:

options (dict) –

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization.

See Optimizer.minimize().

Return type:

OptimizerResult

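A construction sketch using the options documented above:

    import pypesto.optimize as optimize

    optimizer = optimize.ScipyDifferentialEvolutionOptimizer(
        options={"maxiter": 100, "popsize": 15}
    )
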
class pypesto.optimize.ScipyOptimizer[source]

Bases: Optimizer

Use the SciPy optimizers.

Find details on the optimizer and configuration options at: scipy.optimize.minimize().

Note

Least-squares optimizers may face errors in the case of objective functions that are not continuously differentiable (e.g., Laplace priors).

__init__(method='L-BFGS-B', tol=None, options=None)[source]

Initialize base class.

get_default_options()[source]

Create default options specific for the optimizer.

is_least_squares()[source]

Check whether optimizer is a least squares optimizer.

minimize(problem, x0, id, history_options=None, optimize_options=None)[source]

Perform optimization. Parameters: see Optimizer documentation.

Return type:

OptimizerResult

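Construction sketches (method names are forwarded to scipy; 'ls_trf' selects a least-squares solver and assumes the objective provides residuals):

    import pypesto.optimize as optimize

    # gradient-based local optimization via scipy.optimize.minimize
    optimizer = optimize.ScipyOptimizer(method="L-BFGS-B", tol=1e-9)

    # least-squares mode via scipy.optimize.least_squares
    ls_optimizer = optimize.ScipyOptimizer(method="ls_trf")
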
pypesto.optimize.fill_result_from_history(result, optimizer_history, optimize_options=None)[source]

Overwrite some values in the result object with values in the history.

Parameters:
  • result – Result as reported by the optimizer used.

  • optimizer_history – History of function values recorded by the objective.

  • optimize_options – Options specifying, e.g., how to override values.

Return type:

OptimizerResult

Returns:

The in-place modified result.

pypesto.optimize.get_default_ess_options(num_workers, dim, local_optimizer=True)[source]

Get default ESS settings for (SA)CESS.

Returns settings for num_workers parallel scatter searches, combining more aggressive and more conservative configurations. Mainly intended for use with SacessOptimizer. For details on the different options, see keyword arguments of ESSOptimizer.__init__().

Setting appropriate values for n_threads and local_optimizer is left to the user. Defaults to single-threaded and no local optimizer.

Based on https://bitbucket.org/DavidPenas/sacess-library/src/508e7ac15579104731cf1f8c3969960c6e72b872/src/method_module_fortran/eSS/parallelscattersearchfunctions.f90#lines-929

Parameters:
  • num_workers – Number of configurations to return.

  • dim – Problem dimension (number of optimized parameters).

  • local_optimizer – The local optimizer to use (see the same argument in ESSOptimizer): a boolean indicating whether to use the default local optimizer (currently FidesOptimizer), an Optimizer instance, or a Callable returning an optimizer instance. The latter can be used to propagate walltime limits to the local optimizers. See SacessFidesFactory.__call__() for an example.

Return type:

list[dict]
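
A usage sketch feeding the returned configurations to SacessOptimizer (problem is assumed to be an existing pypesto.Problem; see minimize() below for constructing one):

    import pypesto.optimize as optimize

    # one configuration per worker
    ess_init_args = optimize.get_default_ess_options(num_workers=4, dim=problem.dim)
    sacess = optimize.SacessOptimizer(ess_init_args=ess_init_args)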

pypesto.optimize.minimize(problem, optimizer=None, n_starts=100, ids=None, startpoint_method=None, result=None, engine=None, progress_bar=None, options=None, history_options=None, filename=None, overwrite=False)[source]

Do multistart optimization.

Parameters:
  • problem (Problem) – The problem to be solved.

  • optimizer (Optimizer) – The optimizer to be used n_starts times.

  • n_starts (int) – Number of starts of the optimizer.

  • ids (Iterable[str]) – Ids assigned to the startpoints.

  • startpoint_method (Union[StartpointMethod, Callable, bool]) – Method for choosing start points. False means the optimizer does not require start points, e.g. for pypesto.optimize.PyswarmOptimizer. Deprecated. Use problem.startpoint_method instead.

  • result (Result) – A result object to append the optimization results to. For example, one might append more runs to a previous optimization. If None, a new object is created.

  • engine (Engine) – Parallelization engine. Defaults to sequential execution using pypesto.engine.SingleCoreEngine.

  • progress_bar (bool) – Whether to display a progress bar.

  • options (OptimizeOptions) – Various options applied to the multistart optimization.

  • history_options (HistoryOptions) – Optimizer history options.

  • filename (Union[str, Callable, None]) – Name of the hdf5 file where the result will be saved. Default is None, which deactivates automatic saving. If set to "Auto", a file named year_month_day_optimization_result.hdf5 is generated automatically. Optionally a method; see the docs for pypesto.store.auto.autosave().

  • overwrite (bool) – Whether to overwrite result/optimization in the autosave file if it already exists.

Return type:

Result

Returns:

Result object containing the results of all multistarts in result.optimize_result.
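
A complete multistart sketch (the Rosenbrock problem, engine, and settings are illustrative):

    import numpy as np
    import scipy.optimize
    import pypesto
    import pypesto.optimize as optimize
    from pypesto.engine import MultiProcessEngine

    objective = pypesto.Objective(
        fun=scipy.optimize.rosen, grad=scipy.optimize.rosen_der
    )
    problem = pypesto.Problem(objective, lb=-5 * np.ones(5), ub=5 * np.ones(5))

    result = optimize.minimize(
        problem=problem,
        optimizer=optimize.ScipyOptimizer(method="L-BFGS-B"),
        n_starts=20,
        engine=MultiProcessEngine(),  # parallel starts; default is sequential
    )
    best = result.optimize_result[0]  # results are sorted by objective value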

pypesto.optimize.optimization_result_from_history(filename, problem)[source]

Convert a saved hdf5 History to an optimization result.

Used for interrupted optimization runs.

Parameters:
  • filename (str) – The name of the file in which the information is stored.

  • problem (Problem) – Problem, needed to identify what parameters to accept.

Return type:

Result

Returns:

A result object in which the optimization result is constructed from the history, but missing the time, message, and exitflag fields.
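
A usage sketch (the file name is hypothetical; the history must have been written during the interrupted run via HistoryOptions with file storage):

    import pypesto.optimize as optimize

    # problem must match the problem of the interrupted run
    result = optimize.optimization_result_from_history(
        filename="history.hdf5", problem=problem
    )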

pypesto.optimize.read_result_from_file(problem, history_options, identifier)[source]

Fill an OptimizerResult from history.

Parameters:
  • problem (Optional[Problem]) – The problem to find optimal parameters for. If None, bounds will be assumed to be [-inf, inf] for checking for admissible points.

  • identifier (str) – Multistart id.

  • history_options (HistoryOptions) – Optimizer history options.

Return type:

OptimizerResult

pypesto.optimize.read_results_from_file(problem, history_options, n_starts)[source]

Fill a Result from a set of histories.

Parameters:
  • problem (Problem) – The problem to find optimal parameters for.

  • n_starts (int) – Number of performed multistarts.

  • history_options (HistoryOptions) – Optimizer history options.

Return type:

Result