Optimize
Multistart optimization with support for various optimizers.
class pypesto.optimize.CmaesOptimizer(par_sigma0: float = 0.25, options: Optional[Dict] = None)
Bases: pypesto.optimize.optimizer.Optimizer

Global optimization using CMA-ES. Package homepage: https://pypi.org/project/cma-es/

__init__(par_sigma0: float = 0.25, options: Optional[Dict] = None)
Parameters:
- par_sigma0 – Scalar, initial standard deviation in each coordinate; it should be about 1/4 of the search domain width (in which the optimum is expected to lie).
- options – Optimizer options that are directly passed on to cma.
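As a rough illustration of the par_sigma0 guideline above, one can derive an initial step size from box bounds. The helper below is purely illustrative and not part of the pypesto API; taking the smallest domain width is one possible choice for a scalar sigma0.

```python
# Illustrative helper (not pypesto code): derive an initial CMA-ES step
# size from box bounds, following the "about 1/4 of the search domain
# width" rule of thumb stated above.

def suggest_sigma0(lower, upper, fraction=0.25):
    """Suggest a scalar sigma0 as a fraction of the smallest domain width."""
    widths = [ub - lb for lb, ub in zip(lower, upper)]
    return fraction * min(widths)

sigma0 = suggest_sigma0([-2.0, -2.0], [2.0, 6.0])
# smallest width is 4.0, so sigma0 = 1.0
```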
class pypesto.optimize.DlibOptimizer(options: Optional[Dict] = None)
Bases: pypesto.optimize.optimizer.Optimizer

Use the Dlib toolbox for optimization.
class pypesto.optimize.FidesOptimizer(hessian_update: Optional[fides.HessianApproximation] = 'Hybrid', options: Optional[Dict] = None, verbose: Optional[int] = 20)
Bases: pypesto.optimize.optimizer.Optimizer

Global/local optimization using the trust-region optimizer fides. Package homepage: https://fides-optimizer.readthedocs.io/en/latest

__init__(hessian_update: Optional[fides.HessianApproximation] = 'Hybrid', options: Optional[Dict] = None, verbose: Optional[int] = 20)
Parameters:
- hessian_update – Hessian update strategy. If this is None, the Hessian (approximation) computed by problem.objective will be used.
- options – Optimizer options.
class pypesto.optimize.IpoptOptimizer(options: Optional[Dict] = None)
Bases: pypesto.optimize.optimizer.Optimizer

Use Ipopt (https://pypi.org/project/ipopt/) for optimization.
class pypesto.optimize.NLoptOptimizer(method=None, local_method=None, options: Optional[Dict] = None, local_options: Optional[Dict] = None)
Bases: pypesto.optimize.optimizer.Optimizer

Global/local optimization using NLopt. Package homepage: https://nlopt.readthedocs.io/en/latest/

__init__(method=None, local_method=None, options: Optional[Dict] = None, local_options: Optional[Dict] = None)
Parameters:
- method – Local or global optimizer to use for minimization.
- local_method – Local method to use in combination with the global optimizer (for the MLSL family of solvers) or to solve a subproblem (for the AUGLAG family of solvers).
- options – Optimizer options. The scipy option maxiter is automatically transformed into maxeval and takes precedence.
- local_options – Optimizer options for the local method.
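The note that the scipy-style option maxiter is transformed into maxeval and takes precedence can be sketched in a few lines. This is an illustrative stand-in, not pypesto's implementation, and the helper name is hypothetical; it assumes "takes precedence" means that maxiter overrides an existing maxeval entry.

```python
# Illustrative sketch (not pypesto code): translate a scipy-style
# 'maxiter' option into NLopt's 'maxeval', letting 'maxiter' override
# any existing 'maxeval' entry.

def translate_options(options):
    options = dict(options)  # do not mutate the caller's dict
    if 'maxiter' in options:
        options['maxeval'] = options.pop('maxiter')
    return options

translate_options({'maxiter': 1000, 'maxeval': 50})
# -> {'maxeval': 1000}
```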
class pypesto.optimize.OptimizeOptions(startpoint_resample: bool = False, allow_failed_starts: bool = True)
Bases: dict

Options for the multistart optimization.

Parameters:
- startpoint_resample – Flag indicating whether initial points should be resampled if function evaluation fails at the initial point.
- allow_failed_starts – Flag indicating whether exceptions thrown during the minimization process are tolerated.

__init__(startpoint_resample: bool = False, allow_failed_starts: bool = True)

static assert_instance(maybe_options: Union[pypesto.optimize.options.OptimizeOptions, Dict]) → pypesto.optimize.options.OptimizeOptions
Returns a valid options object.
Parameters:
- maybe_options – OptimizeOptions or dict.
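The assert_instance pattern — accept either an options object or a plain dict and normalize to the former — is a common idiom for dict-backed option classes. The following is a simplified stand-in, not the pypesto implementation:

```python
# Simplified stand-in for the OptimizeOptions pattern: a dict subclass
# with defaults, plus a static normalizer that accepts either an
# instance or a plain dict and returns a valid options object.

class Options(dict):
    def __init__(self, startpoint_resample=False, allow_failed_starts=True):
        super().__init__()
        self['startpoint_resample'] = startpoint_resample
        self['allow_failed_starts'] = allow_failed_starts

    @staticmethod
    def assert_instance(maybe_options):
        if isinstance(maybe_options, Options):
            return maybe_options
        return Options(**maybe_options)

opts = Options.assert_instance({'startpoint_resample': True})
# missing keys are filled with their defaults
```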
class pypesto.optimize.Optimizer
Bases: abc.ABC

The optimizer base class; it is not functional on its own. An optimizer takes a problem and possibly a start point, performs an optimization, and returns an OptimizerResult.
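The base-class contract — subclasses implement the actual minimization and return a standardized result — follows the usual abstract-base-class pattern. A minimal analogue (not the actual pypesto code; the toy subclass and its search strategy are invented for illustration):

```python
import abc

# Minimal analogue of the Optimizer base class: an abstract interface
# that concrete optimizers fill in; each run returns a standardized
# result mapping, in the spirit of OptimizerResult.

class Optimizer(abc.ABC):
    @abc.abstractmethod
    def minimize(self, objective, x0):
        """Run one optimization from start point x0."""

class GridFallback(Optimizer):  # hypothetical toy subclass
    def minimize(self, objective, x0):
        # evaluate the start point and its two neighbors, keep the best
        best = min((x0, x0 - 0.1, x0 + 0.1), key=objective)
        return {'x': best, 'fval': objective(best)}

res = GridFallback().minimize(lambda x: (x - 0.1) ** 2, 0.0)
# res['x'] == 0.1, res['fval'] == 0.0
```

Because Optimizer is abstract, instantiating it directly raises TypeError, which is exactly the "not functional on its own" behavior described above.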
class pypesto.optimize.OptimizerResult(id: Optional[str] = None, x: Optional[numpy.ndarray] = None, fval: Optional[float] = None, grad: Optional[numpy.ndarray] = None, hess: Optional[numpy.ndarray] = None, res: Optional[numpy.ndarray] = None, sres: Optional[numpy.ndarray] = None, n_fval: Optional[int] = None, n_grad: Optional[int] = None, n_hess: Optional[int] = None, n_res: Optional[int] = None, n_sres: Optional[int] = None, x0: Optional[numpy.ndarray] = None, fval0: Optional[float] = None, history: Optional[pypesto.objective.history.History] = None, exitflag: Optional[int] = None, time: Optional[float] = None, message: Optional[str] = None)
Bases: dict

The result of an optimizer run, used as a standardized return value to map from the individual result objects returned by the employed optimizers to the format understood by pypesto. Can be used like a dict.

Attributes:
- id – Id of the optimizer run. Usually the start index.
- x – The best found parameters.
- fval – The best found function value, fun(x).
- grad – The gradient at x.
- hess – The Hessian at x.
- res – The residuals at x.
- sres – The residual sensitivities at x.
- n_fval – Number of function evaluations.
- n_grad – Number of gradient evaluations.
- n_hess – Number of Hessian evaluations.
- n_res – Number of residual evaluations.
- n_sres – Number of residual sensitivity evaluations.
- x0 – The starting parameters.
- fval0 – The starting function value, fun(x0).
- history – Objective history.
- exitflag – The exit flag of the optimizer.
- time – Execution time.

Notes
Any field not supported by the optimizer is filled with None.

__init__(id: Optional[str] = None, x: Optional[numpy.ndarray] = None, fval: Optional[float] = None, grad: Optional[numpy.ndarray] = None, hess: Optional[numpy.ndarray] = None, res: Optional[numpy.ndarray] = None, sres: Optional[numpy.ndarray] = None, n_fval: Optional[int] = None, n_grad: Optional[int] = None, n_hess: Optional[int] = None, n_res: Optional[int] = None, n_sres: Optional[int] = None, x0: Optional[numpy.ndarray] = None, fval0: Optional[float] = None, history: Optional[pypesto.objective.history.History] = None, exitflag: Optional[int] = None, time: Optional[float] = None, message: Optional[str] = None)

update_to_full(problem: pypesto.problem.Problem) → None
Updates values to full vectors/matrices.
Parameters:
- problem – Problem which contains info about how to convert to full vectors or matrices.
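Conceptually, update_to_full maps quantities from the reduced (free-parameter) space back to the full parameter vector. The sketch below is not the actual implementation; it assumes the conversion simply re-inserts fixed parameter values at their original indices, which is the usual fixed-parameter convention.

```python
# Conceptual sketch of the reduced-to-full conversion: re-insert fixed
# parameter values into a reduced result vector at their full-vector
# indices.

def to_full(x_reduced, fixed):
    """fixed: dict mapping full-vector index -> fixed parameter value."""
    n_full = len(x_reduced) + len(fixed)
    x_full, free_values = [], iter(x_reduced)
    for i in range(n_full):
        x_full.append(fixed[i] if i in fixed else next(free_values))
    return x_full

to_full([1.0, 3.0], fixed={1: 2.0})
# -> [1.0, 2.0, 3.0]
```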
class pypesto.optimize.PyswarmOptimizer(options: Optional[Dict] = None)
Bases: pypesto.optimize.optimizer.Optimizer

Global optimization using pyswarm.
class pypesto.optimize.ScipyDifferentialEvolutionOptimizer(options: Optional[Dict] = None)
Bases: pypesto.optimize.optimizer.Optimizer

Global optimization using scipy's differential evolution optimizer. Package homepage: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html

Parameters:
- options – Optimizer options that are directly passed on to scipy's optimizer.

Examples
Arguments that can be passed to options:
- maxiter: used to calculate the maximal number of function evaluations via maxfevals = (maxiter + 1) * popsize * len(x). Default: 100.
- popsize: population size. Default: 15.
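The stated relation between maxiter and the evaluation budget can be checked directly; the helper name below is illustrative, not part of either API.

```python
# The documented budget rule for scipy's differential evolution:
# maxfevals = (maxiter + 1) * popsize * len(x)

def max_fevals(maxiter, popsize, n_par):
    return (maxiter + 1) * popsize * n_par

max_fevals(maxiter=100, popsize=15, n_par=2)
# the defaults above give (100 + 1) * 15 * 2 = 3030 evaluations
```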
class pypesto.optimize.ScipyOptimizer(method: str = 'L-BFGS-B', tol: float = 1e-09, options: Optional[Dict] = None)
Bases: pypesto.optimize.optimizer.Optimizer

Use the SciPy optimizers. Find details on the optimizers and configuration options at: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize
pypesto.optimize.minimize(problem: pypesto.problem.Problem, optimizer: Optional[pypesto.optimize.optimizer.Optimizer] = None, n_starts: int = 100, ids: Optional[Iterable[str]] = None, startpoint_method: Optional[Union[Callable, bool]] = None, result: Optional[pypesto.result.Result] = None, engine: Optional[pypesto.engine.base.Engine] = None, options: Optional[pypesto.optimize.options.OptimizeOptions] = None, history_options: Optional[pypesto.objective.history.HistoryOptions] = None) → pypesto.result.Result

This is the main function to call to perform multistart optimization.

Parameters:
- problem – The problem to be solved.
- optimizer – The optimizer to be used n_starts times.
- n_starts – Number of starts of the optimizer.
- ids – Ids assigned to the start points.
- startpoint_method – Method for choosing start points. False means the optimizer does not require start points, e.g. for the PyswarmOptimizer.
- result – A result object to append the optimization results to. For example, one might append more runs to a previous optimization. If None, a new object is created.
- engine – Parallelization engine. Defaults to sequential execution on a SingleCoreEngine.
- options – Various options applied to the multistart optimization.
- history_options – Optimizer history options.

Returns:
A Result object containing the results of all multistarts in result.optimize_result.
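Conceptually, minimize draws n_starts start points, runs the optimizer from each, and collects the per-start results sorted by final objective value. The following toy, pure-Python analogue of that loop is not pypesto code; the crude coordinate search and all names in it are invented for illustration.

```python
import random

# Toy analogue of multistart optimization: sample start points in the
# box [lb, ub], run a crude local search from each, and keep every
# per-start result sorted by final objective value.

def local_search(f, x, step=0.1, n_iter=200):
    """Greedy 1-D search: move by +/- step while it improves f."""
    for _ in range(n_iter):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

def multistart(f, lb, ub, n_starts=5, seed=0):
    rng = random.Random(seed)
    results = []
    for i in range(n_starts):
        x0 = rng.uniform(lb, ub)
        x = local_search(f, x0)
        results.append({'id': str(i), 'x0': x0, 'x': x, 'fval': f(x)})
    return sorted(results, key=lambda r: r['fval'])

best = multistart(lambda x: (x - 1.0) ** 2, lb=-5.0, ub=5.0)[0]
# best['x'] ends up close to the minimum at 1.0
```

Sorting all starts by fval mirrors why multistart helps on multimodal problems: the best of many local runs approximates the global optimum, and the remaining entries show how often each basin was found.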