pypesto.problem

Problem

A problem contains the objective as well as all additional information, such as priors, that describes the problem to be solved.

class pypesto.problem.HierarchicalProblem[source]

Bases: Problem

The Hierarchical Problem.

A hierarchical problem is a problem with a nested structure: One or multiple inner problems are nested inside the outer problem. The inner problems are optimized for each evaluation of the outer problem. The objective’s calculator is used to collect the inner problems’ objective values.

Parameters:
  • hierarchical – A flag indicating the problem is hierarchical.

  • inner_x_names (Optional[Iterable[str]]) – Names of the inner optimization parameters. Only relevant if hierarchical is True. Contains the names of easily interpretable inner parameters only, e.g. noise parameters, scaling factors, offsets.

  • inner_lb (Union[ndarray, list[float], None]) – The lower bounds for the inner optimization parameters. Only relevant if hierarchical is True. Contains the bounds of easily interpretable inner parameters only, e.g. noise parameters, scaling factors, offsets.

  • inner_ub (Union[ndarray, list[float], None]) – The upper bounds for the inner optimization parameters. Only relevant if hierarchical is True. Contains the bounds of easily interpretable inner parameters only, e.g. noise parameters, scaling factors, offsets.

  • semiquant_observable_ids – The ids of semiquantitative observables. Only relevant if hierarchical is True. If not None, the optimization result’s spline_knots will be a list of lists of spline knots for each semiquantitative observable in the order of these ids.

__init__(inner_x_names=None, inner_lb=None, inner_ub=None, **problem_kwargs)[source]
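A HierarchicalProblem is typically not constructed by hand but created by the PEtab import pipeline when hierarchical optimization is enabled. A minimal sketch, assuming a PEtab problem whose parameter table marks inner parameters (noise, scaling, offset) for hierarchical estimation; the YAML file name is a placeholder and the hierarchical flag of PetabImporter is an assumption here:

    import petab
    import pypesto.petab

    petab_problem = petab.Problem.from_yaml("model_petab.yaml")  # placeholder path
    importer = pypesto.petab.PetabImporter(petab_problem, hierarchical=True)  # assumed flag
    problem = importer.create_problem()  # expected to be a HierarchicalProblem in this setting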
class pypesto.problem.Problem[source]

Bases: object

The problem formulation.

A problem specifies the objective function, boundaries and constraints, parameter guesses as well as the parameters which are to be optimized.

Parameters:
  • objective (ObjectiveBase) – The objective function for minimization. Note that a shallow copy is created.

  • lb (Union[ndarray, list[float]]) – The lower bounds for optimization. For unbounded directions set to -inf.

  • ub (Union[ndarray, list[float]]) – The upper bounds for optimization. For unbounded directions set to +inf.

  • lb_init (Union[ndarray, list[float], None]) – The lower bounds for initialization, typically for defining search start points. If not set, defaults to lb.

  • ub_init (Union[ndarray, list[float], None]) – The upper bounds for initialization, typically for defining search start points. If not set, defaults to ub.

  • dim_full (Optional[int]) – The full dimension of the problem, including fixed parameters.

  • x_fixed_indices (Union[Iterable[SupportsInt], SupportsInt, None]) – Vector containing the indices (zero-based) of parameter components that are not to be optimized.

  • x_fixed_vals (Union[Iterable[SupportsFloat], SupportsFloat, None]) – Vector of the same length as x_fixed_indices, containing the values of the fixed parameters.

  • x_guesses (Optional[Iterable[float]]) – Guesses for the parameter values, shape (g, dim), where g denotes the number of guesses. These are used as start points in the optimization.

  • x_names (Optional[Iterable[str]]) – Parameter names that can be optionally used e.g. in visualizations. If objective.get_x_names() is not None, those values are used, else the values specified here are used if not None, otherwise the variable names are set to [‘x0’, … ‘x{dim_full}’]. The list must always be of length dim_full.

  • x_scales (Optional[Iterable[str]]) – Parameter scales can be optionally given and are used e.g. in visualization and prior generation. Currently the scales ‘lin’, ‘log’ and ‘log10’ are supported.

  • x_priors_defs (Optional[NegLogParameterPriors]) – Definitions of priors for parameters. Types of priors, and their required and optional parameters, are described in the Prior class.

  • copy_objective (bool) – Whether to generate a deep copy of the objective function before any potential modification the problem class performs on it.

  • startpoint_method (Union[StartpointMethod, Callable, bool]) – Method for how to choose start points. False means the optimizer does not require start points, e.g. for the PyswarmOptimizer.

Notes

On the fixing of parameter values:

The number of parameters dim_full that the objective takes as input must be known. Therefore, either lb must be a vector of that size, or dim_full must be specified as a parameter.

All vectors are mapped to the reduced space of dimension dim in __init__, regardless of whether they were in dimension dim or dim_full before. If the full representation is needed, the methods get_full_vector() and get_full_matrix() can be used.

__init__(objective, lb, ub, dim_full=None, x_fixed_indices=None, x_fixed_vals=None, x_guesses=None, x_names=None, x_scales=None, x_priors_defs=None, lb_init=None, ub_init=None, copy_objective=True, startpoint_method=None)[source]
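A minimal construction sketch, using SciPy's Rosenbrock function as the objective; the bounds, the fixed parameter, and the names are illustrative choices:

    import numpy as np
    import scipy.optimize as so
    import pypesto

    # objective wrapping the Rosenbrock function and its gradient
    objective = pypesto.Objective(fun=so.rosen, grad=so.rosen_der)

    # 5-dimensional problem with the third parameter fixed to 1.0
    problem = pypesto.Problem(
        objective=objective,
        lb=-5 * np.ones(5),
        ub=5 * np.ones(5),
        x_fixed_indices=[2],
        x_fixed_vals=[1.0],
        x_names=[f"x{i}" for i in range(5)],
    )

    print(problem.dim_full, problem.dim)  # 5 4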
property dim: int

Return the dimension considering only non-fixed parameters.

fix_parameters(parameter_indices, parameter_vals)[source]

Fix specified parameters to specified values.

Return type:

None

Parameters:
  • parameter_indices (Iterable[SupportsInt] | SupportsInt) –

  • parameter_vals (Iterable[SupportsFloat] | SupportsFloat) –
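A usage sketch, continuing the hypothetical 5-dimensional problem above; the chosen indices and values are arbitrary:

    # fix parameters 0 and 3; they are removed from the free parameter space
    problem.fix_parameters(parameter_indices=[0, 3], parameter_vals=[0.5, -1.0])

    print(problem.dim)              # reduced dimension shrinks accordingly
    print(problem.x_fixed_indices)  # now includes 0 and 3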
full_index_to_free_index(full_index)[source]

Calculate index in reduced vector from index in full vector.

Parameters:

full_index – The index in the full vector.

Returns:

free_index – The index in the free vector.
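For example, in a problem where only parameter 2 is fixed, the mapping skips the fixed entry:

    # free indices are [0, 1, 3, 4], so full index 3 maps to free index 2
    free_index = problem.full_index_to_free_index(3)
    print(free_index)  # 2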

get_full_matrix(x)[source]

Map matrix from dim to dim_full. Usually used for the Hessian.

Parameters:

x (array_like, shape=(dim, dim)) – The matrix in dimension dim.

Return type:

Optional[ndarray]
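A shape-level sketch for a problem with fixed parameters, continuing the hypothetical example above:

    import numpy as np

    hess_red = np.eye(problem.dim)               # e.g. a Hessian in the reduced dimension
    hess_full = problem.get_full_matrix(hess_red)
    print(hess_full.shape)                       # (dim_full, dim_full)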

get_full_vector(x, x_fixed_vals=None)[source]

Map vector from dim to dim_full. Usually used for x, grad.

Parameters:
  • x (array_like, shape=(dim,)) – The vector in dimension dim.

  • x_fixed_vals (array_like, ndim=1, optional) – The values to be used for the fixed indices. If None, then nans are inserted. Usually, None will be used for grad and problem.x_fixed_vals for x.

Return type:

Optional[ndarray]
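A usage sketch for the two typical cases described above (gradient vs. parameter vector):

    x_red = np.zeros(problem.dim)

    # gradient-like vector: NaN is inserted at the fixed positions
    grad_full = problem.get_full_vector(x_red)

    # parameter vector: insert the stored fixed values instead
    x_full = problem.get_full_vector(x_red, problem.x_fixed_vals)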

get_reduced_matrix(x_full)[source]

Map matrix from dim_full to dim, i.e. delete fixed indices.

Parameters:

x_full (array_like, ndim=2) – The matrix in dimension dim_full.

Return type:

Optional[ndarray]
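The inverse direction of get_full_matrix, sketched for the same hypothetical problem:

    hess_full = np.eye(problem.dim_full)         # a matrix in the full dimension
    hess_red = problem.get_reduced_matrix(hess_full)
    print(hess_red.shape)                        # (dim, dim)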

get_reduced_vector(x_full, x_indices=None)[source]

Keep only those elements whose indices are specified in x_indices.

If x_indices is not provided, delete fixed indices.

Parameters:
  • x_full (array_like, ndim=1) – The vector in dimension dim_full.

  • x_indices (Optional[list[int]]) – Indices of x_full that should remain.

Return type:

Optional[ndarray]
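A usage sketch; the explicit index subset in the second call is an arbitrary illustrative choice:

    x_full = np.arange(problem.dim_full, dtype=float)

    # drop the fixed entries
    x_red = problem.get_reduced_vector(x_full)

    # keep only an explicit subset of indices instead
    x_subset = problem.get_reduced_vector(x_full, x_indices=[0, 1])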

property lb: ndarray

Return lower bounds of free parameters.

property lb_init: ndarray

Return initial lower bounds of free parameters.

normalize()[source]

Process vectors.

Reduce all vectors to dimension dim and have the objective accept vectors of dimension dim.

Return type:

None

print_parameter_summary()[source]

Print a summary of parameters.

Include which parameters are being optimized and the parameter boundaries.

Return type:

None

set_x_guesses(x_guesses)[source]

Set the x_guesses of a problem.

Parameters:

x_guesses (Iterable[float]) –
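A small usage sketch, assuming a problem without fixed parameters (so dim equals dim_full, here 5); the guess values are arbitrary:

    guesses = np.array([[0.0] * 5, [1.0] * 5])
    problem.set_x_guesses(guesses)
    print(problem.x_guesses)  # the stored guesses for the free parameters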

property ub: ndarray

Return upper bounds of free parameters.

property ub_init: ndarray

Return initial upper bounds of free parameters.

unfix_parameters(parameter_indices)[source]

Free specified parameters.

Return type:

None

Parameters:

parameter_indices (Iterable[SupportsInt] | SupportsInt) –
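A usage sketch, assuming parameter 2 was fixed previously:

    # release parameter 2 again; it re-enters the free parameter space
    problem.unfix_parameters(parameter_indices=2)
    print(problem.x_free_indices)  # now contains index 2 again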

property x_free_indices: list[int]

Return the indices of the non-fixed parameters.

property x_guesses: ndarray

Return guesses of the free parameter values.