Problem

A problem contains the objective as well as all additional information, such as priors, that describes the problem to be solved.

class pypesto.problem.Objective(fun: Callable = None, grad: Union[Callable, bool] = None, hess: Callable = None, hessp: Callable = None, res: Callable = None, sres: Union[Callable, bool] = None, fun_accept_sensi_orders: bool = False, res_accept_sensi_orders: bool = False, x_names: List[str] = None)

Bases: object

The objective class is a simple wrapper around the objective function, giving a standardized way of calling. Apart from that, it manages several things including fixing of parameters and history.

The objective function is assumed to be in the format of a cost function, log-likelihood function, or log-posterior function. These functions are subject to minimization. For profiling and sampling, the sign is internally flipped; all returned and stored values, however, are given as returned by this objective function. If maximization is to be performed, the sign should be flipped before creating the objective function.

Parameters:
  • fun

    The objective function to be minimized. If it only computes the objective function value, it should be of the form

    fun(x) -> float

    where x is a 1-D array with shape (n,), and n is the parameter space dimension.

  • grad

    Method for computing the gradient vector. If it is a callable, it should be of the form

    grad(x) -> array_like, shape (n,).

    If its value is True, then fun should return the gradient as a second output.

  • hess

    Method for computing the Hessian matrix. If it is a callable, it should be of the form

    hess(x) -> array, shape (n,n).

    If its value is True, then fun should return the gradient as a second, and the Hessian as a third output, and grad should be True as well.

  • hessp

    Method for computing the Hessian vector product, i.e.

    hessp(x, v) -> array_like, shape (n,)

    computes the product H*v of the Hessian of fun at x with v.

  • res

    Method for computing residuals, i.e.

    res(x) -> array_like, shape (m,).

  • sres

    Method for computing residual sensitivities. If it is a callable, it should be of the form

    sres(x) -> array, shape (m,n).

    If its value is True, then res should return the residual sensitivities as a second output.

  • fun_accept_sensi_orders – Flag indicating whether fun takes sensi_orders as an argument. Default: False.
  • res_accept_sensi_orders – Flag indicating whether res takes sensi_orders as an argument. Default: False.
  • x_names – Parameter names. None if no names provided, otherwise a list of str, length dim_full (as in the Problem class). Can be read by the problem.
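The calling conventions above can be illustrated with a plain-Python objective. The following sketch uses the Rosenbrock function; the `pypesto.Objective` construction shown in the comment follows the documented `grad=True` convention (it assumes pypesto is installed and is not executed here):

```python
def rosen(x):
    """Rosenbrock function, returning (fval, grad).

    This matches the documented convention for grad=True: fun returns
    the gradient as a second output.
    """
    a, b = x
    fval = 100.0 * (b - a**2) ** 2 + (1.0 - a) ** 2
    grad = [
        -400.0 * a * (b - a**2) - 2.0 * (1.0 - a),
        200.0 * (b - a**2),
    ]
    return fval, grad

# With pypesto installed, this function could be wrapped as
#   objective = pypesto.Objective(fun=rosen, grad=True)

fval, grad = rosen([1.0, 1.0])  # evaluated at the global minimum
```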
history

For storing the call history. Initialized by the methods, e.g. the optimizer, in initialize_history().

pre_post_processor

Preprocess input values to and postprocess output values from __call__. Configured in update_from_problem().

Notes

If fun_accept_sensi_orders resp. res_accept_sensi_orders is True, fun resp. res can also return dictionaries instead of tuples. In that case, they are expected to follow the naming conventions in constants.py. This is of interest, because when __call__ is called with return_dict = True, the full dictionary is returned, which can contain e.g. also simulation data or debugging information.

__call__()

Method to obtain arbitrary sensitivities. This is the central method which is always called, also by the get_* methods.

There are different ways in which an optimizer calls the objective function, and in how the objective function provides information (e.g. derivatives via separate functions or along with the function values). The different calling modes increase efficiency in space and time and make the objective flexible.

Parameters:
  • x – The parameters for which to evaluate the objective function.
  • sensi_orders – Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad.
  • mode – Whether to compute function values or residuals.
  • return_dict – If False (default), the result is a Tuple of the requested values in the requested order. Tuples of length one are flattened. If True, instead a dict is returned which can carry further information.
Returns:

By default, this is a tuple of the requested function values and derivatives in the requested order (if only 1 value, the tuple is flattened). If return_dict, then instead a dict is returned with function values and derivatives indicated by ids.

Return type:

result
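The sensi_orders and tuple-flattening semantics can be sketched with a toy stand-in (this is an illustration of the documented behavior, not pypesto's actual implementation):

```python
def toy_call(x, sensi_orders=(0,)):
    """Sketch of the __call__ semantics: return the requested
    sensitivities in the requested order, flattening length-one tuples."""
    fval = sum(xi**2 for xi in x)       # order 0: function value
    grad = [2.0 * xi for xi in x]       # order 1: gradient
    outputs = {0: fval, 1: grad}
    result = tuple(outputs[order] for order in sensi_orders)
    # a tuple of length one is flattened to the bare value
    return result[0] if len(result) == 1 else result

fval = toy_call([1.0, 2.0])                    # -> bare float
fval_and_grad = toy_call([1.0, 2.0], (0, 1))   # -> (fval, grad)
```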

__deepcopy__(memodict=None) → pypesto.objective.objective.Objective

check_grad(x: numpy.ndarray, x_indices: List[int] = None, eps: float = 1e-05, verbosity: int = 1, mode: str = 'mode_fun') → pandas.core.frame.DataFrame

Compare the gradient as computed by the objective against a finite-difference approximation.

Parameters:
  • x – The parameters for which to evaluate the gradient.
  • x_indices – List of index values for which to compute gradients. Default: all.
  • eps – Finite differences step size. Default: 1e-5.
  • verbosity – Level of verbosity for function output. 0: no output; 1: summary for all parameters; 2: summary for individual parameters. Default: 1.
  • mode – Residual (MODE_RES) or objective function value (MODE_FUN, default) computation mode.
Returns:

gradient, finite difference approximations and error estimates.

Return type:

result
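The finite-difference approximation that such a check rests on can be sketched in a few lines (a central-difference scheme, shown for illustration; it is not pypesto's internal code):

```python
def fd_gradient(fun, x, eps=1e-5):
    """Approximate the gradient of fun at x by central finite
    differences, the kind of reference value a gradient check uses."""
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xm = list(x)
        xp[i] += eps   # forward-perturbed point
        xm[i] -= eps   # backward-perturbed point
        grad.append((fun(xp) - fun(xm)) / (2.0 * eps))
    return grad

fun = lambda x: x[0] ** 2 + 3.0 * x[1]
approx = fd_gradient(fun, [2.0, 1.0])  # analytic gradient is [4.0, 3.0]
```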

check_sensi_orders(sensi_orders, mode) → None

Check if the objective is able to compute the requested sensitivities. If not, throw an exception.

Raises:
  • ValueError – If the objective function cannot be called as requested.
get_fval(x: numpy.ndarray) → float

Get the function value at x.

get_grad(x: numpy.ndarray) → numpy.ndarray

Get the gradient at x.

get_hess(x: numpy.ndarray) → numpy.ndarray

Get the Hessian at x.

get_res(x: numpy.ndarray) → numpy.ndarray

Get the residuals at x.

get_sres(x: numpy.ndarray) → numpy.ndarray

Get the residual sensitivities at x.

has_fun
has_grad
has_hess
has_hessp
has_res
has_sres
initialize()

Initialize the objective function. This function is used at the beginning of an analysis, e.g. optimization, and can e.g. reset the objective memory. By default does nothing.

static output_to_dict()

Convert output tuple to dict.

static output_to_tuple()

Return values as requested by the caller, since usually only a subset is demanded. One output is returned as-is; multiple outputs are returned as a tuple in the order (fval, grad, hess).

update_from_problem(dim_full: int, x_free_indices: List[int], x_fixed_indices: List[int], x_fixed_vals: List[int])

Handle fixed parameters. Later, the objective will be given parameter vectors x of dimension dim, which have to be filled up with fixed parameter values to form a vector of dimension dim_full >= dim. This vector is then used to compute function value and derivatives. The derivatives must later be reduced again to dimension dim.

This is so as to make the fixing of parameters transparent to the caller.

The methods preprocess, postprocess are overwritten for the above functionality, respectively.

Parameters:
  • dim_full – Dimension of the full vector including fixed parameters.
  • x_free_indices – Vector containing the indices (zero-based) of free parameters (complementary to x_fixed_indices).
  • x_fixed_indices – Vector containing the indices (zero-based) of parameter components that are not to be optimized.
  • x_fixed_vals – Vector of the same length as x_fixed_indices, containing the values of the fixed parameters.
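The pre/post-processing described above can be sketched as a pair of plain-Python helpers: fill a reduced vector up to the full dimension with the fixed values, and reduce a full-dimension result (e.g. a gradient) back to the free entries. The function names are illustrative, not pypesto's internals:

```python
def fill_full_vector(x_free, x_fixed_indices, x_fixed_vals, dim_full):
    """Fill a reduced parameter vector up to dimension dim_full by
    inserting the fixed values at their indices (pre-processing)."""
    x_full = [None] * dim_full
    for idx, val in zip(x_fixed_indices, x_fixed_vals):
        x_full[idx] = val
    free_iter = iter(x_free)
    for i in range(dim_full):
        if x_full[i] is None:
            x_full[i] = next(free_iter)
    return x_full

def reduce_vector(v_full, x_fixed_indices):
    """Reduce a full-dimension vector back to the free parameters
    only, e.g. for derivatives (post-processing)."""
    return [v for i, v in enumerate(v_full) if i not in x_fixed_indices]

x_full = fill_full_vector([0.5, 1.5], x_fixed_indices=[1],
                          x_fixed_vals=[9.0], dim_full=3)
grad_red = reduce_vector([10.0, 20.0, 30.0], x_fixed_indices=[1])
```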
class pypesto.problem.Problem(objective: pypesto.objective.objective.Objective, lb: Union[numpy.ndarray, List[float]], ub: Union[numpy.ndarray, List[float]], dim_full: Optional[int] = None, x_fixed_indices: Optional[Iterable[int]] = None, x_fixed_vals: Optional[Iterable[float]] = None, x_guesses: Optional[Iterable[float]] = None, x_names: Optional[Iterable[str]] = None)

Bases: object

The problem formulation. A problem specifies the objective function, boundaries and constraints, parameter guesses as well as the parameters which are to be optimized.

Parameters:
  • objective – The objective function for minimization. Note that a shallow copy is created.
  • lb, ub – The lower and upper bounds. For unbounded directions set to inf.
  • dim_full – The full dimension of the problem, including fixed parameters.
  • x_fixed_indices – Vector containing the indices (zero-based) of parameter components that are not to be optimized.
  • x_fixed_vals – Vector of the same length as x_fixed_indices, containing the values of the fixed parameters.
  • x_guesses – Guesses for the parameter values, shape (g, dim), where g denotes the number of guesses. These are used as start points in the optimization.
  • x_names – Parameter names that can be optionally used e.g. in visualizations. If objective.get_x_names() is not None, those values are used, else the values specified here are used if not None, otherwise the variable names are set to [‘x0’, … ‘x{dim_full}’]. The list must always be of length dim_full.
dim

The number of non-fixed parameters. Computed from the other values.

x_free_indices

Vector containing the indices (zero-based) of free parameters (complementary to x_fixed_indices).

Type:array_like of int

Notes

On the fixing of parameter values:

The number of parameters dim_full that the objective takes as input must be known, so either lb must be a vector of that size, or dim_full must be specified as a parameter.

All vectors are mapped to the reduced space of dimension dim in __init__, regardless of whether they were in dimension dim or dim_full before. If the full representation is needed, the methods get_full_vector() and get_full_matrix() can be used.
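The relationship between dim, dim_full, and x_free_indices described in these notes amounts to a one-line derivation (a sketch; the helper name is illustrative):

```python
def derive_free(dim_full, x_fixed_indices):
    """Compute dim and x_free_indices from dim_full and the fixed
    indices: the free indices are simply all indices not fixed."""
    x_free = [i for i in range(dim_full) if i not in x_fixed_indices]
    return len(x_free), x_free

dim, x_free_indices = derive_free(dim_full=5, x_fixed_indices=[1, 3])
```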


fix_parameters(parameter_indices: Union[Iterable[int], int], parameter_vals: Union[Iterable[float], float]) → None

Fix specified parameters to specified values.

get_full_matrix(x: Optional[numpy.ndarray]) → Optional[numpy.ndarray]

Map matrix from dim to dim_full. Usually used for the Hessian.

Parameters:x (array_like, shape=(dim, dim)) – The matrix in dimension dim.
get_full_vector(x: Optional[numpy.ndarray], x_fixed_vals: Iterable[float] = None) → Optional[numpy.ndarray]

Map vector from dim to dim_full. Usually used for x, grad.

Parameters:
  • x (array_like, shape=(dim,)) – The vector in dimension dim.
  • x_fixed_vals (array_like, ndim=1, optional) – The values to be used for the fixed indices. If None, then nans are inserted. Usually, None will be used for grad and problem.x_fixed_vals for x.
get_reduced_matrix(x_full: Optional[numpy.ndarray]) → Optional[numpy.ndarray]

Map matrix from dim_full to dim, i.e. delete fixed indices.

Parameters:x_full (array_like, ndim=2) – The matrix in dimension dim_full.
get_reduced_vector(x_full: Optional[numpy.ndarray]) → Optional[numpy.ndarray]

Map vector from dim_full to dim, i.e. delete fixed indices.

Parameters:x_full (array_like, ndim=1) – The vector in dimension dim_full.
normalize_input(check_x_guesses: bool = True) → None

Reduce all vectors to dimension dim and have the objective accept vectors of dimension dim.

print_parameter_summary() → None

Print a summary of which parameters are being optimized and of the parameter boundaries.

unfix_parameters(parameter_indices: Union[Iterable[int], int]) → None

Free specified parameters.