pypesto.objective

Objective

class pypesto.objective.AggregatedObjective[source]

Bases: ObjectiveBase

Aggregates multiple objectives into one objective.

__init__(objectives, x_names=None)[source]

Initialize objective.

Parameters:
  • objectives (Sequence[ObjectiveBase]) – Sequence of pypesto.ObjectiveBase instances.

  • x_names (Sequence[str]) – Sequence of names of the (optimized) parameters. For details, see the documentation of x_names in pypesto.ObjectiveBase.
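
For example, a likelihood objective and a prior objective can be aggregated into a joint objective (a minimal sketch with two hypothetical quadratic objectives):

>>> import numpy as np
>>> from pypesto import Objective
>>> from pypesto.objective import AggregatedObjective
>>> nllh = Objective(fun=lambda x: 0.5 * np.sum(x**2))
>>> penalty = Objective(fun=lambda x: 0.5 * np.sum((x - 1.0) ** 2))
>>> obj = AggregatedObjective([nllh, penalty])
>>> fval = obj(np.array([1.0, 2.0]))  # sum of both objective values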

call_unprocessed(x, sensi_orders, mode, kwargs_list=None, **kwargs)[source]

See ObjectiveBase for more documentation.

Main method to overwrite from the base class. It handles and delegates the actual objective evaluation.

Parameters:
  • kwargs_list (Sequence[Dict[str, Any]]) – Objective-specific keyword arguments, where the dictionaries are ordered by the objectives.

  • x (ndarray) –

  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

Return type:

Dict[str, Union[float, ndarray, Dict]]

check_mode(mode)[source]

See ObjectiveBase documentation.

Return type:

bool

Parameters:

mode (Literal['mode_fun', 'mode_res']) –

check_sensi_orders(sensi_orders, mode)[source]

See ObjectiveBase documentation.

Return type:

bool

Parameters:
  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

get_config()[source]

Return basic information of the objective configuration.

Return type:

dict

initialize()[source]

See ObjectiveBase documentation.

class pypesto.objective.AmiciObjective[source]

Bases: ObjectiveBase

Allows creating an objective directly from an AMICI model.

__call__(x, sensi_orders=(0,), mode='mode_fun', return_dict=False, **kwargs)[source]

See ObjectiveBase documentation.

Return type:

Union[float, ndarray, Tuple, Dict[str, Union[float, ndarray, Dict]]]

Parameters:
  • x (ndarray) –

  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

  • return_dict (bool) –

__init__(amici_model, amici_solver, edatas, max_sensi_order=None, x_ids=None, x_names=None, parameter_mapping=None, guess_steadystate=None, n_threads=1, fim_for_hess=True, amici_object_builder=None, calculator=None, amici_reporting=None)[source]

Initialize objective.

Parameters:
  • amici_model (Union[Model, ModelPtr]) – The amici model.

  • amici_solver (Union[Solver, SolverPtr]) – The solver to use for the numeric integration of the model.

  • edatas (Union[Sequence[ExpData], ExpData]) – The experimental data. If a list is passed, its entries correspond to multiple experimental conditions.

  • max_sensi_order (Optional[int]) – Maximum sensitivity order supported by the model. Defaults to 2 if the model was compiled with o2mode, otherwise 1.

  • x_ids (Optional[Sequence[str]]) – Ids of optimization parameters. In the simplest case, this will be the AMICI model parameters (default).

  • x_names (Optional[Sequence[str]]) – Names of optimization parameters.

  • parameter_mapping (Optional[ParameterMapping]) – Mapping of optimization parameters to model parameters. Format as created by amici.petab_objective.create_parameter_mapping. The default is just to assume that optimization and simulation parameters coincide.

  • guess_steadystate (Optional[bool]) – Whether to guess steadystates based on previous steadystates and respective derivatives. This option may lead to unexpected results for models with conservation laws and should accordingly be deactivated for those models.

  • n_threads (Optional[int]) – Number of threads that are used for parallelization over experimental conditions. If AMICI was not installed with OpenMP support, this option has no effect.

  • fim_for_hess (Optional[bool]) – Whether to use the FIM whenever the Hessian is requested. This only applies with forward sensitivities. With adjoint sensitivities, the true Hessian will be used, if available. FIM or Hessian will only be exposed if max_sensi_order>1.

  • amici_object_builder (Optional[AmiciObjectBuilder]) – AMICI object builder. Allows recreating the objective for pickling, required in some parallelization schemes.

  • calculator (Optional[AmiciCalculator]) – Performs the actual calculation of the function values and derivatives.

  • amici_reporting (Optional[RDataReporting]) – Determines which quantities will be computed by AMICI, see amici.Solver.setReturnDataReportingMode. Set to None to compute only the minimum required information.
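
A construction sketch, assuming that an AMICI model, a matching solver, and experimental data (model, solver, edatas; placeholder names) were created beforehand, e.g. via AMICI's PEtab import:

>>> obj = AmiciObjective(  # doctest: +SKIP
...     amici_model=model,
...     amici_solver=solver,
...     edatas=edatas,
...     max_sensi_order=1,
... )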

apply_custom_timepoints()[source]

Apply custom timepoints, if applicable.

See the set_custom_timepoints method for more information.

Return type:

None

apply_steadystate_guess(condition_ix, x_dct)[source]

Apply steady state guess to edatas[condition_ix].x0.

Use the stored steadystate as well as the respective sensitivity (if available) and parameter value to approximate the steadystate at the current parameters using a zeroth or first order Taylor approximation: x_ss(x') = x_ss(x) [+ dx_ss/dx(x)*(x'-x)]

Return type:

None

Parameters:
  • condition_ix (int) –

  • x_dct (Dict) –

call_unprocessed(x, sensi_orders, mode, edatas=None, parameter_mapping=None, amici_reporting=None)[source]

Call objective function without pre- or post-processing and formatting.

Returns:

result – A dict containing the results.

Parameters:
  • x (ndarray) –

  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

  • edatas (Optional[Sequence[ExpData]]) –

  • parameter_mapping (Optional[ParameterMapping]) –

  • amici_reporting (Optional[RDataReporting]) –

check_gradients_match_finite_differences(x=None, *args, **kwargs)[source]

Check if gradients match finite differences (FDs).

Parameters:

x (The parameters for which to evaluate the gradient.) –

Return type:

bool

Returns:

bool – Indicates whether gradients match FDs (True) or not (False).

check_mode(mode)[source]

See ObjectiveBase documentation.

Return type:

bool

Parameters:

mode (Literal['mode_fun', 'mode_res']) –

check_sensi_orders(sensi_orders, mode)[source]

See ObjectiveBase documentation.

Return type:

bool

Parameters:
  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

create_history(id, x_names, options)[source]

See history.generate.create_history documentation.

get_config()[source]

Return basic information of the objective configuration.

Return type:

dict

initialize()[source]

See ObjectiveBase documentation.

par_arr_to_dct(x)[source]

Create dict from parameter vector.

Return type:

Dict[str, float]

Parameters:

x (Sequence[float]) –

reset_steadystate_guesses()[source]

Reset all steadystate guess data.

Return type:

None

set_custom_timepoints(timepoints=None, timepoints_global=None)[source]

Create a copy of this objective that is evaluated at custom timepoints.

The intended use is to aid in predictions at unmeasured timepoints.

Parameters:
  • timepoints (Sequence[Sequence[Union[float, int]]]) – The outer sequence should contain a sequence of timepoints for each experimental condition.

  • timepoints_global (Sequence[Union[float, int]]) – A sequence of timepoints that will be used for all experimental conditions.

Return type:

AmiciObjective

Returns:

The customized copy of this objective.
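
A usage sketch, assuming obj is an existing AmiciObjective:

>>> import numpy as np
>>> obj_dense = obj.set_custom_timepoints(  # doctest: +SKIP
...     timepoints_global=np.linspace(0.0, 10.0, 101)
... )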

store_steadystate_guess(condition_ix, x_dct, rdata)[source]

Store condition parameter, steadystate and steadystate sensitivity.

Stored in steadystate_guesses if steadystate guesses are enabled for this condition.

Return type:

None

Parameters:
  • condition_ix (int) –

  • x_dct (Dict) –

  • rdata (ReturnData) –

class pypesto.objective.FD[source]

Bases: ObjectiveBase

Finite differences (FDs) for derivatives.

Given an objective that gives function values and/or residuals, this class allows flexibly obtaining all derivatives calculated via FDs.

For the parameters grad, hess, sres, a value of None means that the objective's own derivative is used if available, with FDs as fallback. True means that FDs are used in any case; False means that the derivative is not exported.

Note that the step sizes should be carefully chosen. They should be small enough to provide an accurate linear approximation, but large enough to be robust against numerical inaccuracies, in particular if the objective relies on numerical approximations, such as an ODE.

Parameters:
  • grad (Optional[bool]) – Derivative method for the gradient (see above).

  • hess (Optional[bool]) – Derivative method for the Hessian (see above).

  • sres (Optional[bool]) – Derivative method for the residual sensitivities (see above).

  • hess_via_fval (bool) – If the Hessian is to be calculated via finite differences: whether to employ 2nd order FDs via fval even if the objective can provide a gradient.

  • delta_fun (Union[FDDelta, ndarray, float, str]) – FD step sizes for function values. Can be either a float, or a np.ndarray of shape (n_par,) for different step sizes for different coordinates.

  • delta_grad (Union[FDDelta, ndarray, float, str]) – FD step sizes for gradients, if the Hessian is calculated via 1st order sensitivities from the gradients. Similar to delta_fun.

  • delta_res (Union[FDDelta, float, ndarray, str]) – FD step sizes for residuals. Similar to delta_fun.

  • method (str) – Method to calculate FDs. Can be any of FD.METHODS: central, forward or backward differences. The latter two require only roughly half as many function evaluations, but are less accurate than central differences (error O(delta) vs. O(delta**2)).

  • x_names (List[str]) – Parameter names that can be optionally used in, e.g., history or gradient checks.

Examples

Define residuals and objective function, and obtain all derivatives via FDs:

>>> from pypesto import Objective, FD
>>> import numpy as np
>>> x_obs = np.array([11, 12, 13])
>>> res = lambda x: x - x_obs
>>> fun = lambda x: 0.5 * sum(res(x)**2)
>>> obj = FD(Objective(fun=fun, res=res))
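
Derivatives are then obtained through the usual ObjectiveBase interface, here computed via FDs (continuing the example above):

>>> grad = obj.get_grad(np.array([10.0, 11.0, 12.0]))
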
BACKWARD = 'backward'
CENTRAL = 'central'
FORWARD = 'forward'
METHODS = ['central', 'forward', 'backward']
__init__(obj, grad=None, hess=None, sres=None, hess_via_fval=True, delta_fun=1e-06, delta_grad=1e-06, delta_res=1e-06, method='central', x_names=None)[source]
call_unprocessed(x, sensi_orders, mode, **kwargs)[source]

See ObjectiveBase for more documentation.

Main method to overwrite from the base class. It handles and delegates the actual objective evaluation.

Return type:

Dict[str, Union[float, ndarray, Dict]]

Parameters:
  • x (ndarray) –

  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

property has_fun: bool

Check whether function is defined.

property has_grad: bool

Check whether gradient is defined.

property has_hess: bool

Check whether Hessian is defined.

property has_res: bool

Check whether residuals are defined.

property has_sres: bool

Check whether residual sensitivities are defined.

class pypesto.objective.FDDelta[source]

Bases: object

Finite difference step size with automatic updating.

Reference implementation: https://github.com/ICB-DCM/PESTO/blob/master/private/getStepSizeFD.m

Parameters:
  • delta (Union[ndarray, float, None]) – (Initial) step size, either a float, or a vector of size (n_par,). If not None, this is used as initial step size.

  • test_deltas (ndarray) – Step sizes to try out in step size selection. If None, a range [1e-1, 1e-2, …, 1e-8] is considered.

  • update_condition (str) – A "good" step size may be a local property; thus, this class allows updating the step size via the pypesto.objective.finite_difference.FDDelta.update() function when certain criteria are met. FDDelta.CONSTANT means that the step size is only selected initially. FDDelta.DISTANCE means that the step size is updated if the current evaluation point is sufficiently far away from the last training point. FDDelta.STEPS means that the step size is updated max_steps evaluations after the last update. FDDelta.ALWAYS means that the step size is selected in every call.

  • max_distance (float) – Coefficient on the distance between current and reference point beyond which to update, in the FDDelta.DISTANCE update condition.

  • max_steps (int) – Number of steps after which to update in the FDDelta.STEPS update condition.
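
For example, a step size that is re-selected whenever the evaluation point moves sufficiently far from the point of the last selection (a sketch; obj stands for an existing ObjectiveBase instance):

>>> from pypesto.objective import FD, FDDelta
>>> delta = FDDelta(update_condition='distance', max_distance=0.5)
>>> obj_fd = FD(obj, delta_fun=delta)  # doctest: +SKIP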

ALWAYS = 'always'
CONSTANT = 'constant'
DISTANCE = 'distance'
STEPS = 'steps'
UPDATE_CONDITIONS = ['constant', 'distance', 'steps', 'always']
__init__(delta=None, test_deltas=None, update_condition='constant', max_distance=0.5, max_steps=30)[source]
get()[source]

Get delta vector.

Return type:

ndarray

update(x, fval, fun, fd_method)[source]

Update delta if update conditions are met.

Parameters:
  • x (ndarray) –

  • fval (Union[float, ndarray]) –

  • fun (Callable) –

  • fd_method (str) –

Return type:

None

class pypesto.objective.NegLogParameterPriors[source]

Bases: ObjectiveBase

Implements Negative Log Priors on Parameters.

Contains a list of prior dictionaries for the individual parameters of the format

{'index': [int], 'density_fun': [Callable], 'density_dx': [Callable], 'density_ddx': [Callable]}

A prior instance can be added to, e.g., an objective that gives the likelihood, via an AggregatedObjective.

Notes

All callables should correspond to log-densities. That is, they return log-densities and their corresponding derivatives. Internally, values are multiplied by -1, since pyPESTO expects the Objective function to be of a negative log-density type.

__init__(prior_list, x_names=None)[source]

Initialize.

Parameters:
  • prior_list (List[Dict]) – List of dicts containing the individual parameter priors. Format see above.

  • x_names (Sequence[str]) – Sequence of parameter names (optional).
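
A minimal sketch: a standard normal prior on the first parameter, built with the get_parameter_prior_dict helper documented at the end of this page:

>>> from pypesto.objective import NegLogParameterPriors, get_parameter_prior_dict
>>> prior_list = [get_parameter_prior_dict(0, 'normal', [0.0, 1.0])]
>>> prior = NegLogParameterPriors(prior_list)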

call_unprocessed(x, sensi_orders, mode, **kwargs)[source]

Call objective function without pre- or post-processing and formatting.

Return type:

Dict[str, Union[float, ndarray, Dict]]

Returns:

result – A dict containing the results.

Parameters:
  • x (ndarray) –

  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

check_mode(mode)[source]

See ObjectiveBase documentation.

Return type:

bool

Parameters:

mode (Literal['mode_fun', 'mode_res']) –

check_sensi_orders(sensi_orders, mode)[source]

See ObjectiveBase documentation.

Return type:

bool

Parameters:
  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

gradient_neg_log_density(x)[source]

Evaluate the gradient of the negative log-density at x.

hessian_neg_log_density(x)[source]

Evaluate the Hessian of the negative log-density at x.

hessian_vp_neg_log_density(x, p)[source]

Compute the vector product of the Hessian at x with a vector p.

neg_log_density(x)[source]

Evaluate the negative log-density at x.

residual(x)[source]

Evaluate the residual representation of the prior at x.

residual_jacobian(x)[source]

Evaluate residual Jacobian.

Evaluate the Jacobian of the residual representation of the prior for a parameter vector x w.r.t. x, if available.

class pypesto.objective.NegLogPriors[source]

Bases: AggregatedObjective

Aggregates different forms of negative log-prior distributions.

Allows distinguishing priors from the likelihood by testing the type of an objective.

Consists basically of a list of individual negative log-priors, given in self.objectives.

class pypesto.objective.Objective[source]

Bases: ObjectiveBase

Objective class.

The objective class allows the user to explicitly specify functions that compute the function value and/or residuals as well as their respective derivatives.

Denote dimensions n = parameters, m = residuals.

Parameters:
  • fun (Callable) –

    The objective function to be minimized. If it only computes the objective function value, it should be of the form

    fun(x) -> float

    where x is a 1-D array with shape (n,), and n is the parameter space dimension.

  • grad (Union[Callable, bool]) –

    Method for computing the gradient vector. If it is a callable, it should be of the form

    grad(x) -> array_like, shape (n,).

    If its value is True, then fun should return the gradient as a second output.

  • hess (Callable) –

    Method for computing the Hessian matrix. If it is a callable, it should be of the form

    hess(x) -> array, shape (n, n).

    If its value is True, then fun should return the gradient as a second, and the Hessian as a third output, and grad should be True as well.

  • hessp (Callable) –

    Method for computing the Hessian vector product, i.e.

    hessp(x, v) -> array_like, shape (n,)

    computes the product H*v of the Hessian of fun at x with v.

  • res (Callable) –

    Method for computing residuals, i.e.

    res(x) -> array_like, shape (m,).

  • sres (Union[Callable, bool]) –

    Method for computing residual sensitivities. If it is a callable, it should be of the form

    sres(x) -> array, shape (m, n).

    If its value is True, then res should return the residual sensitivities as a second output.

  • x_names (Sequence[str]) – Parameter names. None if no names provided, otherwise a list of str, length dim_full (as in the Problem class). Can be read by the problem.
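
For example, an objective with analytic derivatives can be constructed from SciPy's Rosenbrock function:

>>> import scipy.optimize as so
>>> from pypesto import Objective
>>> objective = Objective(fun=so.rosen, grad=so.rosen_der, hess=so.rosen_hess)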

__init__(fun=None, grad=None, hess=None, hessp=None, res=None, sres=None, x_names=None)[source]
call_unprocessed(x, sensi_orders, mode, **kwargs)[source]

Call objective function without pre- or post-processing and formatting.

Return type:

Dict[str, Union[float, ndarray, Dict]]

Returns:

result – A dict containing the results.

Parameters:
  • x (ndarray) –

  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

get_config()[source]

Return basic information of the objective configuration.

Return type:

dict

property has_fun: bool

Check whether function is defined.

property has_grad: bool

Check whether gradient is defined.

property has_hess: bool

Check whether Hessian is defined.

property has_hessp: bool

Check whether Hessian vector product is defined.

property has_res: bool

Check whether residuals are defined.

property has_sres: bool

Check whether residual sensitivities are defined.

class pypesto.objective.ObjectiveBase[source]

Bases: ABC

Abstract objective class.

The objective class is a simple wrapper around the objective function, giving a standardized way of calling. Apart from that, it manages several things including fixing of parameters and history.

The objective function is assumed to be in the format of a cost function, log-likelihood function, or log-posterior function. These functions are subject to minimization. For profiling and sampling, the sign is internally flipped, all returned and stored values are however given as returned by this objective function. If maximization is to be performed, the sign should be flipped before creating the objective function.
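
For example, a log-posterior that is to be maximized is wrapped with a sign flip before the objective is created (log_posterior being a hypothetical user function):

>>> objective = Objective(fun=lambda x: -log_posterior(x))  # doctest: +SKIP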

Parameters:

x_names (Optional[Sequence[str]]) – Parameter names that can be optionally used in, e.g., history or gradient checks.

history

For storing the call history. Initialized by the methods, e.g. the optimizer, in initialize_history().

pre_post_processor

Preprocess input values to and postprocess output values from __call__. Configured in update_from_problem().

__call__(x, sensi_orders=(0,), mode='mode_fun', return_dict=False, **kwargs)[source]

Obtain arbitrary sensitivities.

This is the central method which is always called, also by the get_* methods.

There are different ways in which an optimizer calls the objective function, and in how the objective function provides information (e.g. derivatives via separate functions or along with the function values). The different calling modes increase efficiency in space and time and make the objective flexible.

Parameters:
  • x (ndarray) – The parameters for which to evaluate the objective function.

  • sensi_orders (Tuple[int, ...]) – Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad.

  • mode (Literal['mode_fun', 'mode_res']) – Whether to compute function values or residuals.

  • return_dict (bool) – If False (default), the result is a Tuple of the requested values in the requested order. Tuples of length one are flattened. If True, instead a dict is returned which can carry further information.

Return type:

Union[float, ndarray, Tuple, Dict[str, Union[float, ndarray, Dict]]]

Returns:

result – By default, this is a tuple of the requested function values and derivatives in the requested order (if only 1 value, the tuple is flattened). If return_dict, then instead a dict is returned with function values and derivatives indicated by ids.
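
For example, the function value and gradient can be requested in a single call (a brief, self-contained sketch):

>>> import numpy as np
>>> import scipy.optimize as so
>>> from pypesto import Objective
>>> objective = Objective(fun=so.rosen, grad=so.rosen_der)
>>> fval, grad = objective(np.array([0.5, 0.5]), sensi_orders=(0, 1))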

__init__(x_names=None)[source]
Parameters:

x_names (Sequence[str] | None) –

abstract call_unprocessed(x, sensi_orders, mode, **kwargs)[source]

Call objective function without pre- or post-processing and formatting.

Parameters:
  • x (ndarray) – The parameters for which to evaluate the objective function.

  • sensi_orders (Tuple[int, ...]) – Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad.

  • mode (Literal['mode_fun', 'mode_res']) – Whether to compute function values or residuals.

Return type:

Dict[str, Union[float, ndarray, Dict]]

Returns:

result – A dict containing the results.

check_grad(x, x_indices=None, eps=1e-05, verbosity=1, mode='mode_fun', order=0, detailed=False)[source]

Compare gradient evaluation.

Firstly approximate via finite differences, and secondly use the objective gradient.

Parameters:
  • x (ndarray) – The parameters for which to evaluate the gradient.

  • x_indices (Sequence[int]) – Indices for which to compute gradients. Default: all.

  • eps (float) – Finite differences step size.

  • verbosity (int) – Level of verbosity for function output. 0: no output, 1: summary for all parameters, 2: summary for individual parameters.

  • mode (Literal['mode_fun', 'mode_res']) – Residual (MODE_RES) or objective function value (MODE_FUN) computation mode.

  • order (int) – Derivative order, either gradient (0) or Hessian (1).

  • detailed (bool) – Toggle whether additional values are returned. Additional values are function values, and the central difference weighted by the difference in output from all methods (standard deviation and mean).

Return type:

DataFrame

Returns:

result – gradient, finite difference approximations and error estimates.
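
A usage sketch, continuing the Rosenbrock example above:

>>> df = objective.check_grad(np.array([0.5, 0.5]), eps=1e-5)

The returned DataFrame contains, per parameter, the objective gradient, the FD approximations, and error estimates.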

check_grad_multi_eps(*args, multi_eps=None, label='rel_err', **kwargs)[source]

Compare gradient evaluation.

Equivalent to the ObjectiveBase.check_grad method, except multiple finite difference step sizes are tested. The result contains the lowest finite difference for each parameter, and the corresponding finite difference step size.

Parameters:
  • All parameters of the ObjectiveBase.check_grad method.

  • multi_eps (Optional[Iterable]) – The finite difference step sizes to be tested.

  • label (str) – The label of the column that will be minimized for each parameter. Valid options are the column labels of the dataframe returned by the ObjectiveBase.check_grad method.

check_gradients_match_finite_differences(*args, x=None, x_free=None, rtol=0.01, atol=0.001, mode=None, order=0, multi_eps=None, **kwargs)[source]

Check if gradients match finite differences (FDs).

Parameters:
  • x (The parameters for which to evaluate the gradient) –

  • x_free (Indices for which to compute gradients) –

  • rtol (relative error tolerance) –

  • atol (absolute error tolerance) –

  • mode (function values or residuals) –

  • order (derivative order, 0 for gradient, 1 for Hessian) –

  • multi_eps (multiple test step widths for FDs) –

Return type:

bool

Returns:

bool – Indicates whether gradients match FDs (True) or not (False).

check_mode(mode)[source]

Check if the objective is able to compute in the requested mode.

Either check_mode or the fun_… functions must be overwritten in derived classes.

Parameters:

mode (Literal['mode_fun', 'mode_res']) – Whether to compute function values or residuals.

Return type:

bool

Returns:

flag – Boolean indicating whether mode is supported

check_sensi_orders(sensi_orders, mode)[source]

Check if the objective is able to compute the requested sensitivities.

Either check_sensi_orders or the fun_… functions must be overwritten in derived classes.

Parameters:
  • sensi_orders (Tuple[int, ...]) – Specifies which sensitivities to compute, e.g. (0,1) -> fval, grad.

  • mode (Literal['mode_fun', 'mode_res']) – Whether to compute function values or residuals.

Return type:

bool

Returns:

flag – Boolean indicating whether combination of sensi_orders and mode is supported

create_history(id, x_names, options)[source]

See history.generate.create_history documentation.

get_config()[source]

Get the configuration information of the objective function.

Return it as a dictionary.

Return type:

dict

get_fval(x)[source]

Get the function value at x.

Return type:

float

Parameters:

x (ndarray) –

get_grad(x)[source]

Get the gradient at x.

Return type:

ndarray

Parameters:

x (ndarray) –

get_hess(x)[source]

Get the Hessian at x.

Return type:

ndarray

Parameters:

x (ndarray) –

get_res(x)[source]

Get the residuals at x.

Return type:

ndarray

Parameters:

x (ndarray) –

get_sres(x)[source]

Get the residual sensitivities at x.

Return type:

ndarray

Parameters:

x (ndarray) –

property has_fun: bool

Check whether function is defined.

property has_grad: bool

Check whether gradient is defined.

property has_hess: bool

Check whether Hessian is defined.

property has_hessp: bool

Check whether Hessian-vector product is defined.

property has_res: bool

Check whether residuals are defined.

property has_sres: bool

Check whether residual sensitivities are defined.

initialize()[source]

Initialize the objective function.

This function is used at the beginning of an analysis, e.g. optimization, and can e.g. reset the objective memory. By default does nothing.

static output_to_tuple(sensi_orders, mode, **kwargs)[source]

Return values as requested by the caller.

Usually only a subset of outputs is demanded. A single output is returned as-is; multiple outputs are returned as a tuple in the order (fval, grad, hess).

Return type:

Tuple

Parameters:
  • sensi_orders (Tuple[int, ...]) –

  • mode (Literal['mode_fun', 'mode_res']) –

update_from_problem(dim_full, x_free_indices, x_fixed_indices, x_fixed_vals)[source]

Handle fixed parameters.

Later, the objective will be given parameter vectors x of dimension dim, which have to be filled up with fixed parameter values to form a vector of dimension dim_full >= dim. This vector is then used to compute function value and derivatives. The derivatives must later be reduced again to dimension dim.

This is so as to make the fixing of parameters transparent to the caller.

The preprocess and postprocess methods are overwritten to implement this functionality.

Parameters:
  • dim_full (int) – Dimension of the full vector including fixed parameters.

  • x_free_indices (Sequence[int]) – Vector containing the indices (zero-based) of free parameters (complementary to x_fixed_indices).

  • x_fixed_indices (Sequence[int]) – Vector containing the indices (zero-based) of parameter components that are not to be optimized.

  • x_fixed_vals (Sequence[float]) – Vector of the same length as x_fixed_indices, containing the values of the fixed parameters.
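
For example, fixing the second of three parameters to 0.5 (a sketch; in typical usage this is called internally when a pypesto.Problem is set up):

>>> objective.update_from_problem(  # doctest: +SKIP
...     dim_full=3,
...     x_free_indices=[0, 2],
...     x_fixed_indices=[1],
...     x_fixed_vals=[0.5],
... )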

property x_names: List[str] | None

Parameter names.

pypesto.objective.get_parameter_prior_dict(index, prior_type, prior_parameters, parameter_scale='lin')[source]

Return the prior dict used to define priors for some default priors.

Parameters:
  • index (int) – Index of the parameter in x_full.

  • prior_type (str) – The prior is defined in LINEAR=untransformed parameter space, unless it starts with "parameterScale". prior_type can be any of {"uniform", "normal", "laplace", "logNormal", "parameterScaleUniform", "parameterScaleNormal", "parameterScaleLaplace"}.

  • prior_parameters (list) – Parameters of the prior. Parameters are defined in linear scale.

  • parameter_scale (str) – Scale in which the parameter is defined (since a parameter can be log-transformed, while the prior is always defined in linear space, unless it starts with "parameterScale").
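
For example, a normal prior on a log10-scaled parameter, defined directly on the parameter scale (assuming prior_parameters are given as [mean, standard deviation]):

>>> from pypesto.objective import get_parameter_prior_dict
>>> prior_dict = get_parameter_prior_dict(
...     index=0,
...     prior_type='parameterScaleNormal',
...     prior_parameters=[0.0, 1.0],
...     parameter_scale='log10',
... )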