pypesto.hierarchical.relative

Relative data integration

Contains an implementation of the hierarchical inner subproblem for relative data. In this inner problem, the scaling factors, offsets, and noise standard deviations are optimized, conditional on the outer dynamical parameters. The inner problem can be solved analytically.

An example of parameter estimation with relative data can be found in pypesto/doc/examples/relative_data.ipynb.
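In practice, hierarchical optimization for relative data is usually enabled through the PEtab importer rather than by instantiating the classes below directly. A minimal sketch, assuming a recent pypesto version; the YAML path is a placeholder:

    import pypesto.petab

    # hierarchical=True activates the inner problem for scaling factors,
    # offsets, and noise standard deviations.
    importer = pypesto.petab.PetabImporter.from_yaml(
        "relative_data_problem.yaml", hierarchical=True
    )
    problem = importer.create_problem()
    # The resulting problem can then be optimized as usual,
    # e.g. with pypesto.optimize.minimize.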

The implementation in this package is based on:

  • Loos et al. 2018 (https://doi.org/10.1093/bioinformatics/bty514), who give an analytic solution for the inner problem for scaling factors and noise standard deviations, for Gaussian and Laplace noise, using forward sensitivity analysis (FSA).

  • Schmiester et al. 2020 (https://doi.org/10.1093/bioinformatics/btz581), who give an analytic solution for the inner problem for scaling factors, offsets and noise standard deviations, for Gaussian and Laplace noise, using adjoint sensitivity analysis (ASA). ASA allows gradients to be computed substantially more efficiently for high-dimensional problems. An illustrative sketch of such a closed-form solution is given below.
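To illustrate what such a closed-form solution looks like, here is a sketch of the Gaussian, scaling-factor-only case. This is not the actual pypesto implementation, which additionally handles offsets, noise standard deviations, Laplace noise, and missing data:

    import numpy as np

    def optimal_scaling(sim: np.ndarray, data: np.ndarray, sigma: np.ndarray) -> float:
        """Scaling factor s minimizing sum(((data - s * sim) / sigma) ** 2)."""
        w = 1.0 / sigma**2
        return float(np.sum(w * sim * data) / np.sum(w * sim**2))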

class pypesto.hierarchical.relative.AnalyticalInnerSolver[source]

Bases: RelativeInnerSolver

Solve the inner subproblem analytically.

Currently supports sigmas for additive Gaussian noise.

solve(problem, sim, sigma, scaled)[source]

Solve the subproblem analytically.

Parameters:
  • problem (InnerProblem) – The inner problem to solve.

  • sim (list[ndarray]) – List of model output matrices, as provided in AMICI’s ReturnData.y. Same order as simulations in the PEtab problem.

  • sigma (list[ndarray]) – List of sigma matrices from the model, as provided in AMICI’s ReturnData.sigmay. Same order as simulations in the PEtab problem.

  • scaled (bool) – Whether to scale the results to the parameter scale specified in problem.

Return type:

dict[str, float]
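A shape-oriented usage sketch; the matrices are made up and the inner problem construction is omitted, since in practice sim and sigma come from an AMICI simulation:

    import numpy as np
    from pypesto.hierarchical.relative import AnalyticalInnerSolver

    solver = AnalyticalInnerSolver()

    # One (n_timepoints x n_observables) matrix per simulation condition,
    # mirroring AMICI's ReturnData.y and ReturnData.sigmay.
    sim = [np.array([[0.9, 1.8], [1.1, 2.2]])]
    sigma = [np.full((2, 2), 0.1)]

    # 'inner_problem' would be an InnerProblem built from the PEtab problem:
    # inner_pars = solver.solve(inner_problem, sim, sigma, scaled=True)
    # inner_pars maps inner parameter IDs to their analytically optimal values.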

class pypesto.hierarchical.relative.NumericalInnerSolver[source]

Bases: RelativeInnerSolver

Solve the inner subproblem numerically.

Advantage: The structure of the subproblem does not matter at all. Disadvantage: Slower than the analytical solver.

Special feature: The best parameter vectors are cached and supplied as guesses to the next solve call, which substantially speeds things up.

minimize_kwargs

Passed to the pypesto.optimize.minimize call.

n_cached

Number of optimized parameter vectors to save.

problem_kwargs

Passed to the pypesto.Problem constructor.

x_guesses

Cached optimized parameter vectors, supplied as guesses to the next solve call.
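A construction sketch using the attributes above as constructor keywords; the values are illustrative assumptions, not recommendations:

    from pypesto.hierarchical.relative import NumericalInnerSolver

    solver = NumericalInnerSolver(
        minimize_kwargs={"n_starts": 10},  # forwarded to pypesto.optimize.minimize
        n_cached=5,                        # keep the 5 best inner parameter vectors as guesses
        problem_kwargs={},                 # forwarded to the pypesto.Problem constructor
    )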

__init__(minimize_kwargs=None, n_cached=1, problem_kwargs=None)[source]
Parameters:
initialize()[source]

(Re-)initialize the solver.

sample_startpoints(problem, pars)[source]

Sample startpoints for the numerical optimization.

Samples the startpoints for the numerical optimization from a log-uniform distribution using the symmetric logarithmic scale.

Parameters:
Return type:

ndarray

Returns:

The sampled startpoints appended to the cached startpoints.

solve(problem, sim, sigma, scaled)[source]

Solve the subproblem numerically.

Parameters:
  • problem (InnerProblem) – The inner problem to solve.

  • sim (list[ndarray]) – List of model output matrices, as provided in AMICI’s ReturnData.y. Same order as simulations in the PEtab problem.

  • sigma (list[ndarray]) – List of sigma matrices from the model, as provided in AMICI’s ReturnData.sigmay. Same order as simulations in the PEtab problem.

  • scaled (bool) – Whether to scale the results to the parameter scale specified in problem.

Return type:

dict[str, float]

class pypesto.hierarchical.relative.RelativeAmiciCalculator[source]

Bases: AmiciCalculator

A calculator that is passed as the calculator to pypesto.AmiciObjective.
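A manual wiring sketch, assuming petab_problem, amici_model, amici_solver, and edatas already exist (e.g. from a PEtab import); normally the PEtab importer assembles these objects, including the parameter mapping, for you:

    import pypesto
    from pypesto.hierarchical.relative import (
        AnalyticalInnerSolver,
        RelativeAmiciCalculator,
        RelativeInnerProblem,
    )

    inner_problem = RelativeInnerProblem.from_petab_amici(
        petab_problem, amici_model, edatas
    )
    calculator = RelativeAmiciCalculator(
        inner_problem=inner_problem, inner_solver=AnalyticalInnerSolver()
    )
    # Further AmiciObjective arguments (e.g. the parameter mapping) are omitted here.
    objective = pypesto.AmiciObjective(
        amici_model, amici_solver, edatas, calculator=calculator
    )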

__call__(x_dct, sensi_orders, mode, amici_model, amici_solver, edatas, n_threads, x_ids, parameter_mapping, fim_for_hess, rdatas=None)[source]

Perform the actual AMICI call, with hierarchical optimization.

The return object also includes the simulation results that were generated to solve the inner problem, as well as the parameters that solve the inner problem.

Parameters:
  • x_dct (dict) – Parameters for which to compute function value and derivatives.

  • sensi_orders (tuple[int]) – Tuple of requested sensitivity orders.

  • mode (Literal['mode_fun', 'mode_res']) – Call mode (function value or residual based).

  • amici_model (Union[Model, ModelPtr]) – The AMICI model.

  • amici_solver (Union[Solver, SolverPtr]) – The AMICI solver.

  • edatas (list[ExpData]) – The experimental data.

  • n_threads (int) – Number of threads for AMICI call.

  • x_ids (Sequence[str]) – Ids of optimization parameters.

  • parameter_mapping (ParameterMapping) – Mapping of optimization to simulation parameters.

  • fim_for_hess (bool) – Whether to use the FIM (if available) instead of the Hessian (if requested).

  • rdatas (list[ReturnData]) – AMICI simulation return data. If the calculator is part of the pypesto.objective.amici.InnerCalculatorCollector, the collector will already have simulated the model and passes the results here.

Returns:

inner_result – A dict containing the calculation results: FVAL, GRAD, RDATAS and INNER_PARAMETERS.
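A sketch of reading the returned dictionary, assuming the key constants are available from pypesto.C:

    from pypesto.C import FVAL, GRAD, INNER_PARAMETERS, RDATAS

    # 'inner_result' is the dict returned by __call__ above.
    fval = inner_result[FVAL]                    # objective value at the outer parameters
    grad = inner_result[GRAD]                    # gradient w.r.t. the outer parameters
    rdatas = inner_result[RDATAS]                # AMICI simulations used for the inner problem
    inner_pars = inner_result[INNER_PARAMETERS]  # optimal inner parameter values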

__init__(inner_problem, inner_solver=None)[source]

Initialize the calculator from the given problem.

Parameters:
  • inner_problem (AmiciInnerProblem) – The inner problem of a hierarchical optimization problem.

  • inner_solver (InnerSolver | None) – A solver to solve inner_problem. Defaults to pypesto.hierarchical.solver.AnalyticalInnerSolver.

calculate_directly(x_dct, sensi_orders, mode, amici_model, amici_solver, edatas, n_threads, x_ids, parameter_mapping, fim_for_hess, rdatas=None)[source]

Calculate directly via solver calculate methods.

This is possible if the forward sensitivity method is used and the Hessian is not requested. In this case, the objective function and gradient are computed directly using the solver methods.

Parameters:
call_amici_twice(x_dct, sensi_orders, mode, amici_model, amici_solver, edatas, n_threads, x_ids, parameter_mapping, fim_for_hess)[source]

Calculate by calling AMICI twice.

This is necessary if the adjoint method is used, or if the Hessian is requested. In these cases, AMICI is called first to obtain simulations for computing the inner parameters, and then a second time to obtain the requested objective function and gradient.

Parameters:
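The choice between the two code paths is made inside __call__; as a rough sketch of what drives it (an assumption based on the descriptions above, with an enum spelling that may differ between AMICI versions), the AMICI solver's sensitivity method is the deciding configuration:

    import amici

    # Forward sensitivities without a Hessian request -> calculate_directly
    amici_solver.setSensitivityMethod(amici.SensitivityMethod.forward)

    # Adjoint sensitivities (or a Hessian request) -> call_amici_twice
    amici_solver.setSensitivityMethod(amici.SensitivityMethod.adjoint)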
initialize()[source]

Initialize.

class pypesto.hierarchical.relative.RelativeInnerProblem[source]

Bases: AmiciInnerProblem

Inner optimization problem for relative data with scaling/offset.

xs

Mapping of (inner) parameter ID to InnerParameters.

data

Measurement data. One matrix (num_timepoints x num_observables) per simulation condition; missing observations are NaN (see the layout sketch below the attribute list).

edatas

AMICI ExpDatas for each simulation condition.
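An illustrative data layout for a single simulation condition (three timepoints, two observables, one missing measurement):

    import numpy as np

    data = [
        np.array(
            [
                [1.0, 0.5],
                [1.2, np.nan],
                [1.5, 0.7],
            ]
        )
    ]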

__init__(**kwargs)[source]
check_edatas(edatas)[source]

Check for consistency in data.

Currently this only checks the actual data values; e.g., timepoints are not compared.

Parameters:

edatas (list[ExpData]) – A data set. Will be checked against the data set provided to the constructor.

Return type:

bool

Returns:

Whether the data sets are consistent.
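A usage sketch; inner_problem and edatas are assumed to exist:

    # Guard against experimental data that differs from the data the
    # inner problem was constructed with.
    if not inner_problem.check_edatas(edatas):
        raise ValueError("edatas are inconsistent with the inner problem's data.")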

static from_petab_amici(petab_problem, amici_model, edatas)[source]

Create an InnerProblem from a PEtab problem and AMICI objects.

Return type:

RelativeInnerProblem

Parameters:
  • petab_problem (Problem) –

  • amici_model (Model) –

  • edatas (list[ExpData]) –