pypesto.hierarchical.relative
Relative data integration
Contains an implementation of the hierarchical inner subproblem for relative data. In this inner problem, the scaling factors, offsets, and noise standard deviations are optimized, conditional on the outer dynamical parameters. The inner problem can be solved analytically.
An example of parameter estimation with relative data can be found in pypesto/doc/examples/relative_data.ipynb.
The implementation in this package is based on:
Loos et al. 2018 (https://doi.org/10.1093/bioinformatics/bty514), who give an analytic solution for the inner problem for scaling factors and noise standard deviations, for Gaussian and Laplace noise, using forward sensitivity analysis (FSA).
Schmiester et al. 2020 (https://doi.org/10.1093/bioinformatics/btz581), who give an analytic solution for the inner problem for scaling factors, offsets, and noise standard deviations, for Gaussian and Laplace noise, using adjoint sensitivity analysis (ASA). ASA allows gradients to be calculated substantially more efficiently in high dimensions.
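For Gaussian noise, the closed-form inner solution of Loos et al. for a single scaling factor and noise standard deviation can be sketched in a few lines of NumPy. This is an illustrative reimplementation of the formulas, not pypesto's API; the function name is hypothetical.

```python
import numpy as np

def analytic_inner_solution(y: np.ndarray, h: np.ndarray):
    """Closed-form inner solution for Gaussian noise (illustrative).

    y: measurements, h: unscaled model outputs (same shape).
    Returns the scaling factor s and noise std sigma that minimize
    the Gaussian negative log-likelihood of y given s * h.
    """
    # Optimal scaling: least-squares projection of y onto h.
    s = np.sum(y * h) / np.sum(h * h)
    # Optimal sigma: RMS of the residuals at the optimal scaling.
    sigma = np.sqrt(np.mean((y - s * h) ** 2))
    return s, sigma

# Example: simulations off by a factor of 2, noise-free data.
y = np.array([2.0, 4.0, 6.0])
h = np.array([1.0, 2.0, 3.0])
s, sigma = analytic_inner_solution(y, h)
```

Because the inner optimum has this closed form, the outer optimizer never needs to search over these parameters.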
- class pypesto.hierarchical.relative.AnalyticalInnerSolver[source]
Bases: RelativeInnerSolver
Solve the inner subproblem analytically.
Currently supports sigmas for additive Gaussian noise.
- solve(problem, sim, sigma, scaled)[source]
Solve the subproblem analytically.
- Parameters:
  - problem (InnerProblem) – The inner problem to solve.
  - sim (list[ndarray]) – List of model output matrices, as provided in AMICI's ReturnData.y. Same order as simulations in the PEtab problem.
  - sigma (list[ndarray]) – List of sigma matrices from the model, as provided in AMICI's ReturnData.sigmay. Same order as simulations in the PEtab problem.
  - scaled (bool) – Whether to scale the results to the parameter scale specified in problem.
- Return type:
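The extension by Schmiester et al. 2020 to offsets can be illustrated the same way: for Gaussian noise, jointly optimizing a scaling factor and an offset is ordinary least squares in the two inner parameters. The sketch below is illustrative (the function name is hypothetical), not pypesto's internal code.

```python
import numpy as np

def solve_scaling_and_offset(y: np.ndarray, h: np.ndarray):
    """Jointly optimal scaling s and offset b for Gaussian noise,
    i.e. argmin over (s, b) of sum((y - (s * h + b)) ** 2).
    This is ordinary least squares with design matrix [h, 1]."""
    A = np.column_stack([h, np.ones_like(h)])
    (s, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, b

h = np.array([1.0, 2.0, 3.0])
y = 2.0 * h + 1.0  # data generated with scaling 2 and offset 1
s, b = solve_scaling_and_offset(y, h)
```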
- class pypesto.hierarchical.relative.NumericalInnerSolver[source]
Bases: RelativeInnerSolver
Solve the inner subproblem numerically.
Advantage: The structure of the subproblem does not matter at all. Disadvantage: Slower.
Special features: We cache the best parameters, which substantially speeds things up.
- minimize_kwargs
Passed to the pypesto.optimize.minimize call.
- n_cached
Number of optimized parameter vectors to save.
- problem_kwargs
Passed to the pypesto.Problem constructor.
- x_guesses
Cached optimized parameter vectors, supplied as guesses to the next solve call.
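The startpoint caching described by n_cached and x_guesses can be illustrated with a generic stand-in class (hypothetical, not pypesto internals): previously optimized vectors are kept and reused as warm-start guesses for the next solve call.

```python
import numpy as np

class StartpointCache:
    """Keep the most recent optimized parameter vectors and reuse
    them as startpoints (illustrative stand-in for x_guesses)."""

    def __init__(self, n_cached: int = 100):
        self.n_cached = n_cached
        self.x_guesses = []  # most recent first

    def add(self, x_opt):
        # Newest result first; drop the oldest beyond the cache size.
        self.x_guesses.insert(0, np.asarray(x_opt, dtype=float))
        self.x_guesses = self.x_guesses[: self.n_cached]

cache = StartpointCache(n_cached=2)
for x in ([1.0], [2.0], [3.0]):
    cache.add(x)
# The cache now holds only the two most recent vectors, newest first.
```

Warm-starting from previous inner optima pays off because successive outer iterates usually change the inner optimum only slightly.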
- sample_startpoints(problem, pars)[source]
Sample startpoints for the numerical optimization.
Samples the startpoints for the numerical optimization from a log-uniform distribution using the symmetric logarithmic scale.
- Parameters:
  - problem (InnerProblem) – The inner problem to solve.
  - pars (list[InnerParameter]) – The inner parameters to sample startpoints for.
- Return type:
- Returns:
The sampled startpoints appended to the cached startpoints.
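Log-uniform startpoint sampling can be sketched as follows. This is an illustrative stand-in that assumes strictly positive parameter bounds; the actual implementation uses a symmetric logarithmic scale, which also handles non-positive bounds.

```python
import numpy as np

def sample_startpoints_loguniform(lb, ub, n, seed=0):
    """Sample n startpoints log-uniformly within [lb, ub] per
    parameter (illustrative; assumes strictly positive bounds)."""
    rng = np.random.default_rng(seed)
    log_lb = np.log10(np.asarray(lb, dtype=float))
    log_ub = np.log10(np.asarray(ub, dtype=float))
    # Uniform in log10-space, then back-transform to linear scale.
    return 10 ** rng.uniform(log_lb, log_ub, size=(n, len(log_lb)))

pts = sample_startpoints_loguniform([1e-3, 1e-1], [1e3, 1e1], n=5)
```

Sampling in log-space spreads startpoints evenly across orders of magnitude, which suits scaling and noise parameters whose plausible ranges span several decades.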
- solve(problem, sim, sigma, scaled)[source]
Solve the subproblem numerically.
- Parameters:
  - problem (InnerProblem) – The inner problem to solve.
  - sim (list[ndarray]) – List of model output matrices, as provided in AMICI's ReturnData.y. Same order as simulations in the PEtab problem.
  - sigma (list[ndarray]) – List of sigma matrices from the model, as provided in AMICI's ReturnData.sigmay. Same order as simulations in the PEtab problem.
  - scaled (bool) – Whether to scale the results to the parameter scale specified in problem.
- Return type:
- class pypesto.hierarchical.relative.RelativeAmiciCalculator[source]
Bases: AmiciCalculator
A calculator that is passed as calculator to the pypesto.AmiciObjective.
- __call__(x_dct, sensi_orders, mode, amici_model, amici_solver, edatas, n_threads, x_ids, parameter_mapping, fim_for_hess, rdatas=None)[source]
Perform the actual AMICI call, with hierarchical optimization.
The return object also includes the simulation results that were generated to solve the inner problem, as well as the parameters that solve the inner problem.
- Parameters:
  - x_dct (dict) – Parameters for which to compute function value and derivatives.
  - sensi_orders (tuple[int]) – Tuple of requested sensitivity orders.
  - mode (Literal['mode_fun', 'mode_res']) – Call mode (function value or residual based).
  - edatas (list[ExpData]) – The experimental data.
  - n_threads (int) – Number of threads for the AMICI call.
  - parameter_mapping (ParameterMapping) – Mapping of optimization to simulation parameters.
  - fim_for_hess (bool) – Whether to use the FIM (if available) instead of the Hessian (if requested).
  - rdatas (list[ReturnData]) – AMICI simulation return data. In case the calculator is part of the pypesto.objective.amici.InnerCalculatorCollector, it will already simulate the model and pass the results here.
- Returns:
inner_result – A dict containing the calculation results: FVAL, GRAD, RDATAS and INNER_PARAMETERS.
- __init__(inner_problem, inner_solver=None)[source]
Initialize the calculator from the given problem.
- Parameters:
  - inner_problem (AmiciInnerProblem) – The inner problem of a hierarchical optimization problem.
  - inner_solver (InnerSolver | None) – A solver to solve inner_problem. Defaults to pypesto.hierarchical.solver.AnalyticalInnerSolver.
- calculate_directly(x_dct, sensi_orders, mode, amici_model, amici_solver, edatas, n_threads, x_ids, parameter_mapping, fim_for_hess, rdatas=None)[source]
Calculate directly via solver calculate methods.
This is possible if the forward method is used, and the Hessian is not requested. In this case, the objective function and gradient are computed directly using the solver methods.
- Parameters:
  - x_dct (dict)
  - mode (Literal['mode_fun', 'mode_res'])
  - edatas (list[ExpData])
  - n_threads (int)
  - parameter_mapping (ParameterMapping)
  - fim_for_hess (bool)
  - rdatas (list[ReturnData])
- call_amici_twice(x_dct, sensi_orders, mode, amici_model, amici_solver, edatas, n_threads, x_ids, parameter_mapping, fim_for_hess)[source]
Calculate by calling AMICI twice.
This is necessary if the adjoint method is used, or if the Hessian is requested. In these cases, AMICI is called first to obtain simulations for the calculation of the inner parameters, and then again to obtain the requested objective function and gradient through AMICI.
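The two-pass control flow can be sketched with toy stand-in functions (none of these names are the pypesto/AMICI API): a first simulation without sensitivities feeds the analytic inner solver, and a second simulation with sensitivities yields the objective and gradient at the inner optimum.

```python
import numpy as np

def simulate(x, sensitivities):
    # Toy "model": outputs proportional to x; gradients only on demand.
    y = np.array([x, 2 * x, 3 * x])
    dy = np.array([1.0, 2.0, 3.0]) if sensitivities else None
    return y, dy

def solve_inner(data, y):
    # Analytic scaling factor; pass 1 only needs outputs, no gradients.
    return np.sum(data * y) / np.sum(y * y)

def objective_and_gradient(data, y, dy, s):
    r = data - s * y               # residuals at the inner optimum
    fval = 0.5 * np.sum(r ** 2)
    grad = -s * np.sum(r * dy)     # outer gradient via the chain rule
    return fval, grad

data = np.array([2.0, 4.0, 6.0])
x = 1.0
y, _ = simulate(x, sensitivities=False)   # pass 1: outputs only
s = solve_inner(data, y)
y, dy = simulate(x, sensitivities=True)   # pass 2: with sensitivities
fval, grad = objective_and_gradient(data, y, dy, s)
```

In the real calculator the second pass is the expensive one (adjoint or Hessian computation), which is why it is run only once, with the inner parameters already plugged in.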
- class pypesto.hierarchical.relative.RelativeInnerProblem[source]
Bases: AmiciInnerProblem
Inner optimization problem for relative data with scaling/offset.
- xs
Mapping of (inner) parameter ID to InnerParameter.
- data
Measurement data. One matrix (num_timepoints x num_observables) per simulation condition. Missing observations as NaN.
- edatas
AMICI ExpData objects for each simulation condition.