Derivative-based optimisation and inference for cardiac electrophysiology models

Michael Clerx1, David Augustin2, Alister Dale-Evans2, Gary Mirams1
1University of Nottingham, 2University of Oxford


Abstract

The action potential (AP) of cardiac myocytes is commonly modelled using a nonlinear and stiff system of ordinary differential equations (ODEs). (Re)calibration of AP models, for example to create patient- or cell-specific models, requires running forward simulations and defining an error or likelihood function that quantifies the mismatch between model predictions and training data. This objective function is then passed to an optimiser (for model calibration) or sampler (for uncertainty quantification). Many optimisation and sampling algorithms rely on knowing the derivatives of the objective function with respect to the parameters (the sensitivities). Because evaluating the objective function involves numerical integration of the ODE system, the sensitivity of the output to the parameters is typically unknown without special consideration, and so derivative-free optimisation and sampling algorithms must be used. However, many derivative-based optimisation and sampling algorithms (e.g. Hamiltonian Monte Carlo) are heavily used in modern data science.
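The idea above can be sketched on a toy model rather than a cardiac AP model: an exponential decay dy/dt = -p·y, whose forward sensitivity s = ∂y/∂p obeys its own ODE obtained by differentiating the right-hand side. Integrating the augmented system (here with a hand-rolled RK4 integrator standing in for a solver such as CVODES) yields the sensitivities needed to assemble the gradient of a sum-of-squares error via the chain rule. All names, time points, and step sizes are illustrative, not part of the study.

```python
import math

def rhs(t, state, p):
    """Augmented ODE: model dy/dt = -p*y plus its forward sensitivity
    ds/dt = (df/dy)*s + df/dp = -p*s - y, where s = dy/dp."""
    y, s = state
    return (-p * y, -p * s - y)

def rk4(p, t_end, n_steps):
    """Integrate the augmented system with classic RK4.
    Initial state: y(0) = 1 (parameter-independent), so s(0) = 0.
    Returns a list of (t, y, s) samples."""
    h = t_end / n_steps
    t, state = 0.0, (1.0, 0.0)
    out = [(t, *state)]
    for _ in range(n_steps):
        k1 = rhs(t, state, p)
        k2 = rhs(t + h/2, tuple(x + h/2*k for x, k in zip(state, k1)), p)
        k3 = rhs(t + h/2, tuple(x + h/2*k for x, k in zip(state, k2)), p)
        k4 = rhs(t + h, tuple(x + h*k for x, k in zip(state, k3)), p)
        state = tuple(x + h/6*(a + 2*b + 2*c + d)
                      for x, a, b, c, d in zip(state, k1, k2, k3, k4))
        t += h
        out.append((t, *state))
    return out

def error_and_gradient(p, data):
    """Sum-of-squares error against data at t = 0, 1, ..., 5, plus its
    derivative via the chain rule: dE/dp = sum_i 2*(y_i - d_i)*s_i."""
    sol = rk4(p, t_end=5.0, n_steps=500)
    samples = sol[::100]  # every 100th step: t = 0, 1, ..., 5
    err = sum((y - d)**2 for (_, y, _), d in zip(samples, data))
    grad = sum(2*(y - d)*s for (_, y, s), d in zip(samples, data))
    return err, grad
```

For this linear toy problem the sensitivity has the analytic form s(t) = -t·e^(-pt), which makes it easy to check that the augmented integration is correct before trusting the chain-rule gradient.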

In this study, we extend our simulation tool, Myokit, with the capability to rapidly calculate derivatives of the ODE solution (using the CVODES library), and couple it to our inference tool, PINTS, to calculate sensitivities of error functions and likelihoods. We benchmark on models of the AP, as well as models of single ion channels, to quantify the overhead of evaluating sensitivities in addition to solving the forward problem. We then compare the performance of state-of-the-art derivative-free optimisation and sampling methods with derivative-dependent methods. In some cases we observe the expected trade-off between fewer iterations and an increased cost per iteration. In other cases, however, we find that the heuristic methods' black-box treatment of the objective function provides an additional benefit, namely a reduced sensitivity to noise in the experimental data, which makes them the methods of choice even when derivatives are known.
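To illustrate how a derivative-dependent method consumes such sensitivities, the following sketch calibrates the decay rate of a toy exponential model (y(t) = e^(-pt), used here in closed form so the example stays self-contained) by plain gradient descent on a sum-of-squares error, with the gradient assembled from the analytic parameter sensitivity ∂y/∂p = -t·e^(-pt). This is a minimal stand-in for the optimisers compared in the study; the model, data, learning rate, and step count are all hypothetical.

```python
import math

def error_and_grad(p, times, data):
    """Sum-of-squares error for y(t) = exp(-p*t) and its derivative,
    assembled via the chain rule: dE/dp = sum_i 2*(y_i - d_i)*(-t_i*y_i)."""
    e = g = 0.0
    for t, d in zip(times, data):
        y = math.exp(-p * t)
        e += (y - d) ** 2
        g += 2 * (y - d) * (-t * y)
    return e, g

def calibrate(times, data, p0=1.0, rate=0.5, steps=200):
    """Plain gradient descent on E(p): a derivative-based optimiser
    in miniature. Each iteration needs one gradient evaluation."""
    p = p0
    for _ in range(steps):
        _, g = error_and_grad(p, times, data)
        p -= rate * g
    return p
```

With noise-free data generated at a known rate, the descent recovers that rate; with noisy data, each gradient evaluation is perturbed, which is one way to see why methods that treat the objective as a black box can be more robust in practice.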