End-to-end learning in hybrid numerical models involves solving an optimization problem that embeds the model's solver in the training loop. In many fields, these solvers are written in low-abstraction programming languages that lack automatic differentiation support. This work presents a practical approach to solving the optimization problem by efficiently approximating the gradient of the end-to-end objective function.
- Said Ouala
- Bertrand Chapron
- Ronan Fablet
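To make the setting concrete, here is a minimal sketch (not the paper's method) of one way to train through a solver that offers no automatic differentiation: treat it as a black box and approximate the gradient of the end-to-end objective by central finite differences. The toy `blackbox_solver`, the loss, and all parameter values below are illustrative assumptions, standing in for an external low-level solver.

```python
def blackbox_solver(theta, x0, n_steps=10, dt=0.1):
    # Stand-in for a non-differentiable external solver:
    # explicit Euler integration of dx/dt = -theta * x.
    x = x0
    for _ in range(n_steps):
        x = x + dt * (-theta * x)
    return x

def end_to_end_loss(theta, x0, target):
    # End-to-end objective: squared error on the solver output.
    return (blackbox_solver(theta, x0) - target) ** 2

def fd_gradient(f, theta, eps=1e-6):
    # Central finite-difference approximation of df/dtheta,
    # requiring only black-box evaluations of the solver.
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)

# Calibrate theta so the solver output matches a target observation.
x0, target = 1.0, 0.5
theta = 0.1
for _ in range(200):
    g = fd_gradient(lambda t: end_to_end_loss(t, x0, target), theta)
    theta -= 0.5 * g
```

Finite differences scale poorly with the number of parameters (one solver call per parameter per step), which is precisely why more efficient gradient approximations are of interest for realistic hybrid models.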