Hi everyone,
As a start, I just want to say that I’m not an expert in either mathematics or coding. In fact, I’m a computational biologist who develops different kinds of tools to understand the cell’s behaviour.
Currently, I’m working on a framework that allows the user to simulate a fermentation process using the ODE solver solve_ivp. During the simulation, the user basically tracks different kinds of metabolites that can be produced by the cell. The right-hand-side function calculates the metabolite change over time, which in general has the form metabolite production minus metabolite consumption by the cell. Mind you, the production/consumption rates are the result of solving a linear programming (LP) problem (we call this a genome-scale model). So at each iteration, the LP is solved and its result is used to calculate dydt.
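To make this concrete, here is a toy sketch of what my right-hand side looks like. The LP, the uptake bounds, and all the numbers are simplified stand-ins (the real genome-scale model is much bigger), but the structure — solve an LP at each call, then return production minus consumption — is the same:

```python
import numpy as np
from scipy.optimize import linprog

def rhs(t, y):
    """Toy right-hand side: solve a small LP for the consumption
    fluxes at each call, then return production - consumption.
    All numbers here are made up for illustration."""
    # Uptake capacity saturates with concentration (Michaelis-Menten-like cap)
    uptake_max = y / (y + 0.5)
    # linprog minimizes, so negate the objective to maximize total flux
    c = -np.ones_like(y)
    res = linprog(c, bounds=[(0.0, ub) for ub in uptake_max], method="highs")
    v = res.x                            # "consumption" fluxes from the LP
    production = 0.1 * np.ones_like(y)   # constant production/feed term
    return production - v                # dy/dt = production - consumption
```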
In many fermentation processes the conditions fluctuate strongly (especially in a fed process, where you feed the cell culture). Under these conditions, you can imagine that specific metabolites get consumed down to zero concentration and are then added again later on. This happens very fast (on the order of seconds), and from my experience it seems to “confuse” the solver’s internal error estimation and causes my program to get “stuck” during the integration.
So far I have tested different methods from solve_ivp, including RK45, BDF, Radau, LSODA, and even explicit Euler. In my experience RK45 works best, but as I said, it’s very unstable, depending on the initial conditions and how many metabolites I pass as an array to the solver.
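For reference, this is roughly how I compare the methods (with a smooth stand-in right-hand side here instead of the LP-based one, and illustrative tolerances):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Smooth stand-in for the real LP-based right-hand side
    return 0.1 - y / (y + 0.5)

y0 = np.array([1.0, 2.0, 0.5])
results = {}
for method in ["RK45", "BDF", "Radau", "LSODA"]:
    sol = solve_ivp(rhs, (0.0, 10.0), y0, method=method,
                    rtol=1e-6, atol=1e-8)
    # Record whether the solver finished and how many steps it took
    results[method] = (sol.success, sol.t.size)
```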
I have observed two different “stuck” behaviours of the solvers:
- The solver keeps calling the right-hand-side function for the same time step
- The solver stops completely and the terminal freezes
I’m really lost at the moment and don’t have any idea what’s going on.
If anyone has an idea of what might be happening, or what I can do to improve the solver’s performance, that would be great.
My colleague suggested this might be because the tracked metabolites’ concentrations in the state array have very different magnitudes, which is not good for the solver (solvers prefer state entries of similar magnitude). But I really doubt that’s an issue at all.
I’d really appreciate any comment/feedback/suggestion/idea from the community.
Thanks a lot!