User-Triggerable Backtracking in SLSQP

I’ve created a PR in SciPy Optimize to allow users to trigger backtracking in SLSQP. This is standard functionality in advanced optimizers such as SNOPT (see Section 6.2) and IPOPT (Section 2.5, first paragraph after the algorithm). This type of backtracking is required for use with large-scale optimization frameworks such as OpenMDAO.

Currently, I’ve mirrored the OpenMDAO implementation by introducing an AnalysisError exception. I’ve done this rather than taking the more general approach of allowing a user to return nan (like IPOPT) or responding to ValueErrors in general, because I’d like to ensure the change is fully backwards compatible and won’t affect any programs currently using SciPy’s SLSQP. Since the AnalysisError class is newly added, it’s impossible for legacy code to hit the new functionality. However, I’m very open to suggestions on alternative implementations.
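
For concreteness, here is a minimal sketch of the intended usage pattern, assuming the PR exposes an AnalysisError under scipy.optimize (the import path, the exception name, and the toy domain-limited objective are all illustrative assumptions, not released SciPy API):

```python
import numpy as np
from scipy.optimize import minimize

# Assumption: the PR adds AnalysisError to scipy.optimize; this import
# does not exist in released SciPy versions.
from scipy.optimize import AnalysisError

def objective(x):
    # Toy stand-in for a black-box tool with a strict domain: it simply
    # cannot be evaluated for x[0] <= 0.
    if x[0] <= 0:
        # Per the proposal, raising AnalysisError tells SLSQP to reject
        # this point and backtrack along the search direction instead of
        # aborting the whole optimization.
        raise AnalysisError("point is outside the tool's valid domain")
    return (np.log(x[0]) - 1.0) ** 2

res = minimize(objective, x0=[0.5], method="SLSQP")
print(res.x, res.success)
```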

Some personal background: I’ve spent a number of years in the aerospace industry developing optimization tools. Many of these involve optimizing over iterative processes (buckling analysis, implicit ODE integration for trajectory simulation, etc.) and calling black-box tools that will only evaluate within a strict domain. In the first case, if and when an iterative process diverges, accepting the resulting step can derail the whole optimization; this feature would allow a user to trigger backtracking and recover. In the second case, a strict domain causes problems for optimizers that don’t guarantee feasibility at every iteration, since a single step outside the allowable region of the external tool can trigger an optimization failure. In the past, when I’ve needed to work around these limitations, I’ve used SNOPT; however, it is a very expensive tool, and I think bringing this functionality to a widely available optimizer like SciPy’s SLSQP would bring a great deal of value to the community.

Thanks,
Andrew

Thanks for proposing this @andrewellis55. The principle makes sense to me, and it’s encouraging to see that your PR required only a few lines of code to implement this for SLSQP. A few questions that this raised for me:

  • Are there some benchmark functions that could be added to our set of benchmarks (see here), or a few real-world examples with code/data available? (One possible shape for such a function is sketched after this list.) That would make it much easier to measure optimizer performance (success/failure, number of function evaluations) when adding this to more optimizers, as well as to have confidence that it works as advertised for enough cases to begin with.
  • Would this require a change to how the callback keyword in minimize works or anywhere else? I’d think that there are usage patterns where users rely on intermediate results being valid when the end result has success=True.
  • Is it in principle possible to add this to all minimize methods, or only a subset of them?
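
Regarding the first question, a benchmark along these lines might work; the following is only a sketch (the FailingRosenbrock class, the forbidden-disk geometry, and the AnalysisError import are illustrative assumptions, not anything in SciPy’s benchmark suite):

```python
import numpy as np
from scipy.optimize import minimize, rosen

# Assumption: AnalysisError is the exception added by the PR.
from scipy.optimize import AnalysisError

class FailingRosenbrock:
    """Rosenbrock function that cannot be evaluated inside a disk,
    mimicking a black-box tool with a strict domain. Counts evaluations
    so success/failure and nfev can be compared across optimizers."""

    def __init__(self, center=(0.0, 0.0), radius=0.5):
        self.center = np.asarray(center, dtype=float)
        self.radius = radius
        self.nfev = 0
        self.nfail = 0

    def __call__(self, x):
        self.nfev += 1
        if np.linalg.norm(np.asarray(x) - self.center) < self.radius:
            self.nfail += 1
            raise AnalysisError("inside the forbidden region")
        return rosen(x)

f = FailingRosenbrock()
# Start on the far side of the forbidden disk from the optimum at (1, 1),
# so the search path is likely to probe the failing region.
res = minimize(f, x0=[-1.2, 1.0], method="SLSQP")
print(res.success, f.nfev, f.nfail)
```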