Dtype dependent default rtol for xp_assert_close

Hi team,

We recently merged TST: add dtype dependent default rtol to xp_assert_close by j-bowhay · Pull Request #20625 · scipy/scipy · GitHub to help with adding array API support. Specifically, this PR adds dtype-dependent tolerances. Quoting from the PR:

Different array libraries have different default floating dtypes and as such the default rtol=1e-7 of xp_assert_close can be too stringent when using the default type. This leaves two options:

  1. Specify the dtype of the input data, e.g. xp.asarray(..., dtype=xp.float64)
  2. Specify a lower value of rtol

The first is potentially undesirable, as it is reasonable to expect the user to be using the default dtype most of the time. The second is also potentially undesirable, as it makes the test unnecessarily less stringent for some libraries.
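To make the idea concrete, here is a minimal sketch of what a dtype-dependent default rtol could look like. The helper name `default_rtol` and the exact formula are illustrative assumptions, not SciPy's actual implementation; the PR discussion only says the chosen value sits somewhere between `sqrt(eps)` and the old `1e-7` default.

```python
import numpy as np

def default_rtol(dtype):
    # Hypothetical helper (not SciPy's actual code): keep the old strict
    # default for float64, but loosen the tolerance for lower-precision
    # dtypes where 1e-7 is unattainable. The factor of 2 is arbitrary;
    # any multiple of sqrt(eps) that stays above 1e-7 for float32 would do.
    eps = float(np.finfo(dtype).eps)
    return max(1e-7, np.sqrt(eps) / 2)

# float64: eps ~ 2.2e-16, so sqrt(eps)/2 ~ 7.4e-9 and the old 1e-7 wins.
# float32: eps ~ 1.2e-7, so the default relaxes to roughly 1.7e-4.
print(default_rtol(np.float64))
print(default_rtol(np.float32))
```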

As for the choice of value:

Good question. It is a nice number that puts us about halfway between sqrt(eps) and the old 1e-7 default for floats, and it was on my mind because it’s used in the definition of tolerances in optimize._zeros.py. And yeah, it happened to be enough so that no old tests started failing. I can add a comment that any multiple that keeps us under the old default would be OK. sqrt(eps) can be theoretically motivated in some contexts.

If you would like to motivate a different choice of value or approach please feel free to open a PR or discuss!


The dtype-dependent rtol seems good to me. Tangentially related, should we have a (small) default atol? In various places, we have to add atol arguments due to expected values of 0.0.

I’ve seen tiny, the smallest normal float of the dtype, used. I wouldn’t make it much bigger, though. Needing atol isn’t that common, since it’s only really relevant when the reference value is zero. In those few cases, I think it’s OK to require that the developer specify it rather than have the test be weaker than they intended. (That said, most of my rationale comes from the default atols and rtols of user-facing algorithms, and that might not apply here. I’m aware of the fundamental differences between atols and rtols, but here we’re discussing a matter of developer convenience. I suppose if everyone were aware of the change, it would not be much worse than suddenly relaxing rtol for float32.)
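For reference, "tiny" here refers to the smallest positive normal float of a dtype, available via `np.finfo`. This snippet just prints those values; it makes no claim about what `xp_assert_close` actually defaults to (its current default atol is zero, per the discussion below).

```python
import numpy as np

# The smallest positive *normal* float for each dtype — the candidate
# "tiny" default atol mentioned above. Subnormals go smaller still,
# but finfo(...).tiny is the conventional lower bound for normal values.
for dtype in (np.float32, np.float64):
    print(dtype.__name__, np.finfo(dtype).tiny)
```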

I am not in favor of a nonzero atol. That just moves the problem from “zero reference values show a test error” to “any very small values (e.g., for special functions or the tails of stats distributions) are now effectively untested”. The atol=0 issue is more explicitly visible and easy to resolve when it does happen, so it is preferable.
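The concern above can be demonstrated with `np.isclose` (used here as a stand-in for `xp_assert_close`, which this snippet does not depend on): once a nonzero atol is in play, a result that is wrong by a large relative factor still passes whenever the reference value is far below atol.

```python
import numpy as np

ref = 1e-30      # e.g. a far-tail probability or special-function value
wrong = 3e-30    # off by a factor of 3 — clearly a wrong result

# With atol=0, rtol correctly rejects the wrong value...
assert not np.isclose(wrong, ref, rtol=1e-7, atol=0)

# ...but even a seemingly harmless default atol silently accepts it,
# because |wrong - ref| = 2e-30 is far below atol = 1e-12.
assert np.isclose(wrong, ref, rtol=1e-7, atol=1e-12)
```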


FWIW I’d recommend to use explicit atol/rtol regardless of the default.