Hello,
I have been looking at the SciPy implementation and test suite of some of the more obscure special functions which now live in the xsf library. One set of functions of interest to me is the Mathieu functions: mathieu_cem, mathieu_sem, mathieu_modcem1, mathieu_modsem1, etc. The algorithms used for these functions come from the book by Zhang and Jin. Their implementation is known to be buggy; several SciPy bugs have been filed against it (cf. Erroneous outputs for scipy.special.mathieu_cem and scipy.special.mathieu_sem · Issue #14543 · scipy/scipy · GitHub). That’s fine; I know there is work underway to fix the implementations, including by some students of mine. This has been discussed with members of the SciPy team who are porting the special functions to xsf.
Regarding testing, I see SciPy uses golden values living in parquet files. For example, the test tables for mathieu_cem live in the directory xsf/xsref/tables/scipy_special_tests/cem. Testing is done using pytest. According to the README in the xsref directory, the golden values are generated using mpmath, the well-known multiprecision arithmetic package for Python.
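To illustrate the general idea of that workflow (with a Bessel function standing in, for reasons that will be clear in a moment; the precision and test point here are arbitrary, not xsref’s actual setup), a golden value might be produced like so:

import mpmath
from scipy.special import jv

mpmath.mp.dps = 50                        # work to 50 decimal digits
golden = float(mpmath.besselj(2, mpmath.mpf("1.5")))
print(golden, jv(2, 1.5))                 # high-precision reference vs. SciPy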
That all sounds good, but mpmath does not include an implementation of the Mathieu functions. My question is, how were the parquet files generated to test the Mathieu functions?
A related question: where does the source code that creates all the parquet files used to test SciPy live?
Best regards,
Stuart Brorson
Northeastern University
Boston, Mass, USA
If I’m understanding the code correctly, the tables for testing xsf’s cem were generated by SciPy’s mathieu_cem. (This is my first time looking at this code; you should take my advice with a grain of salt and check everything I say.)
It seems like the tables are generated by src/xsref/tables.py, which uses src/xsref/_reference/_functions.py to provide reference implementations for xsf. The cem reference function is implemented like this:
@reference_implementation(scipy=ufuncs.mathieu_cem)
def cem(m: Real, q: Real, x: Real) -> Tuple[Real, Real]:
    """Even Mathieu function and its derivative."""
    raise NotImplementedError
In src/xsref/_reference/_framework.py, we can see how a NotImplementedError would be handled:
except Exception as e:
    if self.scipy_func is not None:
        result = self.scipy_func(*args)
        message = (
            f"Reference implementation {func.__name__} falling"
            f" back to SciPy\nwith args: {args},\ndue to exception: "
            f"{type(e).__name__}"
        )
        message += f", {e}" if str(e) else ""
        warnings.warn(message, XSRefFallbackWarning)
    else:
        raise e
In other words, if there’s an exception, use the SciPy function as a reference.
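For concreteness, here is a minimal, self-contained sketch of that fallback pattern (this is not xsref’s actual code; FallbackWarning and the argument handling are simplified stand-ins for illustration):

import warnings
from scipy.special import mathieu_cem

class FallbackWarning(UserWarning):
    """Stand-in for xsref's XSRefFallbackWarning."""

def reference_implementation(scipy=None):
    def decorator(func):
        def wrapper(*args):
            try:
                return func(*args)        # try the pure-Python reference
            except Exception as e:
                if scipy is None:
                    raise
                warnings.warn(
                    f"{func.__name__} falling back to SciPy: {e!r}",
                    FallbackWarning,
                )
                return scipy(*args)       # fall back to the SciPy ufunc
        return wrapper
    return decorator

@reference_implementation(scipy=mathieu_cem)
def cem(m, q, x):
    raise NotImplementedError

print(cem(1, 0.5, 30.0))  # warns, then returns SciPy's (value, derivative)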
Based on this, it seems that these are characterization tests rather than tests that the output is necessarily correct.
Thanks for the explanation. It makes sense that the golden values used in testing were generated by the implementation itself. That is, the values returned by the Mathieu implementation are sometimes erroneous, but the tests don’t check actual accuracy, only reproducibility.
We’re working on some new implementations for the Mathieus and plan to provide a test suite using independent, verifiable test methods along with any implementation we manage to produce. Is it required to produce parquet tables for SciPy testing? Or is just a suite of stand-alone tests acceptable?
Best regards,
Stuart
I’m not sure if it’s required - I don’t know enough about xsf’s development process to say.
I will say that if you can write a Python function that calculates cem(), you are already 90% of the way to producing updated parquet tables. It doesn’t need to be a closed-form solution. For example, our implementation of bdtri() uses a root-solver to find the point where bdtr(k, n, p) = y. The difficult part is writing the implementation and checking that it is correct. Updating the tables can be done by an automated program. I’m sure the xsf maintainers would be willing to run the table-updating script if you can provide an implementation.
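As a rough sketch of that root-solving idea (bdtri_sketch is a made-up name for illustration; scipy.special.bdtri is of course the production implementation):

from scipy.optimize import brentq
from scipy.special import bdtr, bdtri

def bdtri_sketch(k, n, y):
    # bdtr(k, n, p) falls monotonically from 1 at p=0 to 0 at p=1,
    # so a bracketing root-solver on [0, 1] finds the p satisfying
    # bdtr(k, n, p) = y.
    return brentq(lambda p: bdtr(k, n, p) - y, 0.0, 1.0)

# Round-trip check against SciPy's own inverse:
print(bdtri_sketch(3, 10, 0.5), bdtri(3, 10, 0.5))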
PS: You may want to file an issue on the xsf repo if you don’t get a reply here.
Thanks. We can already calculate the Mathieu characteristic numbers (i.e., eigenvalues) a(q) and b(q), the expansion coefficients A(q) and B(q), the angular functions ce and se, and the radial functions (“modified Mathieu functions of the first kind”) Ce and Se.
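As one example of the kind of identity-based check we have in mind (sketched here against SciPy’s current implementation purely for illustration; the order m, parameter q, point x, and step h are arbitrary), ce_m must satisfy Mathieu’s equation y'' + (a_m(q) - 2q cos 2x) y = 0:

import numpy as np
from scipy.special import mathieu_a, mathieu_cem

m, q = 2, 1.5
x = 0.7          # evaluation point (radians)
h = 1e-4         # finite-difference step (radians)

def ce(xr):
    # SciPy's mathieu_cem expects x in degrees; keep only the value.
    return mathieu_cem(m, q, np.degrees(xr))[0]

# Central second difference approximates ce''(x).
ypp = (ce(x + h) - 2.0 * ce(x) + ce(x - h)) / h**2
residual = ypp + (mathieu_a(m, q) - 2.0 * q * np.cos(2.0 * x)) * ce(x)
print(f"ODE residual: {residual:.3e}")   # ~0 up to finite-difference error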
However, I would want any submission we provide to SciPy to include tests which validate against external standards, such as golden values computed using e.g. Mathematica or GSL, as well as known identities (we have only a few of those). I will make sure that whatever tests we develop enable somebody to dump golden values to a file which can then be fed to our implementations as part of automated testing.
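For the golden-value dump, something along these lines would do (a minimal sketch; the column names and parameter grid are made up, not xsref’s actual schema, and SciPy’s current mathieu_cem merely stands in for a trusted reference):

import pandas as pd
from scipy.special import mathieu_cem   # stand-in "trusted" reference

def dump_golden_values(ref_func, grid, path):
    # Evaluate the reference on a parameter grid and store the results.
    rows = [(m, q, x, *ref_func(m, q, x)) for (m, q, x) in grid]
    df = pd.DataFrame(rows, columns=["m", "q", "x", "y", "yp"])
    df.to_parquet(path)                 # needs pyarrow or fastparquet

grid = [(m, q, x) for m in (0, 1, 2) for q in (0.5, 1.0) for x in (0.0, 30.0)]
dump_golden_values(mathieu_cem, grid, "cem_golden.parquet")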
Thanks for your helpful answers. I can file an issue at the xsf repo per your suggestion as we get closer to a complete solution.
Best regards,
Stuart