Several special functions started upcasting low-precision integer inputs to float64 in v1.14.0

Running the following snippet on 1.14.0 vs. 1.13.1 yields outputs of different types:

import numpy as np
import scipy
import scipy.special as ss

print(scipy.__version__)

x_test = np.array([1, 2, 3], dtype="int8")
print(ss.psi(x_test).dtype)  # float64 in 1.14.0; float32 in 1.13.1

I noticed a couple of other functions that used to output float32 (for integer inputs narrower than int32) and now output float64. Was this a conscious decision? It’s giving us some minor headaches in the PyTensor project, where we have to know in advance what dtype SciPy will output.

@steppi and @izaid have the latest and greatest on this issue, but historically there were no single-precision versions of any special functions, as far as I know. The downcasting was probably due either to internal upcasting followed by downcasting by f2py, where the function wrapped Fortran code, or to the C/Cython infrastructure.

Since then, all functions have been translated from Fortran to C, the special functions and their ufuncs have been rewritten in C++, and the ufunc mechanism has been modified.

Hence, I would cautiously say that you can expect float64 for all outputs. But I’ll leave it to the folks pinged above for a definitive answer.

I think @ilayn explained this, but I don’t think there was even a conscious decision to use float32 for integers before. The situation now, which is hopefully more standard, is that functions in special are ufuncs. Many of them now have explicit float32 versions, which didn’t exist before. If you want float32 output, give float32 input.
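To illustrate the point above (a minimal sketch, using `psi` as the example; the same pattern applies to other affected functions):

```python
import numpy as np
import scipy.special as ss

# With explicit float32 input, psi (the digamma function) dispatches to
# its single-precision loop and the output stays float32.
x32 = np.array([1, 2, 3], dtype=np.float32)
y32 = ss.psi(x32)
print(y32.dtype)  # float32
```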

In this case, where you are giving integers as input, I’d say your output is expected. I would think most people would prefer float64 output for integer input rather than float32. The decision is made by the ufunc machinery, which uses the first matching signature in its list; for integer inputs, we have arranged for that to be the float64 one.
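The registered loops can be inspected directly (a sketch; which loop an integer input resolves to follows the resolution described above, since integers have no exact match in the list):

```python
import numpy as np
import scipy.special as ss

# List the type signatures registered for the psi ufunc, e.g. 'f->f'
# for the float32 loop and 'd->d' for the float64 loop.  Integer
# inputs have no exact match, so the resolver falls through to a
# floating-point loop.
print(ss.psi.types)
```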

Yeah, I believe it may not have been conscious, but it was pretty consistent for at least 7 years :slight_smile: We inherited Theano, and the last time they had to address this was here: Fix chi2sf, it always return float64! · Theano/Theano@c4b04ae · GitHub, where Chi2.sf stood out for being the only used method that upcast small integers to float64.

Perhaps it could have counted as a compatibility-breaking change in the release notes.

We will probably opt to explicitly downcast the outputs, because in our context keeping float64 buffers can be a luxury / waste. Just as importantly, we don’t want a lower pin on such a recent version of SciPy, so we need to make sure users on both older and newer versions get the same results.
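One way to get version-independent results is to cast the input up front rather than downcast the output (a sketch of the idea, with `psi` standing in for any affected function and `psi_float32` a hypothetical wrapper name):

```python
import numpy as np
import scipy.special as ss

def psi_float32(x):
    # Casting integers to float32 before the call makes every SciPy
    # version take the single-precision path, so the output dtype no
    # longer depends on how the ufunc resolves integer inputs.
    return ss.psi(np.asarray(x, dtype=np.float32))

out = psi_float32(np.array([1, 2, 3], dtype=np.int8))
print(out.dtype)  # float32
```

Casting on input also avoids doing the computation in double precision only to throw the extra bits away afterwards.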

Anyway, thanks for clarifying why this is happening.