I am still a bit on the fence about a deprecation that is currently on NumPy main and scheduled for release with NumPy 1.24. The change is motivated by NEP 50: in the future, I want:
np.array(1, dtype=np.int8) + 5000
to raise an error, because this should (approximately) be the same as:
np.array(1, dtype=np.int8) + np.int8(5000)
while currently the operation returns an int16 result. So the first example, np.int8(1) + 5000, should error; for the explicit np.int8(5000) in the second one we do have a choice. Raising an error there is convenient for the implementation, but not strictly necessary.
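To make the ranges concrete, here is a small sketch (the exact promotion behavior depends on your NumPy version, so this only shows the bounds involved, via np.iinfo):

```python
import numpy as np

# int8 can hold -128..127, so the Python int 5000 is out of range for it:
info = np.iinfo(np.int8)
print(info.min, info.max)  # -128 127

# 5000 does fit into int16, which is why value-based promotion
# historically picked an int16 result for np.int8(1) + 5000:
print(np.iinfo(np.int16).min <= 5000 <= np.iinfo(np.int16).max)  # True
```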
So, due to the above, we decided to deprecate all out-of-bounds integer assignments and conversions for Python integers. These (with some rare exceptions) previously worked. The main examples of things that will fail are:
```python
np.array(50000, dtype="int8")  # will fail
np.int16(5000000)              # will fail

# And assignment:
arr = np.zeros(3, dtype="int8")
arr[0] = 50000                 # will fail

# As well as the unsigned ones:
np.array([-1], dtype="uint8")  # will fail
np.uint8(-1)                   # will fail
```
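For code that relies on such conversions, one forward-compatible pattern (a sketch, not an official recommendation; assign_checked is a hypothetical helper) is to check the target dtype's bounds explicitly before assigning:

```python
import numpy as np

def assign_checked(arr, index, value):
    """Assign a Python int into arr, raising a clear error if out of range."""
    info = np.iinfo(arr.dtype)
    if not (info.min <= value <= info.max):
        raise OverflowError(
            f"{value} does not fit into {arr.dtype} "
            f"(range {info.min}..{info.max})"
        )
    arr[index] = value

arr = np.zeros(3, dtype="int8")
assign_checked(arr, 0, 100)       # fine, 100 fits in int8
# assign_checked(arr, 0, 50000)   # would raise OverflowError
```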
Meanwhile, NumPy does allow explicit unsafe casts such as np.array(5000).astype(np.int8), and would continue to do so.
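For completeness, the unsafe cast wraps modulo 2**8 for an 8-bit target, so its result is well defined (a quick illustration):

```python
import numpy as np

# .astype() performs an unsafe (wrapping) cast and is not being deprecated:
x = np.array(5000).astype(np.int8)
print(x, x.dtype)  # 5000 % 256 == 136, reinterpreted as signed int8: -120
```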
What makes me a bit unsure is that I think this change doesn't affect libraries much, yet libraries are normally the ones most likely to give feedback on failures.
So I am bringing this up again as a poll, because formulating an opinion is hard, but gut feelings aggregated in a poll may be good information:
- Yes, fully agree
- Yes, but I am unsure
- No, but I don’t expect issues
- No, strongly disagree
One thing you sometimes see is the use of -1 together with unsigned integers (to get the maximum integer). Assuming we do the deprecation, we could make the scalar creation functions, such as np.uint8(-1), an exception:
- No exception: It is surprising and not helpful enough
- Exception for small negative integers
- Allow even np.uint8(300) == np.uint8(44) (current behavior)
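For context, here is the -1 idiom next to its explicit replacements (a sketch; the astype spelling stays valid regardless of the poll outcome):

```python
import numpy as np

# Explicit maximum, no conversion tricks needed:
m = np.iinfo(np.uint8).max
print(m)  # 255

# The unsafe-cast spelling also keeps working:
y = np.array(-1).astype(np.uint8)
print(y)  # 255

# The wrapping behavior from the last poll option:
# np.uint8(300) == np.uint8(44) because 300 % 256 == 44
```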