Poll: Future NumPy behavior when mixing arrays, NumPy scalars, and Python scalars

In this poll we ask for your opinion on how NumPy should work when arrays, NumPy scalars, and Python scalars are mixed.
The poll is divided into three parts:

  1. Asking you increasingly complex questions about what answer you expect from certain NumPy operations.
  2. Asking two trickier questions about possible design choices (somewhat independently of your preferred choices).
  3. Asking your opinion on the feasibility of these changes with respect to breaking backward compatibility.

Please answer these questions without checking the actual NumPy behavior (some of it will be explained). We are interested in what you think the best possible behavior is.

Note that in the following, type_arr denotes a 1-D NumPy array, e.g.

uint8_arr = np.arange(100, dtype=np.uint8)  # always 1-D for simplicity
float32_arr = np.linspace(0, 1, 100, dtype=np.float32)

And so on. All NumPy scalars will be written as uint8(1), float32(3.), etc. (I further use int64 as the default integer, which is not the case e.g. on Windows.)

Many of the questions target integers, because the issues are more pronounced there; however, similar considerations always exist for float32 vs. float64.

(Votes in the following questions are public, you can edit votes by clicking on “show results”.)

Part 1: Please mark your preferred result:

The following are some basic operations, please mark the result dtype that you would expect.

uint8_arr + int32(3) == ?
  • uint8_arr
  • int32_arr
  • Something else.

0 voters


uint8_arr + 3 == ?
  • uint8_arr
  • int64_arr (default integer)
  • Something else.

0 voters


uint8(3) + 3 == ?
  • uint8
  • int64 (default integer)
  • Python integer
  • Something else.

0 voters


What happens if we add an np.asarray() call?

uint8_arr + np.asarray(3) == ?
  • uint8
  • int64 (default integer)
  • Something else.

0 voters


float32(3) + 3.1415 == ?
  • float32
  • float64
  • Python float
  • Something else.

0 voters


uint8_arr + 3000 == ?
  • uint8_arr with overflow due to uint8(3000) overflowing
  • An exception because 3000 does not fit into a uint8
  • int64_arr (default integer)
  • Something else.

0 voters


Python operators behave mostly the same as the corresponding NumPy functions. For example + and np.add do largely the same thing. What do you expect in the following example?

np.add(uint8_arr, 3) == ?
  • Identical results to uint8_arr + 3 (whichever that is)
  • The same result as uint8_arr + np.asarray(3)
  • All are identical: np.add(uint8_arr, 3) == uint8_arr + 3 == uint8_arr + np.asarray(3)
  • Something else.

0 voters


Finally, one tricky floating-point comparison question (note that floating-point equality is always prone to difficulties and is in many cases discouraged):

float32(0.31) == 0.31
float32_arr([0.31]) == 0.31
  • Both should return True. (For the array that means [True])
  • Both should return False (because float64(float32(0.31)) != float64(0.31))
  • The scalar case should return False, the array case [True].
  • Something else.

0 voters
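To see why this scalar comparison is tricky at all, here is a small illustration I added (purely about binary rounding, not about NumPy's comparison rules): 0.31 has no exact binary representation, and float32 and float64 round it to different nearby values.

import numpy as np

# Illustration of the rounding only (not of NumPy's comparison behaviour):
x64 = 0.31                    # Python float, i.e. IEEE double precision
x32 = np.float32(0.31)        # rounded again, to single precision
print(f"{x64:.20f}")          # ~0.31000000000000000444
print(f"{float(x32):.20f}")   # ~0.31000000238418579102
print(float(x32) == x64)      # False: the two roundings differ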

Part 2: The tricky questions about possible design choices

Operators vs. Functions

Ignoring your answers to the previous questions, and given the following behaviour:

uint32(3) + 4 == uint32
uint32(3) + np.asarray(4) == int64  # because `np.asarray(4)` is `int64`

What do you think would be acceptable for the following operation?

np.add(uint32(3), 4) == ?
  • np.add must behave the same as +
  • np.add must behave the same as uint32(3) + np.asarray(4)
  • Either option seems acceptable

0 voters

Scalar behaviour

NumPy currently behaves the following way:

# For integers:
uint8_arr + 3 == uint8_arr
uint8(3) + 3 == int64  # Both are scalars, so the Python integer "wins"
# Same for floats:
float32_arr + 3.1415 == float32_arr
float32(3) + 3.1415 == float64

If you answered that uint8 and float32 are the correct results in the scalar case, you may want to modify this behaviour. However, doing so may silently break code or change its results in cases such as this:

def analyze_value(scalar):
    return scalar * 3 + 6  # some operation written with Python scalars

data = np.load("array_containing_uint16")
value = data[0]
result = analyze_value(value)

In this case, the result would previously have been an int64 (or int32 on some systems), or a float64 if the data was a float32 array. After the change, the result would instead be one of the following (a short sketch follows the list):

  • A uint16 (or an error); this may lead to incorrect results.
  • A float32, which will lead to reduced precision (in extreme cases, incorrect results are possible).
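As a concrete sketch of that concern (the uint16 value 40000 is my own illustrative pick, not from the snippet above):

import numpy as np

# Illustrative only; 40000 is an arbitrary uint16 value.
# At int64 precision (the pre-change scalar behaviour described above):
print(np.int64(40000) * 3 + 6)     # 120006

# The same arithmetic carried out entirely in 16 bits wraps around silently:
print((40000 * 3 + 6) % 2**16)     # 54470 -- what uint16 arithmetic would give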

Part 3: General opinion about changes:

We would like to fix NumPy's behaviour to preserve types more faithfully, although depending on your choices above, the details of the change may differ.

The main expected backward compatibility issues are the following:

  • Some floating-point equality/comparison checks may behave differently due to different floating-point precisions (compare the floating-point equality question above)
  • Some operations would return more precise values, which might be wrong occasionally or take up additional memory.
  • “Scalar values” that come from a low-precision storage (the above problem) may lead to wrong or less precise results.

Overall, we expect a major version release would be necessary to signal the extent of these changes. In your opinion, are these changes feasible (or which of them seem feasible)?

Changes in floating point comparisons are acceptable:

(Note: “with conditions” here and below means that you think there need to be some specific things done. For example an optional warning to find potential issues, or testing large downstream packages, etc.)

  • in a major release
  • with conditions
  • not acceptable

0 voters

Increased precision is acceptable:

  • in a major release
  • with conditions
  • not acceptable

0 voters

Reduced precision for “scalar values” coming from a storage array is viable:

  • in a major release
  • with conditions (e.g. an optional warning to check if a script may be affected)
  • only if an overflow warning occurs in all affected integer cases (requires “special operators”)
  • not acceptable

0 voters

(“special operators” means that np.add(uint8(3), 4) would behave differently from uint8(3) + 4, as in the above question.)

4 Likes

That was stressful :sweat_smile:

:wave: hi everyone! :joy: And good luck to the NumPy Team! :joy:

2 Likes

I found it much more straightforward to answer when I thought about it as a higher level question: “When an operation mixes values of two different precisions, what should happen?”

  1. Should the lower precision be upgraded (making the code ‘just work’ more of the time, despite ‘adding additional precision/information’ to the lower precision input)?

  2. Or, should the higher precision input be downgraded to match the lower, in analogy to engineering math with significant figures? (Roughly, your result can only be as precise as your least precise input.)

This dovetails with the related question of, ‘How do things change when one of the inputs is a native Python type, which (primarily with ints) doesn’t explicitly carry precision information?’ (I can’t really think of a better approach than NumPy’s current ‘treat each Python type as a specific default NumPy type’.)
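For reference, the current “default NumPy type per Python type” mapping can be seen by converting bare Python scalars (the integer default is platform-dependent, as noted at the top of the thread):

import numpy as np

print(np.asarray(3).dtype)     # int64 on most platforms (e.g. int32 on Windows)
print(np.asarray(3.0).dtype)   # float64
print(np.asarray(3j).dtype)    # complex128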

It looks from the survey that NumPy’s current behavior is a complicated blend of these two approaches to handling discrepant precision.
I strongly support the team’s notion of making everything consistent, though I’m actually not sure which approach I think is best, in the end.

The engineering purist in me strongly prefers (1)—that’s how I answered the survey—but the Pythonista in me feels that (2) is more consistent with Python’s overall philosophy of being forgiving and friendly to beginners. (Most people won’t care intimately about precision, after all, and will just use the defaults, which IIUC are the highest precision NumPy offers?)
Those who do care a lot about precision will likely be fastidious in their code anyway, to make sure the precision handling is how they want/need it.

2 Likes

I had intentionally mainly asked questions about expectations for now. There are three possible design axes (this is probably a bit too brief to follow, so don’t hesitate to ask for clarification):

The main choice is the first one, but once you look at the backcompat issues or further, the other two design axes do come up as well.

How to deal with Python scalars (int, float, and complex)?

Python scalars can be considered:

  • Weak: a Python integer is “any integer” and is converted as necessary. When mixed with a NumPy object, we try to demote it.
  • Strong: we assign float64 to Python floats and the default integer to Python integers.
  • Value-based: (what we currently have – or rather attempt to do) we check the value to decide which “dtype” it could be.
  • Mixed: we could have strong ints, but weak floats.

Operators vs. functions

If we have “weak” logic, we could limit it to operators, because it may be a bit more obvious (and useful) there. But the difference is strange. So this opens up some possibilities but also feels a bit strange.

NumPy scalars could be special

This is a possibility, albeit one with traps along the way. We could decide that NumPy scalars behave slightly differently from 0-D arrays.

(Note that currently 0-D arrays behave like NumPy scalars and Python scalars, which means that this “value-based” stuff kicks in unexpectedly sometimes).
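A small sketch of that 0-D surprise, with result dtypes as they come out under the value-based rules described in this thread (using the 1-D uint8_arr defined at the top):

import numpy as np

uint8_arr = np.arange(100, dtype=np.uint8)

# Under the value-based rules discussed here:
print((uint8_arr + np.array([3])).dtype)  # int64: a 1-D int64 operand promotes
print((uint8_arr + np.array(3)).dtype)    # uint8: the 0-D operand is treated
                                          # like a scalar, so value-based
                                          # casting kicks in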

I had intentionally mainly asked questions about expectations for now.

I apologize if posting my thoughts will skew what you were hoping to get from the survey! I found that I couldn’t really answer well until I’d put some sort of framework around the questions, and thought that it might be helpful.

1 Like

There’s one question here that tests the intuition of those who have been using NumPy for a while:

np.uint8(255) + 1

vs

np.array([255], dtype=np.uint8) + 1

NumPy currently treats these differently, but I would argue that we would significantly simplify expectations and implementation if we do the same thing in both cases: upcast to an array type that can hold the result most of the time. The issue really only arises around uint8, because for uint16/32 overflow is much harder to reach—but almost any operation on uint8 sends you out of the data range!
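Spelled out with results (as they fall out of the value-based behaviour this thread discusses), the inconsistency looks like this:

import numpy as np

# Results as under the value-based rules this thread is about changing:
print(np.uint8(255) + 1)                    # 256 -- promoted to the default integer
print(np.array([255], dtype=np.uint8) + 1)  # [0] -- stays uint8 and wraps silently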

This is akin to @btskinn’s model (1): cast so that the result is what most users expect it to be, and do so consistently between scalars and arrays.

Why is a change necessary? Because we are getting rid of value-based casting (i.e., being able to look at the values inside arrays), and therefore should operate a bit more conservatively to avoid surprising behavior.

P.S. Old behavior would still be easy to achieve: x += 1.
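A quick note on why x += 1 side-steps the question: in-place operations write the result back into the existing array, so its dtype is preserved no matter how the promotion rules for x + 1 are eventually chosen.

import numpy as np

x = np.arange(5, dtype=np.uint8)
x += 1              # in-place: the result is written back into the uint8 array
print(x.dtype)      # uint8: the array's own dtype is preserved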

Not at all, I like to hear reasons and thoughts! That is exactly why I did not try to provide much “scaffolding” myself :).

1 Like

I personally prefer Python scalars to be weak; I treat them more like numeric literals than like int64/float64 data.
When I use a NumPy array with a non-default int/float dtype, it is only because there are reasons/constraints on the data, and I don’t want some simple arithmetic operation to break them.

I think that for operations involving NumPy types and Python scalars, the NumPy type should always take precedence by default, reading expressions with Python scalars as a ‘shortcut’ for implicit conversion to the NumPy type. This may be more future-proof in case there are future changes to, e.g., the Python default float value (e.g., float128, although then there may be more trouble).

That said, other languages, such as Fortran, will convert to higher precision, but there precision is well defined in a unified framework (and programmers would try to avoid costly changes of precision types); for NumPy vs. Python, it is two different worlds coming together.

Making Python numerics “weak” was my initial angle as well. But the backwards-compatibility issue may be smaller for other choices, and I am unsure about the practicality if most users don’t expect this for the scalar cases.
(However, NumPy is currently terrible at distinguishing 0-D arrays from scalars, so making scalars behave differently from 0-D arrays is its own can of worms, unless we attack that as well.)

I basically said it above, I guess. The central issue is that NumPy currently tends towards “weak” logic (with that value-inspecting component) if the array is not 0-D, but uses “strong” logic if the other object is a NumPy scalar or 0-D array. So however we align this inconsistency, something changes.
(Plus, NumPy currently uses “weak” logic even for NumPy scalars and 0-D arrays which have an explicit type.)

If we use “strong” logic more often, we mainly bloat some memory. As Stéfan points out, hopefully many users will be using x += 1, which side-steps the issue.

If we use “weak” logic more (relevant mainly for scalars), we sometimes stay at a lower precision in places that currently upcast silently, and that might (silently!) break code. The options to help with this seem pretty limited:

  • For integers and Python operators, we can warn on all overflows. But I am not sure we can pull it off for general ufuncs.
  • I think an optional warning that tells you when this (probably) happened should work OK in practice, but seems likely too noisy by default.

You raise an excellent point about using in-place operations. I think it would be more intuitive if regular addition with a Python scalar behaved the same as an in-place operation: x += 1 should give the same value(s) as x = x + 1. I think it is very confusing if NumPy scalars behave differently from NumPy arrays. I often write code that is supposed to be usable with both scalars and arrays, and I would expect to get the same result. (Finding out about [()] was a great revelation.)
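For readers who have not met it before, the [()] trick mentioned here indexes a 0-D array with an empty tuple, which extracts the contained NumPy scalar:

import numpy as np

zero_d = np.array(5, dtype=np.uint8)   # a 0-D array
scalar = zero_d[()]                    # empty-tuple indexing -> np.uint8 scalar
print(type(zero_d).__name__, type(scalar).__name__)   # ndarray uint8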

I’m confused. I would like to know what the elements of uint8_arr are before answering many of the questions. However, given the way uint8_arr is reused in a question and its answers, I don’t think I can assume that it is concretely defined as in the instructions.

@markcampanelli one point is that we don’t know: NumPy cannot reasonably take the actual values into account. The current problem is that NumPy does take the actual values into account if the array is 0-D (or is a scalar) – and this leads to confusion and inconsistencies. However, it takes into account the values of the inputs, not the potential outputs (if, say, there was an overflow).

Checking the values would require machinery that:

  • Would slow down things considerably
  • Would require far more complex logic than we currently use

Neither is desirable, and personally I don’t think we would help users by doing so.

To give an example, the current behaviour is the following:

uint8_arr + 1 == uint8
uint8_arr + 300 == uint16
uint8_arr - 1 == uint8
uint8_arr + (-1) == int16

This is the current “value-inspecting” logic we have. (You can replace the Python integers with np.asarray(..., dtype=np.int64) and you will get the same results!)
In my opinion this may look fine at first sight, but it gets surprising if the input is a variable of unknown value: suddenly, you have a lot of “magic” boundaries.
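One way to see those “magic boundaries”: wrap the operation in a function whose offset is a runtime variable; the result dtype then depends on the value passed in (result dtypes as in the value-based examples above).

import numpy as np

uint8_arr = np.arange(100, dtype=np.uint8)

def shift(arr, offset):
    # Under the value-based rules above, the result dtype depends on the
    # *value* of `offset`, not only on the dtypes involved.
    return arr + offset

# shift(uint8_arr, 1)    -> uint8   (as in the examples above)
# shift(uint8_arr, 300)  -> uint16
# shift(uint8_arr, -1)   -> int16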

(This will probably not help, but here is a different angle.) From a “typing” perspective, a uint8 array is not a container for 64-bit numbers stored in a way that can only hold 0-255. If that were the case, the only choice we would have is that any uint8 + uint8 should return an integer that can definitely hold the result (capped at int64), i.e. when in doubt, NumPy would always go to the larger precision for integers.
However, the above would make it very tough to do math in, say, int16 when you know that is what you want. (EDIT/NOTE: That is actually possible in principle: I could create an int64[uint8] dtype, but I am not sure how easy it would be to ingrain it all the way into NumPy right now.)
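A rough sketch of why “always upcast” makes deliberate low-precision math awkward (hypothetical behaviour, so the casts below are what a user would have to add, not what current NumPy requires):

import numpy as np

a = np.arange(10, dtype=np.int16)
# If mixed integer operations always upcast towards int64, keeping the
# computation in int16 on purpose would mean casting back after each step:
b = (a * 3).astype(np.int16)
c = (b + 100).astype(np.int16)
print(c.dtype)   # int16; under the hypothetical rule the casts above would be required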