API design for functions returning multiple objects

I just encountered SciPy’s code style document for returning more than one object from a function: Code and Documentation Style Guide - The Missing Bits — SciPy v1.11.4 Manual

To summarize, they want functions to return a “MyResultObject” which is explicitly not iterable and forces users to use attributes.

I agree with the arguments they present and I kind of like it, but the style still feels a bit weird and new to me. Though we already do something similar with RegionProperties.

I’m curious what you think about this, especially in light of skimage2.

Just to explain the rationale a bit, and how we came to recommend that.

In SciPy we have statistical tests, and these typically return a p-value and a statistic. Originally we returned a tuple, so one could do pvalue, statistic = result. But as functionality and needs evolved, we wanted to add things like confidence intervals and statistical power.

To avoid breaking existing code, we added a weird object that allows unpacking certain fields while also holding other attributes. Sounds nice, but it involves quite a bit of magic and is not really standard Python.

Instead, what we suggest now is to use plain, normal objects which don’t hide fancy behaviours. They are easy for users to understand, we can add anything we want to them, and they are easy to maintain. This is part of a general move some of us have been pushing: back to basics, with more idiomatic Python and less magic.
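
For illustration, here is a minimal sketch of such a plain result object; the class and field names are made up, not SciPy’s actual API:

from dataclasses import dataclass

@dataclass
class TTestResult:
    # Plain attributes; a dataclass is deliberately *not* iterable,
    # so `pvalue, statistic = result` raises a TypeError.
    statistic: float
    pvalue: float
    # Fields added later don't break existing attribute access.
    confidence_interval: tuple | None = None

result = TTestResult(statistic=2.3, pvalue=0.021)
print(result.pvalue)  # users access fields by name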

Of course, some functions are by their nature guaranteed to return an iterable, which is fine. We just want devs to be more careful with return structures.

Yeah, we discussed a similar pattern in the repo here and on the mailing list here. The only difference is that for most skimage functions there is a strong concept of a primary output (an image, a segmentation, some coordinates) and secondary outputs (error measures, residuals, intermediate results for diagnostics), so we were thinking of result, extras = phase_cross_correlation(image0, image1), where extras would be the result object.

I also think in many situations even that is too much — e.g. when chaining functions together. So, even though I was rebuked by Gaël when I proposed it (I can’t find the thread now :joy:), I wouldn’t mind exploring again the idea of:

result_image = func(input_image)
result_image, extras = func.extras(input_image)
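
To make that concrete, a rough sketch of how func.extras might hang together, with a hypothetical with_extras decorator and made-up function names:

import functools

def with_extras(func):
    """Hypothetical decorator: func must return (result, extras);
    plain calls yield only the result, func.extras(...) yields both."""
    @functools.wraps(func)
    def main(*args, **kwargs):
        result, _extras = func(*args, **kwargs)
        return result
    main.extras = func
    return main

@with_extras
def register(image0, image1):
    shift = (0.0, 0.0)            # stand-in for the real computation
    return shift, {'error': 0.1}

shift = register(None, None)                  # primary result only
shift, extras = register.extras(None, None)   # opt in to extras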

Yes, but that’s why we got issue 3180 and added regionprops_table :sweat_smile: So I think there is something to be said for using standard types for most return values, and dataclass types only in exotic situations.

imho, we should mostly deal in objects that satisfy the data-apis (mostly array, some dataframe), and then use bunch/resultobj stuff for extras. And then the question remains of how to expose the extras.

Anyway, to be clear, I’m not opposing the idea, but I do want us to examine all the options very closely here…

Perhaps fixing chaining is easier than figuring out ways to signal whether you do or don’t want extras.

I.e.,

pipe(image, into=[
  (func1, {'alpha': 2}),
  (func2, {'sigma': 3}),
  func3,
])

Which is no less readable than

func3(func2(func1(image, alpha=2), sigma=3))

Except that the pipe invocation allows you to always return image, extras. We had some minor concerns that extras could, in some cases, be costly to compute, but I suspect that doesn’t happen often.
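
For concreteness, a minimal sketch of such a pipe helper, assuming every step returns image, extras (all names hypothetical):

def pipe(image, *, into):
    """Run each step; a step is either a function or a
    (function, kwargs) pair, and always returns (image, extras)."""
    collected = []
    for step in into:
        func, kwargs = step if isinstance(step, tuple) else (step, {})
        image, extras = func(image, **kwargs)
        collected.append(extras)
    return image, collected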

As you know I love the pipe syntax, but I don’t think it makes sense to write our own pipe when there exist standard ways to work with this. But with the standard pipe you’d want to interleave tz.pluck(0) calls in there, which is messy. (Note: by “standard” I mean in functional programming in general. I’m well aware that Python’s support for functional programming is middling, but that’s not what we’re talking about, since we’re discussing a rather non-Pythonic convention anyway.)
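
To make the messiness concrete, the interleaving would look roughly like this with toolz.pipe, using operator.itemgetter(0) to pick out the image after each step (func1/func2/func3 are the hypothetical functions from above, each returning image, extras):

import operator
import toolz as tz

first = operator.itemgetter(0)
result = tz.pipe(
    image,
    lambda im: func1(im, alpha=2), first,
    lambda im: func2(im, sigma=3), first,
    func3, first,
)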

I also think many functions should not return extras (e.g. a Gaussian filter), so the pipe function would have to, what, introspect function return annotations and do different things based on that? That seems a little too magical for my tastes.

I haven’t had much experience with pipes. I’m curious, though, why we should take this pattern into consideration when we do API design. It seems pretty magical, and harder to debug, to me.

Using Stéfan’s example I’d actually prefer

filtered1 = func1(image, alpha=2)
filtered2 = func2(filtered1, sigma=3)
filtered3 = func3(filtered2)

This even gives us the opportunity to assign helpful names to intermediate results. Is there some advantage to turning this into a pipe?


To get back to the main topic: if an intermediate or extra result is interesting enough to return, maybe that is reason enough to make it its own function. Or, as I think I suggested earlier somewhere, make the algorithm and its steps into its own class:

class PhaseCrossCorrelation:
    """Efficient subpixel image translation registration by cross-correlation.

    Parameters
    ----------
    ...

    Attributes
    ----------
    shift
    error
    phasediff
    """

    def __init__(self, reference_image, target_image, *, ...):
        ...

    @property
    def shift(self):
        # Necessary computation happens here
        return shift

    @property
    def error(self):
        # Necessary computation happens here
        return error

    @property
    def phasediff(self):
        # Necessary computation happens here
        return phasediff

From the user’s side this looks similar to what SciPy proposes. However, we only need to compute stuff once the attribute is actually requested by the user. And thinking of the if-else logic in some of our bigger “magic” functions (wraps), the algorithm’s structure might actually be clearer with the class.
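
Here is a runnable sketch of that idea using functools.cached_property, so the expensive shared step runs at most once (simplified: the real algorithm also needs subpixel refinement and peak wraparound handling):

import functools
import numpy as np

class PhaseCrossCorrelation:
    def __init__(self, reference_image, target_image):
        self._reference = reference_image
        self._target = target_image

    @functools.cached_property
    def _cross_correlation(self):
        # The expensive shared step; computed once, on first access.
        product = (np.fft.fftn(self._reference)
                   * np.fft.fftn(self._target).conj())
        return np.fft.ifftn(product)

    @functools.cached_property
    def shift(self):
        # Derived from the cached cross-correlation.
        peak = np.argmax(np.abs(self._cross_correlation))
        return np.array(np.unravel_index(peak, self._cross_correlation.shape))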

There are a lot of competing style constraints here, and we are running up against each of them. Re the class approach: one thing most libraries are moving away from is properties that can be expensive to calculate. Users who type obj.attr usually expect it to be an instant operation, so it is sneaky and surprising to hide a long-running computation in there. Most libraries therefore suggest that anything that takes a while to compute should live behind obj.func(), to indicate that computation is happening, and that properties should be reserved for convenient access to O(1) computations.
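
In code, that convention looks something like this (names are illustrative only):

import numpy as np

class FitResult:
    """Illustrative only: cheap lookups as properties, real work as methods."""

    def __init__(self, params, data):
        self.params = params            # plain attribute: instant access
        self._data = np.asarray(data)

    @property
    def n_params(self):
        # O(1): fine as a property.
        return len(self.params)

    def residuals(self):
        # Potentially expensive: the call parentheses make the work visible.
        predicted = np.polyval(self.params, np.arange(len(self._data)))
        return self._data - predicted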

I generally dislike custom classes, because with libraries that use them, I’m always fumbling around thinking, ok now wtf is this thing? Even though it’s been a constraint in some situations, I think a huge part of why people like scikit-image is that things are obvious and familiar.

So, I still think the main constraint in our API should be that our return objects satisfy some combination of:

  1. they are NumPy arrays,
  2. they can be converted to a NumPy array through __array__,
  3. they satisfy the array-API.

This should cover plain NumPy arrays, CuPy/Dask/Torch/etc. arrays where they make sense, and xarray NamedArrays, which I really like. Maybe our custom result objects can have the main result accessible in __array__? Or we use NamedArrays with the extras in .attrs? :grimacing:
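
The __array__ idea could look something like this hypothetical sketch:

import numpy as np

class ImageResult:
    """The main result is reachable via __array__, so np.asarray(result)
    just works; extras ride along as a plain attribute."""

    def __init__(self, image, **extras):
        self._image = image
        self.extras = extras

    def __array__(self, dtype=None, copy=None):
        return np.asarray(self._image, dtype=dtype)

res = ImageResult(np.zeros((4, 4)), error=0.1)
arr = np.asarray(res)        # the primary image
print(res.extras['error'])   # secondary outputs by name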

But overall I’m still leaning towards multi-function.

The pipe discussion brings back good old R/tidyverse memories for me: 4 Pipes | The tidyverse style guide :smiling_face:

The equivalent to what I was suggesting would be:

filtered1 = func1(image, alpha=2)[0]
filtered2 = func2(filtered1, sigma=3)[0]
filtered3 = func3(filtered2)[0]

or

filtered1 = func1(image, alpha=2, extras=False)
filtered2 = func2(filtered1, sigma=3, extras=False)
filtered3 = func3(filtered2, extras=False)

The pipeline would do what Juan suggested: pluck the first item. But I agree that doing hidden expensive computations will never fly, so the point is moot.

The other alternative is to make all functions return output_image, extras, where extras is a named tuple. If extras=True is passed, then the named tuple can be populated with extra info. If extras=False, then it is empty.
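
A sketch of that convention, with made-up field names:

from typing import NamedTuple
import numpy as np

class Extras(NamedTuple):
    error: float | None = None
    residuals: np.ndarray | None = None

def func(image, *, extras=False):
    result = image * 1.0                  # stand-in computation
    if extras:
        return result, Extras(error=0.1)  # populated on request
    return result, Extras()               # same shape, but empty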

However you do it, you need:

(a) A way to signal to the function that it needs to do the extra work.
(b) A way to send that info back to the user.
(c) A way to keep type signatures consistent, i.e. you don’t want function returns to change based on a keyword argument if it can be helped. (Perhaps worth thinking about how important this is; but I suspect mypy would hate it. See the sketch after this list.)
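
For what it’s worth, mypy can at least express a keyword-dependent return with typing.overload and Literal, though it is verbose; a sketch:

from typing import Literal, overload
import numpy as np

@overload
def func(image: np.ndarray, *, extras: Literal[False] = ...) -> np.ndarray: ...
@overload
def func(image: np.ndarray, *, extras: Literal[True]) -> tuple[np.ndarray, dict]: ...

def func(image, *, extras=False):
    result = image * 1.0          # stand-in computation
    if extras:
        return result, {'note': 'extras requested'}
    return result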

It would be good to sort this out for 2.0. Any further ideas around solutions / workarounds / leave-it-alone?

Another prototype idea from the community call is to use hooks:

y = gaussian(x)

meta_dict = {}
y = gaussian(x, hook=meta_dict)

If the hook exists, calculate extra info and pop it in there.
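
A sketch of the hook pattern; the hook keyword is hypothetical, and scipy.ndimage.gaussian_filter stands in for the real filter:

import numpy as np
from scipy import ndimage

def gaussian(image, sigma=1.0, *, hook=None):
    result = ndimage.gaussian_filter(image, sigma)
    if hook is not None:
        # Extra work happens only when the caller opted in.
        hook['sigma'] = sigma
        hook['kernel_radius'] = int(4.0 * sigma + 0.5)
    return result

meta_dict = {}
y = gaussian(np.ones((8, 8)), sigma=2.0, hook=meta_dict)
print(meta_dict)    # {'sigma': 2.0, 'kernel_radius': 8}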