Typing scikit-image

Hey y’all,

under our current EOSS5 grant we aim to annotate the types of our public API. Over the past few weeks I’ve tried different approaches and familiarized myself with mypy and Python’s typing machinery. To keep you in the loop, here are my current thoughts and goals:

(a) Keep our source code as unaffected as possible. Ideally I don’t want to introduce changes that we wouldn’t make if we weren’t adding types. However, if typing reveals problems in our code that I would have fixed even without typing, I might do so. In some cases, and only for non-API code, it might be useful to tweak the code a little if that makes it a lot easier to add types. This might be tricky to get right, but I’m sure it gets easier with experience.

(b) To keep the disruption minimal (see (a)), use stub files to add the types. For a start I plan to use mypy and its stubtest tool to keep the stubs and the real code in sync; a rough sketch of how this could look follows after this list.

(c) Run mypy only on the test suite for now. The idea is that we mostly care about typing to make our public API more useful to users, and our test suite is a code base that already makes heavy use of that API. If we don’t care enough to add a test for something, we probably don’t care yet about adding types for it. I really hope this keeps the typing-related changes to our actual code base minimal, because internal code isn’t checked by mypy (at least I think so).

(d) Gradually type our library. I’ll start with skimage._shared and skimage.util as these are widely used throughout our library.

(e) Be lenient with our types in the beginning. In most cases I’ll add symbols such as Image, GrayScaleImage, Mask and Labels that are effectively just different names for “I accept a NumPy array” (see the sketch after this list). I’m not sure yet if and where accepting ArrayLike will become a problem; if it does, I propose deprecating implicit conversion to ndarray. I think we had this discussion elsewhere already, but I don’t remember where. I hope that in the future we can then gradually introduce structural typing (PEP 544).

All these are supposed to allow for a more gradual transition and smaller PRs.
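
To make (b) and (e) a bit more concrete, here is a rough sketch of what a stub file with such lenient aliases might look like. The module, function and parameter names are invented for illustration and not actual scikit-image API:

# Hypothetical stub file, e.g. skimage/util/_example.pyi sitting next to _example.py
# (module, function and parameter names are made up).
import numpy as np
from numpy.typing import NDArray

# Lenient aliases: for now these are just different names for "a NumPy array".
Image = NDArray
Mask = NDArray[np.bool_]

def binarize(image: Image, threshold: float = ...) -> Mask: ...

Something like `python -m mypy.stubtest skimage` (the stubtest tool that ships with mypy) should then be able to flag stubs that drift out of sync with the runtime API.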

Related resources and inspiration:


There are also a few things I am undecided on.

(f) I’ve already encountered quite a few places where inconsistent use of _ to mark private vs. public names leads to problems with stubgen. I plan to refactor / fix these along the way even if this introduces more churn. I think postponing these might lead to even more work long-term, or we’ll forget about them again. Thoughts?

(g) In Generate table of scikit-image's entire runtime API by lagru · Pull Request #6905 · scikit-image/scikit-image · GitHub I’ve made an effort to inspect our full API in an automated way. Effectively I already collect most of the information that would be necessary to generate basic stubs. I could start working on rules to transform our Parameter annotations (NumPy docstring style) into valid types. This could have large advantages: it could provide automatic (initial) stubs, it would force us to clean up our docstrings and would be a solution that keeps our type annotations and docstring types in sync long-term. However, I’d be effectively reimplementing our own version of stubgen and I am hesitant to commit time to this without feedback. A solution like this might also be very useful to the wider community. I’m not aware of similar solutions yet but I may have totally missed some.
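
To illustrate the kind of rule I have in mind, a naive sketch; the mapping table, the fallback and the function name are placeholders, not an actual implementation:

# Hypothetical mapping from NumPy-style docstring type descriptions to
# annotation syntax; exact matches only, real rules would need patterns.
DOCSTRING_TO_ANNOTATION = {
    "ndarray": "NDArray",
    "array-like": "ArrayLike",
    "(M, N) ndarray": "NDArray",
    "int, optional": "int | None",
}

def annotation_for(docstring_type: str) -> str:
    """Translate one docstring type description into annotation syntax."""
    return DOCSTRING_TO_ANNOTATION.get(docstring_type.strip(), "Any")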

Thanks for summarizing your approach here, Lars, that’s very helpful.

One quick question that came up before: if a user type annotated their image with the ndarray type numpy provides, will they still be able to use it with skimage functions without causing mypy to become unhappy?

Yes, for a start we’d probably use Annotated:

from typing import Annotated
import numpy as np
from numpy.typing import NDArray

Image = Annotated[NDArray, "Image"]
Mask = Annotated[NDArray[np.bool_], "Mask"]

def foo(x: Image, m: Mask) -> Image:
    return x[m]

Though I don’t think this will yet catch type errors like passing an Image to a parameter annotated as Mask.
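
For instance, continuing the sketch above, nothing keeps you from mixing them up:

# To mypy, Image and Mask are both essentially NDArray, so swapping them
# goes unnoticed.
blob = np.zeros((5, 5), dtype=bool)
foo(blob, blob)  # accepted, even though the first argument is "really" a mask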

Probably it won’t, but it will be possible (:sweat_smile:) to write a mypy plugin that implements those checks, and anyway it might be enough for other use cases (e.g. autogenerating UIs from napari). We should write some kind of parser that validates that the annotations come from a limited set, so we don’t have "Maskk" and other typos in there.
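
Such a check could start out very simple. A naive sketch (the allowed set, the function name and the AST-based approach are placeholders, not a worked-out design):

import ast

# Placeholder set of annotation names we would allow in our stubs.
ALLOWED = {"Image", "Mask", "Labels", "NDArray", "ArrayLike", "int", "float", "bool"}

def check_stub(stub_source: str) -> list[str]:
    """Report parameter annotations that are not in the allowed set."""
    problems = []
    for node in ast.walk(ast.parse(stub_source)):
        if isinstance(node, ast.arg) and node.annotation is not None:
            text = ast.unparse(node.annotation)
            if text not in ALLOWED:  # catches typos such as "Maskk"
                problems.append(f"unexpected annotation: {text}")
    return problems

# A real check would also need to cover return types, variable annotations and
# parametrised types such as NDArray[np.bool_].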

Yeah, and once we’ve added the symbols switching them out should be somewhat easier.

This touches on my question (g) above. If we don’t just adopt type annotation syntax in our docstrings, such a tool would need some way to map between docstring descriptions and official types, e.g. "int, optional" → Optional[int].

Oh no, I think we should indeed adopt exact type annotations in our docstrings. Also, btw, in current Python you can (and should) write that as int | None.


@stefanv and I had a quick discussion yesterday. The gist of it was that it indeed seems natural to use our type descriptions in docstrings as a source of truth, either to validate or to sync stub files (g).

Before happily inventing our own special wheel, I’m currently investigating whether there are similar solutions out there that might be adaptable; e.g., I’m currently trying to get gramster/docs2stubs to work.


I didn’t know about docs2stubs! I love it! :heart_eyes: I particularly like that array-like can get normalised to ArrayLike. That’s pretty crazy. The path to human-readable docs is looking clearer every day! I would consider it in-scope for the grant to contribute as-needed to docs2stubs, @lagru. :blush:

Good to know. The tool seems pretty flexible and somewhat intended to deal with the current state of the ecosystem. It seems pretty alpha right now, so let’s wait a bit for input from the author.

Your last comment suggests that you now favor using human readable type descriptions in docstrings instead of the full typing syntax?


It’s a good question. I don’t really know. I like human readable as long as it’s not ambiguous, or hard to understand. Those are both relatively vague concepts so I guess I’d have to see some proposals to properly decide. But I do like “array-like” better than “ArrayLike”. By, like, a lot.


I am not sure if you have looked into this already, but NewType might be helpful here: it would catch things like passing an “Image” to a “Mask” parameter while keeping the runtime implementation the same, as an NDArray or ArrayLike.

Hi @saulshanabrook, could you expand a bit on how that would work?

I am thinking of the following situation:

import typing
import numpy as np

Image = typing.NewType('Image', np.ndarray)
Mask = typing.NewType('Mask', np.ndarray)

array = np.random.random((50, 50))
image = Image(array)
# Use a boolean array so that indexing with it works at runtime.
mask_array = np.random.random((50, 50)) > 0.5
mask = Mask(mask_array)


def zero(image: Image, mask: Mask) -> Image:
    if image.shape != mask.shape:
        raise ValueError("Image and mask shapes must match")

    image[mask] = 0
    return image


# Passes mypy
zero(image, mask)

# Fails mypy
zero(array, mask_array)

Hey Stefan,

Yeah, your example looks right. Users would need to wrap any incoming arrays in the new types (which are a no-op at runtime) to describe semantically what they represent. Did you have a specific question about it?

Here is a standalone version of your example.

What we’d like to see is that, if you pass a NumPy array instead of an Image or a Mask, you are fine. If, however, you pass in a Mask instead of an Image, the system should alert you. I.e., if you do not want to use types explicitly, you shouldn’t be penalized for it.

We do not control where our users get their image arrays from, so as long as they are, in fact, arrays, with the required shape and type, we’ll work with them.
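
In code, reusing the Image, Mask and zero definitions from the example above, the desired checker behaviour would be roughly this (a sketch of what we want, not of what NewType currently gives us):

# Reuses Image, Mask and zero from the NewType example above.
plain_image = np.random.random((50, 50))
plain_mask = np.random.random((50, 50)) > 0.5

zero(plain_image, plain_mask)       # plain arrays: should be accepted as-is
zero(Mask(plain_mask), plain_mask)  # a Mask where an Image is expected: should be flagged
# With plain NewType, mypy flags both calls; the first one is the problem.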

Hey Stefan,

Yeah, I think you understand it the same way as I do.

Either you have folks who use a static type checker wrap incoming arrays in the NewType domain-specific types and, as a result, get nice checks verifying that they aren’t passing in the wrong ones.

Or you do what you did above and give no errors to users of static type checkers, but provide some documentation, and possibly special-case it with a type checker plugin.

I’m at the PyCon sprints this week and asked Alex Waygood, who works on maintaining typeshed, about this, and he had the same take.

One feature that could provide what you need is a “not” type (type negation), but that proposal is currently held up and might not be implemented for a while.

If I understand not types (big if), they would be at a minimum very cumbersome here — image/mask is a simple example, but we have (at least) image, labels, mask, and coordinates. Defining each as not any of the other three sounds painful. :sweat_smile:

Did Alex have any advice for how to handle our requirement? It cannot be that unusual.

I bumped into him again at the airport and he had another idea. One way to mostly get the behavior we are looking for is to use overloads to disallow the other types, see: mypy Playground
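
Roughly, my own reconstruction of the idea (not the exact playground code); the NewTypes are redefined here so the snippet stands alone:

from typing import NewType, Never, overload  # Never needs Python 3.11+ (or typing_extensions)
import numpy as np

Image = NewType("Image", np.ndarray)
Mask = NewType("Mask", np.ndarray)

@overload
def zero(image: Mask, mask: np.ndarray) -> Never: ...  # reject a Mask passed as the image
@overload
def zero(image: np.ndarray, mask: np.ndarray) -> Image: ...  # plain arrays stay accepted
def zero(image, mask):
    image[mask] = 0
    return Image(image)

zero(np.random.random((5, 5)), np.random.random((5, 5)) > 0.5)  # fine
bad = zero(Mask(np.zeros((5, 5), dtype=bool)), Mask(np.zeros((5, 5), dtype=bool)))
# `bad` has type Never, which only surfaces as an error once the value is used (see point 3 below).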

There are a few downsides with this approach:

  1. It requires a new overload for each argument
  2. Each overload requires us to do the negation manually, putting in all types besides the one we want
  3. Currently the only way in mypy to mark an overload as unsupported is to use Never as a return type. Then, when you try to use the value, it will complain. Once support for PEP 702 lands in mypy (Add support for PEP 702 (`@deprecated`) · Issue #16111 · python/mypy · GitHub), we could use the @deprecated decorator on the overloads we want to raise errors for.

Alex suggested opening a thread in the typing category of the Python discourse to discuss this issue and see if others have ideas: Typing - Discussions on Python.org

That’s not pretty, but since we intend to generate the stubs automatically anyway, it might be feasible until we come up with a better solution…?