A policy on generative-AI-assisted contributions

I’d be happy to see a SPEC about this - I think it could be an informative SPEC (similar to an informative PEP) that doesn’t propose a single solution to adopt, but contains at least the following:

  1. Prior art - what did other well-known open source projects decide and why?

  2. Legal status - link to relevant sources, and summarize whether there’s any expectation of more legal clarity emerging in the near to medium term.

  3. Lay out a set of distinct options that projects can choose from, with their main pros and cons. E.g.:

    • Option 1: do nothing for now (status quo)
    • Option 2: forbid all code generated by an LLM-based tool
    • Option 3: add guidance to the PR template & docs that contributors may choose to use such tools, but must be aware of, and take responsibility for, the copyright status of the code they submit
    • Option 4: … ?

None of that is project-specific, and people can contribute new arguments and options if they find gaps in such a SPEC.

A Discourse thread alone is a poor medium for iterating toward a solution on such a complex topic. I completely agree with @rkern that it’s largely about values and preferences - it’s just much easier to get a sense of those, and to help people less knowledgeable about the subject form an opinion, if there’s a structured document to read and refer to. Once a good SPEC exists, a SciPy-specific choice can be made more easily.