I am increasingly seeing vibe-coded PRs from novice contributors, and I started a discussion on the SymPy mailing list. Right now there are not too many of these, but they are quite blatant, and I think this is definitely going to become worse over time.
Really these are just low-quality PRs, but I don’t think it is reasonable to engage with them in the same way as a low-quality PR that was actually written by a human. For example, if you explain in detail what the problems with the code are, there is a good chance that the “author” will just type your feedback into the LLM and have it try to vibe-code an improved PR. I have tried it, and I really don’t enjoy talking to LLMs when actually trying to get anything done, and talking to an LLM via something like GitHub comments, with someone in the middle who is just relaying those comments, is soul-destroying.
When reviewing novice PRs the situation is generally that it would be far easier to just write the code without the novice, so there is a net loss of effort on the maintainer’s side. The flipside is that the novice hopefully learns something from the experience and improves over time. This model is predicated on a certain effort-exchange ratio, though: the novice does not open the PR in the first place without putting in some effort beforehand to understand the codebase, the workflow, the issue they are trying to fix, and so on. It is now pretty much possible to say “Claude, write some code and open a PR to fix issue #12345”, so any kind of technical barrier is gone and you can open a spam PR having spent zero time trying to understand anything.
I also don’t think that this is helpful to novices, because in the PR process the AI is helping them with all of the wrong things: writing the code rather than thinking about how to write the code. A student at my university recently asked what “the purpose of Maths” is, given that computers (e.g. SymPy, Maple etc.) can do all of the exercises we ask them to do by hand in their entry-level Maths exam. My answer was that you have to understand how to do some things manually before you can learn to make effective use of a computer for more complex problems. Having the computer do your homework is like getting a robot to lift weights for you at the gym.
I don’t think that the vibe-code PR authors are malicious. People want to contribute to open source, and why wouldn’t a novice believe the hype that this is how code is written in the age of AI? Even though it is unintentional, the resulting spam is effectively abusive to open source projects and maintainers.
We need some kind of guidance or policy so that contributors understand what is reasonable. I don’t know how to write any policy or guidance about “responsible use of AI”, though, without first addressing the basic questions from above, like whether AI-generated code is allowed at all and what it means for copyright.