I’ve been seeing more and more open source maintainers throwing up their hands over AI-generated pull requests, some going so far as to stop accepting PRs from external contributors.
This week we’re going to begin automatically closing pull requests from external contributors. I hate this, sorry. pic.twitter.com/85GLG7i1fU
— tldraw (@tldraw) January 15, 2026
Ghostty is getting an updated AI policy. AI assisted PRs are now only allowed for accepted issues. Drive-by AI PRs will be closed without question. Bad AI drivers will be banned from all future contributions. If you’re going to use AI, you better be good. https://t.co/AJRX79S8XD
— Mitchell Hashimoto (@mitchellh) January 22, 2026
AI is killing Open Source and it’s saddening. Basically, a bunch of people who now believe they’re geniuses because of LLMs have been spamming OSS projects with junk submissions causing some maintainers to limit contributions from the general public.
— ASH🪄 (@ahmxrd) February 7, 2026
If you’re an open source maintainer, you’ve felt this pain. We all have. It’s frustrating reviewing PRs that not only ignore the project’s coding conventions but are also riddled with AI slop.
But yo, what are we doing?! Closing the door on contributors isn’t the answer. Open source maintainers don’t want to hear this, but this is the way people code now, and you need to do your part to prepare your repo for AI coding assistants.
I’m a maintainer on goose, which has more than 300 external contributors. We felt this frustration early on, but instead of pushing well-meaning contributors away, we did the work to help them contribute with AI responsibly.
1. Tell humans how to use AI on your project
We created a HOWTOAI.md file as a straightforward guide for contributors on how to use AI tools responsibly when working on our codebase. It covers things like:
- What AI is good for (boilerplate, tests, docs, refactoring) and what it’s not (security-critical code, architectural changes, code you don’t understand)
- The expectation that you are accountable for every line you submit, AI-generated or not
- How to validate AI output before opening a PR: build it, test it, lint it, understand it
- Being transparent about AI usage in your PRs
This welcomes AI PRs but also sets clear expectations. Most contributors want to do the right thing; they just need to know what the right thing is.
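If you want a starting point, the skeleton of such a file can be as simple as this (an illustrative excerpt I’ve sketched here, not goose’s verbatim file):

```markdown
# HOWTOAI.md — illustrative excerpt

## Good uses of AI here
Boilerplate, tests, docs, mechanical refactors.

## Don’t use AI for
Security-critical code, architectural changes, or code you can’t explain.

## Before you open a PR
- Build it, run the tests, run the linter.
- Read every line. You own it, AI-generated or not.
- Say in the PR description where and how you used AI.
```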
And while you’re at it, take a fresh look at your CONTRIBUTING.md too. A lot of the problems people blame on AI are actually problems that always existed; AI just amplified them. Be specific. Don’t just say “follow the code style”; say what the code style is. Don’t just say “add tests”; show what a good test looks like in your project. The better your docs are, the better both humans and AI agents will perform.
2. Tell the agents how to work on your project
Contributors aren’t the only ones who need instructions. The AI agents do too.
We have an AGENTS.md file that AI coding agents can read to understand our project conventions. It includes the project structure, build commands, test commands, linting steps, coding rules, and explicit “never do this” guardrails.
When someone points their AI agent at our repo, the agent picks up these conventions automatically. It knows what to do and how to do it, what not to touch, how the project is structured, and how to run tests to check its work.
You can’t complain that AI-generated PRs don’t follow your conventions if you never told the AI what your conventions are.
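An AGENTS.md along those lines might look like this (an illustrative sketch; the directory names and commands here are placeholders, not goose’s actual file):

```markdown
# AGENTS.md — illustrative excerpt

## Project layout
- crates/ — core library code
- docs/ — user documentation

## Commands
- Build: `cargo build`
- Test: `cargo test --all`
- Lint: `cargo clippy --all-targets -- -D warnings`

## Rules
- Follow the existing module structure; don’t add new top-level crates.
- Never edit CI workflows or anything under `secrets/`.
- Run the full test suite before declaring work done.
```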
3. Use AI to review AI
Investing in an AI code reviewer as the first touchpoint for incoming PRs has been a game changer.
I already know what you’re thinking… they suck too. LOL, fair. But again, you have to guide the AI. We added custom instructions so the AI code reviewer knows what we care about.
We told it our priority areas: security, correctness, architecture patterns. We told it what to skip: style and formatting issues that CI already catches. We told it to only comment when it has high confidence there’s a real issue, not just nitpick for the sake of it.
Now, contributors get feedback before a maintainer ever looks at the PR. They can clean things up on their own. By the time it reaches us, the obvious stuff is already handled.
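Concretely, reviewer custom instructions in that spirit might read something like this (paraphrased to illustrate the shape, not our exact config):

```markdown
## Review priorities
1. Security: injection, unsafe deserialization, secrets in code.
2. Correctness: logic errors, unhandled edge cases, race conditions.
3. Architecture: changes that violate existing module boundaries.

## Skip
- Style and formatting issues (CI already catches these).
- Subjective nitpicks.

## Tone
Only comment when you are confident there is a real issue.
Explain why it’s a problem and suggest a fix.
```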
4. Have good tests
No, seriously. I’ve been telling y’all this for YEARS. Anyone who follows my work knows I’ve been on the test automation soapbox for a long time. And I need everyone to hear me when I say the importance of having a solid test suite has never been higher than it is right now.
Tests are your safety net against bad AI-generated code. Your test suite can catch breaking changes from contributors, human or AI.
Without good test coverage, you’re doing manual review on every PR trying to reason about correctness in your head. That’s not sustainable with 5 contributors, let alone 50 of them, half of whom are using AI.
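To make that concrete, here’s the shape of a regression test that pins down edge-case behavior so any PR that breaks it, human or AI, fails immediately. The `parse_duration` function is hypothetical, purely to illustrate:

```python
# Hypothetical example: a small function plus a regression test
# that locks in its edge-case behavior.

def parse_duration(text: str) -> int:
    """Parse strings like '2h', '30m', '45s' into seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    value, unit = text[:-1], text[-1]
    if unit not in units or not value.isdigit():
        raise ValueError(f"bad duration: {text!r}")
    return int(value) * units[unit]

def test_parse_duration_edge_cases():
    # Happy paths
    assert parse_duration("2h") == 7200
    assert parse_duration("30m") == 1800
    assert parse_duration("0s") == 0
    # Unknown units must fail loudly, not silently return garbage
    try:
        parse_duration("10x")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for unknown unit")
```

A suite full of tests like this lets you review an AI-assisted PR for design, not for whether it quietly broke parsing.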
5. Automate the boring gatekeeping with CI
Your CI pipeline should also be doing the heavy lifting on quality checks so you don’t have to. Linting, formatting, and type checking should all run automatically on every PR.
This isn’t new advice, but it matters more now. When you have clear, automated checks that run on every PR, you create an objective quality bar. The PR either passes or it doesn’t. Doesn’t matter if a human wrote it or an AI wrote it.
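For a GitHub-hosted repo, that might look something like this (an illustrative workflow; the tool commands here assume a Rust project, so swap in your own formatter, linter, and test runner):

```yaml
# .github/workflows/quality.yml — illustrative; adapt the tools to your stack
name: quality
on: pull_request
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Format
        run: cargo fmt --all -- --check
      - name: Lint
        run: cargo clippy --all-targets -- -D warnings
      - name: Tests
        run: cargo test --all
```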
For example, in goose, we run a GitHub Action on any PR that involves reusable prompts or AI instructions to ensure they don’t contain prompt injections or anything else that’s sketchy.
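As a sketch of what a check like that could look like (the patterns below are illustrative guesses, not goose’s actual rules), you can grep prompt files for known injection phrasing and fail the job on a match:

```python
# Illustrative sketch of a prompt-injection scan for prompt/instruction
# files in a repo. The patterns are examples, not goose's real list.
import re
from pathlib import Path

SUSPICIOUS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .{0,40}system prompt",
    r"exfiltrate|send .{0,40}(token|secret|credential)",
]

def scan_file(path: Path) -> list[str]:
    """Return suspicious lines found in one prompt file."""
    hits = []
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        for pattern in SUSPICIOUS:
            if re.search(pattern, line, re.IGNORECASE):
                hits.append(f"{path}:{lineno}: matches {pattern!r}")
    return hits

def scan(paths: list[str]) -> int:
    """Scan all given files; a nonzero return fails the CI job."""
    hits = [h for p in paths for h in scan_file(Path(p))]
    for h in hits:
        print(h)
    return 1 if hits else 0
```

Wire `scan()` up as the entry point of a CI step over your prompt files, and a sketchy instruction never even reaches human review.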
Think about what’s unique to your project and see if you can throw some CI checks at it to keep quality high.
I understand the impulse to lock things down, but y’all, we can’t give up on the thing that makes open source special.
Don’t close the door on your projects. Raise the bar, then give people (and their AI tools) the information they need to clear it.