It’s been a while! Over three years since my last post, to be exact.
Jesus Christ, how things have changed. The biggest change, for me, is the unavoidable impact of AI on software engineering, both as a practice and in the resulting tools and products we build.
I am, as many senior-leaning engineers are, ambivalent about whether AI is making us more productive coders, and especially whether it’s even worth it when LLMs are wreaking havoc on economies, lower-wage employment, intellectual property law, and the environment. However, ambivalent as I may be, it would be a bad idea to completely ignore this tidal wave of change. Complete avoidance would make my stubborn parts happy, but it feels irresponsible to be in a position where friends and family ask me things about LLMs and agents and humans-in-the-loop and not have something useful to say about it.
I have a long reading list of good links about all of this that I’ll share sometime. (Hopefully less than three years from now.) For now, just an anecdote, and a thought:
To stay familiar with the AI space, I try to follow the big developments and experiment with the tools made available to me. My current round of experiments uses GitHub's Copilot agents to automate tedious tasks that have been on my backlog for literal years, to see if I can get agents to knock a few dusty things off my list while I work on other things.
A teammate saw what I was doing, and we had a couple of good chuckles about some traps and blind spots that the agents walk right into, like (smartly) writing unit tests to validate their changes, but then (dumbly) not noticing that the unit test globbing patterns prevented the CI jobs from even running the tests they wrote, tests that in this case would have failed on Windows had they run.
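The failure mode is mundane enough to sketch. Assuming a hypothetical CI setup that only collects test files matching a fixed glob (the pattern and file names below are made up), a perfectly good test dropped in the wrong directory simply never runs:

```python
# Hypothetical sketch of the trap: CI collects test files with a fixed
# glob, so a valid test written outside that pattern is silently skipped.
from fnmatch import fnmatch

CI_TEST_GLOB = "tests/*_test.py"  # hypothetical pattern from a CI config

repo_files = [
    "tests/parser_test.py",   # existing test: matches the glob, gets run
    "src/win_paths_test.py",  # the agent's new test: never collected
]

collected = [f for f in repo_files if fnmatch(f, CI_TEST_GLOB)]
print(collected)  # only the pre-existing test survives the glob
```

CI reports green either way, which is exactly why this kind of miss is so hard to catch in review.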
The recommended fix is to give your agents some durable memory by instructing them to record, in an AGENTS.md file that lives in the repository, any learnings that might be useful to them later. It's a clever idea in its simplicity, ensuring that future agents have a more fully formed context about the code they're editing.
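As a sketch of what that durable memory looks like (the file name is real; the contents below are hypothetical, riffing on the CI anecdote above), such a file might read:

```markdown
# AGENTS.md (hypothetical example)

## Testing
- CI only collects unit tests matching `tests/*_test.py`; tests placed
  anywhere else will silently not run. Put new tests under `tests/`.
- Tests must pass on Windows: build paths with `os.path.join`,
  never hard-code `/` separators.

## Conventions
- Run the full unit test suite locally before opening a PR.
```

Plain markdown, checked into the repository root, read by the agent at the start of each session.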
The other side of this coin is that, for many senior engineers, the mere presence of an AGENTS.md or CLAUDE.md file in a repository serves as a signal that the agents have been here, and that the code in this project is of dubious quality at best. I tend to fall into this camp; knowing that there are software projects whose code is 100% vibe-coded, with very few (if any) human checks and balances, is nightmare fuel. Some maintainers admit to it, but the total impact is impossible to measure. A file hinting that code has been vibed in a project is, then, a dark signal.
Another thought struck me today, though, from the perspective of my current role as a maintainer of heavily used open source projects: while an agents file may be a hint that makes us curmudgeons roll our eyes and step away in disgust, the dark forest of vibe coders exists, and they're probably opening PRs on your projects. Some people are probably vibe coding without even knowing it, because LLM-powered autocomplete is enabled by default in their IDE. In that reality, an AGENTS.md might also be the best protection you have against agents and IDEs making dumb mistakes that are often very hard to notice during code review. If you maintain projects that welcome third-party contributions, you deserve to at least know that you've given the agents some guardrails to lean on.
You might not trust vibe coders, but if you can gently guide the vibes, maybe it's worth a cringe or two from the seasoned engineers.
published 2026-01-22