The Invisible Governor – by Simon Minton


For as long as organisations and governments have existed, effort has been the invisible governor on what gets built, what gets reformed, and what gets attempted. Ideas die not because they are bad, but because the cost of testing them is too high. Backlogs grow not because nobody cares, but because nobody has the bandwidth. This constraint has been so universal, so deeply embedded in how we plan and prioritise, that we have stopped noticing it. It is simply the water we swim in.

That constraint is now collapsing. The emergence of autonomous AI agents – systems that can be given a goal and trusted to deliver it continuously, at high quality, with minimal human intervention – has fundamentally altered the economics of knowledge work. This is not a projection about what might happen in five years. It is a description of what is already happening, today, at the leading edge of software engineering. And what begins in software will not remain in software.

This briefing sets out what has changed, why it matters, and what decision-makers in both the private and public sectors need to understand now – before the window for coherent response narrows further.

In late 2025, a series of capability upgrades to the leading AI coding agents crossed a threshold that practitioners immediately recognised as qualitatively different from what came before. The most prominent of these tools is Claude Code, Anthropic’s autonomous coding agent, though similar capabilities are emerging across the industry.

AI coding assistants have existed for several years. The good ones were genuinely useful: tools that might make a competent engineer two to five times more productive. But the early-2026 generation is not a better assistant. It is, for most practical purposes, a replacement for the engineer. The system can be given a well-defined goal – build this feature, fix this system, refactor this architecture – and trusted to deliver it autonomously, at production quality, correcting its own errors along the way.

The numbers bear this out. Claude Code crossed one billion dollars in revenue by November 2025. Word-of-mouth exposure surged thirteen percentage points between late December and January 2026, according to data from Caliber. In mid-January, a Google principal engineer publicly stated that Claude had reproduced a year of architectural work in a single hour. Microsoft – which sells its own competing product, GitHub Copilot – has reportedly adopted Claude Code internally across major engineering teams. The creator of Claude Code, Boris Cherny, tweeted that he and his team now complete nearly 100 per cent of their work using Claude.

I can speak to this from direct experience. Over the past six weeks, I have built six internal tools that have materially improved how I work. Each of these would previously have been a multi-week project requiring sustained coding effort. Each took between two and six hours, with Claude Code doing the vast majority of the implementation work. In aggregate, I am completing what would previously have constituted roughly a year of professional development work every two to three weeks.

I want to be precise about what I mean by this, because the claim sounds extraordinary. I do not mean that I am doing rough, prototype-quality work at high speed. I mean that the finished output – the code, the architecture, the test coverage, the documentation – is at or above the standard I would previously have produced myself, and it is being delivered at a rate that no human team could match.
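As a rough sanity check on that arithmetic, here is a back-of-the-envelope sketch. The per-tool figures are illustrative assumptions consistent with the ranges above (a "multi-week project" taken as two 40-hour weeks, the agent-assisted version as four hours), not measured data:

```python
# Illustrative assumptions, not measurements:
HOURS_PER_WEEK = 40
prior_effort_per_tool = 2 * HOURS_PER_WEEK   # 80 hours: a "multi-week project"
current_effort_per_tool = 4                  # midpoint of the 2-6 hour range

speedup = prior_effort_per_tool / current_effort_per_tool
print(f"Per-tool speed-up: ~{speedup:.0f}x")  # ~20x

# At that rate, a working year (~48 weeks) of prior-era output takes:
weeks_per_year_of_output = 48 / speedup
print(f"A prior-era year of output now takes ~{weeks_per_year_of_output:.1f} weeks")  # ~2.4
```

Under those assumptions, the implied speed-up is roughly twentyfold, which is exactly what compresses a year of output into a two-to-three-week window.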

The natural objection is that this is a story about software engineering, and software engineering is a special case. Code is structured, testable, and unambiguous. Surely the messier domains of law, finance, medicine, and policy are different.

They are different – for now. But the gap is closing faster than most observers appreciate, and understanding why requires seeing what code actually is. Code is not a technical curiosity. It is a well-defined way of expressing process: sequences of reasoning, transformation, verification, and execution, arranged to achieve some outcome. All knowledge work, when you examine it closely enough, reduces to process of some form. The difference between software engineering and, say, contract review or financial analysis or policy drafting is not a difference in kind. It is a difference in how explicitly the process has been formalised.

The evidence that AI agents are already moving into adjacent domains is substantial. In law, firms are deploying AI for contract analysis, due diligence, and compliance review. A survey of nearly 2,300 knowledge workers published in late 2025 found that 81 per cent had used AI-powered tools to start or edit their work at least once. In accounting and audit, agentic AI systems are now performing invoice processing, reconciliation, anomaly detection, and compliance workflows with decreasing human oversight. In finance, 68 per cent of hedge funds now use AI for market analysis and trading strategies, and AI-managed robo-advisory assets exceed 1.2 trillion dollars globally.

These are not speculative applications. They are in production today. The pattern is consistent: AI enters a domain, initially handling routine and well-structured tasks, and then – as the models improve and as practitioners learn how to direct them – moves progressively up the complexity curve. In accountancy, Deloitte and others are already describing a future in which the mid-tier of knowledge work is hollowed out, leaving an “hourglass” workforce concentrated at junior (AI-supervisory) and senior (strategic) levels. The same structural pressure will apply across every knowledge profession.

Claude Code costs approximately two hundred dollars per month at the individual tier. A competent senior software engineer in a major market commands a salary well north of 250,000 dollars per annum. For roughly one per cent of that cost, an organisation can now access something that produces not ten times the output, but orders of magnitude more.
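The "one per cent" figure is simple arithmetic, but it is worth making explicit. A minimal sketch, using only the figures quoted above:

```python
agent_cost_per_year = 200 * 12       # $2,400: individual tier at ~$200/month
engineer_salary_per_year = 250_000   # salary only; fully loaded cost is higher

ratio = agent_cost_per_year / engineer_salary_per_year
print(f"Agent cost as a share of one senior salary: {ratio:.1%}")  # ~1.0%
```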

This is not a productivity gain in any conventional sense. It is a repricing of labour. The historical parallel is not the introduction of better tools to an existing workforce – it is the introduction of the power loom, which did not make weavers more productive but rendered the economics of hand-weaving untenable.

The comparison is not exact, of course. The power loom took decades to diffuse through the textile industry, constrained by the capital required to build factories and install machinery. AI agents face no such constraint, which brings us to the second shift.

Every previous industrial revolution placed its capital burden squarely on the balance sheets of individual firms. Factories had to be built. Machines had to be purchased. Infrastructure had to be installed and maintained. The result was that adoption was slow, uneven, and gated by access to capital.

This time, the capital costs are borne upstream. A small number of companies – Anthropic, OpenAI, Google, Meta, and xAI among them – have invested tens of billions of dollars in model training, infrastructure, and compute. Their customers need to invest almost nothing. Capability is rented instantly, scaled elastically, and priced at a level that is functionally trivial for any organisation of meaningful size.

This collapses the time between intent and execution. When a factory had to be built, there was a multi-year lag between a firm deciding to adopt a new technology and actually deploying it. When capability can be rented by the hour, the lag is measured in days or weeks. The traditional buffers that gave firms, workers, and governments time to adapt – the lead time of capital investment – are largely absent. Gartner’s own hype cycle analysis now places AI agents at the Peak of Inflated Expectations, but with the critical caveat that unlike most technologies at that point, the underlying capability is already production-grade.

Deep, specific expertise still matters. There are moments when an AI agent needs to be pointed toward exactly the right documentation, the right regulatory nuance, the right domain-specific edge case. A skilled practitioner directing an AI agent will outperform an unskilled one.

But expertise is no longer a durable moat. It is a depreciating asset. Each time an expert’s knowledge is used to direct an AI system, that interaction becomes training data, documentation, or a codified workflow. The knowledge transfers from the individual to the system. What was once a career’s worth of accumulated insight becomes, progressively, a set of instructions that anyone can invoke.

Consider what has already happened in software engineering. Two years ago, deep expertise in a specific programming framework or infrastructure stack was a genuinely scarce and valuable commodity. Today, Claude Code can navigate those frameworks with a fluency that matches or exceeds most human practitioners, because the collective expertise of the field has been absorbed into the model. The experts who remain most valuable are those who can identify which problems to solve – not those who know how to solve them. The distinction between strategic judgement and technical execution has never been starker.

The three shifts above are important, but none of them is the most important thing. The most important thing is this: effort itself has ceased to be a binding constraint on what can be attempted.

This is the highest-impact change in the entire transition, and it is the one that decision-makers most consistently fail to internalise. For decades, the limiting factor on what got built, reformed, or investigated was not imagination or even resources in the abstract. It was the sheer human effort required to move from idea to execution. The cost-benefit calculation killed ideas before they were ever tested. The internal tool that would save three hundred hours a year was never built because it would take eight hundred hours to create. The policy analysis that might have revealed a better approach was never conducted because no team had the capacity. The legacy system that everyone knew was failing was tolerated because replacing it was a two-year project.

That calculus has changed, decisively. When the effort required to build, test, and deploy a solution drops by one or two orders of magnitude, the entire landscape of what is “worth doing” is redrawn.
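To see how decisively, take the 300-hour/800-hour example from above and apply an effort reduction of one and then two orders of magnitude. A short illustrative sketch, using those figures:

```python
# Illustrative figures from the text: a tool that saves 300 hours/year
# but previously cost 800 hours to build.
hours_saved_per_year = 300

for effort_reduction in (1, 10, 100):
    build_cost_hours = 800 / effort_reduction
    payback_years = build_cost_hours / hours_saved_per_year
    print(f"{effort_reduction:>3}x cheaper to build: "
          f"{build_cost_hours:>5.0f} hours up front, "
          f"payback in ~{payback_years:.2f} years")

#   1x: 800 hours, payback ~2.67 years -> never prioritised
#  10x:  80 hours, payback ~0.27 years -> obviously worth doing
# 100x:   8 hours, payback ~0.03 years -> done within the week
```

A project that never cleared the bar at an 800-hour cost clears it trivially at eight.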

For businesses, the implications are immediate. Every firm of any age has a backlog of “too hard” work: internal tools never built, legacy systems never replaced, process inefficiencies tolerated for years because the effort to fix them exceeded the pain of living with them. That backlog is now liquidatable. The firms that move first will not merely gain efficiency. They will gain the compounding advantage of having fixed problems that their competitors are still living with.

For governments, the implications are more profound and more uncomfortable. State capacity has always been constrained by administrative effort: the drafting, coordination, review, consultation, and enforcement that any policy action requires. This is not a failure of intent. It is a structural feature of governance, one that shapes what policies are feasible and what reforms are attempted. When the effort required for these activities collapses, governments face a stark choice. Those that harness the change will find that policy interventions previously dismissed as “too complex” or “too resource-intensive” become achievable. They could, for example, conduct regulatory impact assessments in days rather than months, maintain living legislative codifications that update automatically as new case law emerges, or run continuous compliance monitoring across entire sectors. Those that do not adapt will find themselves outpaced not just by other states, but by private actors whose execution velocity has accelerated by orders of magnitude.

The lag between the leading edge – where I am writing from now – and mainstream adoption is twelve to twenty-four months. This estimate is grounded in three observations.

First, the adoption curve for AI is compressing dramatically relative to historical precedent. Deloitte’s 2026 Tech Trends report notes that a leading generative AI tool reached one hundred million users in two months, compared to fifty years for the telephone and seven years for the internet. More pointedly, AI agent adoption does not require hardware procurement, physical infrastructure, or long implementation cycles. It requires a subscription and a willingness to experiment. The adoption friction is near zero.

Second, the “holiday effect” of December 2025 demonstrated how fast adoption can move when practitioners have time to experiment. Claude Code’s viral adoption over the winter break showed that the barrier to adoption is not capability or cost. It is attention and habit. As awareness spreads through professional networks, adoption follows rapidly.

Third, competitive pressure will force the hand of laggards. Once early adopters demonstrate order-of-magnitude productivity gains, organisations that fail to follow will face an existential cost disadvantage. This is not a technology that firms can afford to “wait and see” on, because by the time the results are visible, the gap may already be insurmountable. Microsoft’s data shows global generative AI adoption reached 16.3 per cent of the world’s population in the second half of 2025, up from 15.1 per cent in the first half – and that figure understates enterprise adoption, which is moving considerably faster.

Twelve to twenty-four months is not a comfortable window. It is barely enough time to understand the problem, let alone respond to it in any coordinated way.

If you run a business, the question is not whether to adopt AI agents but how fast you can integrate them into your operations without destabilising what already works. Begin with the backlog. Every organisation has one: the list of things that everyone agrees should be done but nobody has the resources to do. That list is now your highest-return investment. A single competent generalist, working with current AI tools, can liquidate years of accumulated technical and operational debt in months.

If you manage or advise on policy, the imperative is to understand that the administrative effort constraint – the single biggest structural limitation on state capacity – is dissolving. This creates opportunity and risk in equal measure. The opportunity is a step-change in what government can accomplish. The risk is that private-sector actors, unconstrained by the deliberative processes of democratic governance, will move so much faster that the regulatory environment becomes permanently reactive.

If you are a knowledge worker, the honest assessment is this: your value as a pure executor of well-defined tasks is declining rapidly. The value that remains – and it is real and substantial – lies in judgement, in the ability to identify which problems matter, in the capacity to navigate ambiguity and politics and human relationships, and in the wisdom to know when the machine’s output is wrong. These are not skills that most organisations currently hire for, train for, or reward. That will need to change.

Effort, until now, has been the invisible governor on everything we build, everything we govern, and everything we choose not to attempt. It has shaped our institutions, our strategies, our career structures, and our sense of what is possible.

The governor has been removed.

What follows from that fact – in business, in government, and in the structure of professional life – will be determined by how quickly and how honestly we reckon with it. The temptation will be to wait for certainty, for the technology to mature, for best practices to emerge, for someone else to go first. That instinct is understandable. It is also, in this case, a serious strategic error.

It is always worth remembering that silicon – as in rocks, albeit very complicated rocks – is what we are teaching to do all of this.


