Economists vs Technologists on AI


You can listen to this podcast on Spotify, Apple Podcasts, or wherever else you get your podcasts. Or you can watch this conversation on YouTube.

There is a lot of noise, hype, slop, think pieces, and vibes-based analysis on the economic impacts of AI. The headlines you might have read – that AI will ‘transform’ or ‘supercharge’ economic development – track with what we are hearing from Silicon Valley circles.

The now-famous AI 2027 forecast made the following prediction for 2029: “Even in developing countries, poverty becomes a thing of the past, thanks to UBI and foreign aid.” That’s in their positive scenario. And in the more worrying scenario, which has all of humanity extinct by the mid-2030s, at least there is “an end to poverty” by 2029.

Sam Altman said in an interview: “I think it’d be good to end poverty. Maybe you think we should stop the technology that can do that.”

And Dario Amodei in his essay Machines of Loving Grace: “Overall, I am optimistic about quickly bringing AI’s biological advances to people in the developing world. I am hopeful, though not confident, that AI can also enable unprecedented economic growth rates and allow the developing world to at least surpass where the developed world is now.”

In our new series of Ideas of Development, our goal is to help you think like an economist about the impacts of AI, cut through the noise, and learn about the key opportunities and challenges for low- and middle-income countries in the AI age.

Through 10 episodes, we are going to try to cover the economic research and thinking that we feel is particularly important and relevant.

For these episodes, we are extremely lucky to be joined by co-host Deena Mousa, who is a researcher and grantmaker at Coefficient Giving (formerly known as Open Philanthropy). Deena’s day job involves figuring out how to allocate scarce resources across different problems in global health and development.

AI comes into that work in a couple of ways.

  1. Coefficient Giving are increasingly being asked to assess projects that use AI in areas like health, agriculture, or government systems in low- and middle-income countries.

  2. AI itself might change some of the core constraints that development economics has traditionally focused on, things like shortages of skilled labour, weak state capacity, or limited access to expertise.

Deena has also been writing about automation and productivity. For example, why very capable AI systems haven’t translated into large productivity gains in some sectors, or why replacing a task on paper is very different from replacing a job in practice. She’s also written about how these dynamics look different in lower-resource settings, where the counterfactual often isn’t a human expert, but no service at all.

For the purposes of this series, when we talk about AI, we are mostly referring to machine-learning systems (e.g. LLMs) that can perform or augment specific tasks: things like prediction, classification, pattern recognition, or decision support. Generally, we are not talking about a concept of general intelligence that can do everything a human can do – although we do get into that in some episodes!

That distinction matters a lot for development, because task-level tools interact very differently with labour markets and institutions than full job replacement would. And a lot of the confusion in public debates comes from sliding between those meanings without realising it.

The kinds of claims quoted at the start, from those who spend their days working on AI, are usually framed in terms of technological capability, i.e. what AI systems might be able to do. But they often skip over the economic and institutional steps that would need to happen for those capabilities to translate into broad-based improvements in living standards.

And importantly, most of these claims are implicitly based on rich-country contexts. They assume things like reliable infrastructure, functioning labour markets, and strong institutions, and then extrapolate those assumptions to low- and middle-income countries, where the constraints are often very different.

So it’s not that these statements are obviously wrong. It’s that they rely on a lot of unstated assumptions that are doing a huge amount of work. The mechanism that gets you from AI improving to poverty disappearing is left unspecified, and once you start asking what has to happen in between, the story becomes much more complicated and much more context-dependent.

We don’t have clean, well-identified answers yet about the macroeconomic impacts of AI, and anyone who claims otherwise is overstating what the evidence can support.

What we can do is distinguish between better and worse arguments. We can ask which claims are grounded in historical experience, which rely on strong assumptions, and where the biggest uncertainties actually lie.

“If you can describe the world in the way it is today very accurately, that can almost seem, or come off like, a forecast, because it’s a hard thing to do and very few people can do it, and because so much of what happens in the immediate future is extrapolation of existing constraints and incentives.” – Deena Mousa

Our goal is to be precise about what we know, what we don’t, and what would need to be true for different futures to materialise.

First, following the approach of economists, we will focus on mechanisms rather than end states. Instead of starting with “Will AI end poverty?”, we’ll ask questions like: what specific tasks does AI change? Who adopts it? And how do those changes propagate through firms, labour markets, and institutions?

Second, we’re going to pay close attention to context. The same technology can have very different effects depending on labour abundance, state capacity, infrastructure, and existing inequalities, all of which look very different in low- and middle-income countries than in Silicon Valley.

In the current debate, you have technologists making very confident claims, and economists responding with a lot of caution, sometimes to the point of sounding evasive.

A big part of this mismatch comes down to what each group is trained to optimise for. Technologists are often focused on capability, what a system can do under ideal conditions. And in that world, progress really can be rapid and visible. You run a benchmark, the numbers go up, the model gets better. That creates a very tangible sense of momentum.

Economists, by contrast, tend to worry less about what’s technically possible and more about what actually happens once a technology is embedded in a real economy. That means asking questions about adoption, incentives, coordination, institutions. All of these things tend to move much more slowly and much less cleanly.

So when a technologist says “this system can now do X”, they’re usually right in a narrow sense. But when someone then jumps from that to “this will replace all people who can do X”, economists get uncomfortable, because there’s a huge chain of assumptions connecting those two statements.

Policymakers are operating under time pressure. They want guidance, and economists often respond with some version of “it depends”.

But there’s an important distinction between acting under uncertainty and pretending uncertainty doesn’t exist. Economists tend to be very wary of false confidence, i.e. making strong claims that later turn out to be wrong.

So if you’re a decision-maker, you might reasonably ask: what kind of evidence should actually change your mind?

Well, first there is an important difference between signals that sound impressive and signals that are economically meaningful. For example, large user numbers or impressive benchmark results tell us something about interest or capability, but they don’t necessarily tell us much about productivity, wages, or living standards.

What would be much more informative is evidence about sustained adoption and integration into everyday workflows, and whether that adoption is actually changing the outcomes we care about.

And many of the most exciting use cases are those in low- and middle-income countries, where AI might relax binding constraints, rather than just improving already well-functioning systems.

For example: does it meaningfully extend access to expertise where none existed before? Does it allow scarce professionals, like doctors, teachers, and civil servants, to serve more people without degrading quality? Or does it mostly benefit settings that already had relatively strong capacity?

Each episode focuses on a specific part of the puzzle: places where the economics is genuinely unclear and where context really matters.

We start by looking backward. In our next episode, we are talking to Bruno Caprettini to look at one of the most common historical analogies people make when talking about AI: the Industrial Revolution. We’ll talk in more detail about how that played out economically. What did technological change actually look like when it first unfolded? How long did it take for living standards to rise? And what kinds of disruption and backlash showed up along the way? We’ll also discuss in what ways AI is similar to and different from that period of history, and how far the analogy really goes.

From there, we move to one of the first hurdles for AI adoption in low- and middle-income countries: energy and infrastructure. With Rose Mutiso, we ask whether AI can take off in Africa, and what the key constraints currently are.

Then, with Anton Korinek, we turn to measuring the real-world impact of AI. We discuss benchmarks and how to read and interpret their results, which are not as straightforward as the headlines suggest.

And after that, six more episodes will cover a bunch of other important topics. If this sounds like it’s up your street, subscribe to our Substack, or follow the show wherever you get your podcasts.


