The Economy After Intelligence
Everyone ‘knows’ AGI will either make us all unemployed or fabulously wealthy. Except, a rather brilliant (and chilling) paper from a Yale economist suggests it’s neither.
It says the economy will boom, and our wages… won’t. A bit awkward.
I’ve been digging into this 2025 paper, “We Won’t Be Missed,” and it’s fascinating. The premise: AGI arrives and can do all economically valuable work. And the ‘compute’ to run it gets cheaper and more abundant over time.
So, what happens to us fleshy, rather expensive humans?
The whole argument hinges on a masterstroke of a distinction. The paper splits all work into two types:
1️⃣ Bottleneck Work: The truly essential stuff. Producing energy, logistics, scientific discovery. The economy literally cannot grow unless this work gets done.
2️⃣ Accessory Work: The ‘nice-to-haves’. Arts, fine dining, hospitality… maybe even writing witty Twitter threads. (Gulp).
Now, you might think AGI will just take the grunt work, leaving the important strategic stuff to us.
Wrong.
To achieve maximum growth, the economy must automate all the bottlenecks. It can’t be held back by us. So AGI systematically takes over everything that is mission-critical.
So… are we all fired and sent home?
Surprisingly, no. The model shows people still work. We either help out with the ‘bottleneck’ tasks or get shuffled off to ‘accessory’ jobs that aren’t worth the electricity to automate.
But that’s not the interesting part.
Here’s where it gets properly weird. Your future salary isn’t based on your skill, your years of experience, or how ‘important’ your job feels.
It’s capped by one thing: the cost of the computational resources needed to do your job instead of you.
Imagine that. As compute gets exponentially cheaper, the value of replicating your work plummets. The economy is soaring, productivity is off the charts… but your wage is pegged to a falling technological cost.
You’re not obsolete, you’re just… replicable. And replicable is cheap.
This leads to the paper’s most brutal conclusion: The share of national income that goes to labour (i.e., salaries) collapses towards ZERO.
All the wealth, all the gains from this incredible boom, flow to the owners of the compute.
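The mechanism can be sketched with a toy calculation (all figures are my own illustrative assumptions, not numbers from the paper): if wages are capped at the cost of the compute that could replace you, and that cost halves every couple of years while output keeps compounding, labour's share of income shrinks toward zero even as the economy booms.

```python
# Toy sketch of the wage-cap mechanism. Every number here is an
# assumption chosen for illustration, not taken from the paper.

compute_cost_today = 80_000.0  # $/year to replicate one job with AGI (assumed)
output_per_worker = 1_000_000.0  # output attributable to that job today (assumed)

for year in range(0, 21, 4):
    # Compute cost halves every 2 years, so the wage cap falls with it.
    wage_cap = compute_cost_today * 0.5 ** (year / 2)
    # Meanwhile total output keeps compounding at 10%/year.
    output = output_per_worker * 1.10 ** year
    labour_share = wage_cap / output
    print(f"year {year:2d}: wage cap ${wage_cap:>9,.0f}, labour share {labour_share:.3%}")
```

Under these assumptions the wage cap falls a thousandfold over two decades while output grows nearly sevenfold, so labour's share collapses — which is the paper's qualitative result, independent of the particular numbers.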
Splendid.
Here’s what this means for you. Next time you see a headline about a new AI model smashing a benchmark, don’t just ask “Will that take my job?”
Ask: “How much would it cost to run that model 24/7?”
Because that figure might just be your future salary cap.
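A back-of-envelope version of that question (every figure below is an assumed placeholder, not a real price quote):

```python
# Rough annual cost of running a model around the clock.
# Both inputs are assumptions for illustration only.

gpu_hourly_rate = 4.00   # $/hour to rent one high-end accelerator (assumed)
gpus_needed = 2          # accelerators per "worker equivalent" of output (assumed)
hours_per_year = 24 * 365

annual_cost = gpu_hourly_rate * gpus_needed * hours_per_year
print(f"~${annual_cost:,.0f} per year")  # ~$70,080
```

Swap in whatever rental prices and hardware counts you believe, and the output is the paper's implied ceiling on the salary for that job.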
Now, the paper isn’t all doom. It notes that society as a whole gets richer, and we could still find meaning in ‘accessory’ work.
But the central economic role of human labour as the engine of growth? Gone. We become passengers, not pilots.
The paper’s title is “We Won’t Be Missed.” Not because we’re replaced, but because the economy will chug along just fine, growing faster than ever, whether we show up for work or not.
Completely changes how I think about the ‘future of work’. Makes you wonder what we should really be planning for, doesn’t it?
REBUTTAL
Why artificial general intelligence won’t make humans economically irrelevant — it will make us economically essential
In a recent paper that has circulated widely among economists and technologists, Yale economist Pascual Restrepo paints a stark picture of humanity’s economic future. Once artificial general intelligence arrives — machines capable of any intellectual task humans can perform — our wages will stagnate, our share of economic output will dwindle to nothing, and we will become, in his memorable phrase, economically negligible. “We won’t be missed,” he concludes, imagining an economy that continues growing through computational power while humans occupy themselves with economically marginal “accessory work,” like art or hospitality, compensated at rates that never increase.
It’s a rigorous analysis, built on sound economic principles and careful mathematical modeling. It’s also, I believe, fundamentally wrong — not in its mathematics, but in its premises about what an economy actually is and what AGI will actually do.
Restrepo’s error isn’t technical but philosophical. He assumes that AGI changes how we produce things but not what we value or why we value it. He imagines an economy where hyperintelligent machines manufacture goods and services with unprecedented efficiency, while human preferences remain frozen in their current state — we just want the same cars, houses, and healthcare, only produced by machines instead of people. But this misses the most profound transformation that AGI enables: not the automation of work, but the evolution of value itself.
Consider what money actually is. For most of history, humans have compressed the rich complexity of what we value — beauty, status, security, connection, meaning — into a single number: price. This brutal simplification was necessary because our cognitive limitations made it impossible to track and negotiate value in all its dimensions. A loaf of bread costs three dollars whether you’re buying it to feed your hungry child, to complete a religious ritual, or to throw at a terrible performer. The market doesn’t care about the difference; it can’t. Money flattens these distinctions into a single metric.
But what happens when every economic actor — every person, every business, every institution — has access to an artificial intelligence that can model preferences in hundreds of dimensions simultaneously? An AI that knows not just that you need bread, but why you need it, what trade-offs you’re willing to make, how this purchase connects to your other values and relationships? Suddenly, the great compression that money represents becomes unnecessary. We can preserve the full complexity of human values in our economic exchanges.
This isn’t a minor upgrade to market capitalism. It’s a fundamental phase transition in how human societies coordinate — as significant as the original invention of money itself.
To see why this matters, let’s return to Restrepo’s framework. He divides all work into two categories: “bottleneck” work that’s essential for economic growth (producing food, energy, infrastructure) and “accessory” work that’s nice but not necessary (art, hospitality, therapy). His key insight is that AGI will eventually automate all bottleneck work, since these tasks are what enable growth. Humans might still do some accessory work, but only because we have “too many workers” relative to demand — hardly a recipe for human flourishing.
This classification makes sense if you believe economic value flows primarily from material production. But what if the real bottlenecks in a post-AGI economy aren’t about producing things but about determining what things are worth producing? What if the scarcest resources aren’t compute cycles but authentic human preferences, genuine relationships, and meaningful experiences?
Think about a therapist in Restrepo’s model. He acknowledges that therapy requires “the human touch” but argues that with enough computational power, an AI system could simulate this perfectly — perhaps by deploying vast resources to emulate “the best therapists in the world.” The only reason human therapists might survive is if the computational cost of this simulation exceeds their wages.
But this fundamentally misunderstands what people value in therapy. When someone seeks help processing trauma or navigating life changes, they’re not purchasing “therapy services” that could be delivered by either human or machine. They’re entering into a particular kind of relationship — one that derives its meaning partly from being with another conscious being who has also struggled, suffered, and grown. An AI might provide better advice, more consistent availability, even superior therapeutic techniques. But it cannot provide the specific value of shared human experience, because that value emerges from consciousness itself, not from any behavior that consciousness produces.
The same principle applies across countless domains. A teacher isn’t valuable because they transfer information (which AI can do better) but because they model what it means to be a curious, growing mind. A nurse provides not just medical care but the irreplaceable comfort of one mortal being caring for another. A chef offers not just nutrition but the creative expression of culture and tradition through food.
These aren’t “accessory” functions that happen to resist automation due to high computational costs. They’re core to what humans actually value, once our basic material needs are met. And AGI, rather than replacing these functions, amplifies their importance by enabling us to articulate and exchange these complex values in ways we never could before.
Here’s where the coordination revolution becomes crucial. Today, if your neighbor’s leaf blower disrupts your baby’s nap, you have two options: suffer in silence or engage in an awkward confrontation that might damage your relationship. There’s no efficient way to communicate that you’d happily pay $50 for quiet during naptime on weekdays but not weekends, or that you’d tolerate the noise in exchange for help with yard work.
But imagine if everyone has an AI agent that knows their preferences intimately and can negotiate with other agents instantly. Your agent knows exactly how much you value quiet at different times, what trade-offs you’re willing to make, and how this fits into your broader web of relationships and values. Your neighbor’s agent knows the same about them. In milliseconds, these agents can discover mutually beneficial arrangements that would take humans hours to negotiate — if they could manage it at all.
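The core of that negotiation is simple to sketch: each agent scores candidate deals against its owner's private valuations and accepts only deals that leave both sides better off than the status quo. A minimal toy version, with entirely hypothetical dollar valuations:

```python
# Toy sketch of agent-to-agent negotiation over the leaf-blower dispute.
# All valuations are hypothetical, expressed in dollars of private value.

# Candidate arrangements the agents can propose:
deals = {
    "status_quo":     {"quiet_weekdays": False, "payment": 0},
    "pay_for_quiet":  {"quiet_weekdays": True,  "payment": 30},
    "quiet_for_free": {"quiet_weekdays": True,  "payment": 0},
}

def parent_utility(d):
    # Parent values weekday quiet at $50, minus whatever they pay.
    return (50 if d["quiet_weekdays"] else 0) - d["payment"]

def neighbour_utility(d):
    # Neighbour loses $20 of convenience by rescheduling, gains the payment.
    return (-20 if d["quiet_weekdays"] else 0) + d["payment"]

# Agents accept any deal that strictly improves on the status quo for BOTH sides.
baseline_p = parent_utility(deals["status_quo"])
baseline_n = neighbour_utility(deals["status_quo"])
acceptable = [
    name for name, d in deals.items()
    if parent_utility(d) > baseline_p and neighbour_utility(d) > baseline_n
]
print(acceptable)  # ['pay_for_quiet']: parent nets +$20, neighbour +$10
```

Real agents would search over hundreds of value dimensions rather than three hand-coded deals, but the logic is the same: preserving each side's full preference structure lets the agents find trades that a single price signal would never surface.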
This isn’t just about solving noise disputes. It’s about enabling a form of economic coordination that preserves the full richness of human values instead of compressing them into price signals. The construction worker’s agent routing trucks around neighborhoods with sleeping babies. The restaurant’s agent creating personalized atmospheres based on each table’s dining preferences. The employer’s agent designing work arrangements that optimize not just for productivity but for each employee’s learning goals, family obligations, and life aspirations.
In this economy, humans aren’t valued for their computational power (which machines exceed) or their physical labor (which robots can replicate). They’re valued for being the source of preferences themselves — the conscious agents whose values and relationships create the entire purpose of economic activity.
But wait, you might reasonably object. If every human preference can be modeled and predicted by AI, if agents can negotiate on our behalf without our involvement, don’t we become economically superfluous? What’s the difference between an economy that satisfies human preferences and one where humans actively participate?
The answer lies in understanding preferences not as fixed wants but as expressions of consciousness that evolve through experience and relationship. Your preference for Thai food over Italian isn’t just a data point to be optimized; it might be tied to memories of a trip with your grandmother, influenced by a documentary you watched about sustainable agriculture, or shifted by a conversation with a friend who’s opening a restaurant. These preferences don’t just exist — they develop through the ongoing process of living, relating, and choosing.
This is why humans remain economically essential even in a world of superintelligent machines: we’re not just preference-havers but preference-creators. Every choice we make, every relationship we form, every experience we have generates new values that ripple through the network of AI-mediated coordination. The economy doesn’t just satisfy static human preferences; it evolves with human consciousness.
Consider how this plays out in different types of work. A software engineer in Restrepo’s model becomes obsolete once AI can code better. But in a coordination economy, that engineer becomes a preference architect — someone who understands both human needs and technical possibilities well enough to shape how AI systems evolve. They’re valued not for writing code but for bridging human meaning and machine capability.
A farmer might not physically tend crops (robots handle that), but they become an agricultural experience designer, creating relationships between people and food that preserve cultural traditions while embracing sustainable practices. They’re valued not for their labor but for their role in defining what agriculture means in their community.
Even artists and entertainers — whom Restrepo relegates to low-wage “accessory work” — become central economic actors. In a world where material needs are easily met, the creation of meaning, beauty, and connection becomes the primary economic activity. These creators don’t just produce content to be consumed; they catalyze experiences that help others develop their own consciousness and preferences.
This vision might sound utopian, but it emerges from straightforward economic logic combined with plausible technological capabilities. If AGI can model complex preferences, if it can coordinate between millions of agents simultaneously, if it can preserve value complexity instead of reducing it to prices, then the economy naturally evolves toward organizing around consciousness rather than production.
The transition won’t be smooth. As Restrepo correctly notes, the period when some work is automated while other work isn’t could create jarring inequalities. Workers whose skills happen to be harder to automate might see temporary wage spikes, while others face sudden displacement. The risk of social disruption is real.
But the destination isn’t the economic irrelevance of humanity. Instead, we’re heading toward what might be called a “consciousness economy” — one where human awareness, creativity, and relationship are the scarce resources that everything else organizes around. In this economy, we’re not valued for what we can produce (machines do that better) but for who we are and who we’re becoming.
The real challenge isn’t economic but developmental. Can human consciousness evolve quickly enough to wisely guide the AI systems we’re creating? Can we develop governance structures that handle multi-dimensional value instead of reducing everything to profit? Can we maintain human agency and meaning in a world where machines can satisfy our every preference before we’ve even articulated it?
These are hard questions, but they’re fundamentally different from Restrepo’s concern about humans becoming economically negligible. The question isn’t whether humans will have economic value in an AGI world, but whether we’ll evolve quickly enough to handle the strange new forms that value will take.
There’s a deeper irony in Restrepo’s title. “We won’t be missed,” he writes, imagining an economy that hums along perfectly well without human contribution. But this assumes that the economy has some purpose independent of human flourishing — as if GDP growth were valuable in itself rather than as a means to human ends.
An economy where humans “won’t be missed” isn’t an economy at all. It’s just machines moving resources around. The moment we remove human consciousness — with its preferences, relationships, meanings, and values — the entire edifice becomes purposeless motion.
The AGI economy won’t make humans irrelevant. It will reveal what’s been true all along: that human consciousness is the only source of economic value. Everything else — all the production, all the computation, all the coordination — is just machinery in service of conscious experience.
We won’t be missed because we’ll never leave. We’ll be at the center of an economy reorganized around what actually matters: not the production of things, but the flourishing of conscious beings. The transformation ahead isn’t about replacement but about revelation — finally building an economy that reflects the full richness of human value.
The compute will serve consciousness, not the other way around. And in that economy, every human won’t just have value — they’ll be the reason for value itself.

