the nuance | AI & jobs: beyond hype & panic
How to think clearly about AI and jobs when nobody knows what's coming
Welcome to the nuance – a space to think clearly about polarizing topics.
In this edition: Making sense of AI and jobs when nobody (including the people building it) actually knows what's coming.
The AI jobs debate feels stuck between “robots will do everything, you’re toast” and “innovation always creates more than it destroys.” But beneath the buzzy headlines about the death of white-collar work, here’s the deeper question: How do societies absorb technological disruption when it happens faster than humans and institutions can adapt – and when nobody actually knows what’s coming?
We’re well past debating whether AI will affect employment – it already is. This is about navigating a technology that could eliminate entire job categories within years (or decades – more on that below) while also potentially creating unprecedented abundance.
On one side:
Techno-optimism treats job displacement as temporary friction. “Technology always creates more jobs than it destroys – adapt or get left behind.” Every concern gets met with printing-press analogies. Question the pace or ask about transition costs, and you’re blocking progress.
On the other side:
Labor catastrophism treats AI as an extinction event for work itself. “We’re creating a permanent underclass while tech billionaires capture all the gains.” Suggest historical patterns might repeat, and you’re naive or complicit.
Result:
Both positions assume we know the timeline – despite that being the one thing nobody can confidently tell you. “Jobs will adapt” and “mass displacement” could both be true, just on different timelines. If adaptation takes 5 years and disruption takes 2, we’re in crisis mode. If disruption unfolds gradually over 10 or 15 years, markets can adjust.
The binary swings between “this is fine” and “catastrophe” when the actual answer is “it depends on pace” – and nobody knows the pace.
This is genuinely hard because multiple realities exist simultaneously – and some directly contradict each other:
Historical precedent says we adapt
Every technology from trains to electricity to spreadsheets displaced work temporarily and created more opportunity long-term. Excel didn’t eliminate spreadsheet jobs – it made everyone a spreadsheet worker. U.S. unemployment is still under 5%. When techno-optimists say “we’ve been here before,” they’re citing history, which is hard to refute.
But the speed might actually be different
The telephone was patented in 1876 and didn’t reach 50% of American households until the 1940s – nearly 70 years later. Electricity took four decades to spread across America. ChatGPT hit 100 million users in two months, reportedly the fastest-growing consumer application in history. “We adapted before” assumes humans and institutions can absorb shock at any pace. Can they absorb it at this pace?
The technology itself is genuinely uncertain
In 2019, GPT-2 struggled to stay coherent beyond a paragraph. By 2023, GPT-4 passed the bar exam. By early 2025, Claude Code built a functional website with a playable video game in 90 seconds. Ask AI researchers what capabilities we’ll have in 2028 and you’ll get answers ranging from “better chatbots” to “most knowledge work automated.” Your career planning depends on which scenario unfolds. And nobody knows for sure.
No trusted source on this
Who do you even trust here? Dario Amodei, CEO of Anthropic, said AI could “wipe out half of all entry-level white-collar jobs.” Tech CEOs warning about their own products – are they being honest or managing liability? Economists cite historical precedent but largely missed the 2008 crisis. Labor advocates often cry wolf when it comes to automation. The people building it don’t know, the people studying it are behind, and there’s no institution with credible predictions because the technology is moving faster than research cycles.
When you read about AI and jobs, or argue about it, you’re already filtering through a particular lens: innovation, speed, dignity, markets, or power.
Here’s how to spot which one – both in yourself and in what you’re consuming and discussing:
Innovation — Trusts that historical patterns will repeat and we’ll adapt at pace. Points to unemployment data, previous automation waves, and innovation’s track record of making us better off. Operating from innovation means betting that the strongest evidence we have – history – will hold, even with a uniquely powerful technology.
Speed — Recognizes that pace matters as much as the technology itself. Focuses on whether adaptation speed can match disruption speed, and sees genuine friction between technology’s tempo and human adjustment timelines. Emphasizing velocity means arguing that “we adapted before” ignores crucial differences in how fast this is moving.
Dignity — Sees work as more than income – it’s identity, structure, and purpose. Worries about what happens when your profession no longer exists, and points out what Universal Basic Income (UBI) can’t solve: the existential question of meaning in daily life.
Markets — Believes that fighting inevitable technological progress just delays better outcomes. Trusts markets to reallocate resources and argues that protecting obsolete work strangles the productivity gains that could fund everything else. Prioritizing efficiency means seeing resistance as choosing stagnation.
Power — Focuses on who captures the productivity gains from AI. Argues that without intervention, every efficiency gain flows upward while displacement costs flow down. Operating from power means framing this as a question of wealth concentration going forward, not just job loss.
Each lens is a bet on which uncertain future actually unfolds.
We’ve covered the five lenses and the competing realities underneath them. Here’s how to sharpen your thinking:
Challenge your certainty
Ask yourself: What would change my mind?
If you’re betting on innovation (history repeats), what timeline would make you wrong? Two years of disruption? Five?
If you’re betting on velocity (speed breaks the pattern), what would prove adaptation is working?
Spot what you’re trading off
Every lens elevates something and sacrifices something else. If you prioritize markets (trusting markets to reallocate), you’re dismissing concerns about work as identity.
If you prioritize dignity (protecting meaning and purpose), you might stall productivity gains that could fund solutions. If you prioritize power (wealth redistribution), you’re assuming intervention works better than markets. Name what you’re willing to give up.
Here’s how it sounds in practice:
“I think we’ll adapt because history says we do. But I also see the speed problem – if disruption hits in 2 years and retraining takes 5, that’s real crisis. I’m betting on the longer timeline, but I’m watching for signs I’m wrong.”
“I’m worried about the speed, but it’s true that we’ve always figured it out eventually. My concern is ‘eventually’ might mean a decade of chaos for millions of people. That seems worth trying to soften even if the long-term outcome is fine.”
“I care about markets, and I care about dignity. Those are in tension. I think we can capture the productivity gains AND address the meaning problem, but that requires being honest that UBI alone doesn’t solve the second part.”
Think for yourself.
Here’s how I think about this: AI forces us to grapple with uncertainty at high speed. But everything has always been uncertain. Societal-scale change is constant; it’s just in our face now.
I’ve never believed this grand human experiment is meant to go down in flames. But the possibility is activating. And maybe that’s the point. AI – as concept, as threat, as mirror – is forcing us to look more deeply at what it means to be human. And it’s arriving precisely when society is at its most lonely, distracted, and depressed.
They say never waste a crisis. Same applies here. So beyond the question of adaptability or timing, if the promise of AI is to change everything, what do we want to become?