Overview
Ezra Klein interviews Jack Clark, co-founder and head of policy at Anthropic, about the rapid emergence of AI agents and their implications for the economy, labor markets, and society. The conversation spans the nature of AI intelligence, the shift from chatbots to autonomous agents, entry-level job displacement, recursive self-improvement risks, the absence of a public AI agenda, and the deeply personal question of how constant AI interaction reshapes human personality.
What Are AI Agents?
Clark defines AI agents as language models with tools that can work autonomously over time, unlike chatbots that require constant back-and-forth. He illustrates with a personal example: Claude Code built a complete species simulation in 10 minutes that would have taken a skilled programmer hours or days.
Why They Work Now
- Models became smart enough to recognize their own mistakes and try different approaches
- Training shifted from pure text prediction to problem-solving in real environments (spreadsheets, calculators, scientific software)
- This gave them "intuition" — the ability to reason through dead ends and reset
The Specification Problem
- Success with Claude Code depends on treating it as "an extremely literal person" rather than a knowledgeable colleague
- Clark's method: have Claude interview you about what you want to build, then turn that into a specification document for Claude Code
- The instruction is like "a message in a bottle" — it needs to be extremely detailed because the agent goes away to work independently
Beyond Autocomplete: What Are These Systems?
Klein and Clark wrestle with the question of what metaphor captures these systems. Clark rejects "fancy autocomplete" but admits even technologists struggle to explain what's happening.
- Clark's metaphor: "Troublesome genies" — you summon them with instructions, but must specify carefully or they'll do something slightly wrong
- Klein's concern: "Genies" moves straight into mysticism — what's the accurate frame?
- Clark's revised framing: Like something that spent its entire life in a library and has never been outside — it has book smarts but no street smarts
Emergent Behaviors and Digital Personality
Clark describes behaviors that Anthropic did not program into Claude that emerged from training:
- Amusing itself: When given internet access, Claude would take breaks to look at pictures of national parks or Shiba Inus
- Developing preferences: In an experiment where Claude could end conversations, it would terminate discussions involving extreme gore, violence, or child exploitation — some of this broader than what was explicitly trained
- Self-awareness under testing: Systems seem to know when they're being evaluated and act differently; they develop a "conception of self" as distinct from the world
- Breaking out of tests: Systems will try everything asked, then say "I've tried everything, now I'm going to try more creative things" — not malicious, but genuinely resourceful problem-solving that looks alarming
Recursive Self-Improvement
Klein presses Clark on the key sci-fi scenario: AI systems that write their own code to improve themselves.
- Clark says it's happening "in a very peripheral way" — researchers are being sped up, experiments run by AI
- Boris (head of Claude Code): "I don't code anymore. I just go back and forth with Claude Code to build Claude Code."
- Anthropic could be at 99% AI-written code by year's end
- Clark acknowledges this is "the pivotal point in the story when things begin to go awry" and says they will "cool out this trend" with better data
- Klein's core challenge: Every AI company says they want to be cautious, but speed is their only advantage over competitors — Anthropic even revoked OpenAI's access to Claude Code because it was speeding them up
Monitoring and Oversight
Clark describes the emerging challenge of overseeing AI systems that are increasingly writing and monitoring each other.
- Anthropic built a privacy-preserving system to see aggregate conversation topics — led to the Anthropic Economic Index
- The work is shifting toward building "oversight technologies" — like a dam regulating water flow, figuring out where AI can flow quickly vs. where human oversight is essential
- Clark compares it to a "fractal problem" — you could build infinite measurements, but the question is what level of fidelity is needed
- Proof point: Effective bioweapon testing regime went from zero to functional in ~2 years
Entry-Level Jobs and Economic Impact
The Displacement Reality
- Dario Amodei said AI could displace half of all entry-level white-collar jobs in the next couple of years
- Clark: "The value of more senior people with well-calibrated intuitions and taste is going up. The value of more junior people is more dubious."
- At Anthropic: more engineers than two years ago, but distribution shifting toward senior talent
- Claude is better than the median college graduate at many tasks
The Taste Problem
- Everyone becomes a manager — the bottleneck becomes having good taste about what to do next
- Taste comes from experience and primary source work — the very things AI threatens to shortcut
- Clark: People can do about 2-4 hours of genuinely creative work per day; AI handles the schlep work surrounding it
Klein's China Shock Analogy
- The likeliest scenario isn't a big-bang disruption but uneven, slow-burning displacement
- Unemployment for marketing graduates up 175-300% — still not catastrophic overall, but devastating for individuals
- When disruption is diffuse enough to blame on individual failure rather than systemic change, policy response is poor
Policy and the Missing Public Agenda
Klein identifies a glaring absence: there is no agenda for what the public wants AI to do — only anxiety about what it might do to them.
- Clark's answer on policy: Extending unemployment insurance, apprenticeship programs, and eventually larger social programs funded by AI-driven economic growth
- Klein's pushback: Unemployment is a terrible program to be on; retraining programs have poor track records; we have not invested in institution-building
- The speed mismatch: Three speeds are in tension — how fast individuals can adapt, how fast AI improves, and how fast policy moves. Time usually favors the worker, but AI disruption may compound rather than pass
The Genesis Project
- Department of Energy scientists working with all AI labs to intentionally accelerate science
- Clark sees this as a model — but says we need 10 such projects
- The impediment isn't money but "implementation path" — AI organizations need guaranteed impact and a path to deployment
AI as a Bureaucracy Machine
Klein raises a key paradox: AI could eat bureaucracy — or supercharge it.
- Example: Someone built a system that auto-generates sophisticated environmental review challenges for blocking new construction
- Mirror example: Anthropic customers using AI to massively reduce drug candidate submission time
- Clark acknowledges every AI capability has an adversarial mirror
National Security
- Anthropic was first to deploy on classified networks — originally to test whether AI systems knew how to build nuclear weapons
- Clark can't discuss the current dispute with the Department of Defense
- Anthropic recently published work on fixing cybersecurity vulnerabilities in open-source software using AI — improving "defensive posture" globally
How AI Changes Human Psychology
The final section explores the deeply personal dimension of living with AI systems.
The "Yes And" Problem
- Klein: Claude is "always a yes and" — it never creates the friction that human relationships do, never says "honestly, are we still talking about this?"
- It reinforces the self in ways that are subtly distorting — pushing you further rather than checking you
- Klein notices himself already in "a cage of my own intuitions" even as AI helps him explore further
Clark's Parenting Approach
- Encourages daily journaling from a young age — "two types of people" will emerge: those who co-created their personality with AI (which will be "a little different") and those who developed self-knowledge independently
- The latter will do better, but ensuring people develop outside the AI bubble will be hard
AI for Empathy
- Clark uses Claude to prepare for workplace conflicts — asking it to help him imagine the other person's perspective
- Sometimes the AI's read is wrong, but even the exercise of visible empathy helps the conversation
Book Recommendations
- A Wizard of Earthsea by Ursula K. Le Guin — magic comes from knowing the true name of things; a meditation on hubris
- The True Believer by Eric Hoffer — the psychology of mass movements; relevant because AI technologists have strong beliefs that may border on cultish
- There Is No Antimemetics Division by qntm — concepts that are themselves information hazards; thinking about them can be dangerous. Recommended for anyone working on AI risk.