Nate B. Jones argues that AI didn't create a meetings problem — it exposed a team size problem. By dramatically increasing per-person output, AI has made the coordination cost of oversized teams catastrophically expensive. The solution isn't firing people; it's restructuring into five-person "strike teams" and expanding organizational ambition to match the new capacity.
The number of communication pathways in a group grows quadratically, as n(n-1)/2: 5 people = 10 pathways, 10 people = 45, 20 people = 190. Robin Dunbar's 1992 research on primate neocortex size established layered limits: 5 for core group, 15 for deep trust, 50 for working relationships, 150 for stable social connections. Military research, Jeff Bezos's two-pizza team rule, and Fred Brooks's The Mythical Man-Month (1975) all converge on the same answer: the human brain can sustain deep coordination with about five people.
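The pathway counts above follow the handshake formula n(n-1)/2; a few lines of Python (my illustration, not from the source) reproduce them:

```python
def pathways(n: int) -> int:
    """Number of pairwise communication pathways in a group of n people."""
    return n * (n - 1) // 2

for n in (5, 10, 20):
    print(n, pathways(n))  # 5 -> 10, 10 -> 45, 20 -> 190
```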
AI didn't rewire our brains — it changed the consequences of getting team size wrong. Before AI, a five-person team produced X output. Adding a sixth person had diminishing returns but manageable coordination cost. After AI, the same five-person team produces 5–10X more output. AI-native companies (Lovable, Midjourney, ElevenLabs) show revenue per employee 5–10X higher than traditional SaaS benchmarks. When each person generates $2M+ in value instead of $250K, the coordination cost of adding person #6 is measured in millions of lost productivity.
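A back-of-envelope sketch (my arithmetic, not the author's model; the 1%-per-pathway drag rate is an assumed constant) shows why the same relative coordination overhead now costs millions:

```python
def pathways(n: int) -> int:
    return n * (n - 1) // 2

def coordination_cost(n: int, value_per_person: float,
                      drag_per_pathway: float = 0.01) -> float:
    """Output lost to coordination, assuming each pathway skims a fixed
    fraction of the team's gross output (an illustrative assumption)."""
    gross = n * value_per_person
    return gross * drag_per_pathway * pathways(n)

# Six-person team, identical 1%-per-pathway drag in both eras:
pre_ai = coordination_cost(6, 250_000)     # about $225K lost per year
post_ai = coordination_cost(6, 2_000_000)  # about $1.8M lost per year
```

The relative drag is unchanged; only the dollar value of each person's amplified output changed, which is the article's point about person #6.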
AI made volume cheap. The scarce resource is now correctness — whether what you shipped is architecturally sound, strategically coherent, and right for the customer. A 2025 Harvard Business School study of 776 P&G professionals found that AI-augmented teams were 3X more likely to produce ideas in the top 10% of quality. AI also broke functional silos, extending each person's competence into adjacent domains. Five generalist architects using AI outperform 10 narrow specialists.
The scout model: one person with a full AI toolkit on a defined exploration mission. Zero coordination overhead. Great for answering: Is this technology viable? Is this market real? Peter Steinberger demonstrated the model by building OpenClaw in 60 days using 4–10 coding agents simultaneously. The limit: scouts lack peer verification, so production-quality work requires additional eyes.
The strike team: five people with AI, optimizing for correctness. Every person's AI output passes through at least one other brain with shared context. A team of five covers product, engineering, design, data, and domain expertise — the minimum surface area for a complete decision. Below five: blind spots. Above five: silos. At five: nowhere to hide.
The standard narrative — "do the same work with fewer people" — is a failure of imagination. If 500 people each became 5–10X more capable, the correct response isn't "run the company with 50." It's "we now have the capacity of 2,500–5,000 people — what was previously impossible?" Companies like Lovable (45 people, unicorn status) and Midjourney (100 employees, dominating visual creation) didn't use AI to shrink. They used small teams to think enormously big.
The layers follow Dunbar ratios: 5 people per strike team, 3–4 strike teams per domain (coordinated by one person focused on inter-team coherence), 3–4 domains per strategic objective. Management layers thin dramatically — AI handles project tracking, fewer humans need coordination. But the "taste layer" becomes critical: people who obsess over correctness standards and define what makes the organization uniquely valuable.
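Plugging in the upper end of those ranges (my arithmetic, assuming one coordinator per domain) shows both the headcount per layer and how layering caps the pathway count:

```python
def pathways(n: int) -> int:
    return n * (n - 1) // 2

TEAM_SIZE = 5
TEAMS_PER_DOMAIN = 4       # upper end of the 3-4 range
DOMAINS_PER_OBJECTIVE = 4  # upper end of the 3-4 range

teams = TEAMS_PER_DOMAIN * DOMAINS_PER_OBJECTIVE  # 16 strike teams
ics = teams * TEAM_SIZE                           # 80 individual contributors
headcount = ics + DOMAINS_PER_OBJECTIVE           # + 4 coordinators = 84

# The same 80 people as one flat group vs. sixteen five-person teams:
flat_pathways = pathways(ics)                   # 3,160 pathways
layered_pathways = teams * pathways(TEAM_SIZE)  # 160 within-team pathways
```

The layered structure replaces thousands of potential pathways with a few hundred, which is why the management layer can thin while coherence improves.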
In a five-person team, each person sits on four of the 10 communication pathways. A mediocre contributor's judgment gets amplified by AI, generating verification burdens on everyone else. They don't just underperform — they consume the team's most precious resource: the shared attention required for correctness. This is why the hiring bar transforms from "can this person do the job?" to "can this person be one of five whose taste and judgment will be amplified 10–100X by AI?"
Give people a real scout mission: a problem the company has been ignoring, full AI tooling, one week, clear objective, zero check-ins. Test for: Can they define the problem without a spec? Do they know what "right" looks like at the architectural level? Do they default to action or permission? The results may not match current performance review rubrics — coordination skills are overhead in a strike team, while the "frustrating" people who build without asking may be exactly who you need.
Tobi Lütke required every Shopify team to prototype with AI before beginning any real build. AI fluency became part of performance reviews. Teams must demonstrate why AI cannot do a task before requesting headcount. The deeper insight: every AI prototype generates a data point on AI capabilities in that domain, creating a pre-built test harness for evaluating new models. The person who prototypes 10 times and fails 7 builds more skill than the person who attends 10 AI strategy meetings.