Cole Medin's thesis: running a single coding agent is fine for a 2x boost, but a 10x jump requires running many agents at once. Borrowing from Dan Sullivan and Benjamin Hardy's 10x Is Easier Than 2x, the 10x framing forces you to build a real system instead of just typing faster at one Claude Code window.
Spinning up five Claude Code or Codex sessions without structure does not work. They step on each other's files, overwrite each other's changes, and collide on ports and databases. The fix is a disciplined workflow built on git worktrees, GitHub issues as specs, and pull requests as validation artifacts.
Every implementation starts from a GitHub issue (or a Linear/Jira ticket). The issue scopes the work so multiple agents can fan out without ambiguity. Cole's pattern: use one agent session to split a sprint's worth of work into a batch of issues, then fan out and give each issue to its own agent.
Each agent gets its own local copy of the codebase via a git worktree, so nobody overwrites anyone else. Claude Code supports worktrees natively:
- `claude --worktree issue-10` (or the shorthand `claude -w issue-10`) spins up a session inside `.claude/worktrees/issue-10/`.
- Cole wraps this in a small helper script (`w.sh` / `w.ps1`) that runs a worktree-setup command before launching the agent.
- Inside each worktree the agent runs your usual plan-build-validate loop (commands, skills, frameworks like GitHub Spec Kit, BMad, etc.). The only thing that changes per session is the issue number: "Use the GitHub CLI to view issue N and help me make a plan for it."
- Every implementation must end with a pull request, because the PR is the handoff to validation.
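Under the hood this is plain `git worktree`: one extra working copy per agent, each on its own branch. A minimal sketch of the mechanics Claude Code automates (the demo repo and branch name are illustrative):

```shell
# Throwaway repo standing in for your project
git init -q demo && cd demo
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"

# New branch + separate working directory, isolated from the main tree
git worktree add ../demo-issue-10 -b issue-10

# Each agent works in its own directory, so edits never collide
ls ../demo-issue-10
```

Because every worktree shares the same object store, creating one is cheap compared to a full clone.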
Never let an agent review its own work in the same conversation. In-context bias makes the review cheerleading rather than scrutiny: "it's like asking a kid to grade their own homework." The discipline:
- `/clear` to wipe the context window.
- A dedicated review command (`/review-pr`) that pulls the PR diff, compares it to the issue, and spins up specialized sub-agents.
- `/codex-adversarial-review` to have Codex critique Claude's work.

When a bug shows up, do not just fix the bug. Fix the system that allowed the bug. After reviews surface issues, ask the agent: "Issue XYZ came up. What in our rules, skills, workflows, or CLAUDE.md would prevent this next time?" Over time this evolves your `.claude/` layer so agents get more self-sufficient and you become less of a bottleneck.
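Claude Code loads custom slash commands from markdown files under `.claude/commands/`. A hypothetical `/review-pr` definition along these lines (the prompt wording is a sketch, not Cole's actual command; `$ARGUMENTS` is Claude Code's placeholder for what you type after the command):

```shell
mkdir -p .claude/commands
cat > .claude/commands/review-pr.md <<'EOF'
Fetch the diff for PR $ARGUMENTS with `gh pr diff $ARGUMENTS` and read the
linked GitHub issue. Compare the implementation against every requirement in
the issue, then spin up specialized sub-agents (security, tests, style) and
report anything that drifted from the spec.
EOF
cat .claude/commands/review-pr.md
```

Running `/review-pr 42` in a fresh session then expands this prompt with the PR number substituted in.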
Because issue in → PR out, you can also retroactively diff the PR against the original issue to spot where the agent drifted from the plan.
Static code analysis is not enough. Agents should actually start the app and use it like a user before merging. That exposes five problems the naive setup does not solve:
Cole's startup command hashes the worktree name into a unique port offset from a base port (e.g., base 4000 → 4161 for one worktree, 4107 for another). Every worktree's app runs on its own port, and the agent can launch and exercise the app without colliding with siblings or main.
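The idea of hashing a name into a stable port offset can be sketched in a few lines of shell (the base port, modulus, and exact formula are assumptions; the video does not show Cole's implementation):

```shell
worktree="issue-10"   # illustrative worktree name
base=4000

# cksum gives a stable checksum of the name; mod 1000 keeps ports in 4000-4999
offset=$(printf '%s' "$worktree" | cksum | awk '{print $1 % 1000}')
port=$((base + offset))
echo "PORT=$port"
```

The same name always hashes to the same port, so an agent can restart its app mid-session without renegotiating anything.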
The worktree setup script installs node modules (or the equivalent) at creation time, before the agent starts. That keeps the agent focused on the feature and avoids mid-run reinstalls during validation.
Code isolation is not enough. If all five worktrees point at the same database, migrations and seed changes from one will break the others. Cole uses Neon database branching: the setup script creates a Neon branch per worktree that copies the schema and data from production. Each agent can insert, migrate, and mutate freely without touching prod or other branches. A per-worktree SQLite file is the free, local equivalent.
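The local SQLite variant fits in a few lines of the setup script (paths and the template file are hypothetical, not Cole's setup; the template stands in for a real schema-plus-seed-data snapshot):

```shell
worktree="issue-10"   # illustrative worktree name
mkdir -p .dbs

# Stand-in template; in practice this would be a SQLite file with schema + seed data
printf 'placeholder for schema and seed data\n' > .dbs/template.sqlite

# One private database copy per worktree: mutate freely, siblings are untouched
cp .dbs/template.sqlite ".dbs/$worktree.sqlite"

# Point this worktree's app at its own copy
export DATABASE_URL="sqlite:.dbs/$worktree.sqlite"
echo "$DATABASE_URL"
```

Deleting the worktree's file resets its state; nothing an agent does here can reach prod or another branch.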
Reliable end-to-end agents burn tokens. The mitigation is to stop defaulting to the top-tier model. Use /model in Claude Code (or the equivalent elsewhere) to drop to Haiku or Sonnet for cheap, fast work: codebase analysis, web research, even code review. Sub-agents invoked by skills can also be pinned to a smaller model ("use Haiku for a sub-agent that does research for X").
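Sub-agents in Claude Code are markdown files with YAML frontmatter under `.claude/agents/`, and the frontmatter's `model` field is how you pin one to a cheaper model. A hypothetical research sub-agent (the name and prompt are illustrative):

```shell
mkdir -p .claude/agents
cat > .claude/agents/researcher.md <<'EOF'
---
name: researcher
description: Cheap, fast research over the codebase and the web
model: haiku
---
Gather context and summarize findings for the main agent. Do not edit code.
EOF
cat .claude/agents/researcher.md
```

The main session stays on the expensive model for planning and implementation while delegating lookups to this agent.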
If you are burning hours reviewing PRs and fixing the same classes of issue, that is the trigger for Pillar 5. Add validation steps, tighten rules, or expand skills so agents catch those problems before the PR ever reaches you.
Cole demonstrates the workflow on a GitHub issue triage dashboard backed by a Neon Postgres database. With five open issues, he:
- Spins up a worktree per issue with his `w` script (which also creates Neon branches and installs deps).
- Runs `/clear` then `/review-pr` for a clean-context review of each PR.
- Runs `/codex-adversarial-review` for a second opinion from Codex.

The endgame: "five issues at breakfast, five pull requests ready to review and merge before lunch."
- `claude -w <name>` and running multiple sessions in parallel.
- A `worktree-setup` script that creates the worktree, installs deps, creates a DB branch, and assigns a unique port.
- A `/review-pr` command that pulls the PR diff, compares it to the linked issue, and spins up sub-agents for deep review.
- `/model` to route research and review work to Haiku or Sonnet; save the top-tier model for planning and hard implementation.