Parallel Claude Code + Git Worktrees: This Setup Will Change How You Ship

Cole Medin · Apr 23, 2026

Study Guide

Overview: Why Parallel Agents, Not Just One

Cole Medin's thesis: running a single coding agent is fine for a 2x boost, but a 10x jump requires running many agents at once. The 10x framing, borrowed from Dan Sullivan and Benjamin Hardy's 10x Is Easier Than 2x, forces you to build a real system instead of just typing faster at one Claude Code window.

Spinning up five Claude Code or Codex sessions without structure does not work. They step on each other's files, overwrite each other's changes, and collide on ports and databases. The fix is a disciplined workflow built on git worktrees, GitHub issues as specs, and pull requests as validation artifacts.

The Five Pillars of Parallel Agentic Development

Pillar 1: The Issue Is the Spec

Every implementation starts from a GitHub issue (or a Linear/Jira ticket). The issue scopes the work so multiple agents can fan out without ambiguity. Cole's pattern: use one agent session to split a sprint's worth of work into a batch of issues, then fan out and give each issue to its own agent.

  • Input to implementation: the issue/ticket.
  • Output of implementation: a pull request.
  • Input to validation: that pull request (a GitHub CLI sketch of this round trip follows below).
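
A minimal sketch of that round trip with the GitHub CLI (the issue number, title, and body are placeholders, not from the video):

    # Implementation input: the issue that scopes the work.
    gh issue view 10

    # ...the agent plans, builds, and commits on its worktree branch...

    # Implementation output: the pull request that validation will consume.
    gh pr create --title "Resolve issue #10" --body "Closes #10"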

Pillar 2: Git Worktrees for Code Isolation

Each agent gets its own local copy of the codebase via a git worktree, so nobody overwrites anyone else. Claude Code supports worktrees natively:

  • claude --worktree issue-10 (or the shorthand claude -w issue-10) spins up a session inside .claude/worktrees/issue-10/.
  • For agents without native worktree support, wrap creation in a script (Cole uses w.sh / w.ps1) that runs a worktree-setup command before launching the agent; a minimal sketch follows this list.
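
A minimal sketch of such a wrapper, assuming an npm project and a .worktrees/ directory (Cole's actual w.sh is not shown, so the paths and launch step are illustrative):

    #!/usr/bin/env bash
    # w.sh <name>: create an isolated worktree for one agent session.
    set -euo pipefail

    NAME="$1"
    DIR=".worktrees/$NAME"

    # One branch and one working copy per agent, cut from main.
    git worktree add -b "$NAME" "$DIR" main

    # Install dependencies up front so the agent never stalls on setup mid-run.
    (cd "$DIR" && npm ci)

    # Launch whichever agent CLI you use inside the isolated copy (placeholder).
    cd "$DIR" && exec your-agent-cli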

Pillar 3: Plan, Build, Validate in Each Worktree

Inside each worktree the agent runs your usual plan-build-validate loop (commands, skills, frameworks like GitHub Spec Kit, BMad, etc.). The only thing that changes per session is the issue number: "Use the GitHub CLI to view issue N and help me make a plan for it." Every implementation must end with a pull request, because the PR is the handoff to validation.

Pillar 4: Validation in a Fresh Context Window

Never let an agent review its own work in the same conversation. Reviewing in the same context biases the agent toward cheerleading rather than scrutiny: "it's like asking a kid to grade their own homework." The discipline:

  • Run /clear to wipe the context window.
  • Run a dedicated review command (Cole ships a /review-pr) that pulls the PR diff, compares it to the issue, and spins up specialized sub-agents; a sketch of such a command file follows this list.
  • For extra thoroughness, run an adversarial review from a second agent. Cole uses the Codex plugin for Claude Code and runs /codex-adversarial-review to have Codex critique Claude's work.
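
Cole's /review-pr is not shown in full; as an assumption about its shape, a Claude Code slash command is a prompt file under .claude/commands/, which a setup script could scaffold like this (the prompt wording is illustrative):

    #!/usr/bin/env bash
    # Scaffold a /review-pr command; $ARGUMENTS is filled in with the issue number at run time.
    mkdir -p .claude/commands
    cat > .claude/commands/review-pr.md <<'EOF'
    Review the pull request for issue #$ARGUMENTS with fresh eyes.
    1. Run `gh pr diff` to pull the full diff for the current branch's PR.
    2. Run `gh issue view $ARGUMENTS` and compare the diff against every requirement in the issue.
    3. Delegate deep checks (security, tests, migrations) to specialized sub-agents.
    4. Report gaps between the issue and the implementation, not style nits.
    EOF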

Pillar 5: The Self-Healing AI Layer

When a bug shows up, do not just fix the bug. Fix the system that allowed the bug. After reviews surface issues, ask the agent: "Issue XYZ came up. What in our rules, skills, workflows, or CLAUDE.md would prevent this next time?" Over time this evolves your .claude/ layer so agents get more self-sufficient and you become less of a bottleneck.

Because the contract is issue in → PR out, you can also retroactively diff the PR against the original issue to spot where the agent drifted from the plan.

The Hard Problems of Parallel End-to-End Validation

Static code analysis is not enough. Agents should actually start the app and use it like a user before merging. That exposes five problems the naive setup does not solve:

  1. Port conflicts when five copies of the app all want the same port.
  2. Dependency install time in every new worktree.
  3. Database collisions when parallel branches mutate the same DB.
  4. Token blowout from long end-to-end agent runs.
  5. PR pile-up where the human becomes the bottleneck.

Port Conflicts: Deterministic Port Assignment

Cole's startup command hashes the worktree name into a unique port offset from a base port (e.g., base 4000 → 4161 for one worktree, 4107 for another). Every worktree's app runs on its own port, and the agent can launch and exercise the app without colliding with siblings or main.
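
A minimal sketch of deterministic port assignment, assuming the app reads a PORT environment variable (Cole's exact hashing scheme is not shown):

    # Map the worktree name to a stable port: base + (checksum of name mod 1000).
    BASE_PORT=4000
    NAME="$(basename "$(git rev-parse --show-toplevel)")"
    HASH="$(printf '%s' "$NAME" | cksum | awk '{print $1}')"
    PORT=$((BASE_PORT + HASH % 1000))

    # Every worktree gets its own port, so parallel app instances never collide.
    PORT="$PORT" npm run dev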

Dependency Install: Do It Up Front

The worktree setup script installs node modules (or the equivalent) at creation time, before the agent starts. That keeps the agent focused on the feature and avoids mid-run reinstalls during validation.

Database Branching: Worktrees for Your Data

Code isolation is not enough. If all five worktrees point at the same database, migrations and seed changes from one will break the others. Cole uses Neon database branching: the setup script creates a Neon branch per worktree that copies the schema and data from production. Each agent can insert, migrate, and mutate freely without touching prod or other branches. A per-worktree SQLite file is the free, local equivalent.
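
A minimal sketch of the free SQLite equivalent, assuming a seeded template database committed in the repo (with Neon, the setup script would instead create a branch with the Neon CLI and export its connection string):

    # Per-worktree database isolation, local variant.
    NAME="$(basename "$(git rev-parse --show-toplevel)")"

    # Copy a seeded template (schema + data) into a database only this agent touches.
    cp db/template.sqlite "db/$NAME.sqlite"

    # Connection-string format depends on your ORM; Prisma-style shown as an assumption.
    export DATABASE_URL="file:./db/$NAME.sqlite"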

Token Blowout: Match Model to Task

Reliable end-to-end agent runs burn tokens. The mitigation is to stop defaulting to the top-tier model. Use /model in Claude Code (or the equivalent elsewhere) to drop to Haiku or Sonnet for cheap, fast work: codebase analysis, web research, even code review. Sub-agents invoked by skills can also be pinned to a smaller model ("use Haiku for a sub-agent that does research for X").
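
As a sketch of that pinning, assuming Claude Code sub-agent files under .claude/agents/ accept a model field in their frontmatter (the agent name and prompt are illustrative):

    #!/usr/bin/env bash
    # Scaffold a research sub-agent pinned to a cheaper model.
    mkdir -p .claude/agents
    cat > .claude/agents/researcher.md <<'EOF'
    ---
    name: researcher
    description: Gathers codebase and web context before implementation.
    model: haiku
    ---
    Research the topic you are given. Summarize the relevant files, APIs, and prior
    art concisely so the main agent can plan without re-reading everything itself.
    EOF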

PR Pile-Up: Push More Into the AI Layer

If you are burning hours reviewing PRs and fixing the same classes of issue, that is the trigger for Pillar 5. Add validation steps, tighten rules, or expand skills so agents catch those problems before the PR ever reaches you.

The Demo Setup

Cole demonstrates the workflow on a GitHub issue triage dashboard backed by a Neon Postgres database. With five open issues, he:

  1. Creates five worktrees via his w script (which also creates Neon branches and installs deps).
  2. Points each Claude Code session at one issue: "use the GitHub CLI to view issue N and make a plan."
  3. Tells each session: "go with what you recommend and implement all the way to a pull request."
  4. In each worktree, /clear then /review-pr for a clean-context review.
  5. Runs /codex-adversarial-review for a second opinion from Codex.
  6. Fixes issues, reviews personally, merges, and rolls to the next batch.

The endgame: "five issues at breakfast, five pull requests ready to review and merge before lunch."

Key Takeaways

  • Aim for 10x, not 2x. That forces you to build a system instead of cranking one agent harder.
  • GitHub issues are the input; pull requests are the output. Everything else is plumbing.
  • Git worktrees are non-negotiable for parallel work. Claude Code has them natively; wrap a script for everything else.
  • Validation must happen in a fresh context. Same-window review is self-grading.
  • Cross-agent review (Claude reviewing Codex, or vice versa) catches what one model misses.
  • Isolate ports and databases, not just code. Neon branches (or per-worktree SQLite) are the database equivalent of a worktree.
  • Pre-install dependencies during worktree setup to keep agents focused.
  • Use smaller models (Haiku, Sonnet) for research, analysis, and review to control token burn.
  • Treat every recurring bug as a signal to evolve the AI layer, not just patch code.
  • Retro-diff each PR against its issue to audit where the agent drifted.

Action Items

  1. Adopt issues (or tickets) as the canonical unit of work for agent sessions.
  2. Practice launching Claude Code with claude -w <name> and running multiple sessions in parallel.
  3. Write a worktree-setup script that creates the worktree, installs deps, creates a DB branch, and assigns a unique port.
  4. Build a /review-pr command that pulls the PR diff, compares it to the linked issue, and spins up sub-agents for deep review.
  5. Install the Codex plugin for Claude Code (Anthropic plugin marketplace) and use adversarial review on important PRs.
  6. When a class of bug recurs, update CLAUDE.md, a skill, or a command so the next agent catches it.
  7. Use /model to route research and review work to Haiku or Sonnet; save the top-tier model for planning and hard implementation.