Run Claude, Codex, and Qwen in Parallel on the Same Codebase
A workflow guide for running three AI coding agents at once in a SpaceSpider grid, with each pane working on a different slice of the same repository.
April 18, 2026 · 6 min read
The problem
Most AI coding sessions look the same. You open one terminal, start one agent, hand it one task, and sit there waiting. If the agent gets stuck in a loop on a type error, you wait. If it needs to run tests, you wait. If it decides to read every file in the repo before writing a single line, you wait. The bottleneck is not the model — it is that you, the human operator, can only drive one agent at a time in a single tab.
The other problem is that agents have different strengths. Claude Code is strong on tool use and long reasoning chains. Codex is fast at generating boilerplate and scaffolding. Qwen Code has good coverage of mid-tier languages and tends to produce terse diffs. If you want to exploit those differences, you need them open simultaneously, looking at the same working tree. Tabs don't cut it — you lose context every time you switch. What works is a visible grid where all three are running at once on the same disk, and you glance between them like dashboards.
The grid setup
A 2x2 grid on a single monitor. Top-left runs Claude Code in the repo root. Top-right runs Codex pointed at the same directory. Bottom-left runs Qwen Code. Bottom-right is a plain shell for running builds, git, and rg without polluting any agent's context window.
All four panes share one working directory — in SpaceSpider, that is just the folder you picked when you created the space. None of the agents touch a git worktree or a sandbox; they all edit the same files. That is the point. The moment Claude writes to src/auth.ts, Codex sees it on its next tool call. Conflicts are rare in practice because you assign each agent a disjoint slice of work (see step 3 below), and the shell pane lets you sanity-check the working tree at any time.
Step by step
- Create a new space in SpaceSpider. Pick the repo folder — for this walkthrough, assume ~/code/api-gateway, a Node + TypeScript service with about 40 files.
- In the wizard, choose the 2x2 preset. Assign Claude Code to pane 1, Codex to pane 2, Qwen Code to pane 3, and the shell to pane 4.
- Before starting the agents, divide the work. Open a scratch note and write three tickets, one per agent. Example: (a) "Add rate-limiting middleware to src/middleware/", (b) "Write integration tests for the existing /users routes", (c) "Port the legacy callback-style src/db/query.js to async/await".
- In pane 1 (Claude), paste ticket (a) with the usual context prompt: "Read src/middleware/ and src/app.ts, then add a token-bucket rate limiter with a per-IP default of 60 req/min. Write a Jest test."
- In pane 2 (Codex), paste ticket (b): "Generate integration tests for src/routes/users.ts covering the happy path and four error cases. Use supertest."
- In pane 3 (Qwen), paste ticket (c): "Convert src/db/query.js from callbacks to async/await. Preserve the exported function signatures so callers don't break. Output the full file."
- Let them run. While Claude is thinking, watch Codex's output scroll in the corner of your eye. When Qwen finishes first (it usually will on small files), check its diff in the shell pane: git diff src/db/query.js.
- When any agent finishes, review the output in that pane, accept or reject the edits, and hand it the next ticket. Do not let idle panes sit — the whole point is to keep three agents saturated.
- At the end of the session, run npm test in the shell pane against all three sets of changes together. Any collisions will show up here; in practice they rarely do if tickets are well-scoped.
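The tickets above only name the deliverables. For orientation, here is a minimal sketch of the kind of token bucket ticket (a) asks for: framework-agnostic, with the 60 req/min per-IP default expressed as a capacity of 60 and a refill rate of 1 token/sec. The names TokenBucket and allowRequest are illustrative, not from the repo, and the wiring into src/app.ts as actual middleware is left out.

```typescript
// Token bucket: a request spends one token; tokens refill continuously
// at a fixed rate up to a maximum burst capacity.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;
  private capacity: number;
  private refillPerSec: number;

  constructor(capacity: number, refillPerSec: number, now: number = Date.now()) {
    this.capacity = capacity;       // max burst size
    this.refillPerSec = refillPerSec; // steady-state rate
    this.tokens = capacity;         // start full
    this.lastRefill = now;
  }

  // Returns true if a request may proceed, consuming one token.
  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Per-IP default of 60 req/min: capacity 60, refill 1 token/sec.
const buckets = new Map<string, TokenBucket>();

function allowRequest(ip: string, now: number = Date.now()): boolean {
  let bucket = buckets.get(ip);
  if (!bucket) {
    bucket = new TokenBucket(60, 1, now);
    buckets.set(ip, bucket);
  }
  return bucket.allow(now);
}
```

A token bucket rather than a fixed window lets clients burst up to the capacity while still enforcing the steady-state rate, which is usually what "60 req/min" is intended to mean.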
What this unlocks
Throughput that no single-agent workflow can match. A careful human driver can keep three agents productive for a full hour and come out with three merged features instead of one.
Model comparison in real time. You start to notice which agent solves which kind of problem faster. After a week you'll know, for your codebase, whether Claude or Codex handles your specific TypeScript idioms better.
A natural checkpoint system. The shell pane is always there for git status, git diff, and npm test. You never have to stop an agent to check the state of the tree.
Parallel exploration. Sometimes you don't want three different tickets — you want the same ticket attempted three ways. See the first variation below.
Variations
Same ticket, three attempts. Give all three panes the identical prompt — "Refactor src/auth/session.ts to use the new IdentityProvider interface." Let them finish, then read the three diffs side by side and pick the best one. This is expensive in tokens but the quality bump on hard problems is real.
3+1 layout on an ultrawide. Three narrow panes on the left (one per agent), one wide pane on the right running vim or your editor of choice. The wide pane becomes the review surface — you read diffs there, hand-edit final touches, and commit. Works well on 34" monitors.
Solo + two watchers. Claude Code in a 2/3-width pane on the left driving the main task. Codex and Qwen in two small panes on the right, each running git diff HEAD in a loop or watching test output. They become passive reviewers you can escalate to with a quick paste.
Caveats
The three agents can step on each other if you hand them overlapping files. If Claude is editing src/app.ts and you ask Codex to add a route there, you will get a merge conflict on your own disk. The fix is simply disciplined ticket-splitting — treat each pane like a separate contributor and assign non-overlapping directories when you can.
Token cost scales linearly. Three agents running for an hour costs roughly three times what one agent costs. Worth it for the throughput, but not a workflow for casual use. See the cost-optimization page for strategies.
Context-switching has a real cognitive tax. After 90 minutes of monitoring three panes, most people get tired. Two-agent grids (1x2) are a gentler entry point before you graduate to three.
FAQ
Do the agents know about each other? No. Each pane is an independent CLI process. They share a working directory but not a conversation. This is usually what you want — it prevents cascading hallucinations where one agent's mistake poisons another's reasoning.
What happens if two agents edit the same file at the same time? Last write wins at the filesystem level. In practice, both agents re-read the file before each edit, so the second one will see the first one's changes and usually adapt. Still, avoid this by scoping tickets.
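That last-write-wins behavior is ordinary filesystem semantics, not anything SpaceSpider adds. A few lines of Node make it concrete (the temp-file names here are arbitrary):

```typescript
import { mkdtempSync, writeFileSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Two sequential writes to the same path: the second clobbers the first,
// exactly what happens when two agents save the same file back to back
// without re-reading it in between.
const dir = mkdtempSync(join(tmpdir(), "spacespider-"));
const file = join(dir, "query.ts");
writeFileSync(file, "// agent A's edit\n");
writeFileSync(file, "// agent B's edit\n");
console.log(readFileSync(file, "utf8")); // only agent B's version survives
```

This is why the re-read-before-edit habit of the CLIs matters: an agent that reads the file immediately before writing sees the other agent's changes and can merge rather than clobber.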
Can I save the grid layout and reuse it? Yes. Each space in SpaceSpider persists its grid config in spaces.json, so reopening the space brings back the same pane assignments. You still have to re-launch each CLI, but the layout is one click.
Related reading:
- Grid layouts reference — presets from 1 to 9 panes and when to use each
- Getting started with SpaceSpider
- Claude Code vs Codex in a real repo
- Blog: Why one AI agent is not enough anymore
Keep reading
- Multi-Model Code Review: Catch What Any Single AI Misses — A review workflow that pipes the same diff through three AI coding CLIs side by side, surfacing bugs and smells that any one model would overlook.
- Agentic Refactoring: Break a Big Refactor Into Parallel Panes — A tutorial for splitting a large refactor across multiple AI panes, coordinating through directory-scoped tickets, and merging results without breaking the build.
- Debugging With AI: Three Hypotheses in Three Panes — A debugging workflow that runs three parallel AI agents on the same bug, each exploring a different hypothesis, with a shared shell for log inspection.
- Frontend and Backend AI Pair on the Same Feature, Side by Side — A full-stack development workflow with dedicated AI panes for the frontend, the backend, and a live API tester, all sharing the same repo and feature branch.
- Cost-Optimized AI Coding: Cheap Model for Grunt Work, Smart Model for Hard Calls — A cost-aware development workflow that routes routine edits to cheaper AI CLIs and reserves premium models for architecture decisions and hard debugging.
- Team Workflows: Shared AI Coding Grids for Pairing and Review — A case study on how a six-person team uses SpaceSpider grids for pair programming, PR review, and on-call rotations, with shared layouts committed to the repo.