Team Workflows: Shared AI Coding Grids for Pairing and Review

A case study on how a six-person team uses SpaceSpider grids for pair programming, PR review, and on-call rotations, with shared layouts committed to the repo.

April 18, 2026 · 6 min read

The problem

Team workflows around AI coding tools are under-documented. Most teams adopt AI agents at the individual level — one engineer installs Claude Code, loves it, tells a colleague, they install it too, and within a month everyone is using it but in completely different ways. One person has seven slash commands aliased; another has a complex .mdc rules directory; a third just pastes prompts from memory. When a bug drops at 2am and the on-call engineer joins the incident channel, they can't borrow anyone else's AI setup because nobody documented theirs. The tools promised leverage, and delivered it, but only for individuals.

The opportunity is to make the grid itself the unit of sharing. A "grid layout" is a concrete, documentable artifact — 4 panes, each with a specific CLI, a specific starting directory, a specific first prompt. Commit that layout to the repo and suddenly your team has a reusable workflow. New hires open the space and are instantly productive. On-call engineers open the "incident" space and are looking at the same four panes their predecessor used last time. This is the kind of workflow codification that teams have done for CI and dev containers for years; the grid is the natural extension to AI tools.

The grid setup (per workflow)

Pair review grid (2x2). Pane 1: Claude Code reading the PR diff. Pane 2: Codex reading the same diff. Pane 3: a shell with gh pr checkout <n> already run. Pane 4: the PR description and review checklist in a scratch file. Two humans on screen-share drive this jointly.

Incident response grid (3x2). Pane 1: Claude Code looking at logs. Pane 2: Claude Code looking at the offending service's code. Pane 3: a shell pane tailing the live logs. Pane 4: a shell pane with the database client attached to the read replica. Pane 5: a shell pane with the deploy tool ready to roll back. Pane 6: a scratch pane with the incident template.

Onboarding grid (2-pane). Pane 1: Claude Code rooted at the repo with a system prompt loaded from docs/onboarding-prompt.md — that file tells the agent to explain the architecture and answer questions in plain language. Pane 2: a shell pane for exploring the tree. A new hire sits with this grid on day one.
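The onboarding prompt file referenced above might look something like this — a sketch only; the actual contents of docs/onboarding-prompt.md are up to each team, and every bullet here is an assumption to adapt:

```markdown
<!-- docs/onboarding-prompt.md (illustrative sketch; adapt to your repo) -->
You are onboarding a new engineer to this repository.

- Start with a two-paragraph overview of the architecture: the major
  services, how they talk to each other, and where requests enter.
- When asked about a module, point to the relevant directory and the one
  file that best explains it before going into detail.
- Answer in plain language; define internal jargon the first time you use it.
- If you are unsure how something works, say so rather than guessing, and
  suggest which file or teammate to check with.
```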

Step by step (team rollout)

  1. Nominate one person to own the grid layouts. Small teams often make this the tech lead; larger teams rotate it quarterly. This person is not defining policy — they're curating a shared library.
  2. That person creates a spaces/ directory at the repo root. Inside, commit a README.md that lists each grid layout and what it's for. Keep the entries short: "spaces/review.md — use this when reviewing a PR. Open SpaceSpider, create a new space rooted at this repo, pick the 2x2 preset, follow the first prompts in the file."
  3. For each workflow, write a markdown file: spaces/review.md, spaces/incident.md, spaces/onboard.md. The file lists each pane's CLI, its starting prompt, and any context files it should load.
  4. Individual engineers create SpaceSpider spaces that follow the layout. Because SpaceSpider persists space definitions in spaces.json per-user, you can't share the spaces themselves directly — you share the instructions and everyone recreates the layout locally. Two minutes per space, one time.

  5. When someone invents a new useful layout — say, the backend lead comes up with a three-pane "new-endpoint" workflow — they submit a PR adding spaces/new-endpoint.md. The team reviews and merges it like any other code change.
  6. Use the grids in team rituals. Review meeting: everyone opens the review grid. Incident: the on-call runbook says "open the incident grid first." Onboarding: day-one checklist includes "open the onboard grid with your tech lead."
  7. When prompts go stale — the codebase evolved, the prompts no longer match — update them in the same repo. Prompts are code, and they rot like code.
  8. At quarterly retros, ask: "Which grids are we using? Which do we ignore?" Delete dead layouts. The repo is not a graveyard.
  9. Optional: standardize which model goes in which pane across the team. Not required, but useful for parity during reviews — if everyone runs the same models in the same slots, reproducing an issue is easier.
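Step 3 above is easiest to see by example. Here is a sketch of what spaces/review.md might contain for the pair review grid — the table layout and exact first prompts are assumptions, not a required format:

```markdown
<!-- spaces/review.md (illustrative; pane numbering follows the 2x2 preset) -->
# Pair review grid (2x2)

Use when reviewing a PR. Open SpaceSpider, create a space rooted at this
repo, and pick the 2x2 preset.

| Pane | Tool        | First step                                               |
|------|-------------|----------------------------------------------------------|
| 1    | Claude Code | "Read the diff for PR <n> and summarize the risks."      |
| 2    | Codex       | "Read the same diff and list edge cases the tests miss." |
| 3    | shell       | `gh pr checkout <n>`                                     |
| 4    | scratch     | Paste the PR description and the review checklist.       |
```

Keeping the whole thing under a screenful matters more than the format — the file is a checklist someone reads while setting up, not documentation they study.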

What this unlocks

Transferable workflow knowledge. A new hire can read spaces/incident.md before their first on-call shift and understand exactly what the first five minutes of an incident look like. This is hard to teach through oral tradition; it becomes easy when the workflow is an artifact.

Faster ramp for AI-skeptical teammates. Engineers who haven't adopted AI coding tools yet often bounce off because the initial setup is opaque. Handing them a committed, working grid — "open this, paste this, run" — gets them to the first useful output in minutes.

Institutional memory that survives turnover. When the engineer who figured out the good refactor grid leaves, their grid stays. You don't lose their workflow.

Lower variance in how AI is used across the team. Not zero variance — individual tweaks are fine and welcome — but the baseline workflow is shared, so PR reviews all look roughly comparable in depth, and incidents all start with the same eight minutes of setup.

Variations

Hot-seat grid for pair programming. Two engineers, one screen, one grid. Left half is "driver's AI pane" — whoever's driving types prompts here. Right half is "navigator's scratch" — the other engineer's notes, second opinions, and shell commands. They swap every 25 minutes.

Async review grid. Rather than two humans on a call, async reviewers open the review grid, record their findings as comments, and close. Each reviewer sees the same AI output because the grid prompts are committed. Debates over "your AI said X but mine said Y" go away when the inputs are identical.

Small team, single grid. A team of two or three doesn't need a library of grids. One committed grid at spaces/default.md — a sensible 2x2 with Claude, Codex, a shell, and a scratch pane — is enough to get most of the value without the governance overhead.

Caveats

Committing prompts to a shared repo means you cannot include secrets or proprietary client instructions in them. Write prompts that reference config files instead of inlining anything sensitive.

Not everyone will use the shared grids. Some engineers prefer their own setups and that's fine. The grids are a default, not a mandate. Teams that try to enforce AI workflows usually breed resentment.

Grid files can get out of date faster than you expect. Build it into the rhythm — any PR that significantly changes directory structure should also update affected grid files, the same way it would update import paths.

FAQ

Can we share the actual spaces.json file? Technically yes, but the file bakes in per-user paths and isn't designed for sharing. The markdown description approach is simpler and more portable.

What about pairing across timezones? The committed layouts plus a screen-recording of one person walking through the grid once is usually enough. Some teams record a five-minute "this is how we do PR reviews now" video and link it from the grid file.

Does this work with only one AI CLI subscription per person? Yes. Grids can specify "run whichever CLI you have" in some panes. You lose the multi-model benefit on those panes but keep the layout and workflow discipline.

