Claude Skills: The Workflow Pattern That Replaces Most Prompts
A hands-on tutorial on Claude Skills, the pattern that replaces repeated prompts with reusable, composable playbooks that raise your baseline output.
April 1, 2026 · 7 min read
If you're still copy-pasting the same prompt into Claude every time you start a new task, you've missed the single biggest productivity feature added in the last year. Skills turn your favorite prompts into named, reusable, composable playbooks — and once you have a few, most of your "prompting" goes away.
This tutorial walks through what skills actually are, when to reach for them, and the handful of skills that have earned permanent slots in my workflow. The pattern is simple; the payoff is disproportionate.
What a skill is
A skill is a markdown file that Claude loads on demand. It sits in your project at .claude/skills/<name>.md (or the global equivalent). When you invoke it by name, Claude reads it as an instruction set and applies it to the task at hand.
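Concretely, a project carrying a few skills might look like this (the skill names here are illustrative, not required):

```
.claude/
├── skills/
│   ├── review-pr.md
│   ├── write-tests.md
│   └── ship-feature.md
└── CLAUDE.md
```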
Think of skills as named prompts with structure. They can include:
- A role or persona ("You are reviewing this PR as a senior backend engineer").
- Steps to follow.
- Things to check for.
- Output format.
- Pointers to other skills or docs.
The important property: skills are text. They're diff-able, version-controlled, and shareable. You write them once and everyone's agent uses them.
When to create a skill
The test is simple: if you've pasted the same prompt three times in a week, it's a skill. If two different developers would write similar instructions for the same task, it's a skill.
Good skill candidates I've seen in real repos:
- "Review this PR according to our team checklist."
- "Write tests for this module following our conventions."
- "Draft release notes from this commit range."
- "Set up a new service directory with our standard structure."
- "Run the debugging playbook for a failing integration test."
Bad skill candidates:
- One-off tasks you'll never repeat.
- Context that changes every time (skills are for stable patterns).
- Things better handled by a regular prompt with a few specifics.
A simple skill, from scratch
Here's a `review-pr` skill. Paste it into `.claude/skills/review-pr.md`:
```markdown
# review-pr

Review the diff against `main` as a senior engineer.

Check for:
- correctness bugs and edge cases
- concurrency or race issues
- missing error handling
- missing or weak tests
- naming and readability
- unnecessary changes outside the scope of the ticket

Output:
1. A one-paragraph summary of the PR.
2. A bulleted list of issues, grouped by severity (blocker / major / minor).
3. A "questions for the author" section if anything is unclear.

Do not rewrite code. Review only.
```
Invoke it with "run the review-pr skill" or a similar phrasing Claude recognizes. The skill loads, the agent reviews in the specified format, and you get consistent output every time.
Skills that compose
Skills can reference other skills. This is the pattern that turns them from convenience into architecture.
Example: a `ship-feature` skill that references `write-tests`, `review-pr`, and `draft-pr-description`:
```markdown
# ship-feature

Implement the feature described in the attached spec.

Steps:
1. Write the implementation.
2. Run the `write-tests` skill to add tests.
3. Run the `review-pr` skill against your own changes.
4. Apply any blocker fixes.
5. Run the `draft-pr-description` skill to produce a PR description.
6. Stop and summarize.
```
This is a miniature playbook. You invoke one skill; it orchestrates three others. The result is reproducible, reviewable output from what would otherwise be a ten-paragraph prompt.
Skills I use daily
After about a year of skill-building, these are the ones I wouldn't give up:
- `review-pr`: described above.
- `write-tests`: our test conventions, coverage targets, mocking rules.
- `fix-the-flake`: a debugging checklist for flaky integration tests.
- `add-feature-flag`: our pattern for introducing new feature flags.
- `draft-release-notes`: turns a commit range into user-facing release notes.
- `onboard-new-endpoint`: adds a new API endpoint following our patterns.
- `explain-like-im-six-months-in`: post-hoc explanation of a complex change, written for a teammate who hasn't been in the codebase long.
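As a sketch, a `write-tests` skill can be this small; the specific conventions below are placeholders, swap in your own:

```markdown
# write-tests

Add tests for the modules changed in this diff.

Conventions (examples; replace with your team's):
- one test file per source file, named `<module>.test.ts`
- mock external services at the client boundary, never mid-call-stack
- cover the happy path, each error branch, and one boundary value

Output:
1. The new or updated test files.
2. A short note on anything that couldn't be tested and why.
```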
Some are public on the team repo; some are personal. Together they cover most of my repeated work.
Skills as team artifacts
Skills get dramatically more valuable when the team uses them. The bar for a team skill is higher: it has to work across several developers' mental models, and it has to stay stable as conventions evolve.
A few team skills that paid off:
- Onboarding skill: tells a new hire's agent how the repo is organized and how to make a first change. Cuts time-to-first-PR roughly in half.
- Review skill: ensures every PR gets reviewed against the same checklist, regardless of which human is driving.
- Release skill: generates release notes, updates the changelog, and prepares the PR for a release cut.
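An onboarding skill can be as small as a map plus a first task. This is an illustrative shape, not our actual file:

```markdown
# onboard

You are helping a new hire make their first change.

1. Summarize how the repo is organized (read CLAUDE.md first).
2. Point out where services, shared libraries, and tests live.
3. Walk them through one small, real change end to end:
   branch, edit, test, PR.

Keep explanations short. Point to files rather than quoting them.
```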
Share skills through the repo — commit them to .claude/skills/ and they travel with the code. See the developer productivity stack for the broader team convention story.
Skills vs CLAUDE.md
New users conflate these. They're related but different:
- `CLAUDE.md` is context always loaded for every session. It's the ambient knowledge.
- Skills are loaded on demand when invoked by name.
Rule: if it's information the agent needs to know by default ("we use pnpm, not npm"), it goes in CLAUDE.md. If it's a procedure invoked for specific tasks ("here's how to review a PR"), it goes in a skill.
Don't stuff everything into CLAUDE.md — it bloats the context budget. See the Claude Code integration docs for the file-loading details.
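The split in practice: ambient facts in `CLAUDE.md`, procedures in skills. A minimal sketch, with illustrative contents:

```markdown
<!-- CLAUDE.md: always-on facts -->
We use pnpm, not npm. Run tests with `pnpm test`.
Services live under `services/`; shared code under `packages/`.

<!-- .claude/skills/review-pr.md: on-demand procedure -->
# review-pr
Review the diff against `main` using the team checklist.
```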
Skills vs slash commands
Some editors and integrations have slash-command equivalents. Skills are upstream of those: a skill is a markdown file, portable across sessions and CLIs. Slash commands are often IDE-specific.
Write the skill. Make the slash command a thin wrapper that invokes the skill. You keep portability and get the UX.
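In Claude Code, for instance, custom slash commands are themselves markdown files under `.claude/commands/`, so the wrapper can be a one-liner (file name and wording here are an assumption, adapt to your setup):

```markdown
<!-- .claude/commands/review.md (invoked as /review) -->
Run the `review-pr` skill on the current diff.
```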
Iterating on a skill
Skills improve with use. My iteration loop:
- Write a rough skill that captures the current pattern.
- Use it for a week.
- When it produces suboptimal output, diagnose why; usually the skill is missing a step or is ambiguous about an existing one.
- Edit the skill. Commit.
- Repeat.
After ~3-5 iterations, most skills stabilize. The ones that keep changing are usually skills that should be split into two more-focused skills.
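The split is usually mechanical: pull the unstable half into its own file and reference it from the original. A hypothetical sketch (`update-changelog` is an invented name for illustration):

```markdown
<!-- .claude/skills/draft-release-notes.md -->
# draft-release-notes
Turn the given commit range into user-facing release notes.
Then run the `update-changelog` skill.

<!-- .claude/skills/update-changelog.md -->
# update-changelog
Append the notes to CHANGELOG.md under the next version heading.
```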
Common mistakes
Mistakes I've made or seen:
- Too-long skills: a 500-line skill is a book, not a playbook. Keep skills to about a screenful.
- Ambiguous steps: "review carefully" means nothing. "Flag bugs, race conditions, and missing tests" is specific.
- Skills that overlap: two skills doing similar things confuse the agent. Merge or disambiguate.
- Output format not specified: if you don't say how to format the output, you'll get different formats each time.
- Skills for one-offs: not every prompt is a skill. Skills are for repeated work.
Where skills fit in the grid
In a parallel grid setup, skills are the coordination mechanism. Each pane has its own agent, but all panes reference the same skills. When I run the parallel AI coding workflow, every Implementer pane uses write-tests and ship-feature — same skills, different features, consistent output.
See the parallel AI agents use case for the pane-level patterns, and getting started for setting up a space that includes skill-enabled CLIs.
The skills I'd write first
If I were starting from scratch today, I'd write these three skills first:
- `review-pr`: a gentle forcing function for better reviews.
- `write-tests`: consistent test output, one of the biggest time sinks.
- `ship-feature`: orchestrates the other two; becomes your daily driver.
From those three, more skills emerge as you notice what you repeat. In a month you'll have 8-10 skills and the agent will feel custom-fitted to your team.
Key takeaways
Skills are markdown files that turn repeated prompts into reusable playbooks. They compose, they're version-controlled, and they make agent output consistent across sessions and developers. The investment is small; the leverage is large.
Start with three skills. Iterate for a month. You'll look back at your old workflow — typing the same prompt for the hundredth time — and wonder why you didn't do this sooner. Write skills. Share skills. Let the agent do the repeat work; save your typing for the work that's actually new.
Keep reading
- From Cursor to a Terminal Grid: A Migration Story. An honest migration story from Cursor to a terminal grid of AI CLIs: what I missed, what I gained, and why I didn't switch back.
- The Developer Productivity Stack for an AI-First Team. A practical productivity stack for AI-first teams: shared spaces, CLI conventions, review loops, and team-level habits that compound across developers.
- AI Pair Programming in 2026: Past the Hype. AI pair programming is past the hype phase and into the workflow phase. What actually works in 2026, what's overrated, and how senior devs are using it.
- OpenAI Codex CLI in the Real World: What Actually Works. A deep dive on OpenAI Codex CLI in real workflows: where it beats Claude, where it fails, and the patterns that let it earn a permanent pane.
- 10 Claude Code Power Tips You Haven't Seen on Twitter. Ten practical Claude Code tips beyond the basics: session surgery, skill composition, CLAUDE.md patterns, and parallel tricks that actually ship code faster.
- Multi-Model Code Review: Claude, GPT, and Qwen in One Grid. A step-by-step tutorial for multi-model code review with Claude, GPT/Codex, and Qwen running in parallel panes. Catch bugs none of them would catch alone.