Definition

Prompt engineering

Prompt engineering is the practice of writing inputs that steer an LLM toward useful, accurate, on-task output — the core skill of every AI power user.

In practice, it means writing inputs to an LLM that reliably produce the output you want. That sounds trivial, but small wording changes can swing benchmark accuracy by double-digit percentage points on coding tasks, which is why every serious user of Claude Code or Codex CLI eventually builds a mental library of prompt patterns.

Why it matters

LLMs are sensitive to phrasing. "Fix this test" and "Read the test, identify the failure, propose a minimal fix, then apply it" route the model through different behaviors even though a human reads them as equivalent. System prompts, inline instructions, and examples all change what the model emits.

For agentic coding, prompt engineering shows up in three places: the tool's built-in system prompt, your user turns, and your slash commands or subagents — which are reusable prompt templates. Running several CLIs in a SpaceSpider grid layout lets you A/B-test prompts across tools quickly.

How it works

Core techniques:

  • Specificity — "Write a function" gets something; "Write a TypeScript function that takes X and returns Y, handling Z error cases" gets the right thing
  • Examples (few-shot) — include sample input/output pairs; the model generalizes
  • Chain-of-thought — ask the model to reason step-by-step before concluding
  • Role framing — "You are a senior Rust reviewer" shifts tone and rigor
  • Structured output — request JSON, markdown tables, or diffs explicitly
  • Negative constraints — "Do not touch files outside src/" works; models respect bounds when told
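Several of these techniques compose naturally in a single prompt. A minimal sketch, assuming an illustrative shell-command-explainer task (the example pairs, function name, and JSON shape are invented for demonstration, not taken from any specific tool):

```python
# Sketch: composing role framing, few-shot examples, and a
# structured-output request into one prompt string.
FEW_SHOT = [
    ("rm -rf node_modules && npm install", "reinstall dependencies from scratch"),
    ("git log --oneline -5", "show the last five commits, one per line"),
]

def build_prompt(command: str) -> str:
    # Role framing sets tone; the example pairs show the input/output
    # shape; the final instruction requests structured (JSON) output.
    lines = ["You are a senior developer explaining shell commands."]
    for cmd, summary in FEW_SHOT:
        lines.append(f"Command: {cmd}\nSummary: {summary}")
    lines.append(f"Command: {command}")
    lines.append('Reply with JSON only: {"summary": "..."}')
    return "\n\n".join(lines)

prompt = build_prompt("grep -rn TODO src/")
```

The model sees two completed pairs and one open-ended one, so it generalizes the pattern; the trailing JSON instruction constrains the output format.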

For agentic loops, the prompt is a living document — the CLI appends tool observations, the user adds corrections, and the context window fills up. Managing that flow is part of prompt engineering.
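That flow can be sketched as a growing message list with a trimming policy. This is a simplified illustration, not any CLI's actual implementation — the character budget, role names, and trim-oldest-tool-output-first strategy are all assumptions:

```python
# Sketch: an agentic prompt as a growing message list. Tool
# observations are appended after each step; when the budget is
# exceeded, the oldest tool output is dropped first, preserving
# user turns. A character count stands in for a real token budget.
MAX_CHARS = 2000

def append_and_trim(messages, role, content, max_chars=MAX_CHARS):
    messages.append({"role": role, "content": content})
    while sum(len(m["content"]) for m in messages) > max_chars:
        # Find the oldest tool observation to evict.
        idx = next((i for i, m in enumerate(messages) if m["role"] == "tool"), None)
        if idx is None:
            break  # nothing safe to evict
        messages.pop(idx)
    return messages

history = [{"role": "user", "content": "Fix the failing test."}]
append_and_trim(history, "tool", "x" * 1500)  # first observation fits
append_and_trim(history, "tool", "y" * 1500)  # over budget: evicts the first
```

The design choice being illustrated: user corrections are part of the prompt's intent and should outlive raw tool output, which is usually re-derivable.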

How it's used

Daily patterns:

  • Save reusable prompts as slash commands in Claude Code
  • Write a system prompt for a subagent specialized in one task
  • Include the full failing test output in the prompt, not a summary
  • Ask the model to plan in plan mode before editing
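Patterns like these often get codified as a reusable template — the same idea behind slash commands. A hedged sketch, assuming a hypothetical fix-the-test template (the wording and tag names are illustrative):

```python
# Sketch: a reusable prompt template that pastes the *full* failing
# test output into the prompt rather than a summary, combining the
# step-by-step instruction and negative-constraint patterns above.
TEMPLATE = """Read the failing test output below, identify the failure,
propose a minimal fix, then apply it. Do not touch files outside src/.

<test-output>
{output}
</test-output>"""

def fix_test_prompt(raw_test_output: str) -> str:
    return TEMPLATE.format(output=raw_test_output)

p = fix_test_prompt("FAIL test_parse: expected 3, got 2")
```

Saving this as a slash command or subagent prompt means the careful phrasing is written once and reused, instead of retyped imperfectly each session.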


FAQ

Is prompt engineering a real skill or just folklore?

It's real. Model behavior is stable between releases, and within a given model, established prompt patterns reliably outperform naive phrasing. It is also model-specific — a great Claude prompt may need rewriting for GPT or Qwen.

Will prompt engineering matter as models improve?

Less. Newer models are more forgiving of casual prompts. But when you push at the edges of what a model can do (long context, complex refactors, multi-step plans), prompt structure still matters.

Related terms