Definition

System prompt

A system prompt is the persistent instruction block that sets an LLM's role, rules, and tools for the entire conversation.

It sits at the top of the conversation and establishes the model's role, rules, tone, and available tools. Unlike user turns, the system prompt isn't something the model responds to; it's the framing the model applies to everything else it sees.

Why it matters

System prompts are why Claude Code behaves differently from Claude on claude.ai, and why Codex CLI behaves differently from GPT in ChatGPT. Same underlying model, different system prompt, different behavior. For agentic coding CLIs, the system prompt is where the CLI spells out its available tools, file conventions, safety rules, and output format.

When you add a subagent or a custom slash command, you're really just authoring a specialized system prompt plus tool selection. Running different CLIs side-by-side in SpaceSpider's grid layout is one way to compare how different system prompts shape identical requests.

How it works

In Anthropic's Messages API the system prompt is a top-level system parameter. In OpenAI's Chat Completions it's a message with role: "system". The model is trained to treat these as authoritative instructions that take priority over user messages. Most providers also support prompt caching: mark the system prompt as cacheable and subsequent calls reuse the already-processed prefix instead of re-processing the (potentially very long) text, cutting both latency and cost.
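The two request shapes can be sketched side by side as raw payload dicts. This is a minimal sketch, not a live API call; the model names are illustrative placeholders, not real model identifiers.

```python
# The same system prompt, attached the two ways described above.
SYSTEM = "You are a careful code-review assistant. Answer concisely."

# Anthropic Messages API: top-level `system` parameter, separate from messages.
anthropic_request = {
    "model": "claude-example",          # illustrative model name
    "max_tokens": 1024,
    "system": SYSTEM,
    "messages": [{"role": "user", "content": "Review this function for bugs."}],
}

# OpenAI Chat Completions: the system prompt is just the first message,
# distinguished only by its role.
openai_request = {
    "model": "gpt-example",             # illustrative model name
    "messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Review this function for bugs."},
    ],
}
```

Either way, the instructions reach the model ahead of every user turn; the difference is purely in where the payload carries them.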

A well-designed coding-agent system prompt typically includes:

  • Role ("You are an expert software engineer...")
  • Available tools and when to use them
  • File conventions (path formats, editing rules)
  • Output format rules (diffs vs. full files, markdown fencing)
  • Safety boundaries (never run rm -rf, always confirm writes)
  • Optional few-shot examples

How it's used

Places you'll encounter system prompts:

  • Each agentic CLI ships a default — usually locked, sometimes extensible via hooks or config files
  • Subagents are user-authored system prompts for specialized roles
  • Fine-tuned models may rely less on long system prompts because behavior is baked into weights
  • API products embedding LLMs write their own system prompts to constrain user behavior

FAQ

Can I change Claude Code's system prompt?

Not directly — the main agent prompt is built into the CLI. But you can add to it via custom instructions (CLAUDE.md files), hooks, and subagents, all of which extend or override behavior without replacing the core prompt.

How long should a system prompt be?

As long as it needs to be, as short as it can be. Production agent prompts are often 2-10k tokens. Past that, quality starts to decline and cost climbs. Use caching to absorb the cost of long stable prompts.
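In Anthropic's Messages API, caching a long stable prompt means passing system as a list of content blocks and marking the stable block with cache_control. The payload below is a sketch (no API call is made, and the model name is an illustrative placeholder).

```python
# A long, stable system prompt we want the provider to cache across calls.
LONG_STABLE_PROMPT = "You are a coding agent. " + "Detailed tool documentation. " * 200

request = {
    "model": "claude-example",          # illustrative model name
    "max_tokens": 1024,
    # `system` as content blocks; the cache_control marker tells the API to
    # cache everything up to and including this block.
    "system": [
        {
            "type": "text",
            "text": LONG_STABLE_PROMPT,
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Fix the failing test."}],
}
```

Only the stable prefix should sit before the cache marker; anything that changes per request (the user's task, fresh file contents) belongs in the messages, or the cache will miss every time.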

Related terms