Tool use
Tool use (also called "function calling") is the mechanism by which an LLM requests an external action — like reading a file or running a shell command — by emitting a structured call, usually JSON naming a tool and its arguments. The runtime executes the tool, returns the result, and feeds it back into the model's context window as an observation. Tool use is the feature that turns a chatbot into an autonomous agent.
Why it matters
Without tool use, an LLM can only emit text. With it, the same model can read a file, run pytest, query a database, hit an HTTP endpoint, or open a browser. Every agentic coding CLI — Claude Code, Codex CLI, Qwen Code, Kimi CLI — is fundamentally a tool-use loop.
Understanding tool use helps you understand why agents succeed or fail. A wrong tool call is often more informative than a wrong text answer — you can see exactly which file it read or which command it ran.
How it works
The provider (Anthropic, OpenAI, Alibaba, Moonshot) defines a schema for tool declarations: name, description, JSON-Schema-style input. The client includes tool definitions in the request. The model emits a special response containing tool calls instead of (or alongside) natural text. The client executes the call and replies with a tool-result message containing whatever the tool returned.
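The execute-and-reply half of that loop can be sketched as follows. This is a minimal illustration of the runtime side, not any provider's actual SDK: the message shapes (`tool_calls`, `tool_result`) and the `execute_turn` helper are assumptions, and real wire formats differ in detail.

```python
def read_file(path):
    with open(path) as f:
        return f.read()

# Registry mapping declared tool names to their implementations.
TOOLS = {"read_file": read_file}

def execute_turn(tool_calls):
    """Run each call the model emitted and build tool-result messages
    to append to the conversation for the next model request."""
    results = []
    for call in tool_calls:
        fn = TOOLS.get(call["name"])
        if fn is None:
            output = f"error: unknown tool {call['name']}"
        else:
            try:
                output = fn(**call["arguments"])
            except Exception as exc:
                # Errors go back to the model as observations, so it can retry.
                output = f"error: {exc}"
        results.append(
            {"type": "tool_result", "tool": call["name"], "content": str(output)}
        )
    return results
```

Note that failures are not raised to the user: they are serialized into the tool-result message so the model sees them and can correct course on the next turn.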
A typical coding-agent tool set:
- read_file(path) — returns file content
- write_file(path, content) — writes a file
- run_bash(command) — executes and returns stdout/stderr
- glob(pattern) / grep(pattern) — search
- web_fetch(url) — retrieve a URL
- Plus any MCP servers you've plugged in
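A declaration for one of these tools might look like the following. This is a sketch in the Anthropic-style shape (`input_schema` holding a JSON Schema object); other providers use slightly different field names for the same idea.

```python
# Illustrative tool declaration for read_file; field names vary by provider.
read_file_tool = {
    "name": "read_file",
    "description": "Read a file from disk and return its contents as text.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {
                "type": "string",
                "description": "Absolute path to the file to read",
            },
        },
        "required": ["path"],
    },
}
```

The description fields matter: the model chooses tools and fills arguments based almost entirely on this text, so vague descriptions produce vague calls.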
Parallel tool calls (multiple in one turn) are supported on modern providers, which speeds up independent reads.
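On the runtime side, executing those independent calls concurrently is straightforward. A minimal sketch using a thread pool, assuming the calls are read-only and have no ordering dependencies:

```python
from concurrent.futures import ThreadPoolExecutor

def read_file(path):
    with open(path) as f:
        return f.read()

def execute_parallel_reads(calls):
    """Execute independent read_file calls concurrently.
    Results come back in the same order as the calls."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda c: read_file(c["path"]), calls))
```

This only helps when the calls are truly independent; a write followed by a read of the same file must still run sequentially.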
How it's used
Tool use patterns developers rely on:
- Read-before-write: agents tend to read the surrounding files before emitting a patch
- Parallel reads: kick off several file reads in one turn
- Tool-result caching: providers like Anthropic let you cache expensive tool results
Every turn of a SpaceSpider Claude Code pane is a few rounds of tool use behind the scenes.
Related terms
- Autonomous agent — tool use is what makes it autonomous
- MCP — the open protocol for tool-use extensions
- Subagent — delegating tool use to a child agent
- Hook — interpose on tool calls in Claude Code
- Sandbox — containing tool-use blast radius
FAQ
Is tool use the same as ReAct?
ReAct (Reason + Act) is a prompting pattern from 2022 that inspired modern tool use. Today's native tool calling is more structured — the model emits JSON, not free-form "Action: ..." text — but the shape is the same.
Can models hallucinate tool names?
Yes, occasionally. A well-implemented harness validates the tool call against the declared schema and rejects unknown names, which the model sees as an error and corrects.
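That validation step can be sketched as a small check against the declared schemas. The `validate_call` helper and its return convention are assumptions for illustration; a production harness would typically run a full JSON Schema validator instead.

```python
def validate_call(call, declared_tools):
    """Return None if the call is valid, else an error string that the
    harness feeds back to the model as the tool result."""
    tool = declared_tools.get(call["name"])
    if tool is None:
        return f"unknown tool: {call['name']}"
    required = tool["input_schema"].get("required", [])
    missing = [k for k in required if k not in call["arguments"]]
    if missing:
        return f"missing required arguments: {missing}"
    return None
```

Because the error string is returned as an observation rather than raised, the model sees exactly what was wrong and usually self-corrects on the next turn.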