What is Tool Use
Tool use (also called function calling) is the ability of an AI model to invoke external functions during a conversation. Instead of only generating text, the model outputs a structured request to call a specific function with specific arguments. The system executes the function and returns the result for the model to continue reasoning.
How it works
The process has four steps:
- Define — tools are described to the model with a name, description, and parameter schema
- Select — the model reads the user's request and decides a tool call would help
- Call — the model outputs a structured tool call (function name + arguments as JSON)
- Continue — the system executes the tool, returns the result, and the model incorporates it into its response
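The first step can be made concrete. A tool definition is typically a name, a description, and a JSON Schema for the parameters. The exact envelope differs from provider to provider, so the shape below is an illustrative sketch rather than any one API's format, using the `get_issue_count` tool from the example that follows.

```python
# A sketch of a tool definition: name, description, and a JSON Schema
# describing the parameters. Field names vary slightly across providers.
get_issue_count_tool = {
    "name": "get_issue_count",
    "description": "Return the number of issues matching a status.",
    "parameters": {
        "type": "object",
        "properties": {
            "status": {
                "type": "string",
                "enum": ["open", "closed"],
                "description": "Which issues to count.",
            }
        },
        "required": ["status"],
    },
}
```

The description matters as much as the schema: it is the only thing the model has to decide when the tool is relevant.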
For example:

User: "How many open issues do we have?"
Model decides: call get_issue_count(status="open")
System executes: → 47
Model responds: "There are 47 open issues."
The model does not execute anything itself. It produces a structured intent: "I want to call this function with these arguments." The surrounding system (the host application or MCP client) is responsible for actually executing the call and enforcing permissions.
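That division of responsibility can be sketched in a few lines: the model's output is plain data, and the host decides whether and how to run it. The allowlist and dispatch table here are a hypothetical host implementation, not part of any particular SDK.

```python
# The model's output is just data: a function name plus JSON arguments.
tool_call = {"name": "get_issue_count", "arguments": {"status": "open"}}

# The host owns execution and permissions. A simple allowlist stands in
# for a real permission system here.
ALLOWED_TOOLS = {"get_issue_count"}

def get_issue_count(status):
    # Stand-in for a real issue-tracker query (the example above returns 47).
    return 47

DISPATCH = {"get_issue_count": get_issue_count}

def execute(call):
    # Refuse anything the host has not explicitly permitted.
    if call["name"] not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {call['name']!r} not permitted")
    return DISPATCH[call["name"]](**call["arguments"])

result = execute(tool_call)  # → 47, fed back to the model as the tool result
```

Because the model only ever emits intent, the host can log, rate-limit, or reject any call before it runs.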
Why it matters
Tool use is the capability that makes AI agents possible. Without it, AI is limited to what it learned during training. With tool use, AI can access live data, take real actions, and verify its own work.
MCP standardizes how tools are discovered and invoked. Before MCP, every AI provider had a different tool-use format — OpenAI's function calling, Anthropic's tool use, Google's function declarations. MCP provides one protocol that works across all of them, so a tool built once is usable by any model.
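In MCP terms, discovery and invocation are JSON-RPC 2.0 messages: a client lists a server's tools with `tools/list` and invokes one with `tools/call`. The sketch below uses those published method names but trims the surrounding fields to the essentials.

```python
import json

# Discovery: ask the server what tools it offers (MCP method "tools/list").
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invocation: call one tool by name with JSON arguments ("tools/call").
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_issue_count", "arguments": {"status": "open"}},
}

wire_message = json.dumps(call_request)  # what actually crosses the transport
```

Because every server answers the same two methods, a client written once can discover and call tools from any MCP server, regardless of which model sits on top.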
See How AI Agents Work and How MCP Tools Work for deeper coverage of tool use in practice.