What is an MCP Resource
An MCP resource is a data source that provides context to AI through the Model Context Protocol. Each resource is identified by a URI and returns content the AI can read — files, database records, API responses, log output, or any structured data.
How it works
Resources are identified by URIs that follow a scheme defined by the server:
```
file:///project/src/main.rs
db://production/users/schema
git://repo/commit/abc123
```
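Because these are ordinary URIs, a client can split them with standard tooling and route on the scheme. A minimal sketch using Python's stdlib (the URIs are the illustrative ones above, not real resources):

```python
from urllib.parse import urlparse

# Illustrative resource URIs in the schemes shown above (paths made up).
uris = [
    "file:///project/src/main.rs",
    "db://production/users/schema",
    "git://repo/commit/abc123",
]

# urlparse splits any scheme://authority/path URI, not just http ones,
# so a client can dispatch on the scheme and hand the rest to the server.
parsed = {u: urlparse(u) for u in uris}
for uri, parts in parsed.items():
    print(f"{parts.scheme}: authority={parts.netloc!r} path={parts.path!r}")
```

Note that the scheme's meaning is defined entirely by the server; `db://` here is a convention, not a registered protocol.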
The client calls resources/list to discover available resources, then resources/read with a specific URI to fetch content. The server returns the data as text or binary (base64-encoded). Resources can also be dynamic — a resource template like db://production/{table}/schema lets the client request any table's schema without the server enumerating every possibility upfront.
Resources are application-controlled. The host application (not the AI model) decides which resources to attach to a conversation. This is the key difference from tools, where the model decides what to invoke.
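A sketch of what application control looks like in practice: the host picks the URIs, reads them, and injects the contents before the model sees anything. The `read_resource` helper and the README contents are hypothetical stand-ins for a real `resources/read` round trip:

```python
# Hypothetical stand-in for a resources/read call to an MCP server.
def read_resource(uri: str) -> str:
    fake_contents = {
        "file:///project/README.md": "# MyProject\nA sample project.",
    }
    return fake_contents[uri]

# The host application, not the model, chooses what to attach.
attached = ["file:///project/README.md"]
context_blocks = [read_resource(uri) for uri in attached]

# The resource contents arrive in the prompt already resolved; the model
# never issues the read itself (contrast with tools, which it invokes).
messages = [
    {
        "role": "user",
        "content": "Context:\n" + "\n\n".join(context_blocks)
        + "\n\nWhat does this project do?",
    }
]
```

The model's first turn already contains the README; with a tool, the model would first have to decide to call something like a file-reading function and wait for the result.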
Why it matters
AI models produce better output when they have relevant context. But the context window is finite — you cannot paste an entire codebase into every prompt. Resources solve this by giving the application a structured way to select and inject exactly the data the AI needs.
Resources also decouple data access from the AI's decision-making. The application can attach a project's README, a database schema, or recent error logs as context before the conversation even starts. The AI reads this context and produces grounded answers without needing to discover and call tools first.
See How MCP Resources Work for the full explanation of static resources, templates, and subscriptions.