
How MCP Servers Work — Exposing Tools and Data to AI
An MCP server is a program that exposes capabilities to AI applications. It declares what it can do — which tools it offers, which resources it provides, which prompts it supports — and responds when clients request them.
The server doesn't know which AI model is calling it. It doesn't know if the user is having a casual conversation or running an automated pipeline. It exposes capabilities through a standard interface, and the client decides how to use them.
What Does a Server Expose?
Three primitives, each serving a different purpose:
Tools — Actions the AI Can Take
Tools are executable functions. The AI calls them to DO things in the external world:
{
  "name": "create_github_issue",
  "description": "Create a new issue in a GitHub repository",
  "inputSchema": {
    "type": "object",
    "properties": {
      "repo": { "type": "string", "description": "owner/repo" },
      "title": { "type": "string" },
      "body": { "type": "string" }
    },
    "required": ["repo", "title"]
  }
}
The AI reads the name, description, and input schema to understand what the tool does and how to call it. The schema is JSON Schema — the same standard used by OpenAPI.
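Because the schema is declared up front, either side can check arguments before the tool ever runs. A minimal sketch of that idea, reusing the tool schema above (real SDKs use a full JSON Schema validator; this toy helper covers only required fields and primitive types):

```python
# Minimal required-field and type check against a tool's inputSchema.
# Real MCP SDKs run full JSON Schema validation; this sketch handles
# only the "required" list and primitive "type" keywords.
TOOL_SCHEMA = {
    "type": "object",
    "properties": {
        "repo": {"type": "string"},
        "title": {"type": "string"},
        "body": {"type": "string"},
    },
    "required": ["repo", "title"],
}

PY_TYPES = {"string": str, "number": (int, float), "boolean": bool, "object": dict}

def validate_args(schema: dict, args: dict) -> list[str]:
    """Return a list of validation errors (empty list means valid)."""
    errors = [f"missing required field: {name}"
              for name in schema.get("required", []) if name not in args]
    for name, value in args.items():
        expected = schema["properties"].get(name, {}).get("type")
        if expected and not isinstance(value, PY_TYPES[expected]):
            errors.append(f"{name}: expected {expected}")
    return errors

print(validate_args(TOOL_SCHEMA, {"repo": "octocat/hello", "title": "Bug"}))  # []
print(validate_args(TOOL_SCHEMA, {"title": 42}))
```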
Resources — Data the AI Can Read
Resources provide context. The AI reads them to understand the environment:
{
  "uri": "file:///project/README.md",
  "name": "README.md",
  "description": "Project documentation",
  "mimeType": "text/markdown"
}
Resources are identified by URIs. Common schemes: file:// for filesystem data, https:// for web resources, git:// for version control, or custom schemes for application-specific data.
Resources are read-only. If the AI needs to modify data, it uses a tool.
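A resources/read handler is essentially a lookup from URI to contents. A sketch under assumptions (the registry and its entries are illustrative; the contents shape follows the MCP resource examples above):

```python
# Hypothetical resources/read handler: maps a URI to its contents.
# The in-memory registry stands in for real filesystem or HTTP access.
from urllib.parse import urlparse

RESOURCES = {
    "file:///project/README.md": "# My Project\nDocs go here.\n",
}

def read_resource(uri: str) -> dict:
    if urlparse(uri).scheme not in ("file", "https", "git"):
        raise ValueError(f"unsupported scheme in {uri}")
    if uri not in RESOURCES:
        raise KeyError(f"unknown resource: {uri}")
    # Result shape: a list of contents, each tagged with URI and MIME type.
    return {
        "contents": [
            {"uri": uri, "mimeType": "text/markdown", "text": RESOURCES[uri]}
        ]
    }

result = read_resource("file:///project/README.md")
print(result["contents"][0]["mimeType"])  # text/markdown
```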
Prompts — Templates for AI Behavior
Prompts are reusable interaction templates. They structure how the AI approaches specific tasks:
{
  "name": "sql_expert",
  "description": "Expert SQL query assistance with context about the database schema",
  "arguments": [
    { "name": "table", "description": "The table to focus on", "required": true }
  ]
}
When the AI (or user) selects this prompt, the server returns a structured message that includes system instructions, few-shot examples, and relevant context — shaping the AI's behavior for that specific task.
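The expansion step can be sketched as a function from prompt name and arguments to a list of messages. The message shape mirrors the prompts/get result (role plus content); the instruction text itself is an illustrative placeholder:

```python
# Sketch of prompts/get expansion for the sql_expert prompt above.
# The returned messages shape the AI's behavior for this task.
def get_prompt(name: str, arguments: dict) -> dict:
    if name != "sql_expert":
        raise KeyError(f"unknown prompt: {name}")
    table = arguments["table"]  # "table" is a required argument
    return {
        "description": "Expert SQL query assistance",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": f"You are a SQL expert. Focus on the {table} table. "
                            f"Explain each query before running it.",
                },
            }
        ],
    }

prompt = get_prompt("sql_expert", {"table": "orders"})
print(prompt["messages"][0]["content"]["text"])
```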
The Server Lifecycle
1. Startup
The server starts and waits for a connection. For stdio servers, this means waiting for input on stdin. For HTTP servers, this means listening on a port.
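For a stdio server, the waiting loop is literally a read loop over stdin, one JSON-RPC message per line. A minimal sketch (the official SDKs handle framing and dispatch; an in-memory stream stands in for stdin here):

```python
# Minimal stdio-style read loop: one JSON-RPC message per line.
# A real server would pass sys.stdin; the demo uses an in-memory stream.
import io
import json

def serve(stream) -> list[dict]:
    handled = []
    for line in stream:          # blocks until the client writes a line
        if not line.strip():
            continue
        msg = json.loads(line)
        handled.append(msg)      # a real server dispatches on msg["method"]
    return handled               # stream closed => client shut us down

demo = io.StringIO('{"jsonrpc": "2.0", "id": 1, "method": "initialize"}\n')
print(serve(demo)[0]["method"])  # initialize
```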
2. Initialization
The client sends an initialize request. The server responds with its capabilities:
{
  "protocolVersion": "2025-06-18",
  "capabilities": {
    "tools": { "listChanged": true },
    "resources": { "subscribe": true },
    "prompts": {}
  },
  "serverInfo": {
    "name": "my-database-server",
    "version": "1.0.0"
  }
}
The capabilities object declares what this server supports:
- tools with listChanged: true — the server has tools AND will notify if the tool list changes
- resources with subscribe: true — the server has resources AND supports subscriptions for change notifications
- prompts: {} — the server has prompts (basic support, no extra features)
If a capability isn't listed, the server doesn't support it. Clients must respect this.
3. Discovery
After initialization, the client discovers what's available:
- tools/list — returns all available tools with their schemas
- resources/list — returns all available resources with their URIs
- prompts/list — returns all available prompts with their arguments
These lists can be paginated for servers with many items.
4. Operation
The server handles requests:
- tools/call — execute a tool and return results
- resources/read — return the contents of a resource
- prompts/get — return the expanded prompt template
The server can also send notifications:
- notifications/tools/list_changed — tools were added, removed, or modified
- notifications/resources/updated — a subscribed resource changed
- notifications/resources/list_changed — the resource list changed
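What distinguishes a notification from a request is that it carries no id, so the receiver sends no response. A sketch of building one:

```python
# A notification is a JSON-RPC message without an "id" field: the sender
# expects no response. Sketch of the tools-changed notification.
import json

def list_changed_notification() -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "notifications/tools/list_changed",
    })

msg = json.loads(list_changed_notification())
print("id" in msg)  # False: notifications carry no request id
```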
5. Shutdown
The connection closes. For stdio servers, stdin closes. For HTTP servers, the session ends.
Local vs Remote Servers
Local servers (stdio transport) run on your machine as a subprocess. The AI application launches them, communicates through stdin/stdout, and kills them on shutdown. Fast, no network overhead, but limited to one client.
Examples: filesystem access, local database, git operations, code analysis.
Remote servers (Streamable HTTP transport) run as HTTP services. They can serve multiple clients simultaneously, run on any machine, and persist across connections. Slower (network latency), but accessible from anywhere and sharable.
Examples: SaaS integrations (GitHub, Sentry, Slack), cloud databases, shared team tools.
The same server logic can often support both transports — the MCP SDKs abstract the transport layer, so the server code is the same regardless of how clients connect.
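One way to picture that separation: the request handler is transport-agnostic, and each transport only moves serialized messages in and out. The class and method names below are illustrative, not the SDK's actual API:

```python
# Sketch of transport abstraction: the same dispatch logic serves any
# transport; the transport only moves serialized JSON-RPC messages.
import json

def handle(request: dict) -> dict:
    # Same dispatch regardless of how the request arrived.
    if request["method"] == "ping":
        return {"jsonrpc": "2.0", "id": request["id"], "result": {}}
    return {"jsonrpc": "2.0", "id": request["id"],
            "error": {"code": -32601, "message": "Method not found"}}

class InMemoryTransport:
    """Stands in for stdio or Streamable HTTP in this demo."""
    def send(self, raw: str) -> str:
        return json.dumps(handle(json.loads(raw)))

resp = InMemoryTransport().send('{"jsonrpc": "2.0", "id": 1, "method": "ping"}')
print(resp)
```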
What Makes a Good Server?
Focused scope — a server should do one thing well. A filesystem server handles files. A GitHub server handles GitHub. Don't build a server that does everything.
Clear tool descriptions — the AI reads the description field to decide when to use a tool. Vague descriptions lead to wrong tool selection. Be specific: "Search for files by name pattern in the project directory" not "Search files."
Validated inputs — the inputSchema must accurately describe what the tool accepts. The AI generates inputs based on the schema. If the schema is wrong, the AI will send wrong inputs.
Meaningful errors — when a tool fails, return a clear error message with isError: true. The AI uses the error message to understand what went wrong and try a different approach.
Minimal permissions — request only what the server needs. A search server shouldn't need write access. A read-only database server shouldn't accept write queries.
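Several of these guidelines meet in the tool handler itself: validate the input, and on failure return a meaningful error result rather than raising. A sketch with a hypothetical search_files tool (the result shape, content plus isError, follows the tool-result convention mentioned above):

```python
# Sketch of a tool handler that validates inputs and reports failures
# as results with isError, so the AI can read the message and recover.
# The search_files tool and its argument are hypothetical.
def call_tool(name: str, args: dict) -> dict:
    if name != "search_files":
        return {"isError": True,
                "content": [{"type": "text", "text": f"Unknown tool: {name}"}]}
    if "pattern" not in args:
        return {"isError": True,
                "content": [{"type": "text",
                             "text": "Missing required argument: pattern"}]}
    # ... perform the search (omitted in this sketch) ...
    return {"content": [{"type": "text", "text": "0 files matched"}]}

print(call_tool("search_files", {})["isError"])  # True
```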
Security
MCP expects a human in the loop. Servers should not assume that every tool call is authorized. The host application (Claude Code, Cursor, etc.) is responsible for:
- Showing the user which tools are available
- Confirming sensitive operations before execution
- Logging tool usage for audit
Servers should:
- Validate all inputs
- Implement access controls
- Rate limit tool calls
- Sanitize outputs (don't leak secrets in tool results)
Next Steps
- How MCP Clients Work — the other side: how AI applications discover and use servers.
- How MCP Tools Work — deep dive into the tool primitive.
- How MCP Resources Work — deep dive into the resource primitive.