What is an MCP Server?
An MCP server is a program that exposes capabilities to AI applications through the Model Context Protocol. It is the bridge between AI and the outside world — wrapping databases, APIs, file systems, or any service into a standard interface that any MCP client can use.
How it works
When a client connects, the server and client exchange capabilities through an initialization handshake. The server declares what it offers:
- Tools — functions the AI can call (e.g., `run_query`, `create_issue`, `search_files`)
- Resources — data the application can read (e.g., `file:///src/main.rs`, `db://users/schema`)
- Prompts — reusable templates for common tasks (e.g., "review this code," "explain this error")
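The handshake can be pictured as a pair of plain JSON-RPC messages. The sketch below is illustrative, based on the shape the MCP specification gives for `initialize`; the exact capability fields a real server emits depend on its SDK and what it supports.

```python
import json

# Client opens the session with an "initialize" request (illustrative
# values; a real client sends fuller capability and version info).
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

# The server answers by declaring which capability groups it offers:
# tools, resources, and prompts in this example.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "example-server", "version": "1.0.0"},
    },
}

print(json.dumps(initialize_response["result"]["capabilities"]))
```

After this exchange, the client knows it may call `tools/list`, `resources/list`, and `prompts/list` on this server, and will not attempt features the server did not declare.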
The server listens for JSON-RPC requests over a transport (stdio or HTTP), executes the requested operation, and returns the result. A single server is typically focused on one domain — a GitHub server, a database server, a file system server. The client can connect to multiple servers simultaneously.
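That listen–dispatch–respond cycle can be sketched without any SDK. The tool registry and handler below are hypothetical simplifications; a production server would use an official MCP SDK and return full JSON-RPC 2.0 error objects.

```python
import json
import sys

# Hypothetical tool registry for illustration: one "add" tool.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request and build its response."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif method == "tools/call":
        params = request["params"]
        tool = TOOLS[params["name"]]
        result = {"content": [{"type": "text",
                               "text": str(tool(params["arguments"]))}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

def serve(stream_in=sys.stdin, stream_out=sys.stdout):
    """Minimal stdio-style transport: one JSON message per line."""
    for line in stream_in:
        response = handle(json.loads(line))
        stream_out.write(json.dumps(response) + "\n")
        stream_out.flush()
```

A client message like `{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "add", "arguments": {"a": 2, "b": 3}}}` would come back with a text result of `"5"`.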
Why it matters
MCP servers are the ecosystem's unit of composition. Each server wraps one domain cleanly. Building a new integration means building one server, and every MCP-compatible AI application can immediately use it. There is no need to write plugins for Claude, then Cursor, then Windsurf — one server works everywhere.
The server model also enforces a clean security boundary. The server controls what actions are available, validates inputs, and can enforce access policies. The AI never gets raw access to the underlying system. The server mediates every interaction.
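The mediation point is the tool handler itself, which can reject a request before it ever touches the underlying system. A minimal sketch of that idea, where the allowed root and the error shape are illustrative assumptions, not part of the protocol:

```python
from pathlib import Path

# Illustrative policy: this server only exposes files under one root.
ALLOWED_ROOT = Path("/srv/shared").resolve()

def read_file_tool(path: str) -> dict:
    """Validate the requested path against policy before reading."""
    resolved = (ALLOWED_ROOT / path).resolve()
    # Reject traversal attempts such as "../../etc/passwd".
    if not resolved.is_relative_to(ALLOWED_ROOT):
        return {"isError": True, "message": "access denied: outside allowed root"}
    if not resolved.is_file():
        return {"isError": True, "message": "not found"}
    return {"isError": False, "text": resolved.read_text()}
```

Because every call funnels through this function, the model can only ever reach what the policy permits, regardless of what arguments it supplies.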
Community servers already exist for GitHub, PostgreSQL, Slack, Google Drive, file systems, and hundreds of other services. You can also build custom servers for internal tools and proprietary APIs.
See How MCP Servers Work for the lifecycle from initialization through request handling.