
MCP: USB for AI Tools

What is MCP?

The Model Context Protocol (MCP) is an open standard from Anthropic that defines how AI applications connect to external tools and data sources. Before MCP, every integration was custom: your agent talked to Slack one way, to a database another way, to a file system a third way. MCP gives all of these a common interface.

The easiest analogy: USB for AI tools. Before USB, every peripheral needed its own connector. USB made everything plug-and-play. MCP does the same for AI integrations -- a standard protocol so your AI app can connect to any MCP-compatible tool without writing custom glue code.

With MCP, your app can discover available tools, call them, get data back, and listen for updates, all through the same protocol. The tool's implementation doesn't matter. Your app just speaks MCP.
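
Under the hood, MCP messages are JSON-RPC 2.0. As a sketch, a single tool call and its reply might look like this (the `get_weather` tool, its arguments, and the result text are made up for illustration; the message shape follows the spec):

```python
import json

# Hypothetical exchange: the JSON-RPC request an MCP client sends to
# invoke a tool, and the shape of the server's reply.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool
        "arguments": {"city": "Berlin"},  # tool-specific arguments
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {
        "content": [{"type": "text", "text": "12°C, overcast"}],
    },
}

# Both sides serialize these as JSON over the transport (stdio or HTTP).
wire = json.dumps(request)
print(json.loads(wire)["method"])  # → tools/call
```

The same request/response shape applies whether the server wraps a database, a SaaS API, or the local filesystem; that uniformity is the whole point.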

The problem it solves

Without MCP, connecting AI apps to tools is an M-times-N problem. If you have 5 AI apps and 10 tools, you need 50 custom integrations. Each one is different, each one breaks differently, and each one needs separate maintenance.

MCP turns this into M-plus-N. Each AI app implements one MCP client. Each tool implements one MCP server. They all work together.
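
The arithmetic from the example above, spelled out:

```python
# Integration counts: every app wired to every tool, versus one MCP
# client per app plus one MCP server per tool.
apps, tools = 5, 10
without_mcp = apps * tools  # 5 x 10 = 50 custom integrations
with_mcp = apps + tools     # 5 clients + 10 servers = 15 implementations
print(without_mcp, with_mcp)  # → 50 15
```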

How it's structured

MCP uses a client-server model with three roles:

  • Hosts are the applications users interact with -- Claude Desktop, an IDE, your custom agent.
  • Clients live inside hosts. They manage connections to MCP servers, one client per server.
  • Servers are standalone programs that expose capabilities. A Postgres MCP server exposes database queries. A GitHub MCP server exposes repo operations. A filesystem MCP server exposes file read/write.

What servers expose

An MCP server can offer three types of capabilities:

  • Tools: Functions the AI can call. "Run this SQL query," "create a GitHub issue," "send a Slack message."
  • Resources: Data the AI can read. Files, database records, API responses.
  • Prompts: Pre-built prompt templates for common tasks.
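
For tools, the server advertises each one with a name, description, and a JSON Schema for its inputs. A sketch of one tool descriptor, as it might appear in a `tools/list` response (the `run_query` tool is illustrative):

```python
# One entry from a hypothetical tools/list response. MCP describes a
# tool's inputs with JSON Schema so the client knows what to send.
tool = {
    "name": "run_query",  # hypothetical tool
    "description": "Run a read-only SQL query against the database.",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}
print(tool["name"])  # → run_query
```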

The connection flow

  1. The host app starts up and creates MCP clients for each configured server.
  2. Each client connects to its server and does a capability handshake -- "what tools do you have?"
  3. When the AI needs a tool, the host routes the request through the appropriate client to the right server.
  4. The server runs the operation and returns results.
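
The steps above map onto JSON-RPC methods roughly as follows (method names follow the MCP spec; the ids, tool name, and arguments are made up):

```python
# The connection flow as an ordered sequence of client requests.
flow = [
    {"jsonrpc": "2.0", "id": 1, "method": "initialize"},  # step 2: handshake
    {"jsonrpc": "2.0", "id": 2, "method": "tools/list"},  # step 2: discover tools
    {"jsonrpc": "2.0", "id": 3, "method": "tools/call",   # step 3: route a call
     "params": {"name": "create_issue",                   # hypothetical tool
                "arguments": {"title": "Login bug"}}},
]
# Step 4: each request gets back a response carrying the same id and a result.
print([m["method"] for m in flow])
```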

Communication: local vs remote

MCP servers talk to clients in two ways:

  • stdio for local tools. The client launches the server as a subprocess and communicates over standard input/output. Simple, fast, no network setup.
  • HTTP with SSE (Server-Sent Events) for remote tools. The server runs on a different machine, and the client connects over HTTP, with SSE for streaming results. (Later spec revisions fold this into a single "Streamable HTTP" transport, but the idea is the same: remote servers over HTTP.)
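
A stdlib-only sketch of the stdio case: the client spawns the server and exchanges one JSON message per line. This one-shot helper is simplified (a real client keeps the pipes open for many messages); the server command is whatever launches your MCP server.

```python
import json
import subprocess

def call_over_stdio(command: list[str], request: dict) -> dict:
    """One-shot sketch: launch a server, send one request line, read one reply."""
    proc = subprocess.Popen(
        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    proc.stdin.write(json.dumps(request) + "\n")  # one JSON message per line
    proc.stdin.flush()
    reply = json.loads(proc.stdout.readline())    # block until the reply arrives
    proc.terminate()
    return reply
```

For a quick experiment, `cat` can stand in for a server, since it echoes each request line straight back.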

Getting started

If you're using Claude Desktop, you can add MCP servers by editing your config file. Anthropic publishes reference servers for common tools (filesystem, GitHub, Postgres, Slack), and the community has built dozens more.
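
For example, Claude Desktop reads server definitions from its `claude_desktop_config.json`. An entry like the following launches the reference filesystem server over stdio (the directory path is a placeholder):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```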

If you're building your own MCP server, Anthropic provides SDKs in Python and TypeScript. A basic server that exposes a single tool is about 30 lines of code.
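
As a stdlib-only sketch of what such a server does under the hood (this hand-rolled loop is illustrative, not the SDK's actual API; the `add` tool is made up):

```python
import json
import sys

# Hypothetical tool registry: one tool that adds two numbers.
TOOLS = {"add": lambda args: args["a"] + args["b"]}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request and build the reply."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif request["method"] == "tools/call":
        params = request["params"]
        value = TOOLS[params["name"]](params["arguments"])
        result = {"content": [{"type": "text", "text": str(value)}]}
    else:
        result = {}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

def serve() -> None:
    """Read newline-delimited JSON-RPC from stdin, write replies to stdout."""
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

The real SDKs replace all of this wiring with a decorator on the tool function plus a one-line `run()` call, which is why a basic server stays so short.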

Where this is going

MCP was donated to the Linux Foundation in early 2025, alongside Google's A2A protocol. The two are complementary -- MCP handles how agents use tools, A2A handles how agents talk to each other.

The real value shows up at scale. Once enough tools have MCP servers, any AI app that speaks MCP can use all of them without custom work. We're not there yet, but the ecosystem is growing fast. Most major AI IDEs (Cursor, Windsurf, Claude Code) already support MCP.