Honestly, MCP Is Just npm For Agent Tools


A friend asked what MCP actually gives you over plain APIs. I tried to defend it. Every defense line collapsed under one simple question. Here's the honest walk through why MCP is a packaging standard, not a technical innovation, and why that's still valuable.

14 Apr 2026 · 8 min read

A friend pinged me at 9pm asking what MCP actually gives you over plain APIs. Full skepticism vibe. He had been reading about it, couldn't find anything substantial, and wanted me to either convince him or admit there was nothing there.

I started typing the defense. Standardized tool use. Dynamic discovery. Session state. The usual arguments. Halfway through I realized I did not believe myself, and by the time we had finished the conversation I had watched every defense line collapse under one very simple question: "if the server is local, can't the agent just import the function?"

Yes. Yes it can. And once you see it, you can't unsee it.

the received wisdom

The pitch for MCP goes like this. If you want an agent to talk to Slack and GitHub and Jira, you're writing three integration layers. Three auth flows. Three response parsers. Three schema translations for the LLM. MCP wraps all of that behind one protocol so the LLM client does not care what is behind it. Standardized interface, dynamic discovery, composability. USB for AI tools.

That sounds important. It is also mostly true. What I want to pick apart is whether it's substantive, or whether the same thing was already possible with boring tools you already use.

defense line one. standardized tool use for LLMs

The core value prop is one thing: a wire format that every LLM client understands when it wants to list and call tools. tools/list returns JSON schemas. tools/call invokes one with arguments. That is the whole protocol loop in practice.
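Concretely, both methods are plain JSON-RPC 2.0 messages. Here is a sketch of the two exchanges for a simple add tool, with field names following the published MCP spec; the ids and values are illustrative, and version negotiation and capability exchange are omitted:

```python
import json

# Sketch of the two JSON-RPC exchanges behind the protocol loop.
# Field names follow the published MCP spec; ids and values are
# illustrative.

list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "add",
            "inputSchema": {
                "type": "object",
                "properties": {"a": {"type": "integer"},
                               "b": {"type": "integer"}},
                "required": ["a", "b"],
            },
        }]
    },
}

call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 2, "b": 3}},
}
call_response = {
    "jsonrpc": "2.0", "id": 2,
    "result": {"content": [{"type": "text", "text": "5"}]},
}

# On the stdio transport, each message is one line of JSON.
wire = json.dumps(call_request)
```

That is the whole loop: one method to enumerate schemas, one to invoke by name.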

Here is a minimal Python MCP server using the official FastMCP SDK:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hello")

@mcp.tool()
def add(a: int, b: int) -> int:
    return a + b

if __name__ == "__main__":
    mcp.run()

Seven lines. Tiny. Now here is the same thing without MCP:

def add(a: int, b: int) -> int:
    return a + b

Two lines. Also tiny. Your agent framework on the other side can introspect the signature, read the docstring, build its own JSON schema, and call the function directly. No protocol. No JSON-RPC. No stdio handshake. Most agent frameworks already do exactly this.

So what did I save by using MCP in the first example over the second? Only the wire format. Everything else, the schema generation and the discovery and the call-and-return, is stuff the framework would have done anyway.
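And that "the framework would have done anyway" part is genuinely small. Here is a minimal sketch of signature-to-schema introspection, with tool_schema as a hypothetical helper that only handles scalar argument types (real frameworks cover more):

```python
import inspect

# Hypothetical helper: derive an LLM tool schema from a plain
# function by reading its signature and docstring. Scalar types only.
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    sig = inspect.signature(fn)
    props = {
        name: {"type": PY_TO_JSON.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

schema = tool_schema(add)
```

Twenty-odd lines of introspection, written once inside the framework, and every plain function becomes a tool.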

defense line two. but OpenAPI already does this

If the value is "JSON schema plus discovery for callable endpoints", OpenAPI did that fifteen years ago. You hand an agent an openapi.json, it reads the schemas, it calls the endpoints. LangChain has OpenAPIToolkit for exactly this. Every function-calling LLM already takes JSON schemas as input, which means anything expressible in OpenAPI you can feed straight in.

So the steelman has to move further. Stateful sessions. Resources. Prompts. Sampling. Let me take the first three in order here; sampling is the only one of the four that turns out to be interesting, and it gets its own section below.

Stateful sessions. OpenAPI describes stateless REST. MCP keeps a persistent session so the client does not re-auth every call and the server can hold context between calls. True, but re-auth in HTTP is called a cookie and holding context is called a session and the pattern is thirty years old. The novelty is not that MCP servers are stateful, it is that the wire format bundles session handling. That is a packaging convenience, not a technical innovation.

Resources. MCP has a concept of "resources", readable files or data the server can expose for the LLM to pull into context. OpenAPI only describes callable endpoints. Fair point, except a callable endpoint that returns a file is still just an API. There is nothing an MCP resource does that GET /files/:id does not. You could express every MCP resource as an OpenAPI endpoint with ten minutes of yak shaving.

Prompts. MCP servers can ship templated prompt snippets that clients surface as slash commands. That is a UX feature, not a protocol feature. Any CLI with a plugin system already does this.

Each of these is real, but when you line them up, you are not looking at technical progress. You are looking at a coordinated bundle of conventions. Which is a real thing to want. It is just not what "protocol" usually means.

defense line three. the direction of the connection is flipped

Here is where I had to actually revise my take. The substantive thing MCP does that OpenAPI cannot is subtle. The MCP server sits next to the LLM, not next to the internet.

OpenAPI describes services, and services live behind HTTP endpoints, which means they already have an API. MCP servers are processes that live on your machine, or in your IDE, or inside Claude Desktop, and they bridge the LLM into contexts that were never APIs to begin with.

Your filesystem is not an API. Your running IDE buffer is not an API. Your local Docker daemon, your browser tab, your vim state, your postgres socket, your current git branch: none of these things have OpenAPI specs, because none of them exposed HTTP endpoints in the first place. Wrapping them in a tiny stdio server that speaks MCP lets the LLM touch them without you building a full web service first. That is real capability expansion.

And MCP has one genuinely new feature on top of this: sampling. The server can ask the LLM to do inference mid-call. "Before I continue, summarize this chunk for me." sampling/createMessage goes from server back to client, and the client runs inference and returns a completion. OpenAPI has no concept of this because REST calls only flow one way. An HTTP endpoint never calls back into your brain and asks you to think about something. An MCP server does.

If I had stopped thinking here two hours in, I would have written "MCP gives the LLM arms and legs in non-API environments, plus a back-channel to request reasoning." That is a real story. It is the best honest defense of the protocol. I was ready to ship that take.

the question that broke it

My friend wasn't done. His next message was the one that collapsed the whole frame: "if the server is local, we basically don't need MCP, the agent can literally import the file and run those functions lol".

And he's right.

If the MCP server is running on your machine, and the agent is running on your machine, then the entire stdio JSON-RPC layer between them is solving a problem that only exists because they were separate processes. Put them in the same process, which is what every first-party agent environment already does, including Claude Code and Cursor and every custom agent anyone ships, and the "protocol" dissolves.

The agent just imports the module:

from tools.github import create_issue
from tools.filesystem import read_file
from tools.ide import current_buffer

# introspect signatures, build schemas, call directly
# no wire format, no handshake, no session

You can even do the sampling thing without a protocol. If the tool is a Python function and the LLM client is a Python object in the same process, the function can call the client directly. client.sample(prompt). Done. Bidirectional. No bytes on the wire. No JSON-RPC. No initialize handshake. The server-to-client call is a normal function return from inside a normal function call, which is how every computer program has worked for fifty years.
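To make that concrete, here is a sketch of in-process "sampling", where LLMClient is a hypothetical stand-in for whatever inference client the agent already holds:

```python
# In-process "sampling": the tool asks the model to think mid-call by
# calling the client object directly. LLMClient is a hypothetical
# stand-in; a real client would run inference in sample().

class LLMClient:
    def sample(self, prompt: str) -> str:
        # Stubbed completion for illustration.
        return f"[summary] {prompt[:24]}"

def summarize_chunk(chunk: str, client: LLMClient) -> str:
    # The "server-to-client" request is just a method call:
    # bidirectional, no bytes on the wire, no handshake.
    return client.sample(f"Summarize: {chunk}")

result = summarize_chunk("a very long document body", LLMClient())
```

Same capability as sampling/createMessage, expressed as an argument and a method call.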

So what does MCP give you in this world, over a plain function call? Exactly one thing: process isolation. The tool runs in its own process, its own memory space, under its own permissions. If it crashes or misbehaves it does not take the agent down with it. That is real. It is also a property of any subprocess, not a property that requires a protocol. subprocess.Popen with a clean interface gives you the same isolation without any of the wire format. The protocol adds structure to the conversation between processes, but the isolation comes from the process boundary itself.
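For completeness, here is the isolation-without-protocol point in code. The one-JSON-object-per-line contract below is this sketch's own invention, not MCP; the process boundary does all the work:

```python
import json
import subprocess
import sys

# An ad hoc isolated tool: the child reads one JSON request per line
# and writes one JSON response per line. If it crashes, the parent
# survives. No handshake, no session, no spec.

CHILD = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    print(json.dumps({"result": req["a"] + req["b"]}), flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
proc.stdin.write(json.dumps({"a": 2, "b": 3}) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
```

The isolation comes from Popen, not from anything the two processes say to each other.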

what MCP actually is

After walking through all of that, here is what I think MCP actually is. It is a distribution and discovery standard for agent tools. It is npm for tool servers. The value is almost entirely in packaging and ecosystem, not in the protocol itself.

Someone writes an MCP server for Notion. They publish it. Every MCP-compatible client, Claude Desktop and Cursor and whatever else shows up next, plugs it in without reading the source. That is genuinely useful. Without MCP, everyone rewrites the Notion integration, and there is no common place to look.

The parallel with npm is so close it is almost the whole thing. npm did not invent JavaScript modules. CommonJS and ES Modules already defined how modules work. What npm added was a registry, a manifest format, and a naming convention that let anyone install anyone else's code with one command. The protocol underneath was thin. The value was adoption.

MCP is playing the same move. JSON-RPC over stdio was solved in 2005. Tool schemas for LLMs existed before MCP. What MCP adds is a registry, a manifest, a naming convention, and the promise that if you write a tool once, every compatible client speaks the same thing. That is valuable, but let's be honest about what kind of value it is. It is coordination, not invention.

when MCP still wins

I want to be fair here, so here are the cases where picking MCP actually makes sense:

  • You're publishing a tool for other people to use. Write it as an MCP server and every client can consume it without friction. This is the npm case and it is fine.
  • You're building a platform where tools are plugged in at runtime by end users. Skill-loading agent platforms, Cursor-style extensions, Claude Desktop's config-file plugins. The end user cannot write Python to wire in a tool, so they need a discovery-and-plug interface. MCP is that interface.
  • You need process isolation for security reasons. You do not trust a third party tool to share memory with your agent. MCP's stdio or SSE boundary gives you that, though so does subprocess.Popen with a simpler contract.
  • You're in the Anthropic ecosystem and want path-of-least-resistance. Claude Desktop reads MCP servers natively from its config file. If you want your tool to show up there, MCP is the on-ramp. That is ecosystem lock-in, not a technical argument, but lock-in is a real reason to pick things.
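For that last bullet, the on-ramp really is just a config entry. Roughly what an entry in Claude Desktop's claude_desktop_config.json looks like, with the server path hypothetical:

```json
{
  "mcpServers": {
    "hello": {
      "command": "python",
      "args": ["/path/to/hello_server.py"]
    }
  }
}
```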

If you are building your own agent for your own codebase, and the tools are your own functions, you do not need MCP. You need def.

the part I actually care about

The thing that annoys me about the MCP discourse is not MCP. It is the framing. Every launch post and thread talks about MCP like it is a technical innovation. It is not. It is a coordination mechanism wrapped around a protocol thin enough that you could re-spec it in an afternoon.

That is not a criticism of the project. Coordination mechanisms are valuable. npm made JavaScript shippable. USB made peripherals sane. HTTP made hypertext universal. None of them were technically novel when they appeared, and all of them became load-bearing because enough people agreed on the same thing. That is how standards work. Standards are a social phenomenon that happens to be implemented in bytes.

MCP might become load-bearing in the same way. It might also not. What I want us to stop doing is pretending the protocol is doing work that the protocol is not doing. The value is in the ecosystem. The ecosystem only exists if everyone agrees to use the standard. And nobody agrees to use a standard that is advertised dishonestly. So call MCP what it is: a packaging format for agent tools with an attached wire protocol. Pick it up when you need to publish or consume, skip it when you don't, and let the technical hype die down so the coordination win can show up on its own merits.

You're not missing anything. There is no deep technical substance in MCP. There is a thin standard that becomes valuable only at ecosystem scale, and the honest version of the pitch is "use this so your tool works in Claude Desktop and Cursor without writing glue." That is a real value proposition. Ship it with that framing and I'll stop writing posts like this one.

~ Ashish Kumar Verma 🫡
