The Reports of MCP's Death Are Greatly Exaggerated
Perplexity CTO Denis Yarats says they dropped MCP for "traditional" APIs and CLIs, and many claim to be following suit. Meanwhile, the protocol surpassed 97M monthly SDK downloads and 17,000 registered servers, with wide adoption from OpenAI, Google, Microsoft, and AWS. One of these narratives is wrong.
If you've been following the MCP discourse this month, you'd think MCP was a legacy technology on life support. Eric Holmes' post "MCP is dead. Long live the CLI" hit the top of Hacker News. A former backend lead at Manus published a detailed argument for replacing tool catalogs entirely. The bandwagoning here has led to naive claims that MCP is "a dumb abstraction that AI doesn't need." The zeitgeist has coalesced around a tidy narrative: MCP is dead, just use CLI tools and direct API calls.
In my view, this is a very wrong take. The people making this argument are conflating their specific use cases with the entire problem space. But before I make my case, let's review the claims.
The critics aren't wrong about everything
Like any argument, the criticisms of the Model Context Protocol (MCP for short) have some merit and are technically grounded. I'm not here to dismiss them outright.
Yarats cited two specific production pain points: context window overhead (tool schemas consume tokens before meaningful work begins) and authentication friction (each MCP server manages its own auth flow, creating headaches at scale). These are legitimate engineering complaints from someone running multi-tool agents in production at scale.
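To make the context overhead concrete, here's a back-of-the-envelope sketch in Python. The tool schema is hypothetical, and the four-characters-per-token ratio is a rough heuristic, not a real tokenizer:

```python
import json

# A hypothetical tool in MCP's tools/list shape (names are illustrative).
tool = {
    "name": "search_issues",
    "description": "Search the issue tracker by free-text query.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Free-text search query"},
            "limit": {"type": "integer", "description": "Max results to return"},
        },
        "required": ["query"],
    },
}

def rough_token_estimate(obj, chars_per_token=4):
    """Crude heuristic: roughly four characters per token for JSON-ish text."""
    return len(json.dumps(obj)) // chars_per_token

# A catalog of fifty such tools eats thousands of tokens before the
# agent has done any meaningful work.
per_tool = rough_token_estimate(tool)
catalog_overhead = per_tool * 50
```

The exact numbers don't matter; the shape of the problem does — the cost is paid up front, on every request, whether or not a tool is ever called.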
Holmes' argument is more philosophical. Unix made a design decision 50 years ago: everything is a text stream. Small tools do one thing well, composed via pipes. LLMs made an almost identical decision 50 years later: everything is tokens. An LLM is, in a very meaningful sense, a terminal operator that's faster than any human and has already ingested vast amounts of shell commands in its training data. Holmes argues that MCP adds process management overhead without any benefit when CLI tools today provide composability, debuggability, and auth that "just works" (n.b. that last one especially is a stretch, if you ask me).
The Manus/Pinix author takes this even further with battle-tested production evidence. His single-run architecture replaces 15 typed tool calls with Unix pipe composition: cat log.txt | grep ERROR | wc -l in one tool call instead of three separate function invocations with round-trips through the neural network. His three-technique heuristic design, using progressive --help discovery, error messages as navigation, and a consistent output format, is reasonably clever engineering. I recommend reading his full post.
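The difference is easy to sketch in Python (the log contents are made up; the point is where the intermediate data lives):

```python
import subprocess

log_text = "INFO start\nERROR disk full\nINFO retry\nERROR timeout\n"

# One tool call: the whole pipeline runs in the shell, and only the
# final count round-trips through the model.
count = subprocess.run(
    "grep ERROR | wc -l",
    input=log_text, shell=True, capture_output=True, text=True,
).stdout.strip()

# Versus three separate "tool calls", each intermediate result passing
# back through the model before the next step can begin:
step1 = log_text                                          # cat log.txt
step2 = [l for l in step1.splitlines() if "ERROR" in l]   # grep ERROR
step3 = len(step2)                                        # wc -l
```

Both paths produce the same answer; the pipe version just never ships the intermediate lines through the context window.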
These are serious practitioners building real systems. Their observations about where MCP adds overhead are correct. But their conclusion that MCP should be abandoned is a category error.
They're solving different problems
Every critic cited here is evaluating MCP against the same use case: a single developer (or a single agent) operating in a controlled environment, running automated pipelines where reliability, debuggability, and throughput matter more than anything else. In that context, they're largely right, and reaching for MCP could cause unnecessary overhead. A curl command is easier to debug than a two-process MCP client/server system over stdio. Unix pipes are more composable than sequential tool calls. Direct API calls are faster when you already know which tools you need.
But that's not what MCP is for.
MCP solves for dynamic tool discovery, standardized capability negotiation, and multi-vendor agent ecosystems; needs that no curl command could satisfy, even if you wanted it to. The interoperability layer isn't optional for these use cases; it's the whole point. The Manus author's own "Boundaries and limitations" section concedes this implicitly: he acknowledges that typed APIs win for strongly-typed interactions and high-security requirements. His architecture is suitable for a single agent in a sandbox. It doesn't address what happens when you need a hundred different agents, from different vendors, to discover and negotiate capabilities across enterprise boundaries without bespoke integration code for every combination.
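Concretely, this is the kind of exchange MCP standardizes. The JSON-RPC shapes below follow MCP's tools/list method; the get_weather tool itself is illustrative:

```python
# What a client sends to discover tools at runtime (per the MCP spec):
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server's response advertises every tool with a machine-readable
# schema (the tool here is made up for illustration):
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Fetch current weather for a city.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# Any MCP client, from any vendor, can parse this without bespoke glue.
tool_names = [t["name"] for t in response["result"]["tools"]]
```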
We've been here before: the REST cargo cult
If the "just use APIs" argument sounds compelling, it's worth remembering that we've already run this experiment. It didn't go well, and there aren't enough Swagger specs and OpenAPI documentation startups you can show me to convince me otherwise.
In 2000, Roy Fielding published his doctoral dissertation describing REST as the architectural style of the web itself. Not as an API design pattern (which it's become ubiquitous as), but as the architecture of HTTP. The defining constraint was HATEOAS: responses must be self-describing, clients should need nothing beyond an entry URL and standard media types, and all state transitions come from hypermedia links within the response itself. Frankly, the original REST vision would have been fantastic for LLMs, but unfortunately this isn’t the world we built for ourselves.
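For contrast, here's roughly what a hypermedia-driven response looks like — a hand-written illustration, not any particular framework's output:

```python
# A HATEOAS-style response: the legal state transitions travel with
# the data itself, as hypermedia links.
order = {
    "id": 42,
    "status": "pending",
    "_links": {
        "self":   {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel", "method": "POST"},
        "pay":    {"href": "/orders/42/payment", "method": "POST"},
    },
}

# A client (human, machine, or LLM) needs no out-of-band docs: the
# next available actions are enumerated in the response.
next_actions = sorted(set(order["_links"]) - {"self"})
```

A client built this way needs only the entry URL; everything else is discovered at runtime, which is exactly the property agents want.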
Instead, the industry ran with the label "REST," dropped the defining constraint, and built ad-hoc JSON-over-HTTP RPC systems that they called "REST APIs." By 2008, Fielding himself was publicly frustrated, writing that if your API isn't driven by hypertext, it cannot be RESTful. Nobody cared. The cargo cult was already entrenched, and a million flowers have bloomed on this topic.
As the htmx team pointed out, what we call "REST" today is often indistinguishable from RPC with prettier URLs. Every "REST API" has a bespoke auth pattern, error format, pagination scheme, and its own way of describing capabilities, or more commonly, no guaranteed machine-readable capability description at all. You read the docs and hardcode it. There is no discovery. There is no negotiation. The only real standard here is HTTP, and the industry decided it made sense to conflate HTTP semantics with API semantics, without any further enforcement or standardization aside from recommendations we as a society hope you followed.
This is precisely what "just use APIs instead of MCP" proposes we return to: the same non-standard patchwork that the REST cargo cult produced. Every API is a snowflake. OpenAPI as a standard helps, but it's documentation-for-humans that happens to be machine-readable, not a runtime negotiation protocol.

MCP is doing what REST attempted (and failed) to do: creating a complete standard interface for remote interactions, intended for machines, by defining a standard transport, standard capability discovery, and standard schema negotiation. It's also worth noting that MCP is more convention than raw protocol. MCP largely builds on JSON-RPC, and as far as holy wars go, I'll take RPC over REST semantics any day. (Also yes, I know what WSDL is; I'm not getting into that here. This essay is long enough, and I think it's safe to assume you weren't planning on switching to SOAP.)
What MCP is specifically contributing here is the standardized way to describe tools, discover capabilities, and negotiate schemas on top of that foundation.
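As a sketch, here's the shape of MCP's initialize handshake, where that negotiation happens. The message shapes follow the MCP spec; client and server names, versions, and capability values are placeholders:

```python
# MCP's initialize handshake: client and server exchange protocol
# versions and capabilities before any tool is ever called.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"sampling": {}},
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

initialize_response = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {"listChanged": True}},
        "serverInfo": {"name": "example-server", "version": "1.0.0"},
    },
}

# The client now knows, mechanically, that this server exposes tools
# and will send a notification when the tool list changes.
server_has_tools = "tools" in initialize_response["result"]["capabilities"]
```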
Convention is what REST failed to achieve (and that failure arguably enabled its rampant, naive adoption). Convention is what "just use APIs" can never provide, not without reinventing what MCP fundamentally solved. You could do that today, but it guarantees you lose out on another important tenet: interoperability.
A brief aside on Google's A2A
While we're on the subject of standards fragmentation, Google's Agent-to-Agent protocol deserves a mention. A2A is positioned as handling agent-to-agent communication, while MCP handles agent-to-tool communication. In theory, complementary layers. (What if a tool was an agent, though 🤔)
In practice, both protocols are built on JSON-RPC. A2A's core concepts (agent discovery, capability negotiation) could be standardized as MCP extensions rather than an entirely new protocol stack. Instead, we get two standards where one would do, requiring developers to evaluate and potentially implement both. From the company that brought you Angular-then-React-competitor, Dart-then-TypeScript-competitor, and GCP's perpetual third-place reinventions. It's the "Not Invented Here" instinct they're famous for, dressed up as innovation, creating the kind of fragmentation that motivated having a standard in the first place.

What the adoption numbers actually say
Actual adoption numbers tell a story that the "MCP is dead" narrative simply doesn't support. The project recently reported that the MCP SDK hit over 97 million monthly downloads. Over 17,000 registered servers are available today. 143,000 executable AI components indexed across major registries. Every major AI platform has adopted it. The Linux Foundation is stewarding it as an open standard.
If MCP were dying, the ecosystem would be contracting (or at the very least becoming stagnant), not accelerating.
But the most compelling evidence isn't the adoption numbers; it's how actual infrastructure teams are building on top of MCP rather than replacing it.
Cloudflare published a detailed post on their Code Mode architecture. Their thesis is that the problem isn't MCP itself, it's the tool calling pattern. LLMs have seen millions of TypeScript examples in training data but almost no tool-calling examples. Their analogy is memorable: asking an LLM to use tool-calling is like putting Shakespeare through a one-month Mandarin course and then asking him to write a play in it.
Their solution? Convert MCP server schemas into TypeScript APIs and let the agent write code against them, executed in V8 isolates. One round-trip instead of three. Intermediate values stay in the code, never passing back through the neural network. I honestly think this is pretty clever and absolutely has merit.
Pydantic independently arrived at a similar architecture called Monty (a sandboxed Python subset interpreter in Rust). Zapcode does this for TypeScript. Three teams, three languages, same conclusion: MCP for discovery and schema, generated code for invocation.
None of these teams are deprecating MCP. They replaced the tool-calling mechanism. That's a fundamentally different claim than "MCP is dead," and the fact that three independent teams converged on "keep MCP, change the calling convention" is a meaningful lesson.
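A toy version of that convergence, in Python rather than any of those teams' actual stacks: take a tool schema discovered over MCP and generate a native function the agent can call from code. The add tool and the invoke stand-in are hypothetical:

```python
# "Keep MCP, change the calling convention": turn a discovered MCP
# tool schema into a plain function, so the model writes code against
# it instead of emitting tool-call tokens.
def generate_wrapper(tool_schema, invoke):
    """Build a Python function from an MCP tool schema.

    `invoke` stands in for the real MCP invocation round-trip.
    """
    required = tool_schema["inputSchema"].get("required", [])

    def wrapper(**kwargs):
        missing = [k for k in required if k not in kwargs]
        if missing:
            raise TypeError(f"missing required arguments: {missing}")
        return invoke(tool_schema["name"], kwargs)

    wrapper.__name__ = tool_schema["name"]
    return wrapper

schema = {
    "name": "add",
    "inputSchema": {
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
    },
}

# A fake transport for illustration; a real one would speak JSON-RPC
# to the MCP server.
fake_invoke = lambda name, args: args["a"] + args["b"]
add = generate_wrapper(schema, fake_invoke)
```

Intermediate values then live in ordinary variables in the sandbox, never round-tripping through the model, which is the whole win Cloudflare describes.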
In fact, this feels more akin to the benefits of something like Protocol Buffers, which the astute reader may know I have some, uh, skin in the game about (n.b. I used to lead the Protobuf team during my tenure at Google). Developing an ecosystem of codegen around MCP sounds compelling to me. I think this is in large part what Stubby/gRPC got right about cross-platform RPC.
Even Perplexity's move proves the point
Here's a detail everyone missed in the Yarats announcement: Perplexity still ships an MCP server for external developers. It's on their docs site right now, with one-click install for Cursor, VS Code, and Claude Desktop.
Kill it, cowards (you won't).
What Yarats moved to direct APIs was internal agent infrastructure, because for their specific high-throughput, latency-sensitive production pipeline, that choice makes sense. The company that built MCP integrations, shipped them to developers, and promoted them to the community still maintains those integrations for the use cases where MCP excels.
This isn't "MCP is dead." This is "we picked the right tool for our specific context." That's just good engineering. The fact that it got reported as an obituary tells you more about the tech media cycle than anything else.
The case for CLI calls
CLI and direct API calls can be a better choice, for example:
- For personal dev tooling where you're the only user.
- For automated CI/CD pipelines where reliability matters more than discovery.
- For high-throughput production systems where every token counts. (Even though every Claude user just received a 1 million token budget)
- For any context where the tool set is static and well-known.
The Manus post is probably the best example of what this looks like when done well. His two-layer architecture (Unix execution layer for pipe semantics, LLM presentation layer for cognitive constraints) is well-designed for single-agent sandbox environments. His progressive --help discovery technique is more context-efficient than stuffing tool schemas into the system prompt. These are real contributions to how we think about agent tooling.
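The --help technique itself needs no protocol at all. A minimal sketch, using grep as a stand-in for whatever tool the agent encounters:

```python
import subprocess

# Progressive discovery: instead of preloading every tool schema into
# the system prompt, the agent asks a tool to describe itself only
# when it actually needs it.
def discover(command, max_lines=20):
    """Run `<command> --help` and return the start of its usage text."""
    out = subprocess.run([command, "--help"], capture_output=True, text=True)
    text = out.stdout or out.stderr  # some tools print help to stderr
    return "\n".join(text.splitlines()[:max_lines])

help_text = discover("grep")
```

The context cost is paid lazily and only for tools that get used, which is exactly why it beats a static schema dump for large tool sets.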
This isn't MCP's failure. It's an appropriate tool selection (pun intended). Unix philosophy and protocol-based interoperability aren't enemies; they serve different layers of the stack.
What's actually happening with MCP
The MCP project is clearly thriving and listening to its users very intently. Many of its most popular criticisms are literally being addressed by the roadmap. Streamable HTTP fixes the stdio transport problem that makes debugging painful. The .well-known metadata proposal means tools won't need to load entire manifests, directly addressing the context window overhead concern. Enterprise SSO and audit trails are in progress.
A protocol that's actively addressing its known weaknesses, backed by accelerating ecosystem adoption and Linux Foundation governance, doesn't look like something that's dying. It looks to me like something that's maturing and evolving.
Long live MCP
Every few months, the Hacker News crowd picks a technology to kill. Docker was dead. REST was dead. SQL died multiple times. Kubernetes was dead. They're all still here. MCP just joined that list, and the pattern playing out is identical: early adopter disillusionment masquerading as obituary.
The question was never "MCP or CLI." It's "where does each belong in your project?" The smartest teams are already using both: MCP for discovery and interoperability, CLI and direct APIs for production pipelines and personal tooling. The Cloudflare/Pydantic/Zapcode convergence shows what this looks like in practice: MCP as the schema and discovery layer, with the invocation pattern evolving independently.
MCP's actual value was never its transport layer. It's the conventions. A standardized way to describe tools, discover capabilities, and negotiate schemas. Something REST was never able to deliver on.
If we're nominating engineering concepts to kill, I'd vote for REST APIs once and for all.