Antonio Leiva · ai · 5 min read

MCPs Are More Alive Than Ever

There is a weird obsession lately with declaring MCP servers dead.

I don’t really buy it.

It’s true that MCPs were overused for a while. We were in the early phase of agentic tooling, everything felt new, and MCP quickly became the hammer everyone wanted to swing.

And of course, not everything was a nail.

In the last few weeks, social media has been full of people saying MCPs are done. Some even organized literal “funerals” for them.

I think that reading is shallow.

MCPs were never doomed. What was doomed was the first wave of lazy use cases built around them. My bet is much simpler: in the future, every company will have both an API and an MCP.

Why? Because once you zoom out from the developer bubble, the answer becomes pretty obvious.

What MCPs actually solved

For anyone arriving late to the party: MCP is a protocol that Anthropic introduced and that later became more broadly standardized. The sales pitch was that it was the USB-C of AI.

And honestly, that comparison was not bad.

Before MCP, if you wanted an agent to talk to an external service, you had to build your own tool layer by hand. That integration usually lived inside one specific agent, in one specific codebase, and was not really reusable anywhere else.

LLM APIs already gave us tools, of course. You could describe a tool, explain when to use it, wait for the model to request the call, run it in code, and feed the result back.

That works.

But it does not scale well when every integration is custom and deeply embedded in the application.
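That hand-rolled pattern can be sketched in a few lines of Python. Everything here is hypothetical (the tool schema, the `fake_model` stub standing in for a real LLM API); it only illustrates the shape of the loop: describe a tool, wait for the model to request a call, run it, and feed the result back.

```python
# Hypothetical tool definition, in the JSON Schema style most LLM APIs use.
GET_WEATHER = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(messages, tools):
    # Stub for the LLM: on a user message, pretend it decided to call the tool.
    if messages[-1]["role"] == "user":
        return {"tool_call": {"name": "get_weather",
                              "arguments": {"city": "Madrid"}}}
    # Once it sees a tool result, it answers in plain text.
    return {"text": f"The weather: {messages[-1]['content']}"}

def run(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(messages, [GET_WEATHER])
        if "tool_call" not in reply:
            return reply["text"]
        call = reply["tool_call"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})

print(run("What's the weather in Madrid?"))
```

The problem is that this loop, the tool registry, and the dispatch logic all live inside one agent's codebase, which is exactly the non-reusable integration described above.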

MCP solved a very real problem: it standardized the conversation between clients and tool providers. If a client speaks MCP and a server speaks MCP, they can work together.
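Concretely, MCP is JSON-RPC 2.0 over a transport such as stdio or HTTP: a client discovers tools with a `tools/list` request and invokes one with `tools/call`. The message shapes below follow the protocol, but the server dispatch and the tool itself are a made-up sketch, not a real implementation.

```python
import json

# A made-up tool, described with the metadata MCP expects:
# a name, a description, and a JSON Schema for its input.
TOOLS = [{
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def handle(request: dict) -> dict:
    """Tiny stand-in for an MCP server's JSON-RPC dispatch."""
    if request["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif request["method"] == "tools/call":
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text",
                               "text": f"Sunny in {args['city']}"}]}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "get_weather",
                          "arguments": {"city": "Madrid"}}})
print(json.dumps(call["result"], indent=2))
```

The point of the standard is that any MCP-speaking client can run this same discovery-then-call exchange against any MCP-speaking server, without either side knowing the other in advance.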

That is still valuable.

The real problem was context bloat

The reason MCPs started getting a bad reputation is not that the protocol was useless.

The problem was that they flooded the context window.

Every MCP exposes tools. Every tool needs a description, parameters, argument types, usage hints, and a bunch of other metadata. Once MCP servers started growing, the prompt overhead got ridiculous.
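The overhead is easy to estimate with back-of-envelope numbers. The figures below are invented, not measured, but they show how quickly per-tool metadata compounds:

```python
# Illustrative numbers only (not measured).
tokens_per_tool = 300    # description + JSON Schema + usage hints
tools_per_server = 40    # large servers expose dozens of tools
servers_installed = 3

overhead = tokens_per_tool * tools_per_server * servers_installed
print(overhead)  # → 36000 tokens spent before the user types a word
```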

The GitHub MCP became the canonical example of this. At one point, Copilot CLI shipped with it installed, and the context window could start out heavily occupied by tool definitions alone.

That was absurd.

So people started asking a fair question:

If the agent can already use gh, or discover how to use it with --help, why would I pay a huge context tax to wrap the same thing in MCP?

That criticism was valid.

CLIs and skills killed the bad MCPs

This is where a lot of people got confused.

CLIs did not kill MCPs. They killed a specific category of bad MCPs.

The same thing happened with skills.

Skills are powerful because they let you load context only when it is needed. That instantly made a whole class of lightweight MCP wrappers feel unnecessary:

  • MCPs that only proxied a few simple API calls
  • MCPs that were basically a thin layer over an existing CLI
  • MCPs that existed only because the agent did not yet know how to use the underlying tool

Those were always living on borrowed time.

Once agents got better and skills became common, that whole layer collapsed.

And good riddance.
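The skills advantage described above is easy to see in a sketch. With an MCP server, every tool definition is injected into the prompt upfront; with a skill, the agent pays only for a one-line trigger until it actually needs the full instructions. All names and data here are invented:

```python
def mcp_context(servers):
    # Every tool description goes into the prompt immediately.
    return [tool for server in servers for tool in server["tools"]]

def skill_context(skills):
    # Only a short trigger line per skill; the full body is
    # loaded from disk when the agent decides it needs it.
    return [skill["trigger"] for skill in skills]

servers = [{"tools": [f"tool_{i}: long schema" for i in range(40)]}]
skills = [{"trigger": "use the gh CLI for GitHub tasks",
           "body": "full instructions, loaded on demand"}]

print(len(mcp_context(servers)), len(skill_context(skills)))  # → 40 1
```

For an MCP that merely wraps an existing CLI, the comparison is 40 prompt-resident tool schemas against one sentence, which is why that category collapsed.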

The developer view is too narrow

The mistake is assuming that because developers can get away with CLI plus skill, everyone else should do the same.

That is developer brain talking.

Imagine someone who is not technical at all. They use ChatGPT or Claude on mobile. They want to connect accounting software, CRM, calendar, email, or internal business tools.

Now explain to them that first they need to:

  • install a desktop app
  • install a CLI
  • keep that CLI updated
  • find a skill
  • or worse, write the skill themselves

That workflow is dead on arrival.

But now imagine something else:

Inside ChatGPT or Claude there is a store of apps or connectors. You click once, grant access, and the assistant can use that service.

That is not science fiction.

That is already happening.

And those app-like connectors are, structurally speaking, MCPs.

MCPs make much more sense in consumer and enterprise products

This is why I think MCPs are not going anywhere.

As a programmer, maybe you won’t use them directly every day. Maybe in many cases you will still prefer a CLI plus skill because it is leaner, cheaper in context, and easier to control.

Fine.

But that is not the whole market.

For real products, especially those trying to reach a wider audience, MCPs solve a distribution and integration problem:

  • standard access to tools
  • easier installation inside AI clients
  • lower friction for non-technical users
  • a path for companies to expose their systems to assistants without bespoke integrations everywhere

That is a real wedge.

And now UI is entering the picture

There is another reason I think the “MCPs are dead” take is premature.

The protocol is evolving.

One of the most interesting additions is the ability for MCPs to serve UI that the client can render. That means the protocol is no longer just about invoking tools. It can also become a standard way to surface interfaces inside the assistant experience.

If the future really does involve interacting with software increasingly through chat, then some tasks will obviously need more than plain text.

They will need UI.

And MCP becomes a natural place to standardize that too.

That is not a dead protocol. That is a protocol still expanding its surface area.

My bet

So no, I don’t think MCPs are dead.

I think the noisy, shallow, context-hungry version of the MCP boom is dying, and that is healthy.

But the real role of MCPs is becoming clearer:

  • not as a wrapper for every tiny thing
  • not as a toy for developers to over-engineer
  • but as the standard integration layer serious products will use to connect with AI clients

That is why my bet is still the same:

companies will end up needing both an API and an MCP.

Not because MCP is magical.

Because if you want your product to be usable inside the assistants people actually use, sooner or later you will need a standard bridge into that ecosystem.
