Linell Bonnette

Betting on MCP

I think that MCP is a step towards the future of AI, and I'm betting it sticks.

It’s hard to keep up with the latest and greatest in the world of AI, even if you’re working with it every day. If you haven’t spent much time looking into Model Context Protocol yet, you may be wondering what it is and why I think it’s a big deal. In short:

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

Practically, it’s an easy way to expose your tools to LLMs. Since the introduction of function calling, the process has been to build a tool and then pass its definition as an argument to the API call that fetches the model’s response. This approach has worked well for me in several applications, but the main drawback for my usage has been that it’s just not very flexible. Sure, the tools themselves can be used flexibly, but in practice my applications have ended up with either very specific tools and agents or a lot of complexity in orchestrating agents to interact with each other. It’s also been painful to build tools that need to be used in multiple applications or languages: the “find my user” tool in my Ruby on Rails application has to either be replicated for each application or be split out into a separate service.
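To make the function-calling workflow concrete, here’s a minimal sketch of that “find my user” tool in the pre-MCP style. The tool names, schema, and model name are illustrative assumptions, not code from a real app; the point is that the tool’s JSON definition has to ride along with every API request, in every application that wants it.

```python
# Hypothetical "find my user" tool in the classic function-calling style.
def find_user(email: str) -> dict:
    # Stand-in for a real database lookup.
    return {"email": email, "name": "Example User"}

# The tool is described as a JSON schema (OpenAI-style "tools" format)...
find_user_schema = {
    "type": "function",
    "function": {
        "name": "find_user",
        "description": "Look up a user by email address.",
        "parameters": {
            "type": "object",
            "properties": {
                "email": {"type": "string", "description": "The user's email."},
            },
            "required": ["email"],
        },
    },
}

# ...and that schema is shipped as an argument on every chat API call.
# Each application (and each language) needs its own copy of all of this.
request_body = {
    "model": "gpt-4o",  # illustrative model name
    "messages": [{"role": "user", "content": "Find the user ada@example.com"}],
    "tools": [find_user_schema],
}
```

Every app that wants `find_user` repeats this dance, which is exactly the duplication MCP removes.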

Enter MCP

MCP isn’t rocket science, but it’s a big deal. It standardizes how tools are exposed to LLMs, which makes them more flexible and reusable. Instead of reinventing the wheel for each application, we’re able to just add a couple of specific endpoints to our application, expose them to an MCP-friendly client, and 💥 things are working.
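Under the hood, MCP is JSON-RPC 2.0: a client asks the server what tools exist (`tools/list`) and then invokes them (`tools/call`). Here’s a toy, stdlib-only sketch of that server-side dispatch, with a single hypothetical `find_user` tool; a real server would use an MCP SDK and handle transport, initialization, and errors properly.

```python
import json

# Toy registry: the tool is defined once, server-side, and any
# MCP-compatible client can discover and call it.
TOOLS = {
    "find_user": {
        "description": "Look up a user by email address.",
        "inputSchema": {
            "type": "object",
            "properties": {"email": {"type": "string"}},
            "required": ["email"],
        },
        "handler": lambda args: {"email": args["email"], "found": True},
    }
}

def handle(request: dict) -> dict:
    """Dispatch the two core MCP tool methods over JSON-RPC 2.0."""
    method = request["method"]
    if method == "tools/list":
        result = {
            "tools": [
                {"name": n, "description": t["description"], "inputSchema": t["inputSchema"]}
                for n, t in TOOLS.items()
            ]
        }
    elif method == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        payload = tool["handler"](request["params"]["arguments"])
        result = {"content": [{"type": "text", "text": json.dumps(payload)}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}
```

The client never sees the implementation; it only needs to know how to speak the protocol to the server, which is the whole trick.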

The neat thing is that the client doesn’t need to know anything other than how to connect to the server, so anywhere you have Internet access, you can use any MCP-compatible client. It feels a little dystopian, but it’s essentially handing the keys to the kingdom over to the LLM… in a good way.

Why It’s the Future

Let’s take a step back and think about a fundamental change on which my bet is based:

Web search is (effectively) dead. You’ve heard of Stack Overflow’s problems, maybe of Chegg’s lawsuit, and I’m sure you’ve stopped searching after reading the Google AI results at least once or twice. I’m not saying that nobody will ever perform a traditional search again, but with LLMs getting smarter and more accurate every day, it’s hard to imagine that total search volume won’t decline drastically across the board.

If users are no longer searching, how is new information going to be discovered? How are you going to interact with ‘websites’? I use quotes because, if everything is handled via a natural language interface, then what is the point of a website?

I don’t think websites are going to literally disappear, but I do think that the way people interact with them is going to be drastically different than it is right now. There aren’t many reasons to visit a website and just read the content when an LLM assistant can read it for you and condense it into the information you actually want or need instead of a wall of text. Obviously some websites are more susceptible to this than others, but again, I think the trend is clear and will be hard to ignore in the next few years.

So, what does this have to do with MCP? Well, MCP is a way to expose your data to LLMs directly, without going through a search engine. It lets the LLM work with your data itself, so users can still reach your content whether they come through your website or through a natural language interface like Claude Desktop.

Even better, authorization is a first-class citizen with MCP, so you’re able to actually control what data is exposed to the LLM. You can expose data to the LLM that is not publicly available, and you can control the level of access that the LLM has to that data. Effectively, you’re able to let the LLM get creative with the tools you’ve provided to it while also ensuring that you aren’t accidentally exposing sensitive data.
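As a rough sketch of what that control can look like, here’s a scope check sitting in front of tool execution. The token values, scope names, and tool names are all hypothetical; the shape, where the server decides per-credential which tools and data the LLM can touch, is the point.

```python
# Hypothetical per-token scopes granted during authorization.
TOKEN_SCOPES = {
    "token-abc": {"users:read"},                  # a read-only integration
    "token-xyz": {"users:read", "users:write"},   # a trusted internal client
}

# Each tool declares the scope it requires (illustrative names).
REQUIRED_SCOPE = {
    "find_user": "users:read",
    "delete_user": "users:write",
}

def authorize_and_call(token: str, tool_name: str, run):
    """Run a tool only if the caller's token carries the required scope."""
    granted = TOKEN_SCOPES.get(token, set())
    needed = REQUIRED_SCOPE[tool_name]
    if needed not in granted:
        raise PermissionError(f"token lacks required scope {needed!r}")
    return run()
```

The LLM can get as creative as it likes with the tools it sees; anything outside the token’s scopes simply isn’t reachable.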

What’s Next?

I’m working on adding MCP support to more than one application, and I’m excited to see both how those applications’ usage changes as a result and how the MCP ecosystem evolves. I’m personally going to prioritize MCP support in everything relevant that I build going forward.

I’m betting that MCP will be a big deal, and I’m betting that it will be a big deal for a lot longer than the current hype cycle. I’m not sure if I’ll be right, but I’m betting on it.