The Model Context Protocol: AI’s Super Glue

June 26th, 2025

AI has become so ubiquitous that you can’t watch TV or read the news without seeing it mentioned in some form or another. AI concepts such as chatbots, ChatGPT, large language models (LLMs), and even Retrieval-Augmented Generation (RAG) have already entered the enterprise lexicon and are becoming familiar. More recently, the Model Context Protocol (MCP) has arrived in the AI universe and is generating a lot of buzz. At Solution Street we have recently started working with MCP, and we wanted to share some of our initial insights. In this article we will dig into what MCP is and, more importantly, how it can be applied to solve some of your business problems, along with some considerations to weigh when adopting MCP.

So, what exactly is the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. Think of MCP as a USB-C port for AI applications: it provides a standardized way to connect AI models to different data sources and tools, acting like digital super glue that enables smooth interoperability.

Before MCP, integrating every new data source into an AI solution required its own custom implementation, making truly connected systems difficult to scale. MCP addresses this challenge by providing a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. Some of the benefits of MCP include:

  • Standardization: MCP is a standard protocol for connecting AI to any system, eliminating the “N×M problem” of building a separate integration for every combination of AI model and external system.
  • Flexibility: MCP allows switching between different AI providers without rebuilding integrations.
  • Ecosystem Growth: There were already more than 5,000 active MCP servers as of May 2025, with rapid community adoption.
  • Future-Proofing: Major players in the AI space including Anthropic, OpenAI, Google DeepMind, and Microsoft have officially adopted MCP.

Ok, but how does MCP work?

High-Level Overview

The MCP architecture is designed to enable intelligent agent behavior by connecting AI large language models (LLMs) to a dynamic set of external tools through an orchestration layer. It consists of the following main components:

  • MCP Host: An AI-powered app, for example, Claude Desktop, an IDE (e.g., Visual Studio Code), or another tool acting as an agent.
  • MCP Client: The client acts as the orchestrator between the user, the tool provider (MCP Server), and the underlying language model (LLM). It routes the flow of data and ensures each component has the information it needs.
  • LLM (Large Language Model): This is the reasoning engine that interprets the user’s query, selects the right tools, and synthesizes a meaningful response. For example, Claude, ChatGPT, Gemini, and Llama.
  • MCP Server: Provides access to a library of tools or APIs. It handles requests for available tools and executes the ones selected by the LLM.

The process usually begins when the user submits a query via the MCP Host. The MCP Client coordinates this input with the LLM, which may invoke one or more external tools through the MCP Server. The final response, enriched with the results from the invoked tools if applicable, is then returned to the user.
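To make the server side concrete, here is a minimal sketch of an MCP server built with the FastMCP helper from the official Python SDK. The `get_weather` tool and its canned response are hypothetical; a real server would call a live weather API.

```python
# weather_server.py - a minimal MCP server sketch
# (assumes the official Python SDK: pip install "mcp[cli]").
from mcp.server.fastmcp import FastMCP

# The server advertises itself to MCP clients under this name.
mcp = FastMCP("weather")

@mcp.tool()
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    # Hypothetical canned response; a real implementation would
    # call a live weather API here.
    return f"It's 72°F and sunny in {city} today."

if __name__ == "__main__":
    # Serve over stdio so a local MCP host (e.g., Claude Desktop)
    # can launch and talk to this process.
    mcp.run(transport="stdio")
```

From the function signature and docstring, the SDK derives the tool’s name, parameters, and description, which is exactly the metadata an MCP client retrieves when it asks the server what tools are available.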

Step-by-Step Example: Asking About the Current Weather

Suppose a user asks an LLM, such as a locally running Llama LLM, “What’s the weather like in Paris today?” Since the underlying model is not aware of current events or real-time data, the best answer you will get is general and not very helpful, such as: “Without checking current data, I can tell you that Paris in June typically has mild to warm weather.”

However, now with the help of MCP and MCP Servers that can provide up-to-the-minute weather information, we can ask the same question and get this answer: “It’s 72°F and sunny in Paris today.”

But how does it actually work under the covers? 

Here is a step-by-step walkthrough of the process (a code sketch of the same loop follows the list):

  1. User Query: The user sends their question to the MCP Host.
  2. Initialize Connection: The MCP Host uses the MCP Client to initialize a connection with the MCP Server.
  3. Request Tool List: The client asks the server what tools are available.
  4. Return Tools: The server sends back a list of available tools and metadata about using them, e.g., [“WeatherAPI”, “Calendar”, “NewsFetcher”].
  5. Query + Tools to LLM: The client now sends the user’s query along with the available tool list to the LLM.
  6. Tool Call Decision: The LLM determines that WeatherAPI should be used and generates a tool call request.
  7. Request Tool Invocation: The client forwards this tool request to the MCP Server.
  8. Return Tool Results: The server runs the WeatherAPI tool and returns the result: “72°F and sunny.”
  9. Tool Results to LLM: The client sends these results back to the LLM.
  10. Detailed Response: The LLM composes a full answer, such as “It’s 72°F and sunny in Paris today.”
  11. Final Response to User: The client sends the final, detailed answer to the user.
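Strung together in code, the eleven steps above collapse into a short orchestration loop. The sketch below is purely illustrative: `list_tools`, `call_tool`, and the `llm` function are stand-ins for a real MCP client session (which would issue JSON-RPC `tools/list` and `tools/call` requests) and a real language model.

```python
# Illustrative orchestration loop; the MCP server and LLM are stubbed
# out so the sketch is runnable on its own.

def list_tools():
    # Steps 3-4: in a real client this is the JSON-RPC "tools/list" call.
    return [{"name": "WeatherAPI", "description": "Current weather for a city"}]

def call_tool(name, arguments):
    # Steps 7-8: in a real client this is the JSON-RPC "tools/call" call.
    assert name == "WeatherAPI"
    return "72°F and sunny"

def llm(query, tools=None, tool_result=None):
    # Steps 5-6 and 9-10: a real LLM decides which tool to call,
    # then composes the final answer from the tool result.
    if tool_result is None:
        return {"tool": "WeatherAPI", "arguments": {"city": "Paris"}}
    return f"It's {tool_result} in Paris today."

query = "What's the weather like in Paris today?"            # Step 1
tools = list_tools()                                         # Steps 2-4
decision = llm(query, tools=tools)                           # Steps 5-6
result = call_tool(decision["tool"], decision["arguments"])  # Steps 7-8
answer = llm(query, tool_result=result)                      # Steps 9-10
print(answer)                                                # Step 11
```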

By extending the LLM’s capabilities through the tools exposed by MCP servers, we are able to provide much richer, real-time, grounded answers to the user’s query. And this is not all that MCP can enable: MCP servers can also provide tools that interact with a user’s local filesystem, query databases, perform web queries, and approve work orders, to name just a few examples. Think of MCP as a way to give an LLM a toolbox at its disposal. You tell the LLM WHAT you want accomplished, and the LLM intelligently selects and sequences the appropriate tools, working out HOW to complete the requested task.

Note: while the example above focuses on MCP’s tools capability, the protocol can also expose resources (for example, data and content) as well as prompts (reusable prompt templates and workflows) as primitives to the LLM.
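As a quick sketch of those other two primitives, the same Python SDK lets a server register resources and prompts alongside tools. The `docs://` URI scheme and the handbook content below are made up for illustration.

```python
# docs_server.py - sketch of exposing resources and prompts via MCP
# (assumes the official Python SDK: pip install "mcp[cli]").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs")

# A resource: read-only data the host can pull into the model's context.
# The "docs://" URI scheme and handbook text are hypothetical.
@mcp.resource("docs://handbook/{section}")
def handbook_section(section: str) -> str:
    """Return a section of the employee handbook."""
    return f"Placeholder contents of handbook section '{section}'."

# A prompt: a reusable template the user can invoke from the MCP host.
@mcp.prompt()
def summarize(text: str) -> str:
    """Build a summarization prompt."""
    return f"Summarize the following in three bullet points:\n\n{text}"

if __name__ == "__main__":
    mcp.run(transport="stdio")
```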

This link provides more information.

Some use cases for MCP

The state of the AI ecosystem before MCP felt an awful lot like the wild-west world we lived in before open APIs and standard protocols. In those not-so-distant days, companies would come out with some really great tools, but no one had an easy way to integrate with them, which limited their usefulness and adoption. Now, with MCP as a standard way for LLMs to utilize external tools and resources, a whole new set of use cases opens up that was difficult to imagine just a few months ago. Here are several use cases that we at Solution Street see as potentially good fits for MCP in the enterprise.

Enterprise Productivity and Knowledge Management

Many companies that we have worked with have a library of documentation and reference material composed over years of business. We can imagine using MCP to enable an intelligent AI assistant that lets employees ask a wide range of company-specific questions and receive relevant answers in plain language. By integrating the AI’s underlying LLM with the customer’s systems (file storage, databases, collaboration servers, etc.) via MCP, the LLM can ingest relevant content and not only use it to generate output but also link out to the original documents themselves. Company teams and customers can ask support questions and receive instant, contextual answers with links to relevant documents, without having to hunt for the right document themselves.

Customer Support and Service Automation

In our over 23 years of business, we have seen many customers use different support and ticket management systems within the same company, often across multiple departments; for example, the customer support department using one CRM system while the development team uses another ticketing system. In many cases those systems worked independently and required duplicate data entry and maintenance. Now that these systems are beginning to adopt the MCP protocol, we can build intelligent integrations that allow the LLM to access and use both the CRM data and the development ticketing data (e.g., Jira, GitHub, etc.). AI chatbots can read ticket histories, update statuses, and escalate issues automatically while maintaining full context across interactions. Customer support could get a full summary of an issue’s status, including development status, through a simple query. This integration would also give developers more insight into customer interactions and into the features they’re working on, without having to jump between systems.

Healthcare and Compliance

We also see practical applications for MCP in the healthcare industry. Imagine a secure AI assistant operating within a hospital’s virtual private cloud (VPC), connecting patient management systems, scheduling platforms, and compliance databases through standardized MCP interfaces. Doctors and medical staff could then use an AI agent integrated via MCP to query and update these systems. Many healthcare organizations run separate applications for these various functions, and this type of MCP-enabled AI agent could help streamline patient care coordination while maintaining strict security and compliance requirements.

SaaS Product and ERP AI Integration

Over the many years we have been working on SaaS (Software as a Service) and ERP (Enterprise Resource Planning) software development projects, one feature that consistently comes up is the ability for other systems to integrate with the product’s key functionality. Today, this is primarily done with REST APIs that can be consumed by third parties. Naturally, as AI adoption continues, the demand to integrate these products into the AI ecosystem grows as well. Enter MCP: by wrapping these existing product APIs in an MCP server that exposes core functionality (record creation, status updates, reporting, and so on), we can make the application’s functionality accessible to an AI language model. This allows users to interact naturally with the product via AI by specifying what they want done, while the LLM figures out how to execute those actions using the available tools.
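As a sketch of that wrapping pattern, each MCP tool can be a thin shim over an existing REST endpoint. The ERP API below (`https://erp.example.com`) and its routes are hypothetical, and we assume the official Python SDK plus the httpx HTTP client.

```python
# erp_server.py - sketch of wrapping an existing REST API as MCP tools.
# Assumes: pip install "mcp[cli]" httpx. The ERP endpoints are hypothetical.
import httpx
from mcp.server.fastmcp import FastMCP

API_BASE = "https://erp.example.com/api/v1"  # hypothetical ERP REST API

mcp = FastMCP("erp")

@mcp.tool()
def create_order(customer: str, amount: float) -> str:
    """Create an order record in the ERP system."""
    resp = httpx.post(f"{API_BASE}/orders",
                      json={"customer": customer, "amount": amount})
    resp.raise_for_status()
    return f"Created order {resp.json()['id']}"

@mcp.tool()
def order_status(order_id: str) -> str:
    """Look up the status of an existing order."""
    resp = httpx.get(f"{API_BASE}/orders/{order_id}")
    resp.raise_for_status()
    return resp.json()["status"]

if __name__ == "__main__":
    mcp.run(transport="stdio")
```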

Software Development and Creative Workflows

MCP has already gained quick adoption in the software development space, with many MCP hosts and servers available today. For example, in an MCP-enabled IDE like Cursor, you could ask, “Find the first heading on example.com and copy its text,” and the AI, using a web-scraping MCP server such as FireCrawl, will fetch the page and return the heading text. We have also seen game developers automate repetitive tasks and focus more on creativity once they could offload work to an MCP-enabled AI agent. For example, arranging a scene with dozens of objects might be tedious by hand, but an AI agent could do it systematically.

Is MCP really a big deal?

MCP surged into the AI community’s consciousness early this year, and there are a few big reasons for the buzz. AI agents and agentic workflows became major buzzwords in 2023–2024, but the hard part was integrating those agents with business systems and data. With the introduction of MCP, we now have the ability to position our customers ahead of the curve as the standard reaches mainstream adoption.

MCP didn’t launch as just a static specification. Anthropic “dogfooded” it extensively and released it with a comprehensive initial toolset: support in Claude Desktop and numerous reference server implementations. Its open nature fostered a community. Tools like Cursor and Windsurf integrated MCP, and companies began to provide pre-built servers for hundreds of integrations. MCP has become a thriving open standard with thousands of integrations and growing, which matters because LLMs are most useful when connected to the data you already have and the software you already use.

Despite being relatively new, we feel that MCP is more than just hype; it is an important step forward in standardizing the way we integrate with AI language models. However, it is still early days, and MCP faces challenges, including potential security risks and the need for more widespread adoption.

Where should you start?

Solution Street recommends that your first step toward adopting MCP be to evaluate where it can provide the most value in your specific context. Start with low-risk, high-value use cases to demonstrate ROI before expanding to mission-critical systems. For simpler applications, MCP may even be overkill: if an AI model only needs to access one or two straightforward APIs, direct API calls might be a more efficient solution than implementing MCP. The learning curve associated with MCP’s messaging system and server setup means that its benefits need to be weighed against its complexity.

While MCP represents a significant step forward, it’s important to acknowledge that the technology is still evolving and that security deserves careful consideration. Adopted without security in mind, MCP may introduce new risks, including malicious tools disguised in unofficial repositories, consent-fatigue attacks (overexposing users to consent requests until they stop reading them), breaches due to insufficient sandboxing, plaintext credential exposure, and weak authentication mechanisms. At a minimum, an MCP host should obtain explicit user consent (i.e., a human in the loop) before invoking any tool, and users should understand what each tool does before authorizing its use. Additional controls, such as authentication, authorization, auditing, logging, and specific data controls, may also need to be considered.
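As one small illustration of the human-in-the-loop idea, a client can gate every tool invocation behind an explicit confirmation and log the outcome. This is a simplified sketch: `execute_tool` is a stand-in for a real MCP tools/call request, and real MCP hosts implement their own consent UI.

```python
# Sketch of a human-in-the-loop consent gate around tool calls.
# execute_tool() is a stand-in for a real MCP "tools/call" request.

def execute_tool(name: str, arguments: dict) -> str:
    return f"(result of {name} with {arguments})"

def call_tool_with_consent(name: str, arguments: dict) -> str:
    """Ask the user before any tool runs, and log the decision."""
    print(f"The model wants to run tool '{name}' with arguments {arguments}.")
    answer = input("Allow this call? [y/N] ").strip().lower()
    if answer != "y":
        # Denials are surfaced to the model instead of failing silently.
        return f"User declined to run '{name}'."
    result = execute_tool(name, arguments)
    print(f"AUDIT: ran {name}({arguments}) -> {result!r}")  # simple audit log
    return result

if __name__ == "__main__":
    print(call_tool_with_consent("WeatherAPI", {"city": "Paris"}))
```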

MCP is still new, and the specification continues to evolve, which can mean breaking changes that require updates to servers and clients. While the core concept appears stable, developers should anticipate and prepare for version upgrades and evolving best practices.

For users and organizations considering MCP adoption, Solution Street can perform an MCP readiness assessment and help your organization with a roadmap to AI and MCP adoption.

Conclusion

We feel that Model Context Protocol matters because it helps to bring the dream of a universal AI assistant closer to a practical reality. It’s the missing piece that makes tools context-aware and interoperable with AI, with immediate productivity wins (less manual glue work) and strategic advantages (future-proof, flexible integrations).

MCP represents more than just another integration standard: it’s the foundation for the next generation of intelligent, connected applications. While challenges remain, the rapid adoption by major tech companies and the growing ecosystem of tools demonstrate MCP’s potential to transform how we build and interact with AI systems.

Ready to explore how MCP can revolutionize your AI strategy? Solution Street specializes in secure, scalable MCP implementations that deliver real business value. Contact us to discuss your specific use case and learn how we can help you stay ahead of the curve.