What is the Model Context Protocol?
By CorpusIQ
The Model Context Protocol (MCP) is the standard that lets ChatGPT, Claude, and Perplexity pull live data from business tools and enterprise systems in a uniform way. Anthropic published the specification, OpenAI and Perplexity adopted it, and CorpusIQ runs a production MCP server exposing 22+ business tools (QuickBooks, Shopify, HubSpot, Gmail, Drive, GA4) to all three LLMs through one OAuth connection. This article explains what MCP is, what problem it solves, and how it compares to the alternatives.
The problem MCP was designed to solve
Before MCP, every LLM vendor shipped its own way to extend model capabilities. OpenAI had plugins. Anthropic had tool use. Each method required the tool author to rewrite integrations per model family. A tool author who wanted to support both ChatGPT and Claude had to build twice, maintain twice, and rebuild when either vendor changed its SDK. For a small connector shop, that was a full-time job. For a business owner who just wants to query QuickBooks from a chat, it was a non-starter.
MCP is the vendor-neutral protocol that replaces per-model integrations. Anthropic published the spec in November 2024, framing it as a universal connector, a USB-C port for AI applications. The proposal: define one message format for tool discovery, invocation, and results, and let every LLM client speak that format. In 2025 OpenAI and Perplexity both adopted MCP, so a single MCP server now runs across all three major chat interfaces without modification.
What MCP actually defines
MCP is a JSON-RPC-based protocol with three core concepts: tools, resources, and prompts. Tools are functions the LLM can call, each described with a name, a description, and a JSON Schema for its parameters. Resources are readable content the LLM can request, such as a document or a database row. Prompts are reusable templates the server can suggest. Servers advertise their capabilities; clients call into them; results flow back as structured messages.
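The discovery step can be sketched as plain JSON-RPC messages. The payloads below are illustrative, not copied from any real server; `list_invoices` and its schema are hypothetical examples of what a server might advertise:

```python
import json

# A client asks a server which tools it offers (JSON-RPC 2.0, as MCP uses).
tools_list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server's reply advertises each tool with a name, a description,
# and a JSON Schema for its input. (Illustrative payload.)
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "list_invoices",  # hypothetical tool name
                "description": "List invoices filtered by status, newest first.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "status": {"type": "string"},
                        "limit": {"type": "integer"},
                    },
                },
            }
        ]
    },
}

# Serialize the request as it would travel over the wire.
wire = json.dumps(tools_list_request)
```

The LLM client reads the `description` and `inputSchema` fields to decide when and how to call each tool; no tool-specific code lives on the client side.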
Transport is separate from the protocol. Local MCP servers typically use stdio (the client spawns the server as a subprocess). Remote servers use HTTP with Server-Sent Events, which is how CorpusIQ is deployed: running on Microsoft Azure, reachable from ChatGPT, Claude Desktop, Claude Web, Claude Code, and Perplexity without any local installation.
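For the local case, the spec frames stdio messages as one JSON object per line, newline-delimited. A minimal sketch of that framing, assuming the `2024-11-05` published revision of the spec and a hypothetical client name:

```python
import json

def frame_stdio(message: dict) -> bytes:
    """Serialize one JSON-RPC message for MCP's stdio transport:
    one JSON object per line, newline-delimited, no embedded newlines."""
    line = json.dumps(message, separators=(",", ":"))
    assert "\n" not in line  # the transport forbids embedded newlines
    return (line + "\n").encode("utf-8")

# The first message a client sends after spawning a stdio server.
init = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # a published spec revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},  # hypothetical
    },
}
framed = frame_stdio(init)
```

A remote HTTP+SSE deployment carries the same JSON-RPC messages; only the framing and the network hop differ, which is why the protocol layer stays identical across both transports.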
MCP in practice: the CorpusIQ pattern
CorpusIQ is a concrete example of an MCP server built for business data. It exposes 22+ connectors, each represented as a set of MCP tools: an invoice-listing tool on the QuickBooks connector, a message-search tool on the Slack connector, a query tool on the PostgreSQL connector. When you ask Claude about your top five unpaid invoices this month, Claude reads the tool definitions CorpusIQ advertises, decides to call the invoice-listing tool with appropriate parameters, and feeds the result back into the conversation as cited context.
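That invocation round-trip can be sketched as a `tools/call` exchange. The tool name `list_invoices`, its arguments, and the result text are hypothetical, not CorpusIQ's actual tool surface:

```python
# The model's tool invocation, as the client sends it over MCP.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "list_invoices",  # hypothetical tool name
        "arguments": {"status": "unpaid", "limit": 5},
    },
}

# Tool results come back as a content array the model reads as context.
result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [
            {"type": "text", "text": "Top 5 unpaid invoices: ..."}  # illustrative
        ],
        "isError": False,
    },
}
```

The model never sees the connector's underlying REST calls; it only sees the structured `content` items, which it can quote and cite in its answer.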
Because MCP is model-neutral, the exact same conversation works in ChatGPT or Perplexity with no change to the CorpusIQ side. The LLM picks the tools it wants; CorpusIQ serves them. Authentication is OAuth, scoped read-only per connector. Data retention is zero: every call is ephemeral and the response is not stored.
How MCP compares to existing alternatives
Compared to traditional REST APIs, MCP is higher level. A REST API returns rows and leaves parsing, reasoning, and response synthesis to the caller. An MCP server returns results wrapped in descriptive tool metadata that LLMs can cite. Human developers do not directly consume MCP servers; the LLM is the consumer. See the companion article on MCP vs API for a deeper breakdown.
Compared to ChatGPT plugins, MCP is portable. A plugin only works inside ChatGPT. An MCP server works in ChatGPT, Claude, and Perplexity, which matters if your team uses more than one LLM or wants the option to switch. Compared to workflow automation tools like Zapier or n8n, MCP is reactive rather than scheduled: the LLM decides when to query, in response to a user prompt, rather than running on a trigger.
Why MCP matters for small businesses
For a small business owner, MCP turns the AI assistant you already use into a decision engine over your own data. No new interface, no report builder, no dashboard maintenance. Ask a question in plain English; the LLM calls the right MCP tools across your connected business apps; you get a cited answer. That is the shape of the CorpusIQ product, priced at $29.95 per month on the Solo plan, with all 22+ connectors included.
Related reading
- MCP vs API: What Is Actually Different
- Building an MCP Server: A Practical Guide
- MCP Security: Protecting Your Data in the Context Window
- See all 22+ live CorpusIQ connectors
- Pricing, starting at $29.95 per month
Frequently asked questions
Who created the Model Context Protocol?
Anthropic published the Model Context Protocol specification in late 2024 as an open protocol for connecting LLMs to external tools and data sources. OpenAI adopted MCP for ChatGPT in 2025. Perplexity supports MCP as well. The protocol is open source, maintained under the modelcontextprotocol GitHub organization.
How is an MCP server different from a ChatGPT plugin?
Plugins are OpenAI-specific and work only inside ChatGPT. An MCP server is vendor-neutral: the same server works in ChatGPT, Claude Desktop, Claude Code, Claude Web, and Perplexity without rewrites. Plugins are also bound to OpenAI's retention and pricing policies; an MCP server is hosted by the vendor of your choice.
Do I need to write code to use MCP?
No. Consuming MCP servers is a configuration step in each LLM client. Building an MCP server does require code, which is why CorpusIQ exists: the server, the 22+ connectors, and the OAuth handling are already built and managed for you.
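For local stdio servers, that configuration step is typically a short JSON entry in the client's settings file. The shape below follows the `mcpServers` convention used by clients such as Claude Desktop; the server name and package are hypothetical placeholders:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    }
  }
}
```

Remote servers like CorpusIQ skip even this: they are added through the client's connector settings with an OAuth sign-in rather than a local command.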
What transports does MCP support?
MCP supports stdio for local servers and HTTP with Server-Sent Events for remote servers. CorpusIQ runs as a remote HTTP+SSE server, so the LLM can reach it over the internet without any local installation on the user's machine.
Does MCP handle security and authentication?
The protocol itself is a message format and does not specify authentication policy. Individual MCP servers such as CorpusIQ enforce OAuth scopes, TLS, and rate limits. CorpusIQ uses read-only OAuth across all 22+ connectors and is CASA Tier 2 certified by DEKRA.