The Model Context Protocol (MCP) has become the dominant standard for connecting AI agents to external tools and data sources. Anthropic released the open specification in late 2024, and by mid-2025 OpenAI, Microsoft, Google and dozens of enterprise vendors had adopted it. Over 13,000 MCP servers launched on GitHub in 2025 alone. CData reported 294 enterprise MCP server downloads in its first six weeks, processing 4.2 million rows of production data.
The protocol works. The analogy of “USB-C for AI agents” holds: MCP provides a standard interface so any model can call any tool through a consistent schema. That is a significant engineering improvement over the previous approach of custom API wrappers for every integration.
But standardising the interface is not the same as standardising governance. For organisations in financial services, insurance, legal or healthcare, the question is not whether MCP can connect an agent to a database. The question is whether that connection can be audited, scoped, monitored and controlled to the standard a regulator expects.
The answer, as of early 2026, is: not with MCP alone.
What MCP covers
The MCP specification defines a JSON-RPC transport layer with three core primitives: tools (functions an agent can call), resources (data an agent can read) and prompts (templates for agent instructions). Each tool exposes a typed schema describing its inputs and outputs.
The June 2025 specification update added OAuth 2.1 as the standard authentication mechanism. MCP servers are now classified as OAuth Resource Servers. Clients must implement RFC 8707 Resource Indicators, which scope access tokens to a specific server. This prevents a compromised server from reusing a token to access other resources.
The November 2025 revision introduced support for asynchronous tool execution, structured error handling and improved capability negotiation. These are meaningful improvements for production deployments.
In summary, MCP now provides: typed tool schemas, OAuth 2.1 authentication, token scoping via resource indicators, capability negotiation and structured error responses.
What MCP does not cover
The specification is deliberately a transport protocol. It defines how an agent talks to a tool. It does not define what happens around that conversation. For regulated environments, the gaps are significant.
No standardised audit logging
MCP does not specify how tool calls should be logged, what metadata should be captured, how logs should be stored or for how long. There is no required format for recording which agent called which tool, with what parameters, at what time, and what was returned.
In financial services, the FCA expects firms to demonstrate their decision-making process. The SRA requires law firms to show how client data was handled. Without immutable, structured audit trails of every tool invocation, an MCP deployment cannot satisfy these requirements. The protocol leaves this entirely to the implementer.
No data residency controls
MCP does not address where servers run or where data transits. A tool call from an agent in London might route through an MCP server hosted in the United States, processing UK client data in a jurisdiction that does not meet UK GDPR requirements. The protocol has no mechanism to express or enforce data residency constraints.
No rate limiting or resource governance
The specification does not define rate limiting, concurrency controls or cost boundaries for tool invocations. An agent with access to a database query tool can, in principle, execute thousands of queries per second. There is no protocol-level mechanism to prevent runaway costs or resource exhaustion.
No policy evaluation framework
MCP does not include a concept of policies that govern when a tool may be called. There is no way to express rules like “this tool requires human approval for transactions above a threshold” or “this tool may only be invoked during business hours” within the protocol itself.
No compliance metadata
Tool schemas describe inputs and outputs but carry no metadata about data classification, regulatory scope or compliance requirements. An MCP server exposing access to client records looks identical at the protocol level to one exposing a weather API.
The security risks that matter
The governance gaps are compounded by active security risks that Microsoft, Red Hat, Pillar Security and Palo Alto Unit 42 have documented extensively.
Prompt injection through tool results
This is the most consequential risk for regulated deployments. When an MCP server returns data to an agent, that data enters the model context. If the returned data contains hidden instructions, the agent may execute them. Microsoft’s research describes cross-tool injection scenarios where a document fetched via one tool contains embedded commands that trigger actions through a different tool.
This is not theoretical. In mid-2025, Supabase’s Cursor agent processed support tickets containing embedded SQL instructions, leading to exfiltration of sensitive integration tokens. Atlassian’s Jira Service Management MCP integration suffered a similar attack through poisoned support tickets.
Over-permissioning
MCP servers typically request broad permission scopes. As Pillar Security documented, a server that stores OAuth tokens for multiple services becomes a high-value target. If an attacker obtains the token stored by a Gmail MCP server, they can create their own server instance using that stolen token. The concentration of access credentials in a single protocol layer fundamentally changes the attack surface.
Tool poisoning
MCP tool descriptions are loaded directly into the model context. Attackers can embed hidden instructions in tool metadata. Microsoft’s Plug, Play, and Prey analysis describes scenarios where a weather-checking tool description secretly instructs the agent to exfiltrate private conversations. Security researchers found command injection flaws in 43% of analysed MCP servers.
Supply chain risks
With thousands of community MCP servers available, the supply chain risk is substantial. CVE-2025-6514 demonstrated how an OAuth proxy vulnerability in mcp-remote enabled remote code execution affecting over 437,000 environments. Pillar Security has documented campaigns systematically scanning the internet for exposed MCP endpoints.
What a compliant MCP deployment requires
Deploying MCP servers in a regulated environment requires an infrastructure layer above the protocol that provides the governance MCP itself does not include.
Scoped permissions with least privilege
Every MCP server should expose the minimum set of tools required for its function. OAuth tokens should be scoped to individual servers using resource indicators. Tools that access sensitive data should require separate, time-limited credentials rather than long-lived tokens shared across services.
Immutable audit logging of all tool calls
Every MCP tool invocation must be logged with: the calling agent identity, the tool name and version, input parameters, the full response, a timestamp and the execution context. Logs must be append-only, tamper-evident and stored for the retention period required by the applicable regulatory framework. This logging cannot be optional or best-effort. It must be structural.
Data residency constraints on server placement
MCP servers that process regulated data must run within the required jurisdiction. This means deploying servers in specific regions and routing tool calls through infrastructure that guarantees data does not leave the required boundaries. The deployment topology must be documented and auditable.
Policy-as-code evaluation before tool execution
Before an MCP tool call reaches the server, a policy engine should evaluate whether the call is permitted. Policies can encode rules such as: this agent may only query records in its assigned portfolio; this tool requires human approval above a specified threshold; this data source may not be accessed outside business hours. These policies should be version-controlled, testable and auditable.
Input and output filtering
All data returned from MCP servers should pass through a filtering layer that detects and strips potential injection payloads before the data enters the model context. Similarly, tool inputs should be validated against expected schemas and sanitised for dangerous patterns. This defence is not foolproof, but it raises the cost of successful injection attacks significantly.
Server provenance and integrity verification
Organisations should maintain an approved registry of MCP servers, with cryptographic verification of server identity and integrity. Community servers should undergo security review before deployment. Runtime monitoring should detect unauthorised servers or unexpected changes to tool descriptions.
The role of workflow orchestration
The governance controls described above require coordination. An individual MCP server does not know about data residency policies, audit requirements or approval workflows. These are concerns of the system that orchestrates agent behaviour.
Workflow orchestration engines sit above MCP and provide the execution framework for governance. An orchestration layer can intercept tool calls before they reach MCP servers, evaluate policies, log the interaction, enforce rate limits and route calls to jurisdiction-appropriate infrastructure.
This is the architectural pattern that makes MCP viable in regulated environments: the protocol handles the transport, and the orchestration layer handles the governance. Platforms like CompleteFlow implement this pattern by wrapping MCP tool calls in governed workflows that enforce audit logging, human approval gates and data residency constraints.
The 2026 MCP roadmap acknowledges some of these gaps. Enterprise requirements around audit trails, SSO-integrated authentication, gateway behaviour and configuration portability are on the horizon. But the roadmap describes future intentions, not current capabilities. Organisations deploying MCP today in regulated contexts cannot wait for the specification to catch up. They need the governance layer now.
Conclusion
MCP is a good protocol. It solves a real engineering problem, and its adoption by every major AI vendor confirms its value as a standard. But a transport protocol is not a governance framework. The specification does not claim to be one.
For organisations operating under regulatory obligations, deploying MCP servers without a governance layer above them is not a viable option. The security risks are documented and actively exploited. The compliance gaps are structural and well understood.
The path forward is not to avoid MCP. It is to deploy it within an architecture that provides the audit trails, access controls, data boundaries and policy enforcement that the protocol itself does not yet include.