
Three Providers, Three Tool APIs. That's the Problem MCP Solves

OpenAI, Anthropic, and Google all implement function calling differently. MCP is emerging as the standard that saves developers from writing adapter code for every provider.

Dean Grover, Co-founder
March 26, 2026
7 min read
[Image: person connecting different-shaped puzzle pieces]

A developer ships an agent on OpenAI. It calls functions, retrieves data, executes workflows. It works. Then the boss walks in: "Can you make this work with Claude too? And maybe Gemini for the European customers."

Three days later, the developer has written two adapter layers, discovered that Anthropic returns tool calls as content blocks instead of a dedicated field, realized that Gemini expects FunctionDeclaration objects with OBJECT (uppercase) as the type value, and is questioning every career decision that led to this moment. The agent logic hasn't changed. The tools haven't changed. Only the plumbing between the model and the tools has changed. And it took three days.

This is the tool calling fragmentation problem. It's real, it's expensive, and it's why MCP matters far more than most developers realize.

The Fragmentation Is Worse Than You Think

Every major AI provider implements tool calling differently, and the differences go deeper than surface-level syntax. The schema format, the response structure, how parallel calls work, how errors propagate: all different.

OpenAI pioneered function calling in June 2023. You pass a tools array in the request, each tool defined with a JSON Schema for its parameters. When the model wants to call a function, it returns a message with tool_calls as a dedicated field, each call containing the function name and arguments as a JSON string.
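A minimal sketch of those shapes as plain dicts rather than SDK calls (the `get_weather` tool, its arguments, and the call id are illustrative):

```python
import json

# OpenAI-style tool definition: a tools array, each entry carrying a
# JSON Schema (lowercase type values) under "parameters".
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# What a tool-calling response message looks like: tool_calls is a
# dedicated field, and arguments arrive as a JSON *string*, not a dict.
message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_123",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Oslo"}'},
    }],
}

for call in message["tool_calls"]:
    args = json.loads(call["function"]["arguments"])  # parse the JSON string
    print(call["function"]["name"], args)
```

Note that the arguments string has to be parsed before use; forgetting that `json.loads` step is a classic first bug when porting between providers.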

Anthropic went in a completely different direction. Claude uses a content-block architecture. Tool calls arrive as blocks within the response content array, each with type tool_use, alongside regular text blocks. You don't get a separate tool_calls field. You parse the content array and filter for tool use blocks.
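The same interaction in Anthropic's shape, again sketched as a plain dict (tool name, id, and input are illustrative):

```python
# Anthropic-style response: tool calls are content blocks with type
# "tool_use", interleaved with ordinary text blocks. There is no
# separate tool_calls field to read.
response = {
    "role": "assistant",
    "content": [
        {"type": "text", "text": "Let me check the weather."},
        {"type": "tool_use", "id": "toolu_123",
         "name": "get_weather", "input": {"city": "Oslo"}},
    ],
}

# Extracting tool calls means filtering the content array yourself.
tool_uses = [b for b in response["content"] if b["type"] == "tool_use"]
for block in tool_uses:
    # input is already a dict here, unlike OpenAI's JSON string
    print(block["name"], block["input"])
```

Two differences hide in that small snippet: the filtering step, and the fact that `input` is already parsed. Adapter code has to account for both.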

Google went a third direction. Gemini uses FunctionDeclaration objects based on OpenAPI 3.0 specs, but with Google-specific conventions. Types are uppercase strings (STRING, OBJECT, INTEGER) rather than JSON Schema lowercase. Several standard JSON Schema attributes like default, optional, and oneOf aren't supported.
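And the Gemini shape, as a dict sketch (tool name and arguments are illustrative):

```python
# Gemini-style FunctionDeclaration: note the uppercase type values.
# The same schema pasted from an OpenAI tool definition would not be
# accepted as-is.
function_declaration = {
    "name": "get_weather",  # hypothetical tool
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "OBJECT",
        "properties": {"city": {"type": "STRING"}},
        "required": ["city"],
    },
}

# Where the call shows up in a response: nested under candidates,
# then content, then parts, as a functionCall part.
response = {
    "candidates": [{
        "content": {
            "parts": [{"functionCall": {"name": "get_weather",
                                        "args": {"city": "Oslo"}}}],
        },
    }],
}

call = response["candidates"][0]["content"]["parts"][0]["functionCall"]
print(call["name"], call["args"])
```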

Here's what that looks like in practice:

| Feature | OpenAI | Anthropic | Google Gemini |
|---|---|---|---|
| Tool definition key | tools[].function | tools[] | tools[].functionDeclarations |
| Schema format | JSON Schema | JSON Schema | OpenAPI 3.0 subset |
| Type values | "string", "object" | "string", "object" | "STRING", "OBJECT" |
| Tool call location | message.tool_calls[] | Content block: type: "tool_use" | candidates[].content.parts[].functionCall |
| Parallel tool calls | Native (multiple in one response) | Sequential by default | Supported |
| Strict mode | strict: true for guaranteed schema | Not available via same mechanism | Not available |
| Result return format | tool role message | tool_result content block | functionResponse part |
| System message handling | Multiple system messages anywhere | Single system block, concatenated | System instruction field |

That's not a minor inconvenience. That's a completely different integration for every provider.

Why This Keeps Happening

Nobody set out to create this mess. Each provider optimized their tool calling implementation for their model's specific architecture and training approach.

OpenAI was first to market, so they defined the format everyone else had to react to. Their tools array and tool_calls response field became the pattern that developers learned first. Anthropic designed Claude's content-block architecture to support interleaved reasoning and tool use within a single response, letting the model explain its thinking alongside its tool calls. Google aligned with OpenAPI because they already had thousands of API definitions across Google Cloud.

It's the USB-A vs. USB-B vs. Mini-USB story all over again. Each connector made sense for its original device. But developers, like laptop owners in 2010, ended up carrying a bag full of adapters.

The pain compounds fast. Anthropic enforces strict alternating turn order: user, assistant, user, assistant. OpenAI doesn't. If your agent architecture assumes you can inject multiple system messages mid-conversation (a common OpenAI pattern), switching to Claude means restructuring your entire message flow. Anthropic requires an explicit max_tokens parameter. OpenAI defaults it. Gemini doesn't support several JSON Schema features your tools might already use.

Every difference is a potential bug. Every adapter is code you have to maintain. And every provider update risks breaking your carefully crafted compatibility layer.
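The adapter code in question tends to collapse into a normalization function like this one. A minimal sketch over raw dict payloads matching the response shapes in the table above; real SDKs wrap these in typed objects, and error handling, streaming, and tool results are all omitted:

```python
import json

def extract_tool_calls(provider: str, response: dict) -> list[dict]:
    """Normalize three provider response shapes into
    [{"name": ..., "args": ...}]. Sketch only: raw dicts, no SDK types."""
    if provider == "openai":
        # Dedicated tool_calls field; arguments arrive as a JSON string.
        return [
            {"name": c["function"]["name"],
             "args": json.loads(c["function"]["arguments"])}
            for c in response.get("tool_calls") or []
        ]
    if provider == "anthropic":
        # Tool calls are content blocks; input is already a dict.
        return [
            {"name": b["name"], "args": b["input"]}
            for b in response.get("content", [])
            if b.get("type") == "tool_use"
        ]
    if provider == "gemini":
        # Calls are functionCall parts nested under candidates/content.
        parts = response["candidates"][0]["content"]["parts"]
        return [
            {"name": p["functionCall"]["name"],
             "args": p["functionCall"]["args"]}
            for p in parts if "functionCall" in p
        ]
    raise ValueError(f"unknown provider: {provider}")
```

Three branches, three sets of field names, one of which needs a JSON parse step the others don't. And this only covers extraction: returning results, handling parallel calls, and message ordering each need their own branches too.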

MCP as the Standardization Layer

MCP, the Model Context Protocol, doesn't replace function calling. It operates one level above it. Instead of defining your tools in each provider's native format, you define them once in an MCP server, and any MCP-compatible client handles the translation to whichever model it's connected to.

The numbers tell the story. Since Anthropic released MCP as open source in November 2024, adoption has been explosive: 97 million monthly SDK downloads across Python and TypeScript, over 10,000 active servers, and first-class client support in Claude, ChatGPT, Cursor, VS Code, Gemini, and Microsoft Copilot. (We covered the protocol architecture in depth in MCP Explained: Build Your First MCP Server.)

The real turning point came in December 2025, when Anthropic donated MCP to the newly formed Agentic AI Foundation under the Linux Foundation. OpenAI and Block joined as co-founders. AWS, Google, Microsoft, Cloudflare, and Bloomberg signed on as supporting members. That's not casual interest. That's the entire industry saying "yes, this is the standard."

What makes MCP different from previous standardization attempts is that it doesn't just standardize the tool schema. It standardizes discovery. An MCP server advertises what it can do through a protocol handshake. Clients don't need a static list of every possible tool. They connect, discover capabilities, and call tools through a consistent interface regardless of which LLM is doing the reasoning underneath.
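That handshake can be sketched as the raw JSON-RPC 2.0 messages MCP defines: a client calls tools/list to discover what a server offers, then tools/call to invoke a tool. The messages below are illustrative (the tool, its schema, and the request ids are made up), but the method names and the name/description/inputSchema structure follow the MCP specification:

```python
import json

# Discovery: the client asks the server what tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The server advertises its tools with plain JSON Schema — defined once,
# regardless of which model the client is running.
list_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "get_weather",  # hypothetical tool
        "description": "Look up current weather for a city.",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]},
}

# Invocation: the same tools/call shape no matter which LLM decided
# to make the call.
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oslo"}},
}

print(json.dumps(call_request, indent=2))
```

The translation burden moves into the MCP client: it maps whatever its model emits (tool_calls, tool_use blocks, functionCall parts) onto this one wire format.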

For developers, the practical benefit is straightforward: write your tools once as an MCP server. Connect it to whatever agent framework or model provider you want. When your boss says "make it work with Claude too," the answer is "it already does."

What This Means for Agent Developers

The middleware layer for AI agents is forming right now. The teams that recognize it early will ship faster. The teams that don't will be rewriting adapter code for years.

If you're starting a new agent project today, building directly on a single provider's native tool calling API is a bet that you'll never need to switch providers, never need to support multiple models for different use cases, and never need to share your tools with another team's agent. That's a risky bet.

The smarter architecture looks like this: tools defined as MCP servers, agent logic built on a provider-agnostic framework like the Vercel AI SDK (which aligned its tool definition format with MCP in version 5), and model selection as a configuration choice rather than an architectural commitment.

This isn't theoretical. Chanl's tool management system already works this way. Tools are defined once, exposed via MCP, and callable by agents regardless of which model powers them. When a customer wants to switch from GPT-4o to Claude for a voice agent, zero tool code changes. When they want to A/B test Gemini against Claude using scenario testing and compare results in analytics, the tools work identically.

The winners in this shift are developers who invested in clean tool abstractions early. The losers are teams with thousands of lines of provider-specific adapter code that will need to be rewritten or maintained indefinitely.

The Vercel AI SDK tells the same story. AI SDK 5 renamed its parameters field to inputSchema to align with MCP's specification. AI SDK 6 added a full agent abstraction where you define the agent once and it works across providers. The tooling is converging fast.

The Open Question

Will every provider fully adopt MCP? Probably not as their sole interface. OpenAI will keep its native function calling. Anthropic will keep its content-block tool use. Google will keep FunctionDeclaration. Native APIs offer tighter integration with each model's specific capabilities, like Anthropic's programmatic tool calling or OpenAI's strict mode.

But MCP doesn't need to replace native APIs. It needs to be the default layer teams build on, with native APIs as the escape hatch for provider-specific optimization.

Google's A2A protocol adds another dimension. Where MCP handles agent-to-tool communication, A2A handles agent-to-agent communication: discovery through "Agent Cards," task delegation, status updates, and result passing. They're complementary, not competing. An orchestrator agent might use A2A to delegate a task to a specialist agent, which then uses MCP to call the tools it needs. We mapped out how MCP, A2A, and WebMCP form a three-layer protocol stack earlier this month.

With the Agentic AI Foundation now governing both MCP and A2A under the Linux Foundation, and every major cloud provider sitting at the table, the standards picture is clearer than it's been at any point in the AI agent era.

The fragmentation won't disappear overnight. But the answer to "which tool calling format should I use?" is no longer "whichever provider you're locked into." It's MCP. Build there. Let the protocol handle the rest.

If you want to understand the mechanics underneath all of this, our Learning AI series covers how function calling actually works under the hood, from schema definitions through to multi-step tool chains. And for more on the fragmentation problem at the tool management layer, see 50 tools, zero memory: the agent paradox and why RAG quality is a retrieval problem, not a model problem.

One Tool Definition, Every Agent

Register your tools once with MCP support. Test them with AI scenarios. Monitor execution in production.
