
Emmanuel Delorme · 14 min read
MCP vs API: What 200+ Connector Builds Taught Us


Most engineers encounter MCP for the first time and ask: “Is this replacing REST APIs?” No. After building 200+ connectors at StackOne that serve both traditional API consumers and AI agents, the answer is clearer than the discourse suggests.

MCP wraps APIs. It doesn’t replace them. But knowing that isn’t enough. The real question is when to use which, and what happens when you need to connect to not one API but fifty.

MCP vs API: Two Protocols, Not Two Choices

An API is a contract between a developer and a system. You read the docs, understand the endpoints, write the auth code, handle pagination, and map the response fields to your data model. You know exactly what you’re calling and what you’ll get back.

MCP (Model Context Protocol) is a contract between an AI agent and a system. The agent doesn’t read docs. It discovers available tools at runtime through a standardized protocol, sees their input schemas, and decides which to call based on the user’s request. The protocol handles the transport. The agent handles the decision.

The key distinction: APIs are designed for developers who know what they want. MCP is designed for AI agents that need to figure out what’s available.

MCP vs Function Calling: What’s the Difference?

This trips up a lot of developers. OpenAI function calling and Anthropic tool use are client-side patterns. They define how the model decides to invoke a tool within a conversation. MCP is a server-side protocol. It defines how the tool itself is discovered, described, and invoked over a transport layer.

They’re complementary. The model uses function calling to decide “I should call bamboohr_list_employees.” MCP is how that call gets routed to a server that actually runs it.
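A rough sketch of how the two layers meet: the model emits a function call, and the MCP client wraps it in a JSON-RPC tools/call request. The message shape follows MCP's JSON-RPC 2.0 framing; the tool name and arguments are illustrative.

```typescript
// What the model emits via function calling / tool use (client-side):
interface FunctionCall {
  name: string;
  arguments: Record<string, unknown>;
}

const modelCall: FunctionCall = {
  name: "bamboohr_list_employees",
  arguments: { department: "engineering" },
};

// What the MCP client then sends over the wire (server-side protocol):
function toMcpRequest(call: FunctionCall, id: number) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name: call.name, arguments: call.arguments },
  };
}

const request = toMcpRequest(modelCall, 1);
// Function calling decided *what* to call; MCP carries *how* it's invoked.
```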

Same Operation, Two Ways

Theory is cheap. Here’s what the difference looks like in practice.

The task: List all employees from BambooHR.

Path A: Direct Provider API Call

// Direct BambooHR API — you handle everything
const response = await fetch(
  "https://api.bamboohr.com/api/gateway.php/acme/v1/employees/directory",
  {
    headers: {
      Authorization: `Basic ${btoa(BAMBOOHR_API_KEY + ":x")}`,
      Accept: "application/json",
    },
  }
);

const { employees } = await response.json();
// BambooHR returns: id, displayName, workEmail, department, jobTitle
// Workday returns: Worker_ID, fullName, emailAddress, supervisoryOrg
// HiBob returns: id, fullName, email, work.department, work.title
// Every provider is different. Auth, pagination, field names, error codes.

You write this integration once. Then you write it again for Workday. And again for HiBob. And again for each of the next 47 HRIS providers your customers use. Each has different auth (API key, OAuth 2.0, SAML), different pagination (offset, cursor, page token), and different field names for the same concept.
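A sketch of the normalization burden this implies, using the field names from the comment above. The unified Employee shape is our own illustration, and this is before auth, pagination, and error-code differences.

```typescript
// One concept, three provider shapes — each needs its own mapper.
interface Employee {
  id: string;
  name: string;
  email: string;
}

const fromBambooHR = (r: any): Employee => ({
  id: r.id,
  name: r.displayName,
  email: r.workEmail,
});

const fromWorkday = (r: any): Employee => ({
  id: r.Worker_ID,
  name: r.fullName,
  email: r.emailAddress,
});

const fromHiBob = (r: any): Employee => ({
  id: r.id,
  name: r.fullName,
  email: r.email,
});
```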

Path B: MCP tool call via StackOne

// AI agent discovers and calls tools via MCP
import { Agent, MCPServerStreamableHttp, run } from "@openai/agents";

const authToken = Buffer.from(`${STACKONE_API_KEY}:`).toString("base64");

const stackoneMcp = new MCPServerStreamableHttp({
  url: `https://api.stackone.com/mcp?x-account-id=customer-bamboohr`,
  requestInit: {
    headers: { Authorization: `Basic ${authToken}` },
  },
});

await stackoneMcp.connect();

const agent = new Agent({
  name: "hr-assistant",
  model: "gpt-5.2",
  mcpServers: [stackoneMcp],
});

// The agent discovers bamboohr_list_employees, bamboohr_get_employee, etc.
// Native action names — not normalized. The model sees BambooHR's terminology.
const result = await run(agent, "List all employees in the engineering department");
console.log(result.finalOutput);

The agent never saw BambooHR’s API docs. It discovered bamboohr_list_employees through MCP, understood its schema, and called it. The action name matches the provider’s own terminology, so the model works with concepts it was trained on.

What Changed Between API and MCP

In Path A, a developer writes code that calls a specific endpoint, handles auth, parses the response, and manages errors. In Path B, an AI agent discovers available tools and decides which to call. The Falcon execution engine handles the provider API translation underneath. The agent never touches raw HTTP.

This matters when the task is ambiguous. “Find employees who changed departments in the last 90 days and draft an email to their managers” isn’t a single API call. It’s a sequence of tool calls that the agent figures out at runtime.

What MCP Gets Right

After building StackOne connectors that serve both REST and MCP consumers, these are the properties that matter in practice.

Dynamic discovery. The agent calls tools/list and gets back every available tool with its name, description, and input schema. No docs to read. No SDK to install. No OpenAPI spec to parse. When StackOne adds a new action to a connector, every MCP client sees it immediately.
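The tools/list exchange looks roughly like this. The message shapes follow MCP's JSON-RPC 2.0 framing; the single tool entry shown is illustrative.

```typescript
// Agent -> server: ask for the tool catalog.
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// Server -> agent: every tool with name, description, and input schema.
const listResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "bamboohr_list_employees",
        description: "List employees from BambooHR",
        inputSchema: {
          type: "object",
          properties: { department: { type: "string" } },
        },
      },
    ],
  },
};
// The agent picks by name and description — no docs, no SDK, no OpenAPI spec.
```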

Standardized protocol. Every MCP server speaks the same JSON-RPC 2.0 protocol over stdio or Streamable HTTP. An agent that works with one MCP server works with any MCP server. Before MCP, every AI tool integration was bespoke.

Credential abstraction. The AI model never sees API keys, OAuth tokens, or endpoint URLs. The MCP server handles auth internally. This isn’t foolproof (a bad MCP server can still leak credentials), but it’s a meaningful security boundary that reduces the blast radius of prompt injection attacks.

Industry convergence. MCP was donated to the Agentic AI Foundation (Linux Foundation) in December 2025. OpenAI, Google, Microsoft, AWS, Cloudflare, and Bloomberg are all founding members. Claude, ChatGPT, Cursor, GitHub Copilot, and Microsoft Copilot all support MCP as clients. This isn’t a single-vendor bet.

What MCP Doesn’t Solve

Most “MCP vs API” articles stop at the benefits. We can’t. We’ve shipped MCP servers to production customers and seen where the abstraction breaks down.

MCP Token Cost

Every MCP tool definition lives in the agent’s context window. Each tool consumes 550-1,400 tokens depending on schema complexity.

Here’s what this looks like at scale:

Setup                                      Tools loaded   Tokens consumed
3 MCP servers (GitHub + Slack + Sentry)    40 tools       ~55,000 tokens
StackOne (all connectors, unfiltered)      916 tools      ~138,000 tokens

Loading 916 tools into context is useless. The agent gets confused, picks wrong tools, and burns tokens on schema definitions it never uses. “Just add more MCP servers” is bad advice. Every server you connect dumps its full tool catalog into the context window.

StackOne solves this with tool search and execute: instead of loading every tool definition upfront, the agent gets two meta-tools (~300 tokens total). It calls search_tools("list employees from bamboohr") and gets back 5-10 relevant action definitions instead of 916. It picks the right one and calls execute_tool. Context usage drops from ~138,000 tokens to ~300.

// The agent's entire toolkit: 2 tools, ~300 tokens
const tools = [
  {
    name: "search_tools",
    description: "Search for relevant tools by natural language query",
    input_schema: {
      properties: { query: { type: "string" } },
      required: ["query"]
    }
  },
  {
    name: "execute_tool",
    description: "Execute a previously discovered tool by name",
    input_schema: {
      properties: {
        tool_name: { type: "string" },
        arguments: { type: "object" }
      },
      required: ["tool_name"]
    }
  }
];
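To make the two-step flow concrete, here is a toy simulation. The three-entry catalog and the naive keyword scorer are illustrative stand-ins for the real 916-tool catalog and StackOne's actual search.

```typescript
// Stand-in catalog: in production this holds hundreds of tool definitions.
const catalog = [
  { name: "bamboohr_list_employees", description: "List employees from BambooHR" },
  { name: "bamboohr_get_employee", description: "Get one employee from BambooHR" },
  { name: "salesforce_list_contacts", description: "List contacts from Salesforce" },
];

// search_tools: score each tool by query-word overlap, best match first.
function searchTools(query: string) {
  const words = query.toLowerCase().split(/\s+/);
  return catalog
    .map((t) => ({
      tool: t,
      score: words.filter(
        (w) => t.name.includes(w) || t.description.toLowerCase().includes(w)
      ).length,
    }))
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .map((x) => x.tool);
}

// execute_tool: route the chosen name to its handler (stubbed here).
function executeTool(toolName: string, args: object) {
  return { tool: toolName, args, status: "dispatched" };
}

const hits = searchTools("list employees from bamboohr");
const outcome = executeTool(hits[0].name, { department: "engineering" });
// Only the few matching definitions ever enter context, not the full catalog.
```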

Known Workflows: Call Connectors via StackOne’s SDK

MCP is for agents that need to discover tools. When the workflow is deterministic, StackOne’s AI Action SDK lets you call the same connectors directly in code. Same 200+ providers, same auth handling, but deterministic execution with no agent in the loop.

Solving MCP’s Practical Limitations

MCP’s benefits are real, but operating 200+ MCP connectors in production surfaced practical limitations. Here’s how StackOne addresses each one.

The Falcon execution engine wraps native provider APIs directly and exposes them through MCP and A2A with native action names. The agent sees bamboohr_list_employees, not a normalized hris_list_employees. It sees greenhouse_list_open_jobs, not ats_list_jobs. The model works with terminology it was trained on.

Code Mode: Handling Large MCP Responses Without Bloating Context

For complex multi-step workflows, even tool-by-tool execution can bloat the context. A single MCP tool response (list of Jira bugs, pull request data, employee records) can run to 14,000+ tokens of raw JSON.

Code mode solves this. The agent writes TypeScript that runs in a sandbox. Raw API responses stay in the sandbox. Only a summary returns to the agent’s context:

// Agent writes this code, executed in sandbox
const bugs = await tools.jira_list_issues({
  query: { type: "Bug", status: "Open" }
});
const prs = await tools.github_list_pull_requests({
  query: { state: "open" }
});

// Cross-reference: find bugs with no matching PR
const prKeys = prs.data
  .map(p => p.title.match(/[A-Z]+-\d+/))
  .flat()
  .filter(Boolean);

const unlinked = bugs.data.filter(b => !prKeys.includes(b.key));

return `Found ${unlinked.length} open bugs without linked PRs.`;
// Raw JSON (14K+ tokens) never enters agent context
// Only this summary (~50 tokens) returns

In StackOne’s benchmarks, a single MCP workflow generated ~14,000 tokens of raw JSON inside the sandbox. What returned to the agent was a 500-token summary. That’s a 96% token reduction.

MCP vs API: Decision Framework

After 200+ connector builds, here’s when we use each approach.

Scenario                                        Direct API   MCP
One provider, deterministic workflow            Yes          No
AI agent needs tool discovery                   No           Yes
AI agent + multiple providers                   No           Yes
Batch data sync (high throughput)               Yes          No
Interactive AI assistant                        No           Yes
Multi-agent orchestration                       No           Yes
Traditional SaaS application                    Yes          No
Prototyping with Claude/Cursor                  No           Yes
Ambiguous tasks requiring chained tool calls    No           Yes

The pattern: use direct APIs when the path is known and speed matters. Use MCP when the agent needs to discover tools, chain calls, or work across multiple providers. Most production systems end up using both.

When MCP Is Overkill

We built MCP servers for 200+ connectors. We also know when they’re unnecessary.

Single integration, fixed endpoints. If your app talks to one Salesforce instance and calls three known endpoints, a direct REST integration is simpler, faster, and easier to debug. MCP’s value comes from standardization across many tools. One tool doesn’t need a standard.

Deterministic pipelines. If every run must execute steps A, B, C in order with specific parameters, don’t route it through an agent. Write code that calls A, B, C. StackOne’s connectors work in both modes: agents discover them via MCP, developers call them directly via the AI Action SDK. Same connectors, same auth. You can also build composite actions with the AI Connector Builder that chain multiple steps into a single deterministic call.
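The "write code that calls A, B, C" point in miniature; the step functions are illustrative stubs.

```typescript
// Deterministic pipeline: when every run is A then B then C, plain code
// beats agent routing — no discovery step, no model in the loop.
async function stepA(): Promise<string> {
  return "a-done"; // e.g. fetch source records
}
async function stepB(a: string): Promise<string> {
  return `${a}->b-done`; // e.g. transform them
}
async function stepC(b: string): Promise<string> {
  return `${b}->c-done`; // e.g. write them out
}

async function pipeline(): Promise<string> {
  const a = await stepA(); // always runs
  const b = await stepB(a); // always runs, in this order
  return stepC(b);
}
```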

High-throughput batch jobs. Syncing 50,000 records doesn’t benefit from tool discovery. You know the endpoint, you know the schema. Use the REST API directly.

Environments without MCP clients. If your system doesn’t use Claude, ChatGPT, Cursor, or another MCP-aware client, there’s no consumer for the MCP server. REST APIs work everywhere. MCP works in the growing but still specific ecosystem of agent-native tools.

Getting Started with SaaS MCPs

If you’re building AI agents that need to interact with enterprise SaaS tools:

  1. Sign up for StackOne (free tier available)
  2. Connect your first provider (BambooHR, Salesforce, Slack, or 200+ others)
  3. Point your MCP client at https://api.stackone.com/mcp
  4. Start building

If you need a custom connector for an API that StackOne doesn’t cover yet, the AI Connector Builder generates one from API documentation in about 10 minutes. We wrote about the process in detail with our Fireflies MCP connector build.


Frequently Asked Questions

Does MCP replace APIs?
No. MCP wraps APIs. Every MCP server has a traditional API behind it handling auth, data fetching, and business logic. MCP adds a standardized discovery and invocation layer on top so AI agents can find and call tools without reading API documentation. Your REST APIs still do the work.
When should I use MCP vs a direct API call?
Use direct API calls when you have a deterministic workflow with known endpoints, when you need maximum performance with no extra latency, or when you're building a traditional application (not an AI agent). Use MCP when AI agents need to discover tools at runtime, when you want one protocol for multiple providers, or when you're building multi-agent systems where tool sharing matters.
How many tokens do MCP tool definitions consume?
Each MCP tool definition uses 550-1,400 tokens depending on schema complexity. Loading 40 tools from three MCP servers can consume 55,000 tokens before the user types a word. At scale, this becomes a real cost and performance concern. StackOne solves this with tool search and execute: instead of loading 916 tools (~138K tokens), the agent gets 2 meta-tools (~300 tokens) and discovers relevant actions on demand.
Is MCP more secure than direct API calls?
MCP abstracts credentials so the AI model never sees raw API keys or URLs. But security depends on implementation, not protocol. A poorly built MCP server can leak credentials just like a bad API wrapper. The MCP authorization spec now requires OAuth 2.1 with PKCE for HTTP transports, but the spec has been revised three times since March 2025 and is still maturing.
Is MCP just SOAP or WSDL for AI?
The comparison is understandable but inaccurate. SOAP and WSDL standardized how systems talk to systems. MCP standardizes how AI agents discover and use tools at runtime. The consumer is fundamentally different. MCP also supports local process spawning (stdio transport) and session state, which have no SOAP equivalent. That said, the concern about unnecessary abstraction layers is valid for simple, single-integration scenarios.
Does ChatGPT support MCP?
ChatGPT has supported MCP as a native client since March 2025. OpenAI adopted MCP the same day the 2025-03-26 spec was released. MCP support is available in the OpenAI Agents SDK, ChatGPT Desktop, and the Responses API. In December 2025, OpenAI co-founded the Agentic AI Foundation alongside Anthropic and Block to govern MCP as an open standard.
How do I migrate existing API integrations to MCP?
You don't rewrite your APIs. You wrap them. An MCP server is a thin layer that exposes your existing API endpoints as discoverable tools with JSON Schema definitions. StackOne's AI Connector Builder automates this process, generating MCP-compatible connectors from API documentation in approximately 10 minutes. Your existing API handles all the business logic. The MCP server just makes it agent-accessible.
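A rough illustration of "wrap, don't rewrite": a helper that turns an endpoint description into an MCP tool definition. The Endpoint shape here is hypothetical, not a real spec format; a production wrapper would use an MCP server SDK rather than hand-rolling this.

```typescript
// Hypothetical endpoint description — stands in for an OpenAPI operation.
interface Endpoint {
  path: string;
  method: string;
  summary: string;
  params: Record<string, { type: string }>;
}

function toMcpTool(ep: Endpoint) {
  return {
    // MCP tools need a name, a description, and a JSON Schema for inputs.
    name: ep.summary.toLowerCase().replace(/\s+/g, "_"),
    description: `${ep.method} ${ep.path}: ${ep.summary}`,
    inputSchema: { type: "object", properties: ep.params },
  };
}

const tool = toMcpTool({
  path: "/employees",
  method: "GET",
  summary: "List employees",
  params: { department: { type: "string" } },
});
// The existing API still does the work; this layer only makes it discoverable.
```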
