Everything your team needs to know about MCP in 2026
https://workos.com/blog/everything-your-team-needs-to-know-about-mcp-in-2026

The Model Context Protocol (MCP) is an open standard that gives AI models a universal way to connect to external tools, data sources, and services. Anthropic introduced it in November 2024, and it has since become the de facto protocol for connecting AI to the real world, adopted by OpenAI, Google DeepMind, Microsoft, and thousands of development teams. The Python and TypeScript SDKs alone see roughly 97 million monthly downloads. In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation, making it a vendor-neutral, community-governed standard.

This article covers everything your team needs to evaluate, adopt, and build with MCP: how the protocol works as a complete system, how authentication has evolved, where the ecosystem stands, what the 2026 roadmap prioritizes, and where the real gaps remain.

Why MCP exists

Before MCP, every AI application that needed to talk to an external system had to build its own custom connector. Want Claude to access Google Drive? Build a custom integration. Want ChatGPT to query your Postgres database? Build another one. Want Cursor to read your Jira tickets? That's yet another bespoke connector.

This is the N×M problem. If you have N AI applications and M tools or data sources, you need N × M custom integrations. Each one has its own authentication approach, its own data format, its own error handling.
Fragile, expensive, and impossible to scale.

MCP eliminates this by defining a single protocol that any AI application can use to talk to any tool. Build an MCP server once, and every MCP-compatible client (Claude, ChatGPT, Gemini, Cursor, VS Code, a custom agent) can use it. The integration count drops from N × M to N + M: five clients and forty tools mean 45 integrations instead of 200. That isn't just a developer convenience; it's the difference between an ecosystem that compounds and one that fragments.

How MCP works today

MCP is built around three roles. The host is the AI application the user interacts with: Claude Desktop, Cursor, ChatGPT, or a custom app. The client lives inside the host and manages connections to MCP servers. Each client maintains a one-to-one relationship with a single server, but a host can run many clients simultaneously. The server exposes capabilities to the AI through the protocol.

When the user makes a request:

1. The AI model inside the host decides which tools to invoke and with what parameters.
2. The appropriate client routes the request to the right server.
3. The server executes the action against its underlying system (an API, a database, a SaaS platform) and returns the result.
4. The model incorporates that result into its response.

A single user request can involve multiple servers working in concert.
Ask your AI assistant to summarize the Slack discussion about the Q3 roadmap and create a Linear ticket for the top action item, and the host routes requests to both a Slack and a Linear server, composing the results seamlessly.

For a deeper dive on MCP architecture, see How MCP servers work.

What servers expose

MCP servers present their capabilities through three primitives:

Tools are actions the AI can take: send a message, create a record, run a query, trigger a deployment. Tools are the write side of MCP. The user can approve or deny tool calls depending on the host's configuration.

Resources are data the AI can read: files, database rows, API responses, documents. Resources are the read side of MCP. A Google Drive server might expose your documents as resources; a database server might expose tables and views.

Prompts are reusable templates that guide the AI's behavior for specific tasks, helping standardize how it interacts with a particular domain.

MCP is not a replacement for APIs. Your existing APIs still do the work. MCP is the protocol layer that gives AI a standardized way to discover and use them.

Transport

MCP supports two transport mechanisms. The stdio transport runs servers as local subprocesses of the host, communicating through standard input/output.
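To make the stdio transport concrete, here is a toy, standard-library-only sketch of the mechanic: the host spawns the server as a subprocess and exchanges JSON-RPC 2.0 messages over its pipes. The tool name, arguments, and reply are invented, and the inline "server" only echoes one canned response; a real MCP server implements the full protocol handshake.

```python
import json
import subprocess
import sys

# A toy stand-in for an MCP server: reads one JSON-RPC request from stdin,
# writes one JSON-RPC response to stdout. (Illustrative only.)
SERVER = r"""
import json, sys
req = json.loads(sys.stdin.readline())
resp = {"jsonrpc": "2.0", "id": req["id"],
        "result": {"content": [{"type": "text", "text": "4 files found"}]}}
print(json.dumps(resp), flush=True)
"""

# The host launches the server as a local subprocess (the stdio transport).
proc = subprocess.Popen([sys.executable, "-c", SERVER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# A JSON-RPC 2.0 request a client might send to invoke a hypothetical tool.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
           "params": {"name": "search_files", "arguments": {"query": "roadmap"}}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()

response = json.loads(proc.stdout.readline())
proc.wait()
print(response["result"]["content"][0]["text"])  # the tool's text result
```

The same message shapes travel over Streamable HTTP for remote servers; only the pipe changes.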
This is the simplest setup and works well for desktop applications and development tools.

Streamable HTTP supports remote servers deployed anywhere on the internet, using standard HTTP and Server-Sent Events for real-time streaming. It's compatible with existing load balancers, proxies, and CDNs.

Connections are stateful. Unlike REST APIs, the client and server maintain a session, so the server can remember context across multiple requests within a workflow. This matters for multi-step operations like database transactions or multi-file code refactoring.

The wire format is JSON-RPC 2.0, the same lightweight protocol used in many developer tools.

Async tasks

Not every operation completes in milliseconds. Before tasks, MCP requests were synchronous: the client calls a tool, blocks, and waits. That model breaks for anything that takes more than a few seconds.

Tasks introduce a call-now, fetch-later pattern. Any request can return a task handle immediately while the real work continues in the background. Clients can poll or subscribe for progress updates. Tasks move through defined states (working, input_required, completed, failed, cancelled), giving agents and users visibility into long-running operations like ETL jobs, large file conversions, or multi-step provisioning.

Sampling and elicitation

MCP's original design was strictly one-directional: the model calls, the server answers. The current spec introduces collaboration patterns that let servers participate more actively in workflows.

Sampling allows servers to request completions from the AI model during execution. Instead of the model unilaterally deciding everything, a server can ask the model to reason about intermediate state, validate assumptions, or generate content as part of a multi-step process. The user can review and edit sampled output before it goes back to the server, keeping humans in the loop for accuracy-sensitive workflows.

Elicitation lets servers pause execution and request input from the user.
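For example, a server that needs the user to choose between two interpretations of a request can send an elicitation request. In this sketch the method name follows the spec's elicitation/create, but the message text and schema are invented:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "elicitation/create",
  "params": {
    "message": "Two projects match 'roadmap'. Which one should I use?",
    "requestedSchema": {
      "type": "object",
      "properties": {
        "project": { "type": "string", "enum": ["Q3 Roadmap", "2026 Roadmap"] }
      },
      "required": ["project"]
    }
  }
}
```

The client renders this as a prompt or form, and the user's answer (or their decision to decline) is returned to the server, which then resumes the task.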
URL-mode elicitation handles interactions that must happen outside the MCP client for security reasons: OAuth flows, credential entry, payment setup. The server sends the user to a trusted external URL to complete the step, then resumes. Form-mode elicitation handles structured input when the server needs clarification before proceeding, like choosing between multiple valid interpretations of a request.

These patterns shift MCP from a pure command-execution protocol to something closer to a collaboration framework. Servers are no longer passive; they can think (via sampling), ask questions (via elicitation), and coordinate multi-step work (via tasks).

Server-side agent loops

Servers can include tool definitions in sampling requests, specify tool-choice behavior, and implement multi-step reasoning internally. A research server can spawn agents, coordinate their work, and deliver a coherent result using standard MCP primitives, without custom orchestration code. Combined with tasks, this means a server can run a complex background workflow, report progress, and deliver results asynchronously.

MCP Apps: the UI layer

Until January 2026, every MCP interaction was text-based. MCP Apps changed that by extending the protocol into interactive user interfaces. Tools can now return rich HTML interfaces that render in sandboxed iframes within the chat experience. Users can manipulate dashboards, edit designs, compose formatted messages, and interact with live data without leaving the conversation.

MCP Apps was co-developed with OpenAI and works in Claude, ChatGPT, Goose, and VS Code. Launch partners include Amplitude, Asana, Box, Canva, Clay, Figma, Hex, monday.com, Slack, and Salesforce. The security model uses iframe sandboxing, pre-declared templates, auditable JSON-RPC messaging, and user consent for UI-initiated tool calls.

This is the first official MCP extension, built using the extensions system introduced in the November 2025 spec.
More extensions will follow.

How MCP got here

MCP's adoption arc has been remarkably fast.

November 2024. Anthropic open-sources MCP. At launch, it's primarily a developer tool for improving AI-assisted coding.

March 2025. Two things happen on the same day. The spec launches its second version, introducing Streamable HTTP and OAuth 2.1. And OpenAI announces full MCP support across the Agents SDK, Responses API, and ChatGPT desktop app. This is MCP's inflection point. Streamable HTTP makes remote servers practical; OpenAI's adoption gives every MCP server access to the largest AI user base.

April 2025. Google DeepMind confirms MCP support for Gemini.

June 2025. The spec formalizes MCP servers as OAuth Resource Servers and mandates Resource Indicators (RFC 8707) to prevent token misuse.

September 2025. The MCP Registry launches. It grows to nearly 2,000 server entries within months.

November 2025. The 2025-11-25 spec release ships the largest set of changes since launch: async tasks, enhanced sampling, elicitation, server-side agent loops, Client ID Metadata Documents, client security requirements, and the extensions system.

December 2025. Anthropic donates MCP to the Agentic AI Foundation under the Linux Foundation. OpenAI and Block join as co-founders, with AWS, Google, Microsoft, Cloudflare, GitHub, and Bloomberg as supporting members.

January 2026. MCP Apps launches as the first official extension.

March 2026. The 2026 roadmap is published, making enterprise readiness a top priority.

Authentication and authorization

Auth is the area of MCP that has evolved the most, and it's the top concern for any organization evaluating the protocol.

How it works

For local servers running via stdio, authentication is handled by the host application's permissions. The server runs as a subprocess and inherits the same access as the app running it. The security boundary is the user's own machine.

For remote servers, MCP specifies an OAuth 2.1 flow.
The server points to an authorization server, the client goes through a consent flow, and receives scoped tokens. The protocol supports PKCE, token scoping, and consent screens.

How it has matured

The auth story has tightened considerably across three spec revisions.

March 2025 introduced OAuth 2.1 as the foundation, but client registration relied on Dynamic Client Registration (DCR), which required every authorization server to maintain a database of known clients.

June 2025 formalized MCP servers as OAuth Resource Servers and mandated Resource Indicators (RFC 8707). This prevents a token issued for one server from being reused to access a different server.

November 2025 shifted the default from DCR to Client ID Metadata Documents (CIMD). With CIMD, a client's identity is a URL pointing to a JSON document the client controls. Authorization servers fetch metadata on demand rather than maintaining a registration database for every client. This is critical for MCP's scale: a single AI client might connect to thousands of servers it has never seen before. The same release added default scope definitions and client security requirements for local server installation.

Session-scoped authorization

One emerging pattern for organizations concerned about agent access is session-scoped authorization. Rather than granting agents persistent access via long-lived OAuth tokens, access is time-limited to the duration of a specific task. When the session ends, access ends automatically. The agent cannot renew the session on its own. A human must explicitly approve a new session.

The enterprise auth gap

The 2026 roadmap acknowledges that static client secrets remain common in production and calls for paved paths toward SSO-integrated flows. The goal: IT administrators should be able to manage MCP server access from the same identity provider console where they manage everything else. This work is pre-RFC.
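To ground the CIMD mechanism described above: the client hosts a JSON metadata document at a stable URL, and that URL serves as its client ID. A hypothetical example, with fields modeled on standard OAuth client metadata (RFC 7591) and invented URLs:

```json
{
  "client_id": "https://client.example.com/oauth-metadata.json",
  "client_name": "Example AI Assistant",
  "client_uri": "https://client.example.com",
  "redirect_uris": ["https://client.example.com/oauth/callback"],
  "grant_types": ["authorization_code"],
  "response_types": ["code"],
  "token_endpoint_auth_method": "none"
}
```

An authorization server that has never seen this client can fetch the document on demand, check that it is served from the origin the client_id claims, and proceed with a normal OAuth 2.1 + PKCE flow, with no pre-registration database required.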
If your organization is hitting these walls, contributing to the Working Groups is the most direct way to shape the protocol.

For more on this, see How WorkOS solved enterprise auth for MCP servers.

The ecosystem

Adopters

On the client side: Claude, ChatGPT, Gemini, Microsoft Copilot, GitHub Copilot, Cursor, Windsurf, VS Code, and Zed all support MCP.

On the server side: Slack, GitHub, Google, Salesforce, Stripe, HubSpot, Shopify, Notion, Linear, Sentry, Figma, Webflow, Cloudflare, Postman, WooCommerce, and many others have built official or community-maintained servers. The developer tools category has seen particularly strong adoption. Context7, highlighted on the Thoughtworks Technology Radar, provides LLMs with version-specific documentation through MCP. Cloudflare's Code Mode demonstrated 98% token savings by letting agents discover and call tools dynamically rather than loading all definitions upfront.

Official SDKs exist for TypeScript and Python. Community SDKs cover Java, Kotlin, C#, Go, Rust, and Swift.

The registry and server discovery

The MCP Registry is the central index for discovering servers. The 2026 roadmap includes work on MCP Server Cards, a proposed standard for exposing server metadata via .well-known URLs so browsers, crawlers, and registries can discover capabilities without connecting.

Not all servers are equal. Some are official and well-maintained (GitHub's, Stripe's). Others are community-built and may lack robust error handling, security review, or documentation. There's no conformance testing yet, though the roadmap commits to conformance test suites.

Governance

MCP is governed through the Agentic AI Foundation under the Linux Foundation. Development is organized around Working Groups (Transports, Auth, Registry, and others) and Interest Groups, with changes proposed through Specification Enhancement Proposals (SEPs).

What's working and what's not

Where MCP is strong today

Developer tooling and coding assistants. This is where MCP sees the deepest adoption.
Cursor, VS Code, Claude Code, and similar tools use MCP to give AI context about codebases, documentation, and development infrastructure.

Single-user, interactive workflows. Asking your AI assistant to check your calendar, summarize a Slack thread, and file a ticket works reliably.

Read-heavy integrations. MCP servers that expose data as resources are straightforward to build and deploy safely.

Ecosystem breadth. Close to 2,000 servers in the registry, major platform adoption, and active community development. The network effects are real.

Where the gaps are

Enterprise observability. No standardized audit trail. Teams building production MCP deployments are inventing their own logging, tracing, and compliance infrastructure.

Multi-tenancy. SaaS providers building MCP servers need to isolate tenant data and enforce tenant-specific policies. The protocol doesn't define a model for this yet.

Rate limiting and cost attribution. When agents invoke tools autonomously, organizations need to cap usage and attribute costs. This isn't addressed at the protocol level. The emerging landscape of agent payment protocols like x402 and Stripe MPP is beginning to address the payment side, but cost governance within organizations remains unsolved.

Configuration portability. Setting up an MCP server in one client means starting from scratch in another. No portable config standard exists yet.

Gateway behavior. Enterprises running MCP behind API gateways, security proxies, or load balancers face undefined behavior around authorization propagation, session affinity, and inspection boundaries.

The 2026 roadmap

The 2026 roadmap, published in March 2026, organizes development around four priority areas. SEPs that align with these areas receive expedited review.

Transport evolution: Streamable HTTP gave MCP a production-ready transport, but running it at scale has exposed gaps.
The Transports Working Group is focused on evolving Streamable HTTP to work statelessly across multiple server instances, defining how sessions are created, resumed, and migrated during scale-out events, and standardizing MCP Server Cards for metadata discovery.

Agent communication: The current spec laid the groundwork with sampling, server-side agent loops, and parallel tool calls. Further work will expand how servers and models collaborate, supporting more sophisticated multi-step reasoning and coordination patterns.

Enterprise readiness: This is the priority most relevant to organizations deploying MCP at scale. The roadmap calls out four specific gaps: structured audit trails and observability that plug into existing SIEM and APM infrastructure; enterprise-managed auth with SSO-integrated flows; gateway and proxy patterns, including authorization propagation and session affinity; and configuration portability across clients. Most of this work is expected to land as extensions rather than core spec changes.

Governance maturation: With MCP now under the Linux Foundation, the community needs a clear contributor ladder, delegation of authority to Working Groups, and standardized charter templates. Currently every SEP requires full core-maintainer review, which creates bottlenecks as the ecosystem grows.

How MCP compares

MCP vs. regular APIs. APIs are point-to-point connections. MCP is a protocol layer on top of APIs that standardizes how AI models discover and use them. Your APIs still do the work; MCP gives AI a consistent way to find and call them.

MCP vs. function calling. Function calling is the AI model's ability to decide to invoke a tool. MCP is the protocol that connects the model to the tool. They're complementary. Function calling answers "should I call a tool?" MCP answers "how do I reach the tool and talk to it?"
OpenAI adopted MCP specifically to give their function-calling infrastructure access to the broader ecosystem of servers, rather than maintaining a separate plugin architecture.

MCP vs. Google's A2A. MCP standardizes how AI connects to tools and data. A2A standardizes how agents communicate with each other. They solve different problems. An AI system might use MCP to gather data and A2A to coordinate with other agents. MCP is about the relationship between an agent and its tools; A2A is about the relationship between agents.

MCP vs. IBM's ACP. ACP, developed by IBM Research for the BeeAI platform, focuses on orchestrating communication between multiple agents in a local-first environment. MCP focuses on connecting AI models to external data and tools. ACP originally drew inspiration from MCP but has evolved independently. For a detailed comparison, see the MCP, ACP, and A2A breakdown.

MCP vs. LangChain, LlamaIndex, and other frameworks. MCP handles the connection protocol between AI and external tools. Frameworks handle orchestration, chaining, memory, retrieval, and other higher-level concerns. They operate at different layers and work together. LangChain, for example, has integrated MCP support.

MCP auth with WorkOS

The roadmap makes it clear that enterprise-managed auth is a priority, but the spec work is still ahead. If you're building MCP servers today and need to move past static client secrets now, WorkOS already solves this.

WorkOS AuthKit acts as a spec-compliant OAuth 2.1 authorization server for MCP. Your MCP server is the Resource Server; AuthKit handles authorization, token issuance, consent screens, and client registration. It supports both CIMD (the current spec default) and DCR (for backwards compatibility with older clients).
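On the resource-server side, the work reduces to checking each incoming bearer token's signature, audience, and expiry. The following is a deliberately simplified, standard-library-only sketch using HS256 with a shared secret; real deployments verify RS256 signatures against the authorization server's published JWKS, usually via a JWT library, and the secret, audience, and claims here are invented:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # illustrative only; production uses RS256 + JWKS


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign(payload: dict) -> str:
    """Mint a toy HS256 JWT (stands in for the authorization server)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"


def verify(token: str, audience: str) -> dict:
    """What an MCP resource server checks on every request."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    if claims["aud"] != audience:
        # Resource Indicators (RFC 8707): a token minted for another
        # server must be rejected here.
        raise ValueError("wrong audience")
    if claims["exp"] < time.time():
        raise ValueError("expired")
    return claims


token = sign({"sub": "user_123", "aud": "https://mcp.example.com",
              "exp": time.time() + 300})
print(verify(token, "https://mcp.example.com")["sub"])
```

The audience check is the piece the June 2025 spec made mandatory: it is what stops a token issued for one MCP server from being replayed against another.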
The integration is minimal: point your server's protected resource metadata at an AuthKit domain, verify JWTs on incoming requests, and AuthKit handles the rest.

What this gives you in practice: SSO-integrated access to your MCP server out of the box, so IT teams can manage MCP access through the same identity providers they already use. Scoped tokens, PKCE, and consent flows that follow the spec. No custom OAuth infrastructure to build or maintain.

If you already have users and a login system, Standalone Connect can run as middleware for MCP OAuth flows without requiring a migration. Users authenticate with your existing system; AuthKit handles only the OAuth authorization and token issuance that MCP clients need.

The article above describes where the protocol is headed on enterprise auth. WorkOS is how you get there today. See the MCP auth documentation for implementation details.

Further reading and official resources

For more detail on specific topics covered in this article:

- MCP 2025-11-25 spec update: async tasks, better OAuth, extensions
- MCP Apps: rendering interactive UIs in AI clients
- Beyond request-response: how MCP servers are learning to collaborate
- MCP async tasks: building long-running workflows for AI agents
- CIMD vs DCR: the new default for MCP client registration
- Client ID Metadata Documents: how OAuth client registration works in MCP
- Dynamic Client Registration (DCR) in MCP: what it is, why it exists, and when to still use it
- Understanding URL-mode elicitation in MCP
- MCP's 2026 roadmap makes enterprise readiness a top priority
- Pipes MCP: session-scoped authorization for AI agents
- x402 vs. Stripe MPP: payment infrastructure for AI agents and MCP tools
- MCP, ACP, A2A: understanding the protocol landscape
- MCP auth: the difference between a bridge and a backdoor
- How to add OAuth to your MCP server
- A developer's guide to MCP auth
- Best practices for securing MCP model-agent interactions
- Why OAuth is the right fit for the MCP Registry
- Enterprise-ready MCP servers: how to secure, scale, and deploy for real-world AI

How MCP servers work: Components, logic, and architecture
https://workos.com/blog/how-mcp-servers-work