The Model Context Protocol (MCP) provides a standardized way for servers to request LLM sampling (“completions” or “generations”) from language models via clients. This flow allows clients to maintain control over model access, selection, and permissions while enabling servers to leverage AI capabilities—with no server API keys necessary. Servers can request text or image-based interactions and optionally include context from MCP servers in their prompts.
Sampling in MCP allows servers to implement agentic behaviors by enabling LLM calls to occur nested inside other MCP server features.
Implementations are free to expose sampling through any interface pattern that suits their needs—the protocol itself does not mandate any specific user interaction model.
For trust & safety and security, there SHOULD always be a human in the loop with the ability to deny sampling requests.
Applications SHOULD:

- Provide UI that makes it easy and intuitive to review sampling requests
- Allow users to view and edit prompts before sending
- Present generated responses for review before delivery
Clients that support sampling MUST declare the `sampling` capability during initialization:
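A minimal declaration, shown here with an empty options object:

```json
{
  "capabilities": {
    "sampling": {}
  }
}
```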
To request a language model generation, servers send a `sampling/createMessage` request:
Request:
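A sketch of a request using the standard JSON-RPC framing; the prompt, model preferences, system prompt, and token limit are illustrative values:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "What is the capital of France?"
        }
      }
    ],
    "modelPreferences": {
      "hints": [{ "name": "claude-3-sonnet" }],
      "intelligencePriority": 0.8,
      "speedPriority": 0.5
    },
    "systemPrompt": "You are a helpful assistant.",
    "maxTokens": 100
  }
}
```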
Response:
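A matching response sketch; the model identifier and stop reason are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "role": "assistant",
    "content": {
      "type": "text",
      "text": "The capital of France is Paris."
    },
    "model": "claude-3-sonnet-20240307",
    "stopReason": "endTurn"
  }
}
```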
Sampling messages can contain either text or image content, as sketched below:
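Text content carries a plain string, while image content carries base64-encoded data alongside its MIME type. Both shapes are shown with illustrative values:

```json
{
  "type": "text",
  "text": "The message content"
}
```

```json
{
  "type": "image",
  "data": "base64-encoded-image-data",
  "mimeType": "image/jpeg"
}
```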
Model selection in MCP requires careful abstraction since servers and clients may use different AI providers with distinct model offerings. A server cannot simply request a specific model by name since the client may not have access to that exact model or may prefer to use a different provider’s equivalent model.
To solve this, MCP implements a preference system that combines abstract capability priorities with optional model hints:
Servers express their needs through three normalized priority values (0-1):

- `costPriority`: How important is minimizing costs? Higher values prefer cheaper models.
- `speedPriority`: How important is low latency? Higher values prefer faster models.
- `intelligencePriority`: How important are advanced capabilities? Higher values prefer more capable models.

While priorities help select models based on characteristics, hints allow servers to suggest specific models or model families:
For example:
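A `modelPreferences` object might combine an ordered list of hints with the three priorities; the names and values below are illustrative:

```json
{
  "hints": [
    { "name": "claude-3-sonnet" },
    { "name": "claude" }
  ],
  "costPriority": 0.3,
  "speedPriority": 0.8,
  "intelligencePriority": 0.5
}
```

Listing the more specific `claude-3-sonnet` hint before the broader `claude` family lets a client fall back gracefully when the exact model is unavailable.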
The client processes these preferences to select an appropriate model from its available options. For instance, if the client doesn't have access to Claude models but has Gemini, it might map the `sonnet` hint to `gemini-1.5-pro` based on similar capabilities.
Clients SHOULD return errors for common failure cases:
Example error:
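One plausible failure, a human reviewer declining the request, expressed as a standard JSON-RPC error (the error code shown is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -1,
    "message": "User rejected sampling request"
  }
}
```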