Connect. Optimize. Secure.
Announcing StackOne Defender: a leading open-source prompt injection guard for your agent.
Production-ready Mistral AI MCP server with 58 extensible actions — plus built-in authentication, security, and optimized execution.
Coverage
Create, read, update, and delete across Mistral AI — and extend your agent's capabilities with custom actions.
Authentication
Per-user OAuth in one call. Your Mistral AI MCP server gets session-scoped tokens with zero credentials stored on your infra.
Agent Auth →

Security
Every Mistral AI tool response scanned for prompt injection in milliseconds — 88.7% accuracy, all running on CPU.
Prompt Injection Defense →

Performance
Free up to 96% of your agent's context window to enhance reasoning and reduce cost, on every Mistral AI call.
Tools Discovery →

A Mistral AI MCP server lets AI agents read and write Mistral AI data through the Model Context Protocol — Anthropic's open standard for connecting LLMs to external tools. StackOne's Mistral AI MCP server ships with 58 pre-built actions, fully extensible via the Connector Builder — plus managed authentication, prompt injection defense, and optimized agent context. Connect it from MCP clients like Claude Desktop, Cursor, and VS Code, or from agent frameworks like OpenAI Agents SDK, LangChain, and Vercel AI SDK.
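In code, a connection boils down to two values: the endpoint URL carrying your account ID, and a Basic Authorization header. A minimal Python sketch of assembling them is below; whether the Basic token is the base64 of `api_key:` (API key as username, empty password) is an assumption here, so check your StackOne dashboard for the exact credential format.

```python
import base64

# Endpoint as shown in the client config on this page.
STACKONE_MCP_URL = "https://api.stackone.com/mcp"


def stackone_mcp_connection(api_key: str, account_id: str) -> tuple[str, dict]:
    """Build the MCP endpoint URL and Authorization header for one linked account.

    Assumes the Basic token is base64("<api_key>:") — verify against your
    StackOne credentials.
    """
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    url = f"{STACKONE_MCP_URL}?x-account-id={account_id}"
    return url, {"Authorization": f"Basic {token}"}
```

The resulting URL and headers can then be handed to any MCP client that supports HTTP transports, such as the official `mcp` Python SDK.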
Every action from Mistral AI's API, ready for your agent. Create, read, update, and delete — scoped to exactly what you need.
List all available Mistral models accessible to the organization.
Retrieve metadata and capabilities for a specific Mistral model by ID.
Update the name or description of a fine-tuned model.
Permanently delete a fine-tuned model from the organization.
List all files uploaded to the Mistral AI organization.
Retrieve metadata for a specific uploaded file by its ID.
Download the raw content of an uploaded file by its ID.
Permanently delete an uploaded file from the Mistral AI organization.
Create an asynchronous batch inference job to process large numbers of requests at reduced cost.
List all asynchronous batch processing jobs for the organization.
Retrieve the details and current status of a specific batch job.
Create a new Mistral agent with custom instructions, tools, and a backing model.
List all agent entities in the Mistral AI organization.
Retrieve the configuration and metadata for a specific Mistral agent.
Modify an agent's configuration and create a new version with the updated settings.
Permanently delete a Mistral agent and all its versions.
List all versions of a specific Mistral agent.
Retrieve the configuration for a specific version of a Mistral agent.
Create a new persistent conversation and run an initial completion.
List all stored conversations in the Mistral AI organization.
Retrieve metadata for a specific stored conversation.
Permanently delete a stored conversation and its full message history.
Create a new document library for indexing documents for RAG retrieval.
Retrieve metadata for a specific document library.
Update the name or description of an existing document library.
Permanently delete a document library and all its indexed documents.
List all documents in a specific Mistral document library.
Retrieve metadata for a specific document in a Mistral library.
Remove a document from a Mistral document library.
Generate a chat response from a Mistral model given a conversation history.
Generate vector embeddings for one or more text inputs using a Mistral embedding model.
Generate code completions using the fill-in-the-middle (FIM) pattern for code infill.
Extract text and structured data from documents and images using Mistral's OCR model.
Classify text input for policy violations and harmful content using a Mistral moderation model.
Classify a full chat conversation for policy violations using a Mistral moderation model.
Classify text using a fine-tuned Mistral classifier model.
Classify a chat conversation using a fine-tuned Mistral classifier model.
Create or update a named version alias for a Mistral agent.
Append new messages to an existing conversation and run a completion.
Grant or update access to a Mistral library for a user, workspace, or organization.
Generate a temporary signed URL to access a specific file without exposing API credentials.
List all version aliases for a specific Mistral agent.
Retrieve the full entry history of a conversation including tool calls and results.
Retrieve all user and assistant messages in a conversation.
List all document libraries available in the Mistral AI organization.
Retrieve the extracted text content of a specific document in a Mistral library.
Retrieve the processing status of a specific document in a Mistral library.
Retrieve a temporary signed URL to access a specific document in a Mistral library.
Retrieve a signed URL to access the OCR-extracted text of a document in a Mistral library.
List all entities that have been granted access to a Mistral library.
Switch the active version of a Mistral agent to a specific version number.
Remove a named version alias from a Mistral agent.
Revoke access to a Mistral library for a user, workspace, or organization.
Archive a fine-tuned model to disable inference on it without deleting it.
Restore an archived fine-tuned model to active status for inference.
Request cancellation of a running batch processing job.
Restart a conversation from a specific entry point, branching the conversation history.
Trigger reprocessing of a document in a Mistral library.
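With 58 actions available, loading every tool schema into the model's context on each call is wasteful, which is what tool discovery avoids. A minimal sketch of the idea, using a hypothetical `rank_tools` helper with naive word overlap (production systems use ranked retrieval such as BM25 or TF-IDF instead):

```python
def rank_tools(query: str, tools: list[dict], top_k: int = 3) -> list[str]:
    """Return names of the top_k tools whose name/description share the most
    words with the query. Naive bag-of-words overlap, for illustration only."""
    query_words = set(query.lower().split())
    scored = []
    for tool in tools:
        # Treat underscores in tool names as word separators before matching.
        text = (tool["name"].replace("_", " ") + " " + tool["description"]).lower()
        overlap = len(query_words & set(text.split()))
        scored.append((overlap, tool["name"]))
    scored.sort(key=lambda pair: -pair[0])
    return [name for _, name in scored[:top_k]]
```

Only the selected tools' schemas are then exposed to the agent, which is how most of the context window stays free for reasoning.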
One endpoint. Any framework. Your agent is talking to Mistral AI in under 10 lines of code.
MCP Clients
{
"mcpServers": {
"stackone": {
"command": "npx",
"args": [
"-y",
"mcp-remote@latest",
"https://api.stackone.com/mcp?x-account-id=<account_id>",
"--header",
"Authorization: Basic <YOUR_BASE64_TOKEN>"
]
}
}
}

Anthropic's code_execution processes data already in context. Custom MCP code mode keeps raw tool responses in a sandbox. 14K tokens vs 500. (11 min read)
Benchmarking BM25, TF-IDF, and hybrid search for MCP tool discovery across 916 tools. The 80/20 TF-IDF/BM25 hybrid hits 21% Top-1 accuracy in under 1ms. (10 min read)
MCP tools that read emails, CRM records, and tickets are indirect prompt injection vectors. Here's how we built a two-tier defense that scans tool results in ~11ms. (12 min read)
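The first tier of such a defense can be as simple as a cheap pattern prefilter that flags obviously suspicious tool results before a heavier classifier runs. An illustrative sketch (these patterns and the `flag_tool_result` helper are invented for this example, not StackOne Defender's actual model):

```python
import re

# Hypothetical tier-one patterns for common injection phrasing found in
# tool results (emails, tickets, CRM notes). Illustrative, not exhaustive.
INJECTION_PATTERNS = [
    r"ignore (\w+ ){0,3}instructions",
    r"disregard .{0,30}(system prompt|instructions)",
    r"you are now",
    r"reveal .{0,30}(secret|api key|password)",
]


def flag_tool_result(text: str) -> bool:
    """Return True if a tool result matches any known injection pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A regex pass like this runs in microseconds; anything it flags (plus a sampled remainder) can be escalated to an ML classifier, keeping the overall scan within a low-millisecond budget.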
All the tools you need to build and scale AI agent integrations, with best-in-class connectivity, execution, and security.