Skip to main content

Announcing StackOne Defender: leading open-source prompt injection guard for your agent Read More

Veremark MCP Server
for AI Agents

Production-ready Veremark MCP server with 22 extensible actions — plus built-in authentication, security, and optimized execution.

Veremark logo
Veremark MCP Server
Built by StackOne StackOne

Coverage

22 Agent Actions

Create, read, update, and delete across Veremark — and extend your agent's capabilities with custom actions.

Authentication

Agent Tool Authentication

Per-user OAuth in one call. Your Veremark MCP server gets session-scoped tokens with zero credentials stored on your infra.

Agent Auth →

Security

Agent Protection

Every Veremark tool response scanned for prompt injection in milliseconds — 88.7% accuracy, all running on CPU.

Prompt Injection Defense →

Performance

Max Agent Context. Min Cost.

Free up to 96% of your agent's context window to enhance reasoning and reduce cost, on every Veremark call.

Tools Discovery →

What is the Veremark MCP Server?

A Veremark MCP server lets AI agents read and write Veremark data through the Model Context Protocol — Anthropic's open standard for connecting LLMs to external tools. StackOne's Veremark MCP server ships with 22 pre-built actions, fully extensible via the Connector Builder — plus managed authentication, prompt injection defense, and optimized agent context. Connect it from MCP clients like Claude Desktop, Cursor, and VS Code, or from agent frameworks like OpenAI Agents SDK, LangChain, and Vercel AI SDK.

All Veremark MCP Tools and Actions

Every action from Veremark's API, ready for your agent. Create, read, update, and delete — scoped to exactly what you need.

Candidates

  • List Candidates

    Retrieve a list of all candidates in your Veremark account.

  • Delete Candidate

    Soft delete a specific candidate from your Veremark account by their GUID.

Checks

  • List Checks

    Retrieve a list of all available background checks, optionally filtered by country.

Criterias

  • Create Criteria

    Create a new background check criteria template with specified checks and country.

  • List Criteria

    Retrieve a list of all background check criteria templates in your Veremark account.

  • Get Criteria

    Retrieve details of a specific criteria template by its GUID.

  • Delete Criteria

    Delete a specific criteria template by its GUID.

Documents

  • Upload Document

    Upload a document to Veremark as multipart/form-data for use in background check verification.

  • Get Document

    Download the binary content of a specific document by its GUID.

Requests

  • Create Request

    Create a new background check request for a candidate using specified criteria.

  • List Requests

    Retrieve a list of all background check requests, optionally filtered by status change date.

  • Get Request

    Retrieve details of a specific background check request by its GUID.

  • Delete Request

    Delete a background check request by its GUID. Only requests in expired, requested, or draft status can be deleted.

Check Status

  • Update Check Status

    Update the status of a specific check within a background check request.

Magic Links

  • Create Magic Links

    Create time-limited magic links for accessing the report of a specific background check request.

Force Check Status

  • Force Check Status

    Force a specific check status transition within a background check request using the restricted endpoint.

Full Reports

  • Get Full Report

    Download the full PDF report for a background check request.

Check Reports

  • Get Check Report

    Download the PDF report for a specific individual check within a background check request.

Users

  • Create User

    Create a new user in your Veremark account with a specified role.

  • List Users

    Retrieve a list of all external users in your Veremark account.

  • Get User

    Retrieve details of a specific user in your Veremark account by their GUID.

  • Delete User

    Delete a specific user from your Veremark account by their GUID.

Set Up Your Veremark MCP Server in Minutes

One endpoint. Any framework. Your agent is talking to Veremark in under 10 lines of code.

MCP Clients

Agent Frameworks

Claude Desktop
{
  "mcpServers": {
    "stackone": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "https://api.stackone.com/mcp?x-account-id=<account_id>",
        "--header",
        "Authorization: Basic <YOUR_BASE64_TOKEN>"
      ]
    }
  }
}

More Recruiting MCP Servers

JobAdder

246+ actions

Vincere

206+ actions

SmartRecruiters

164+ actions

Ashby

137+ actions

Factorial

127+ actions

HiBob

123+ actions

Veremark MCP Server FAQ

Veremark MCP server vs direct API integration — what's the difference?
A Veremark MCP server and direct API integration serve different use cases. Direct API integration is for software-to-software — backend code calling Veremark. A Veremark MCP server is for AI agents — MCP clients like Claude and Cursor, plus framework agents built with OpenAI, LangChain, or Vercel AI — discovering and calling Veremark at runtime. StackOne provides both.
How does Veremark authentication work for AI agents?
Veremark authentication for AI agents works through a StackOne Connect Session. Create one via the dashboard or the SDK — you get an auth link and ready-to-paste config for Claude Desktop, Cursor, and other MCP clients. Your user authenticates their own Veremark account; StackOne handles token exchange, storage, and refresh. Credentials never reach the LLM, and each user is isolated via origin_owner_id.
Are Veremark MCP tools vulnerable to prompt injection?
Yes — Veremark MCP tools can be vulnerable to indirect prompt injection. Any tool that reads user-written content — documents, messages, tickets, records, or free-text fields — is a potential vector. StackOne Defender scans every tool response before it enters the agent's context — regex patterns in ~1ms, then a MiniLM classifier in ~4ms. 88.7% accuracy, CPU-only.
What is the context bloat of a Veremark agent and how do I avoid it?
Context bloat happens when Veremark tool schemas and API responses eat your Veremark agent's memory, preventing it from reasoning effectively. A single Veremark query can return a massive JSON response, and connecting multiple tools compounds the problem. Tools Discovery and Code Mode reduce context bloat — loading only relevant tools per query and keeping raw responses out of the agent's context.
Can I limit which actions my Veremark agent can access?
Yes — you can limit which actions your Veremark agent can access directly from the StackOne dashboard. Toggle actions on or off, or restrict them to specific accounts, with no code changes to your agent. Session tokens can be scoped to exact actions so if one leaks, exposure stays contained.
Can I create custom agent actions for my Veremark MCP server?
Yes — you can create custom agent actions for your Veremark MCP server using Connector Builder. It's an integration agent your coding assistant (Claude Code, Cursor, or Copilot) can invoke to research Veremark's API, generate production-ready connector YAML, test against the live API, and validate before you ship.
When should I NOT use a Veremark MCP server?
Skip a Veremark MCP server if your integration is purely software-to-software — direct Veremark API integration is simpler when no AI agent is involved. For deterministic, compliance-critical operations (financial transactions, regulatory reporting), direct API gives you predictable behavior without agent-driven decision-making. MCP shines when AI agents need to dynamically discover and call Veremark actions at runtime.
What AI frameworks and AI clients does the StackOne Veremark MCP server support?
The StackOne Veremark MCP server supports both. MCP clients (paste-and-go apps): Claude Desktop, Claude Code, Cursor, VS Code, Goose. Agent frameworks (code SDKs you build with): OpenAI Agents SDK, Anthropic, Vercel AI, Google ADK, CrewAI, Pydantic AI, LangChain, LangGraph, Azure AI Foundry.

Put your AI agents to work

All the tools you need to build and scale AI agent integrations, with best-in-class connectivity, execution, and security.