Having come from the data & AI world, I had very little experience in API development before joining StackOne. I quickly learned what we're building: a unified API that lets developers connect to many different platforms through just one integration.
At StackOne, we've built this platform across multiple categories, including HR, Recruitment, and Learning Management systems. Our normalisation layers transform varied provider data models into a consistent schema, handling authentication complexities, rate limiting, and data mapping behind the scenes. For developers, this means integrating once with our API instead of building and maintaining separate integrations for each service—reducing integration time from months to days and letting them write code once that works seamlessly across multiple providers.
We're building toward full automation of new provider integrations, aiming to scale coverage exponentially rather than linearly. To that end, I'm exploring how AI can automate repetitive integration work while maintaining quality, using Mastra to control exactly which steps happen when.
We have a capable team of integration engineers at StackOne, but each new integration still takes a lot of manual work. Our team spends a significant portion of time on basic tasks like reading API docs, mapping endpoints, and setting up authentication when they could be solving harder problems.
While our current process is effective, we could save a lot of time by automating these repetitive parts. As we add more providers, we need a better system that lets our team focus on the tough integration challenges instead of the routine work.
Mastra is a TypeScript framework for building agents. I’ve been using it to design deterministic agentic workflows—structured, step-by-step processes that leverage AI while ensuring consistent, predictable results. Unlike traditional AI implementations that can produce varying outputs for the same inputs, Mastra emphasises reproducibility.
The core components of our Mastra implementation are the workflow itself, the discrete steps it runs, and the tools each step is allowed to call.
This first workflow is broken down into discrete steps, each with a specific responsibility in the integration process:
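To make the structure concrete, here is a schematic sketch of that step pipeline in plain TypeScript. This is an illustration of the pattern, not Mastra's actual API or our production code—the step names, URLs, and return shapes are hypothetical.

```typescript
// Each step is a pure function of the previous step's output, so the same
// input always yields the same sequence of operations.
interface StepResult<T> {
  data: T;
}

// Step 1: gather the provider's endpoints (in reality, parsed from API docs).
function fetchApiDocs(providerUrl: string): StepResult<string[]> {
  return { data: [`${providerUrl}/employees`, `${providerUrl}/jobs`] };
}

// Step 2: map raw provider endpoints onto a unified schema.
function mapEndpoints(endpoints: string[]): StepResult<Record<string, string>> {
  const mapping: Record<string, string> = {};
  for (const ep of endpoints) {
    const resource = ep.split("/").pop() ?? ep;
    mapping[resource] = ep;
  }
  return { data: mapping };
}

// Step 3: emit a configuration object from the mapping.
function generateConfig(
  mapping: Record<string, string>
): StepResult<{ auth: string; endpoints: Record<string, string> }> {
  return { data: { auth: "oauth2", endpoints: mapping } };
}

// Explicit data passing: each step's output feeds the next step's input.
const docs = fetchApiDocs("https://api.example-provider.com");
const mapping = mapEndpoints(docs.data);
const config = generateConfig(mapping.data);
```

The point is the wiring, not the bodies: because data flows only through these explicit hand-offs, the order of operations is fixed and inspectable.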
This approach shows a simplified version of our workflow. Each step connects through explicit data passing, ensuring deterministic results from start to finish.
As a mathematician at heart, I have to admit that calling LLM outputs 'deterministic' makes me cringe slightly. LLMs are inherently probabilistic systems. However, for the purposes of this blog, we're focusing on determinism in workflow structure and tool selection rather than in the specific content generated. While the exact words an LLM produces may vary, the sequence of operations, API calls, and decision points in our workflow can follow a predictable, reproducible pattern.
In API integration, determinism ensures identical outputs from identical inputs—critical for reliable testing, reproducible debugging, and stable production behaviour.
Non-deterministic approaches often produce configurations that work in testing but fail in production due to subtle variations. Mastra reduces (though does not yet fully eliminate) these inconsistencies, providing a foundation for scalable integration development.
I previously explored exposing MCP tools to a chatbot (Claude Desktop or Cursor) as a solution, but despite their strength in task and infrastructure orchestration, they weren’t enough for our API integration generation needs.
In my opinion, Mastra's key strength is deterministic tool orchestration—I can precisely control which tools execute at each step in the workflow. This gives me the consistent outputs and reproducible results that I found impossible with MCP alone. Yes, there are downsides: higher implementation effort, a steeper learning curve, and more complex maintenance. But for this use case, this tradeoff is worthwhile. My current solution combines MCP's powerful toolset with Mastra's structured execution control, giving me the predictable integration generation I need without sacrificing capability.
Our workflow is still in mid-development—this is our first experience with Mastra. So far, we've built a workflow that generates configurations for our API integration framework with both auth-only and full configuration modes.
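To illustrate the two modes, a hypothetical configuration shape might look like the sketch below. The field names and types are assumptions for illustration, not StackOne's real configuration format.

```typescript
// Hypothetical configuration shapes; not StackOne's actual schema.
type AuthConfig = {
  type: "oauth2" | "api_key" | "basic";
  tokenUrl?: string;
};

type FullConfig = {
  auth: AuthConfig;
  endpoints: Record<string, { path: string; method: "GET" | "POST" }>;
};

type GenerationMode = "auth-only" | "full";

// Auth-only mode stops after the authentication section; full mode also
// generates the endpoint mappings.
function generate(mode: GenerationMode): AuthConfig | FullConfig {
  const auth: AuthConfig = {
    type: "oauth2",
    tokenUrl: "https://provider.example/token",
  };
  if (mode === "auth-only") return auth;
  return {
    auth,
    endpoints: { listEmployees: { path: "/employees", method: "GET" } },
  };
}
```

Splitting the modes this way lets the cheaper auth-only pass be validated on its own before the full configuration is attempted.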
The most exciting aspect of our implementation is the iterative validation approach. Rather than accepting the first generated configuration, our system provides feedback-driven improvement through specific error messages, makes persistent attempts until all validations pass, and uses detailed error tracking to inform subsequent iterations. We plan to add more specialized validation steps as development progresses, further strengthening this feedback loop. This validation cycle exemplifies the deterministic nature of our approach—rather than accepting any generated configuration, we ensure it meets all requirements through systematic validation and refinement.
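The validation cycle described above can be sketched as a feedback-driven retry loop. The `generateWithValidation` helper and its signature are hypothetical—this is the shape of the technique, not a Mastra API or our implementation.

```typescript
// A validator returns null on success, or an error message on failure.
type Validator = (config: Record<string, unknown>) => string | null;

// Regenerate until every validator passes, feeding the previous attempt's
// errors back into generation so each iteration can improve on the last.
function generateWithValidation(
  generate: (previousErrors: string[]) => Record<string, unknown>,
  validators: Validator[],
  maxAttempts = 5
): Record<string, unknown> {
  let errors: string[] = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const candidate = generate(errors);
    errors = validators
      .map((v) => v(candidate))
      .filter((e): e is string => e !== null);
    if (errors.length === 0) return candidate; // all validations passed
  }
  throw new Error(`No valid config after ${maxAttempts} attempts: ${errors.join("; ")}`);
}

// Usage sketch: the first attempt omits auth, fails validation, and the
// error message steers the second attempt.
const validators: Validator[] = [
  (c) => (c.auth ? null : "missing auth section"),
];

const validated = generateWithValidation(
  (errs) => (errs.length === 0 ? {} : { auth: { type: "oauth2" } }),
  validators
);
```

Keeping the loop itself deterministic—fixed validators, fixed retry budget, explicit error passing—means the only variation is inside the generation call, which is exactly where we want the LLM's flexibility confined.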
After exploring MCP for our chatbot tool integration in my previous blog post, I've been building an alternative approach with Mastra for our API integration workflows. While still in early development, this new direction addresses the core challenge we faced—controlling precisely which tools are executed when.
As I continue building the tool, I'm excited to explore additional use cases beyond basic configuration generation. I'll share concrete results, implementation details, and how we strike the right balance between deterministic workflows and the flexibility to adapt to new providers.
This experiment with Mastra represents a promising alternative to how we've previously handled AI-driven tool orchestration—offering the control and predictability we need for scaling our unified API infrastructure.