Perstack

Agents Made Simple and Reliable.

An Agentic AI Platform. Built for Developers.

Runtime

Open-source, event-sourced runtime for long-running agents. Resume, replay, or diff executions across model changes.

Studio

A development environment for Expert creation. Write agents in natural language, test in real time, deploy with confidence.

Gallery

Community-driven collection of ready-to-use Experts. Every agent is audited, every execution is sandboxed.

What is Perstack?

Architecture for Agentic AI.

Agentic systems are tangled: prompts live in application code and can't be tested without deploying. Behavior changes require code changes, PRs, and deploys. Domain experts can't touch the prompts. When something breaks, you're reading raw logs.

Open-Source Runtime

Open-source, event-sourced runtime for long-running agents. Resume, replay, or diff executions across model changes.

Toolkit for Developers

Run locally with production-identical isolation. Inspect every tool call, delegation, and checkpoint in detail.

Expert Definitions

Define Experts in natural language. Descriptions, instructions, delegates, skills — all in a single TOML file. No SDK, no glue code.
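A single-file definition like the one described above might look something like this. This is only an illustrative sketch: the table and field names (`expert`, `description`, `instruction`, `skills`, `delegates`) are assumptions for the example, not Perstack's documented schema.

```toml
# Hypothetical perstack.toml sketch — names are illustrative,
# not Perstack's actual schema.
[expert.release-notes]
description = "Summarizes merged pull requests into release notes."
instruction = """
Read the merged pull requests for the given milestone and write
concise, user-facing release notes grouped by feature area.
"""
skills = ["github"]          # MCP servers this Expert may use as tools
delegates = ["copy-editor"]  # other Experts it can hand work to
```

Everything a domain expert needs to adjust lives in prose fields, so changing behavior means editing text, not shipping code.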

Fully Reproducible Runs

Every run is a replayable event stream with step-level checkpoints. Pin versions, replay anytime, diff to find drift.

Sandboxed Runtime

Each Expert runs in its own isolated context. Workspace boundaries, environment sandboxing, tool whitelisting — by default.

Cross-Job Analytics

Success rates, token usage, errors, and tool utilization across jobs. CLI and Studio dashboards.

Agentic AI Development

Define. Experts in your words.

Expert definitions in perstack.toml are written by domain experts using natural language. Developers focus on integration, not prompt engineering.

Natural-language definitions

Define Experts with instruction, skills, and delegates. No classes, no boilerplate, no framework to learn.

MCP-native skills

Connect to any MCP server as a skill. GitHub, Slack, databases — your Expert uses real tools, not wrappers.

Multi-provider

Anthropic, OpenAI, Google, DeepSeek, Ollama, Azure, Bedrock, Vertex. Switch providers with one config change.
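As a sketch of what "one config change" could look like, assuming a `provider` table with `name` and `model` keys (illustrative names, not Perstack's documented schema):

```toml
# Illustrative provider config — key names are assumptions.
[provider]
name  = "anthropic"
model = "claude-sonnet-4"

# Switching to a local model would be an edit to the same two keys:
# name  = "ollama"
# model = "llama3"
```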

Describe what you need. We handle the rest.

Perstack generates Expert definitions from your description, tests them against real scenarios, iterates until behavior stabilizes, and reports what they can do.

Integration

Run. Same Behavior, Everywhere.

Open-source runtime. Event-derived execution. Deterministic checkpoints.

// observable

24 activity types, real-time

Every tool call, every delegation, every LLM interaction — streamed as structured events. Debug anything.

// isolated

Ephemeral VM per job

Each Expert runs in its own isolated context — workspace boundaries, environment sandboxing, and tool whitelisting. No shared state between runs.

// reproducible

Lockfile for production

Pin skill versions, provider configs, and Expert definitions. Event-derived execution and step-level checkpoints maintain reproducible behavior.
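To make the pinning concrete, a lockfile entry might resemble the following. This is a hypothetical sketch; the file layout, table names, and fields are assumptions, not Perstack's actual lockfile format.

```toml
# Hypothetical lockfile sketch — structure and fields are illustrative.
[skills.github]
version  = "1.4.2"            # exact skill version resolved at lock time
checksum = "sha256:<digest>"  # a real lockfile would pin a content digest

[provider]
name  = "anthropic"
model = "claude-sonnet-4"     # pinned so replays use the same model
```

Because the event stream records every step, a pinned run can be replayed later and diffed against a run on a newer model to surface drift.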

How Perstack Compares

| | Agent Frameworks (LangChain, CrewAI, Mastra) | Cloud Platforms (Bedrock, Vertex) | Perstack (Agent Runtime) |
|---|---|---|---|
| Agent definition | Code-based, full control over execution | Console wizards, managed configs | Natural-language definitions in TOML |
| Production runtime | Self-managed infrastructure | Managed infrastructure, integrated monitoring | Open-source, sandbox-first |
| Multi-agent | Framework-specific orchestration | Provider's agent ecosystem | Native delegation between Experts |
| Security model | Application-level, self-implemented | Platform-managed permissions | Ephemeral VM per job, isolated by design |
| Versioning & sharing | Git + manual packaging | Platform-scoped sharing | Registry + lockfile pinning |
| Provider flexibility | Varies by framework | Provider's model ecosystem + select third-party | 8 providers, one config change |

Build your first Expert in 5 minutes.

Create. Execute. Integrate.