
Walkthrough

This walkthrough takes you from zero to production integration.

There are two ways to provide API keys:

1. Pass host environment variables with -e

Export the key on the host and forward it to the container:

export FIREWORKS_API_KEY=fw_...
docker run --rm -it \
  -e FIREWORKS_API_KEY \
  -v ./workspace:/workspace \
  perstack/perstack start my-expert "query" --provider fireworks

2. Store keys in a .env file in the workspace

Create a .env file in the workspace directory. Perstack loads .env and .env.local by default:

./workspace/.env
FIREWORKS_API_KEY=fw_...

docker run --rm -it \
  -v ./workspace:/workspace \
  perstack/perstack start my-expert "query"

You can also specify custom .env file paths with --env-path:

perstack start my-expert "query" --env-path .env.production

Generate an Expert definition interactively:

# Use `create-expert` to scaffold a micro-agent team named `ai-gaming`
docker run --pull always --rm -it \
  -e FIREWORKS_API_KEY \
  -v ./ai-gaming:/workspace \
  perstack/perstack start create-expert \
  --provider fireworks \
  --model accounts/fireworks/models/kimi-k2p5 \
  "Form a team named ai-gaming to build a Bun-based CLI indie game playable on Bash for AI."

create-expert is a built-in expert. It generates a perstack.toml that defines a team of micro-agents, runs them, evaluates the results, and iterates until the setup works. Each agent has a single responsibility and its own context window. Complex tasks are broken down and delegated to specialists.

The result is a perstack.toml ready to use:

[experts."ai-gaming"]
description = "Game development team lead"
instruction = "Coordinate the team to build a CLI dungeon crawler."
delegates = ["@ai-gaming/level-designer", "@ai-gaming/programmer", "@ai-gaming/tester"]

[experts."@ai-gaming/level-designer"]
description = "Designs dungeon layouts and game mechanics"
instruction = "Design engaging dungeon levels, enemy encounters, and progression systems."

[experts."@ai-gaming/programmer"]
description = "Implements the game in TypeScript"
instruction = "Write the game code using Bun, targeting terminal-based gameplay."

[experts."@ai-gaming/tester"]
description = "Tests the game and reports bugs"
instruction = "Play-test the game, find bugs, and verify fixes."

You can also write perstack.toml by hand; create-expert is a convenience, not a requirement.
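For instance, a minimal hand-written definition needs only the fields shown in the generated file above; the expert key and prompt text here are made up for illustration:

```toml
# Minimal hand-written perstack.toml: one expert, no delegates.
[experts."my-expert"]
description = "Answers questions about files in the workspace"
instruction = "Read the relevant files in the workspace and answer the user's question."
```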

# Let `ai-gaming` build a Wizardry-like dungeon crawler
docker run --pull always --rm -it \
  -e FIREWORKS_API_KEY \
  -v ./ai-gaming:/workspace \
  perstack/perstack start ai-gaming \
  --provider fireworks \
  --model accounts/fireworks/models/kimi-k2p5 \
  "Create a Wizardry-like dungeon crawler in a fixed 10-floor labyrinth with complex layouts, traps, fixed room encounters, and random battles. Include special-effect gear drops, leveling, and a skill tree for one playable character. Balance difficulty around build optimization. Death in the dungeon causes loss of one random equipped item."

perstack start opens a text-based UI for developing and testing Experts. You get real-time feedback and can iterate on definitions without deploying anything.

docker run --pull always --rm \
  -e FIREWORKS_API_KEY \
  -v ./ai-gaming:/workspace \
  perstack/perstack run ai-gaming \
  --provider fireworks \
  --model accounts/fireworks/models/kimi-k2p5 \
  "Create a Wizardry-like dungeon crawler in a fixed 10-floor labyrinth with complex layouts, traps, fixed room encounters, and random battles. Include special-effect gear drops, leveling, and a skill tree for one playable character. Balance difficulty around build optimization. Death in the dungeon causes loss of one random equipped item."

perstack run outputs JSON events to stdout — designed for automation and CI pipelines.
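For example, a CI script can treat that stream as newline-delimited JSON and fail the build when an error event appears. The event shape below (a `type` field) is an assumption for illustration only; check the event schema your Perstack version actually emits:

```typescript
// Sketch: parse `perstack run` stdout as newline-delimited JSON (assumed shape).
// In a real CI step you would read process.stdin or the captured command output.
const stdout = `{"type":"toolCall","tool":"writeFile"}
{"type":"finish","status":"ok"}`

const events = stdout
  .trim()
  .split("\n")
  .map((line) => JSON.parse(line) as { type: string })

// Fail the pipeline if any error event appeared.
const failed = events.some((e) => e.type === "error")
console.log(failed ? "job had errors" : "job completed")
```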

What Perstack does, by aspect:

State: All Experts share the workspace (./workspace), not conversation history.
Collaboration: The coordinator (ai-gaming) autonomously delegates to its Delegated Experts (@ai-gaming/level-designer, @ai-gaming/programmer, @ai-gaming/tester).
Observability: Every step is visible as a structured JSON event.
Isolation: Each job runs safely in a sandboxed environment, and each Expert has its own context window.

After running an Expert, inspect what happened:

docker run --rm -it \
  -v ./ai-gaming:/workspace \
  perstack/perstack log

By default, this shows a summary of the latest job — the Expert that ran, the steps it took, and any errors.

Key options for deeper inspection:

Option          Purpose
--errors        Show only error-related events
--tools         Show only tool call events
--step "5-10"   Filter by step range
--summary       Show summarized view
--json          Machine-readable output

This matters because debugging agents across model changes, requirement changes, and prompt iterations requires visibility into every decision the agent made. perstack log gives you that visibility without adding instrumentation code.

See CLI Reference for the full list of options.

docker run --rm -it \
  -v ./ai-gaming:/workspace \
  perstack/perstack install

This creates a perstack.lock file that caches tool schemas for all MCP skills. Without the lockfile, Perstack initializes MCP skills at runtime to discover their tool definitions — which can add 500ms–6s startup latency per skill.

Workflow:

  1. Develop without a lockfile — MCP skills are resolved dynamically
  2. Run perstack install before deploying — tool schemas are cached
  3. Deploy with perstack.lock — the runtime starts LLM inference immediately

When to re-run: after adding or modifying skills in perstack.toml, or after updating MCP server dependencies.

The lockfile is optional. If not present, skills are initialized at runtime as usual.

The CLI is for prototyping. For production, integrate Experts into your application via sandbox providers or runtime embedding.

Perstack’s isolation model maps naturally to container and serverless platforms:

  • Docker
  • AWS ECS
  • Google Cloud Run
  • Kubernetes
  • Cloudflare Workers

Each Expert runs in its own sandboxed environment. See Going to Production for the Docker setup pattern. Detailed guides for other providers are coming soon.

For tighter integration, embed the runtime directly in your TypeScript/JavaScript application:

import { run } from "@perstack/runtime"

const checkpoint = await run({
  setting: {
    providerConfig: { providerName: "fireworks" },
    expertKey: "my-expert",
    input: { text: "Start today's session" },
  },
})

You can also listen for events during execution:

import { run } from "@perstack/runtime"

const checkpoint = await run({
  setting: {
    providerConfig: { providerName: "fireworks" },
    expertKey: "my-expert",
    input: { text: "Start today's session" },
  },
  eventListener: (event) => {
    console.log(event.type, event)
  },
})
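If you want aggregate statistics rather than one log line per event, the listener can collect counts instead. The minimal event shape below is assumed for illustration; the runtime's real event objects carry more fields:

```typescript
// Hypothetical minimal event shape; the runtime's events are richer.
type PerstackEvent = { type: string; [key: string]: unknown }

// Collects a count per event type; pass `collector.listen` as `eventListener`.
function makeCollector() {
  const counts = new Map<string, number>()
  return {
    listen(event: PerstackEvent) {
      counts.set(event.type, (counts.get(event.type) ?? 0) + 1)
    },
    summary(): Record<string, number> {
      return Object.fromEntries(counts)
    },
  }
}

// Simulated events stand in for a real run here.
const collector = makeCollector()
for (const e of [{ type: "toolCall" }, { type: "toolCall" }, { type: "finish" }]) {
  collector.listen(e)
}
console.log(collector.summary()) // { toolCall: 2, finish: 1 }
```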

The CLI is for prototyping. The runtime API is for production. Both use the same perstack.toml.

Build more:

Understand the architecture:

Reference: