
# CLI Reference

```
perstack
├── start        Interactive TUI for developing and testing experts
├── run          Headless execution with JSON event output
├── log          View execution history and events
├── install      Pre-collect tool definitions for faster startup
├── expert       Manage experts on Perstack API
│   ├── list       List draft scopes
│   ├── create     Create a new draft scope
│   ├── delete     Delete a draft scope
│   ├── push       Push local expert definitions to a draft ref
│   ├── refs       List draft refs
│   ├── version    Assign a version to a draft ref
│   ├── versions   List published versions
│   ├── publish    Make an expert scope public
│   ├── unpublish  Make an expert scope private
│   └── yank       Deprecate a specific version
└── application  Manage applications on Perstack API
    └── list       List applications
```

## perstack start

Interactive workbench for developing and testing experts.

```sh
perstack start [expertKey] [query] [options]
```

Arguments:

| Argument | Required | Description |
| --- | --- | --- |
| `[expertKey]` | No | Expert key (prompts if not provided) |
| `[query]` | No | Input query (prompts if not provided) |

## perstack run

Headless execution for production and automation. Outputs JSON events to stdout.

```sh
perstack run <expertKey> <query> [options]
```

Arguments:

| Argument | Required | Description |
| --- | --- | --- |
| `<expertKey>` | Yes | Expert key (e.g., `my-expert`, `@org/my-expert`, `@org/expert@1.0.0`) |
| `<query>` | Yes | Input query |

`run`-only option:

| Option | Description |
| --- | --- |
| `--filter <types>` | Filter events by type (comma-separated, e.g., `completeRun,stopRunByError`) |
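Because `run` emits JSON events on stdout, a downstream script can post-process the stream instead of (or in addition to) using `--filter`. A minimal Python sketch, assuming one JSON event object per line (the exact stream framing is not specified in this reference) and the `type` field shown in the filter examples below:

```python
import json
import sys

# Terminal event types, as used in the --filter example.
WANTED = {"completeRun", "stopRunByError"}

def filter_events(lines, wanted=WANTED):
    """Yield parsed events whose "type" is in `wanted`.

    Assumes one JSON object per line; that framing is an assumption
    of this sketch, not confirmed by the reference.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") in wanted:
            yield event

if __name__ == "__main__":
    # e.g.  perstack run my-expert "query" | python consume_events.py
    for event in filter_events(sys.stdin):
        print(json.dumps(event))
```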

Both `start` and `run` accept the following options:

| Option | Default | Description |
| --- | --- | --- |
| `--provider <provider>` | `anthropic` | LLM provider |
| `--model <model>` | auto | Model name (auto-resolved from expert tier or provider's middle tier) |
| `--reasoning-budget <budget>` | - | Reasoning budget (`minimal`, `low`, `medium`, `high`, or a token count) |

Providers: `anthropic`, `google`, `openai`, `deepseek`, `ollama`, `azure-openai`, `amazon-bedrock`, `google-vertex`

| Option | Default | Description |
| --- | --- | --- |
| `--max-retries <n>` | `5` | Max retry attempts per generation |
| `--timeout <ms>` | `300000` | Timeout per generation (ms) |

| Option | Default | Description |
| --- | --- | --- |
| `--config <path>` | Auto-discover from cwd | Path or HTTPS URL to `perstack.toml` |
| `--env-path <path...>` | `.env`, `.env.local` | Environment file paths |
| Option | Description |
| --- | --- |
| `--job-id <id>` | Custom job ID (default: auto-generated) |
| `--continue` | Continue latest job with new run |
| `--continue-job <id>` | Continue specific job with new run |
| `--resume-from <id>` | Resume from specific checkpoint (requires `--continue-job`) |

```sh
# Continue the latest job from its latest checkpoint
--continue

# Continue a specific job from its latest checkpoint
--continue-job <jobId>

# Continue a specific job from a specific checkpoint
--continue-job <jobId> --resume-from <checkpointId>
```

`--resume-from` requires `--continue-job`. You can only resume from the coordinator expert's checkpoints.

| Option | Description |
| --- | --- |
| `-i, --interactive-tool-call-result` | Treat query as interactive tool call result |

Use with `--continue` to respond to interactive tool calls from the coordinator expert.

| Option | Description |
| --- | --- |
| `--verbose` | Enable verbose logging |

## perstack log

View execution history and events for debugging.

```sh
perstack log [options]
```

When called without options, shows a summary of the latest job (max 100 events).

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--job <jobId>` | - | Show events for a specific job |
| `--run <runId>` | - | Show events for a specific run |
| `--checkpoint <id>` | - | Show checkpoint details |
| `--step <step>` | - | Filter by step number (e.g., `5`, `>5`, `1-10`) |
| `--type <type>` | - | Filter by event type |
| `--errors` | - | Show only error-related events |
| `--tools` | - | Show only tool call events |
| `--delegations` | - | Show only delegation events |
| `--filter <expression>` | - | Simple filter expression |
| `--json` | - | Output as JSON |
| `--pretty` | - | Pretty-print JSON output |
| `--verbose` | - | Show full event details |
| `--take <n>` | `100` | Number of events to display (`0` for all) |
| `--offset <n>` | `0` | Number of events to skip |
| `--context <n>` | - | Include N events before/after matches |
| `--messages` | - | Show message history for a checkpoint |
| `--summary` | - | Show summarized view |

Event types:

`startRun`, `callTools`, `resolveToolResults`, `callDelegate`, `stopRunByError`, `retry`, `completeRun`, `continueToNextStep`

Filter expression syntax:

```sh
--filter '.type == "completeRun"'
--filter '.stepNumber > 5'
--filter '.toolCalls[].skillName == "base"'
```

Step range syntax:

```sh
--step 5       # Exact step
--step ">5"    # Greater than 5
--step ">=5"   # Greater than or equal to 5
--step "1-10"  # Range (inclusive)
```
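The documented step forms can be matched with a small predicate. An illustrative Python re-implementation of this syntax (not the CLI's own code):

```python
import re

def parse_step_filter(expr: str):
    """Build a predicate for the documented --step forms: "5", ">5", ">=5", "1-10"."""
    m = re.fullmatch(r"(\d+)-(\d+)", expr)
    if m:  # inclusive range, e.g. "1-10"
        lo, hi = int(m.group(1)), int(m.group(2))
        return lambda step: lo <= step <= hi
    m = re.fullmatch(r"(>=|>)?(\d+)", expr)
    if not m:
        raise ValueError(f"invalid step expression: {expr!r}")
    op, n = m.group(1), int(m.group(2))
    if op == ">":
        return lambda step: step > n
    if op == ">=":
        return lambda step: step >= n
    return lambda step: step == n  # exact match, e.g. "5"
```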

## perstack install

Pre-collect tool definitions to enable instant LLM inference.

```sh
perstack install [options]
```

By default, Perstack initializes MCP skills at runtime to discover their tool definitions. This can add 500ms-6s of startup latency per skill. `perstack install` solves this by:

1. Initializing all skills once and collecting their tool schemas
2. Caching the schemas in a `perstack.lock` file
3. Enabling the runtime to start LLM inference immediately using cached schemas
4. Deferring actual MCP connections until tools are called

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--config <path>` | Auto-discover from cwd | Path or HTTPS URL to `perstack.toml` |
| `--env-path <path...>` | `.env`, `.env.local` | Environment file paths |

The lockfile is optional. If not present, skills are initialized at runtime.
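The schema-caching idea behind the lockfile can be sketched as follows. This is a conceptual illustration only: the real `perstack.lock` format is not documented here, and the JSON layout and the `collect_schemas` stand-in below are assumptions.

```python
import json
from pathlib import Path

def collect_schemas():
    # Stand-in for initializing every MCP skill and reading its tool
    # definitions; in the real CLI this is the slow step (500ms-6s per skill).
    return {"base": [{"name": "exampleTool", "inputSchema": {"type": "object"}}]}

def install(lockfile: Path) -> None:
    # Initialize skills once and cache their tool schemas on disk.
    lockfile.write_text(json.dumps(collect_schemas(), indent=2))

def load_schemas(lockfile: Path):
    # The lockfile is optional: use the cache when present so inference
    # can start immediately, otherwise fall back to runtime initialization.
    if lockfile.exists():
        return json.loads(lockfile.read_text())
    return collect_schemas()
```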

## perstack expert

The `expert` command group manages experts on the Perstack API.

```sh
perstack expert <subcommand> [options]
```

Parent options (inherited by all subcommands):

| Option | Default | Description |
| --- | --- | --- |
| `--api-key <key>` | `PERSTACK_API_KEY` env var | API key |
| `--base-url <url>` | `https://api.perstack.ai` | API base URL |

### perstack expert list

List draft scopes.

```sh
perstack expert list [options]
```

| Option | Description |
| --- | --- |
| `--filter <name>` | Filter by name |
| `--take <n>` | Limit results |
| `--skip <n>` | Offset |

### perstack expert create

Create a new draft scope.

```sh
perstack expert create <scopeName> --app <id>
```

| Argument | Required | Description |
| --- | --- | --- |
| `<scopeName>` | Yes | Expert scope name |

| Option | Required | Description |
| --- | --- | --- |
| `--app <id>` | Yes | Application ID |

### perstack expert delete

Delete a draft scope.

```sh
perstack expert delete <draftScopeId>
```

| Argument | Required | Description |
| --- | --- | --- |
| `<draftScopeId>` | Yes | Draft scope ID |

### perstack expert push

Push local expert definitions to a draft ref.

```sh
perstack expert push <draftScopeId> [options]
```

| Argument | Required | Description |
| --- | --- | --- |
| `<draftScopeId>` | Yes | Draft scope ID |

| Option | Description |
| --- | --- |
| `--config <path>` | Path or HTTPS URL to `perstack.toml` |

Reads experts from `perstack.toml` and creates a new draft ref.

### perstack expert refs

List draft refs for a draft scope.

```sh
perstack expert refs <draftScopeId> [options]
```

| Argument | Required | Description |
| --- | --- | --- |
| `<draftScopeId>` | Yes | Draft scope ID |

| Option | Description |
| --- | --- |
| `--take <n>` | Limit results |
| `--skip <n>` | Offset |

### perstack expert version

Assign a semantic version to a draft ref.

```sh
perstack expert version <draftScopeId> <refId> <version> [options]
```

| Argument | Required | Description |
| --- | --- | --- |
| `<draftScopeId>` | Yes | Draft scope ID |
| `<refId>` | Yes | Draft ref ID |
| `<version>` | Yes | Semantic version (e.g., `1.0.0`) |

| Option | Description |
| --- | --- |
| `--tag <tag>` | Version tag (e.g., `latest`) |
| `--readme <path>` | Path to README file |

### perstack expert versions

List published versions for an expert scope. Does not require an API key for public experts.

```sh
perstack expert versions <scopeName>
```

| Argument | Required | Description |
| --- | --- | --- |
| `<scopeName>` | Yes | Expert scope name |

### perstack expert publish

Make an expert scope public.

```sh
perstack expert publish <scopeName>
```

| Argument | Required | Description |
| --- | --- | --- |
| `<scopeName>` | Yes | Expert scope name |

### perstack expert unpublish

Make an expert scope private.

```sh
perstack expert unpublish <scopeName>
```

| Argument | Required | Description |
| --- | --- | --- |
| `<scopeName>` | Yes | Expert scope name |

### perstack expert yank

Deprecate a specific expert version.

```sh
perstack expert yank <key>
```

| Argument | Required | Description |
| --- | --- | --- |
| `<key>` | Yes | Expert key with version (e.g., `my-expert@1.0.0`) |

## perstack application

The `application` command group manages applications on the Perstack API.

```sh
perstack application <subcommand> [options]
```

Parent options (inherited by all subcommands):

| Option | Default | Description |
| --- | --- | --- |
| `--api-key <key>` | `PERSTACK_API_KEY` env var | API key |
| `--base-url <url>` | `https://api.perstack.ai` | API base URL |

### perstack application list

List applications.

```sh
perstack application list
```
## Environment variables

| Variable | Description |
| --- | --- |
| `PERSTACK_API_KEY` | API key for `expert` and `application` commands |
| `PERSTACK_STORAGE_PATH` | Storage directory for job/run data (default: `./perstack`) |
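A script wrapping the CLI might resolve these variables the same way. A hedged Python sketch (only the variable names and the `./perstack` default come from this reference; the helper itself is illustrative):

```python
import os

def perstack_env():
    # PERSTACK_API_KEY has no default; expert/application commands can also
    # receive it via --api-key. PERSTACK_STORAGE_PATH defaults to ./perstack.
    return {
        "api_key": os.environ.get("PERSTACK_API_KEY"),
        "storage_path": os.environ.get("PERSTACK_STORAGE_PATH", "./perstack"),
    }
```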
## Examples

```sh
# Basic execution
perstack run my-expert "Review this code"

# Interactive TUI
perstack start

# With model options
perstack run my-expert "query" --provider google --model gemini-2.5-pro

# Continue a job with a follow-up
perstack run my-expert "initial query"
perstack run my-expert "follow-up" --continue

# Continue a specific job from a specific checkpoint
perstack run my-expert "retry" --continue-job job_abc --resume-from cp_xyz

# Respond to an interactive tool call
perstack run my-expert "user response" --continue -i

# Custom config and env
perstack run my-expert "query" --config ./production.toml --env-path .env.production

# Registry experts
perstack run @org/expert@1.0.0 "query"

# Generate lockfile
perstack install

# List applications
perstack application list

# Expert lifecycle
perstack expert create my-expert --app <applicationId>
perstack expert push <draftScopeId> --config ./perstack.toml
perstack expert version <draftScopeId> <refId> 1.0.0 --tag latest
perstack expert versions my-expert
perstack expert publish my-expert
perstack expert yank my-expert@1.0.0
perstack expert unpublish my-expert
perstack expert delete <draftScopeId>

# View execution logs
perstack log
perstack log --job abc123 --errors --context 5
perstack log --json --pretty
```