Build agents for orchagent. Use when creating, coding, or publishing any agent on the orchagent platform. Contains exact sandbox contracts, boilerplate code, environment details, orchagent.json field reference, and debugging patterns.
by orchagent
# orchagent Agent Builder
Complete builder reference for creating agents on orchagent. This covers the platform internals you need to write agent code that works correctly in orchagent's execution environment.
## When to Use This Skill
- Building a new agent for orchagent (any type)
- Writing Python or JavaScript code for a tool or agent-type agent
- Setting up orchagent.json manifest
- Debugging sandbox execution issues
- Building always-on services (Discord bots, webhooks, monitors)
- Creating orchestrator agents that call other agents
- Scheduling agents (cron jobs, webhooks)
- Configuring workspace secrets for agent runtime
> Looking to **use or run** agents instead of building them? Install the platform guide: `orch skill install orchagent-public/orchagent-guide`
> Full documentation: [docs.orchagent.io](https://docs.orchagent.io)
---
## Agent Types at a Glance
| Type | Engine | Runs Where | You Write | Use Case | LLM Provider Cost Tracking |
|------|--------|-----------|-----------|----------|---------------|
| **prompt** | direct_llm | Gateway (no sandbox) | prompt.md + schema.json | Single LLM call with templated prompt | Automatic |
| **tool** | code_runtime | Isolated sandbox | main.py or main.js (stdin/stdout) | Custom Python/JS code execution | Automatic* |
| **agent** | managed_loop | Isolated sandbox | prompt.md + optional code | LLM tool-use loop with built-in + custom tools | Automatic |
| **agent** + `runtime.command` | code_runtime | Isolated sandbox | main.py/main.js + deps | Custom code with own LLM calls, scheduled tasks | Automatic* |
| **skill** | None | N/A | SKILL.md | Knowledge attachment (no execution) | N/A |
*\*Tracks your underlying LLM provider costs (what Anthropic, OpenAI, or Google charge you — not orchagent platform fees). Requires using standard provider SDKs (`anthropic`, `openai`, `google-genai`) that read API keys from environment variables. The platform injects proxy tokens into `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, and `GEMINI_API_KEY` at runtime — standard SDKs pick these up automatically and route calls through the gateway for tracking. See Section 13.*
---
## 1. Tool Agent (type: "tool")
The most common type. Your code runs in an isolated sandbox.
### File Structure
**Python** — scaffold with `orch init --type tool`:
```
my-tool/
  orchagent.json      # Required: manifest
  main.py             # Required: entrypoint (reads stdin, writes stdout)
  requirements.txt    # Optional: pip dependencies
  Dockerfile          # Optional: custom environment
  schema.json         # Optional: input/output schemas
```
**JavaScript** — scaffold with `orch init --type tool --language javascript`:
```
my-tool/
  orchagent.json      # Required: manifest
  main.js             # Required: entrypoint (reads stdin, writes stdout)
  package.json        # Optional: npm dependencies
  Dockerfile          # Optional: custom environment
  schema.json         # Optional: input/output schemas
```
### orchagent.json
**Python:**
```json
{
  "name": "my-tool",
  "type": "tool",
  "description": "What this tool does",
  "entrypoint": "main.py",
  "supported_providers": ["anthropic"],
  "required_secrets": ["ANTHROPIC_API_KEY"],
  "timeout_seconds": 60,
  "tags": ["category"],
  "bundle": {
    "include": ["*.py", "requirements.txt"],
    "exclude": ["tests/", "__pycache__"]
  }
}
```
**JavaScript:**
```json
{
  "name": "my-tool",
  "type": "tool",
  "description": "What this tool does",
  "entrypoint": "main.js",
  "supported_providers": ["anthropic"],
  "required_secrets": ["ANTHROPIC_API_KEY"],
  "timeout_seconds": 60,
  "tags": ["category"],
  "bundle": {
    "include": ["*.js", "package.json", "package-lock.json"],
    "exclude": ["tests/", "node_modules/"]
  }
}
```
### The stdin/stdout Contract
Your code receives JSON on stdin and must print JSON to stdout. Nothing else.
**Python:**
```python
#!/usr/bin/env python3
import json
import sys

def main():
    # Read input from stdin
    input_data = json.load(sys.stdin)

    # Access uploaded files (if any)
    files = input_data.get("files", [])
    for f in files:
        path = f["path"]            # e.g. /tmp/uploads/0_invoice.pdf
        name = f["original_name"]   # e.g. invoice.pdf
        mime = f["content_type"]    # e.g. application/pdf
        size = f["size_bytes"]

    # Your logic here
    result = {"status": "ok", "output": "..."}

    # Write JSON to stdout (this is your response)
    print(json.dumps(result))

if __name__ == "__main__":
    main()
```
**JavaScript:**
```javascript
const fs = require('fs');

function main() {
  // Read input from stdin
  const raw = fs.readFileSync('/dev/stdin', 'utf8');
  const inputData = JSON.parse(raw);

  // Access uploaded files (if any)
  const files = inputData.files || [];
  for (const f of files) {
    const path = f.path;           // e.g. /tmp/uploads/0_invoice.pdf
    const name = f.original_name;  // e.g. invoice.pdf
    const mime = f.content_type;   // e.g. application/pdf
    const size = f.size_bytes;
  }

  // Your logic here
  const result = { status: 'ok', output: '...' };

  // Write JSON to stdout (this is your response)
  console.log(JSON.stringify(result));
}

main();
```
**Critical rules:**
- Print ONLY your JSON result to stdout. Any other prints break parsing.
- Use `sys.stderr` for debug logging (stderr is captured but not returned to caller).
- Exit code 0 = success. Non-zero = error returned to caller.
- If stdout is not valid JSON, it gets wrapped as a JSON object with a "result" key containing the raw output.
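The non-JSON wrapping rule can be sketched as follows (a minimal illustration of the behavior described above; `wrap_stdout` is an illustrative name, not a platform API):

```python
import json

def wrap_stdout(raw: str):
    # Valid JSON passes through unchanged; anything else is wrapped
    # under a "result" key, per the rule above.
    # (wrap_stdout is an illustrative name, not a platform function.)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {"result": raw}
```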
### Sandbox Environment
**Working directory:** `/home/user`
**File locations:**
- `/home/user/` — your extracted code bundle
- `/tmp/uploads/` — uploaded files (multipart/form-data), named `{i}_{safe_name}`
- `/home/user/orchagent/skills/` — skill files (if skills attached)
**Pre-installed:** Python 3.11+, Node.js 20+, pip, curl, unzip (versions depend on E2B base template)
**Setup sequence for Python tool agents (before your code runs):**
1. Bundle decoded from base64 and extracted from zip
2. `orchagent` auto-corrected to `orchagent-sdk` in requirements.txt
3. `pip install -q orchagent-sdk` (if agent has manifest dependencies)
4. `pip install -q -r requirements.txt` (if file exists)
5. `python3 main.py < input.json`
**Setup sequence for JavaScript tool agents (before your code runs):**
1. Bundle decoded from base64 and extracted from zip
2. `npm ci` (if `package-lock.json` exists) or `npm install` (if `package.json` exists)
3. `node main.js < input.json`
**Setup sequence for agent-type agents:**
1. LLM provider SDK installed (`anthropic`, `openai`, or `google-generativeai`)
2. `pip install -q orchagent-sdk` (if agent has manifest dependencies)
3. Managed loop starts with your prompt.md as the system prompt
**Timing for tool and code agents (all `code_runtime`):** Setup (pip install) consumes part of your timeout. The sandbox lifetime equals `timeout_seconds` with NO buffer. If you have heavy dependencies, use a Dockerfile to pre-install them.
**Timing for agent-type agents:** Sandbox lifetime = `timeout_seconds + 120s` (120s buffer for setup). Setup does NOT consume your execution timeout.
### Environment Variables Available
Always available:
```
ORCHAGENT_BILLING_ORG_ID # Who is being billed
ORCHAGENT_ROOT_RUN_ID # Top-level execution ID
ORCHAGENT_REQUEST_ID # This specific request ID
```
LLM keys (if user provided BYOK):
```
ANTHROPIC_API_KEY # If provider is anthropic
OPENAI_API_KEY # If provider is openai
GEMINI_API_KEY # If provider is gemini
LLM_MODEL # Model name (always set for agent-type; BYOK-only for tool-type)
```
Agent-type only (managed loop):
```
LLM_PROVIDER # "anthropic", "openai", or "gemini" (determines which SDK is used)
```
If agent has manifest dependencies (orchestrator) — **all auto-injected by the gateway, do not add to `required_secrets`**:
```
ORCHAGENT_SERVICE_KEY # Temp API key for calling other agents (auto-created per run)
ORCHAGENT_SDK_REQUIRED # "1" (orchagent-sdk is auto-installed)
ORCHAGENT_GATEWAY_URL # Gateway base URL
ORCHAGENT_CALL_CHAIN # Current call chain (cycle detection)
ORCHAGENT_DEADLINE_MS # Epoch ms deadline
ORCHAGENT_MAX_HOPS # Remaining hop count
```
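A dependency-aware agent can budget its work against the propagated deadline. A minimal sketch using `ORCHAGENT_DEADLINE_MS` from the listing above (`remaining_ms` is an illustrative helper, not part of the SDK):

```python
import os
import time

def remaining_ms() -> int:
    # Milliseconds left until ORCHAGENT_DEADLINE_MS (an epoch-ms deadline);
    # returns -1 when no deadline was propagated.
    deadline = int(os.environ.get("ORCHAGENT_DEADLINE_MS", "0"))
    if deadline == 0:
        return -1
    return max(0, deadline - int(time.time() * 1000))
```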
If skills are attached:
```
ORCHAGENT_SKILLS_DIR # Path to skill files (/home/user/orchagent/skills)
```
Custom workspace secrets (set via `orch secrets set NAME value` or dashboard Settings > Secrets):
```
YOUR_CUSTOM_SECRET # Injected if listed in required_secrets
```
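A missing secret usually means it was never set in the vault or never listed in `required_secrets`, so failing fast with a pointed message saves a debugging round-trip. A minimal sketch (`get_required_secret` is an illustrative helper, not part of the SDK):

```python
import os

def get_required_secret(name: str) -> str:
    # Fail fast with a clear message when a declared secret was not injected.
    # (get_required_secret is an illustrative helper, not part of the SDK.)
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"missing secret {name!r}: add it to required_secrets and set it in the workspace vault"
        )
    return value
```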
### Entrypoint Detection
If `entrypoint` is set in orchagent.json, that file is used. Otherwise defaults to `main.py`.
For JavaScript agents: the platform detects `.js` entrypoints and uses `node` instead of `python3`. Set `"entrypoint": "main.js"` explicitly, or the auto-detection order is: `main.py`, `app.py`, `agent.py`, `run.py`, `__main__.py`, `main.js`, `index.js`, `agent.js`.
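The auto-detection order above can be sketched as a first-match lookup (a sketch of the rule as documented, not the platform's actual code):

```python
def detect_entrypoint(bundle_files):
    # First match in the documented order wins.
    # (detect_entrypoint is a sketch, not the platform's implementation.)
    order = [
        "main.py", "app.py", "agent.py", "run.py", "__main__.py",
        "main.js", "index.js", "agent.js",
    ]
    for name in order:
        if name in bundle_files:
            return name
    return None
```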
---
## 2. Agent Type (type: "agent")
LLM-powered agent with a tool-use loop. The platform runs an LLM loop for you — you write the prompt and optionally define custom tools.
### File Structure
Scaffold with `orch init --type agent`.
```
my-agent/
  orchagent.json      # Required: manifest with loop/custom_tools config
  prompt.md           # Required: system prompt for the LLM
  schema.json         # Optional: output schema for submit_result
```
### orchagent.json
```json
{
  "name": "my-agent",
  "type": "agent",
  "description": "What this agent does",
  "supported_providers": ["anthropic"],
  "required_secrets": ["ANTHROPIC_API_KEY"],
  "default_models": {
    "anthropic": "claude-sonnet-4-5-20250929",
    "openai": "gpt-4o",
    "gemini": "gemini-2.5-pro"
  },
  "loop": {
    "max_turns": 25,
    "custom_tools": [
      {
        "name": "count_words",
        "description": "Count words in a file",
        "input_schema": {
          "type": "object",
          "properties": {
            "path": { "type": "string", "description": "File path to count" }
          },
          "required": ["path"]
        },
        "command": "wc -w {{path}}"
      }
    ]
  },
  "timeout_seconds": 120
}
```
### Built-in Tools (always available)
The LLM in the managed loop automatically gets these tools:
- **bash** — Run a shell command (120s timeout per command)
- **read_file** — Read a file's contents
- **write_file** — Create or overwrite a file (parent dirs auto-created)
- **list_files** — List directory contents (optional recursive flag)
- **submit_result** — Submit final result (schema from schema.json if provided)
### Custom Tools
Custom tools run shell commands when the LLM calls them. Template variables `{{key}}` in the command are substituted with parameter values. The tool input is also written to `/tmp/__tool_input.json` before the command runs. The tool's stdout becomes the result returned to the LLM.
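That `{{key}}` substitution can be sketched as a simple regex replace (a sketch only; `render_command` is an illustrative name, and leaving unknown keys untouched is an assumption, not documented behavior):

```python
import re

def render_command(command: str, tool_input: dict) -> str:
    # Substitute {{key}} placeholders with parameter values, mirroring the
    # behavior described above. (Sketch; unknown keys are left as-is here,
    # which is an assumption about the platform's handling.)
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(tool_input.get(m.group(1), m.group(0))),
        command,
    )
```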
**Two patterns for custom tools:**
**1. Inline shell commands** — for simple operations using template variable substitution:
```json
{
  "name": "count_words",
  "description": "Count words in a file",
  "input_schema": {
    "type": "object",
    "properties": {
      "path": { "type": "string", "description": "File path" }
    },
    "required": ["path"]
  },
  "command": "wc -w {{path}}"
}
```
For slightly more complex logic, read from `/tmp/__tool_input.json`:
```json
{
  "name": "extract_urls",
  "description": "Extract all URLs from a file",
  "input_schema": {
    "type": "object",
    "properties": {
      "path": { "type": "string", "description": "File to scan" }
    },
    "required": ["path"]
  },
  "command": "grep -oE 'https?://[^ ]+' {{path}} | sort -u | python3 -c \"import sys,json; print(json.dumps(sys.stdin.read().splitlines()))\""
}
```
**2. Calling other agents** — for complex logic, create a separate tool agent and call it via `orch_call.py` (see next section).
**Note:** Agent-type agents use a managed loop — no code bundle is uploaded. The LLM has **bash**, **read_file**, **write_file**, and **list_files** built-in for general-purpose tasks. However, if the agent declares `manifest.dependencies` and runs in **strict mode** (the default for new publishes), **bash is removed** and the agent must use its custom tools (dependencies). In flexible mode, all built-in tools remain available. Custom tools give the LLM a named, structured interface for specific operations.
### Calling Other Agents as Custom Tools
To call another orchagent agent from within the managed loop, use the `orch_call.py` helper (injected automatically when `manifest.dependencies` is declared):
```json
{
  "name": "scan_secrets",
  "description": "Scan code for leaked secrets",
  "input_schema": {
    "type": "object",
    "properties": {
      "url": { "type": "string" }
    },
    "required": ["url"]
  },
  "command": "python3 /home/user/helpers/orch_call.py org/leak-finder@v1"
}
```
The helper reads `/tmp/__tool_input.json`, calls the target agent via SDK, and prints the result.
**Requirements for agent-to-agent calls:**
- Declare dependencies in `manifest` (see Orchestration section)
- `orchagent-sdk` is auto-installed when dependencies are declared
### System Prompt Structure
The platform injects a platform context block BEFORE your prompt.md content:
```
[PLATFORM CONTEXT — auto-injected by orchagent]
## Environment
You are running inside an isolated sandbox. Working directory: /home/user
Uploaded files (if any): /tmp/uploads/
## Tools
- **bash**: Run shell commands (120s timeout per command)
- **read_file**: Read a file's contents
- **write_file**: Create or overwrite a file
- **list_files**: List directory contents
- **{your_custom_tool}**: {description}
## Skills
Reference material is available in /home/user/orchagent/skills/:
- {skill_name} -- {skill_description}
## Submitting Results
When done, call **submit_result** with output matching this schema:
{output_schema}
[END PLATFORM CONTEXT]
---
{YOUR prompt.md CONTENT HERE}
```
### LLM Provider Details
- **anthropic** — 16384 max tokens, prompt caching (ephemeral)
- **openai** — 16384 max tokens
- **gemini** — 16384 max tokens
**Important:** Always set `default_models` in orchagent.json to control which model runs. If omitted, the gateway picks its own defaults which may differ from what you expect.
Max turns: clamped to `min(max(1, configured), 50)`.
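In other words, any configured value lands between 1 and 50:

```python
def effective_max_turns(configured: int) -> int:
    # The clamp rule stated above: at least 1 turn, at most 50.
    return min(max(1, configured), 50)
```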
---
## 3. Code Agent (type: "agent" + runtime.command)
Custom code that runs in a sandbox — you control everything. Unlike tool agents (stdin/stdout contract) or managed loop agents (platform-controlled LLM loop), code agents run your entrypoint directly. Use this for scheduled tasks, data pipelines, or any agent that makes its own API calls.
### File Structure
**Python:**
```
my-code-agent/
  orchagent.json      # Required: manifest with runtime.command
  main.py             # Required: entrypoint
  requirements.txt    # Optional: pip dependencies
  Dockerfile          # Optional: custom environment
```
**JavaScript:**
```
my-code-agent/
  orchagent.json      # Required: manifest with runtime.command
  main.js             # Required: entrypoint
  package.json        # Optional: npm dependencies
  Dockerfile          # Optional: custom environment
```
### orchagent.json
**Python:**
```json
{
  "name": "weekly-summary",
  "type": "agent",
  "runtime": {
    "command": "python3 main.py"
  },
  "required_secrets": [
    "ANTHROPIC_API_KEY",
    "DISCORD_WEBHOOK_URL"
  ],
  "timeout_seconds": 120,
  "bundle": {
    "include": ["*.py", "requirements.txt"],
    "exclude": ["tests/", "__pycache__", ".env"]
  }
}
```
**JavaScript:**
```json
{
  "name": "weekly-summary",
  "type": "agent",
  "runtime": {
    "command": "node main.js"
  },
  "required_secrets": [
    "ANTHROPIC_API_KEY",
    "DISCORD_WEBHOOK_URL"
  ],
  "timeout_seconds": 120,
  "bundle": {
    "include": ["*.js", "package.json", "package-lock.json"],
    "exclude": ["tests/", "node_modules/", ".env"]
  }
}
```
**Key fields:**
- `"type": "agent"` — labels this as an agent (not a tool or skill)
- `"runtime": {"command": "python3 main.py"}` — triggers code_runtime engine instead of managed_loop
- No `run_mode` needed — defaults to `"on_demand"` (use `"always_on"` only for long-running services)
- No `supported_providers` or `default_models` needed — you make your own LLM calls directly
### Input/Output Contract
**Input:** Your code receives JSON on stdin (same as tool agents). For scheduled runs, this is the schedule's `input_data`. You can also read configuration from environment variables via `required_secrets`.
**Python:**
```python
import json, sys, os
# Option A: Read input from stdin (schedule input_data or API request body)
input_data = json.load(sys.stdin)
# Option B: Read from environment variables (set via required_secrets)
api_key = os.environ["ANTHROPIC_API_KEY"]
```
**JavaScript:**
```javascript
const fs = require('fs');
// Option A: Read input from stdin
const inputData = JSON.parse(fs.readFileSync('/dev/stdin', 'utf8'));
// Option B: Read from environment variables (set via required_secrets)
const apiKey = process.env.ANTHROPIC_API_KEY;
```
**Output:** Print JSON to stdout. This becomes the response data stored in run history.
**Python:**
```python
result = {"status": "success", "items_processed": 42}
print(json.dumps(result))
```
**JavaScript:**
```javascript
const result = { status: 'success', items_processed: 42 };
console.log(JSON.stringify(result));
```
**Logging:** Use `sys.stderr` (Python) or `console.error` (JavaScript) for debug output. Anything on stdout is treated as the agent's response — stray prints/console.log calls break JSON parsing.
**Exit codes:** 0 = success. Non-zero = error returned to caller.
If stdout is not valid JSON, it gets wrapped in a JSON object: `{"result": "your raw output here"}`.
### Example: Scheduled Data Pipeline
**Python:**
```python
#!/usr/bin/env python3
"""Fetch data, analyse with Claude, post to Discord. Runs on a cron schedule."""
import asyncio, json, os, sys, logging
import anthropic, httpx

logging.basicConfig(level=logging.INFO, stream=sys.stderr)  # stderr, not stdout
logger = logging.getLogger(__name__)

async def run():
    # Secrets injected as env vars (declared in required_secrets)
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    webhook_url = os.environ["DISCORD_WEBHOOK_URL"]

    # Schedule input from stdin
    input_data = json.load(sys.stdin)
    repos = input_data.get("repos", [])

    # Your logic: fetch data, call Claude, post results...
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=2048,
        messages=[{"role": "user", "content": f"Summarize activity for {repos}"}],
    )
    summary = message.content[0].text

    async with httpx.AsyncClient() as http:
        await http.post(webhook_url, json={"content": summary})

    # JSON to stdout = stored in run history
    print(json.dumps({"status": "success", "repos": repos}))

if __name__ == "__main__":
    asyncio.run(run())
```
**JavaScript:**
```javascript
const fs = require('fs');
const Anthropic = require('@anthropic-ai/sdk');

async function run() {
  // Secrets injected as env vars (declared in required_secrets)
  const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
  const webhookUrl = process.env.DISCORD_WEBHOOK_URL;

  // Schedule input from stdin
  const inputData = JSON.parse(fs.readFileSync('/dev/stdin', 'utf8'));
  const repos = inputData.repos || [];

  // Your logic: fetch data, call Claude, post results...
  const message = await client.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 2048,
    messages: [{ role: 'user', content: 'Summarize activity for ' + JSON.stringify(repos) }],
  });
  const summary = message.content[0].text;

  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content: summary }),
  });

  // JSON to stdout = stored in run history
  console.log(JSON.stringify({ status: 'success', repos }));
}

run().catch(err => { console.error(err); process.exit(1); });
```
### LLM Cost Tracking for Code Agents
The platform automatically tracks your underlying LLM provider costs (what Anthropic, OpenAI, or Google charge you) for code agents that follow the standard pattern shown above. Here is how it works:
1. You declare LLM API keys in `required_secrets` (e.g. `ANTHROPIC_API_KEY`)
2. You store the real key in your workspace vault (`orch secrets set ANTHROPIC_API_KEY sk-ant-...`)
3. At runtime, the platform replaces the real key with an encrypted proxy token and sets the base URL to route through the gateway
4. Standard SDKs (`anthropic`, `openai`, `google-genai`) read these env vars automatically — your LLM calls go through the gateway, which logs per-call token usage, cost, and model
**This works when your code does:**
```python
# Python — SDK reads env vars automatically
client = anthropic.Anthropic() # picks up ANTHROPIC_API_KEY + ANTHROPIC_BASE_URL from env
# Also works — explicit env var read
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
```
```javascript
// JavaScript — SDK reads from env
const anthropic = new Anthropic(); // picks up ANTHROPIC_API_KEY from env
const openai = new OpenAI();       // picks up OPENAI_API_KEY from env
```
**Cost tracking will NOT work when your code:**
- Hardcodes an API key: `client = Anthropic(api_key="sk-ant-abc123...")`
- Reads from a non-standard env var: `client = Anthropic(api_key=os.environ["MY_CLAUDE_KEY"])`
- Makes raw HTTP calls to provider URLs instead of using the SDK
- Uses a provider not yet supported by the proxy (currently supported: Anthropic, OpenAI, Gemini)
**Tip:** If you don't need custom code logic, consider using `type: "prompt"` (Section 4) or `type: "agent"` without `runtime.command` (Section 2) — these get full cost tracking with zero configuration because the platform makes the LLM calls directly.
### When to Use This vs Other Types
| Pattern | Use Code Agent |
|---------|---------------|
| **Scheduled tasks** | Weekly reports, nightly scans, periodic syncs |
| **Custom LLM logic** | Direct control over API calls, prompt caching, multi-step reasoning |
| **External integrations** | Posting to Discord/Slack, calling third-party APIs |
| **Data pipelines** | Fetch → transform → output workflows |
**vs. Tool:** Tools are designed to be called by other agents via orchestration. Code agents are standalone — triggered by schedule, API, or CLI.
**vs. Agent (Section 2):** Agents with a managed loop get a platform-controlled LLM tool-use loop. Code agents control their own execution and LLM calls directly.
**Sandbox environment:** Same as tools — see Section 1 for sandbox details (file locations, pre-installed software, timing). Like tools, the sandbox lifetime equals `timeout_seconds` with no buffer — use a Dockerfile to pre-install heavy dependencies.
---
## 4. Prompt Agent (type: "prompt")
Simplest type. No sandbox, no code. Gateway calls the LLM directly with your prompt template.
### File Structure
Scaffold with `orch init --type prompt`.
```
my-prompt-agent/
  orchagent.json      # Required: manifest
  prompt.md           # Required: prompt template with {{variables}}
  schema.json         # Optional: input/output schemas (auto-derived from prompt template vars if missing)
```
### orchagent.json
```json
{
  "name": "my-prompt-agent",
  "type": "prompt",
  "description": "What this agent does",
  "supported_providers": ["anthropic", "openai", "gemini"],
  "default_endpoint": "run"
}
```
### prompt.md
Use `{{variable}}` placeholders that map to input schema fields:
```markdown
You are an expert code reviewer. Analyze the following code and provide feedback.
## Code to Review
{{code}}
## Focus Areas
{{focus_areas}}
Provide your review as structured JSON.
```
### schema.json
```json
{
  "input": {
    "type": "object",
    "properties": {
      "code": { "type": "string", "description": "Code to review" },
      "focus_areas": { "type": "string", "description": "What to focus on" }
    },
    "required": ["code"]
  },
  "output": {
    "type": "object",
    "properties": {
      "review": { "type": "string" },
      "score": { "type": "number" }
    }
  }
}
```
**Critical:** The prompt comes from `prompt.md` file, NOT from any `"prompt"` field in orchagent.json. Schemas come from `schema.json` file, NOT from orchagent.json `input_schema`/`output_schema` fields. Those fields are ignored by the publish command.
---
## 5. Skill (type: "skill")
Knowledge attachments. No execution. Just a SKILL.md file. Scaffold with `orch init --type skill`.
```yaml
---
name: my-coding-standards
description: Coding standards and conventions for our team. Use when reviewing or writing code.
license: MIT
---
# Coding Standards
## Naming Conventions
- Use snake_case for functions and variables
- Use PascalCase for classes
...
```
Publish from the skill directory by running `orch publish`.
---
## 6. Always-On Services
For Discord bots, webhook listeners, monitors, or any long-running agent.
### orchagent.json
**Python:**
```json
{
  "name": "my-discord-bot",
  "type": "agent",
  "description": "Always-on Discord support bot",
  "run_mode": "always_on",
  "runtime": { "command": "python3 main.py" },
  "entrypoint": "main.py",
  "supported_providers": ["anthropic"],
  "required_secrets": ["DISCORD_BOT_TOKEN", "ANTHROPIC_API_KEY"],
  "tags": ["discord", "always-on"],
  "bundle": {
    "include": ["*.py", "knowledge/*.md", "requirements.txt"],
    "exclude": ["tests/", "__pycache__"]
  }
}
```
**JavaScript:**
```json
{
  "name": "my-discord-bot",
  "type": "agent",
  "description": "Always-on Discord support bot",
  "run_mode": "always_on",
  "runtime": { "command": "node main.js" },
  "entrypoint": "main.js",
  "supported_providers": ["anthropic"],
  "required_secrets": ["DISCORD_BOT_TOKEN", "ANTHROPIC_API_KEY"],
  "tags": ["discord", "always-on"],
  "bundle": {
    "include": ["*.js", "knowledge/*.md", "package.json", "package-lock.json"],
    "exclude": ["tests/", "node_modules/"]
  }
}
```
**Key fields:**
- `"run_mode": "always_on"` — required for `orch service deploy`
- `"runtime": {"command": "python3 main.py"}` or `{"command": "node main.js"}` — the command to run
### Service Runner Boot Sequence
When deployed via `orch service deploy`:
1. Health server starts on port 8080
2. Bundle downloaded and extracted to `/app/workspace`
3. Dependencies installed (`requirements.txt` or `package.json`)
4. Command resolved (priority): `ORCHAGENT_COMMAND` env var, then the orchagent.json `run_command` field, then auto-detected entrypoints (`main.py`, `app.py`, `bot.py`, `server.py`, `index.py`, `index.js`, `main.js`, `app.js`, `bot.js`, `server.js`)
5. Your process starts with `PYTHONUNBUFFERED=1`
6. SIGTERM/SIGINT forwarded to your process
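The resolution priority in step 4 can be sketched as follows (a sketch of the documented rule, not the runner's actual code; mapping a detected `.py`/`.js` file to `python3`/`node` is an assumption):

```python
import os

SERVICE_ENTRYPOINTS = [
    "main.py", "app.py", "bot.py", "server.py", "index.py",
    "index.js", "main.js", "app.js", "bot.js", "server.js",
]

def resolve_command(manifest: dict, bundle_files: list):
    # Priority: env override, then manifest field, then auto-detection.
    # (Sketch only; the python3/node mapping below is an assumption.)
    if os.environ.get("ORCHAGENT_COMMAND"):
        return os.environ["ORCHAGENT_COMMAND"]
    if manifest.get("run_command"):
        return manifest["run_command"]
    for name in SERVICE_ENTRYPOINTS:
        if name in bundle_files:
            return f"python3 {name}" if name.endswith(".py") else f"node {name}"
    return None
```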
### Deploying
```bash
# Publish the agent first
orch publish
# Add secrets to workspace vault (if not already set)
orch secrets set DISCORD_BOT_TOKEN "your-token"
orch secrets set ANTHROPIC_API_KEY "sk-ant-..."
# Deploy — required_secrets auto-resolved from vault, no --secret flags needed
orch service deploy org/my-bot \
--env DISCORD_CHANNEL_IDS=123456789
# Use --secret only for extras not in required_secrets
orch service deploy org/my-bot --secret MONITORING_TOKEN
# Manage
orch service list
orch service info <service-id>
orch service logs <service-id>
orch service restart <service-id>
orch service delete <service-id>
# Manage secrets (workspace secrets attached to this service)
orch service secret add <service-id> MY_SECRET # attach extra by name
orch service secret remove <service-id> MY_SECRET # detach by name
# Manage environment variables
orch service env set <service-id> MY_VAR="my value" # set KEY=VALUE pairs
orch service env unset <service-id> MY_VAR # remove by key
orch service env list <service-id> # list all env vars
```
### Discord Bot Pattern (Three-Tier Architecture)
Proven pattern for cost-efficient Discord bots:
**Tier 1 — Classifier (Haiku, ~200 tokens):** Should the bot respond? Is it FAQ or deep question?
```python
# Cheap, fast classification
response = client.messages.create(
    model="claude-haiku-4-5-20251001",
    max_tokens=150,
    messages=[{"role": "user", "content": f"Classify: {message}"}]
)
# Returns: {"respond": true, "tier": "faq"|"deep", "topics": ["billing"]}
```
**Tier 2 — FAQ (Sonnet, prompt-cached):** Answer using general knowledge docs.
```python
response = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    system=[{"type": "text", "text": faq_docs, "cache_control": {"type": "ephemeral"}}],
    messages=[...]
)
```
**Tier 3 — Deep Docs (Sonnet, prompt-cached):** Load topic-specific docs for detailed answers.
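Tier 3 follows the same shape as Tier 2, with the cached `system` text swapped for topic docs chosen by the Tier 1 classifier. A sketch of the doc-loading step (`load_topic_docs` and the `knowledge/<topic>.md` layout are illustrative assumptions, not a platform convention):

```python
import os

def load_topic_docs(topics, base_dir="knowledge"):
    # Concatenate per-topic markdown files selected by the Tier 1 classifier;
    # missing topics are skipped. (Helper name and file layout are
    # illustrative assumptions.)
    parts = []
    for topic in topics:
        path = os.path.join(base_dir, f"{topic}.md")
        if os.path.exists(path):
            with open(path) as fh:
                parts.append(fh.read())
    return "\n\n".join(parts)
```

The joined string is then passed as the `system` block with `cache_control: {"type": "ephemeral"}`, exactly as in the Tier 2 example.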
**Key env vars for Discord bots:**
- `DISCORD_BOT_TOKEN` — from Discord Developer Portal
- `ANTHROPIC_API_KEY` — for Claude API calls
- `DISCORD_CHANNEL_IDS` — comma-separated channel IDs to monitor
### Service Environment Variables
Always available in service containers:
```
ORCHAGENT_BUNDLE_URL # Signed URL to your code bundle
ORCHAGENT_SERVICE_ID # Unique service ID
ORCHAGENT_SERVICE_NAME # Service name
ORCHAGENT_AGENT_NAME # Agent name
ORCHAGENT_AGENT_VERSION # Agent version
ORCHAGENT_GATEWAY_URL # Gateway base URL
ORCHAGENT_COMMAND # Command to run (if overridden)
PORT # Port to bind (set by infrastructure, default 8080)
```
Plus secrets from `required_secrets` (auto-resolved) and any extras from `--secret` and `--env` flags.
---
## 7. Orchestration (Agent-to-Agent Calls)
### Quick Setup
The fastest way to scaffold an orchestrator with correct manifest + custom_tools:
```bash
orch scaffold orchestration org/tool-a org/tool-b@v2
```
This generates `orchagent.json` (with `manifest.dependencies` and matching `loop.custom_tools`), `prompt.md`, and `schema.json`. Resolves latest versions, rejects non-callable agents.
### Manifest Declaration
```json
{
"name": "orchestrator",
"type": "agent",
"manifest": {
"manifest_version": 1,
"dependencies": [
{ "id": "org/tool-a", "version": "v1" },
{ "id": "org/tool-b", "version": "v2" }
],
"orchestration_mode": "strict",
"max_hops": 3,
"timeout_ms": 120000
}
}
```
**Important:** Managed-loop orchestrators (`loop: {...}`) must also declare `custom_tools` matching their dependencies. Without custom_tools, the LLM has no way to invoke dependencies and will waste all turns. The `orch scaffold orchestration` command handles this automatically. The gateway warns during publish if custom_tools are missing.
### SDK Usage
**Python** (PyPI: `orchagent-sdk`, module: `orchagent`):
```python
# pip install orchagent-sdk (package name)
import asyncio
from orchagent import AgentClient  # module name

async def main():
    # In a tool agent (code_runtime):
    client = AgentClient()  # Reads ORCHAGENT_SERVICE_KEY from env
    result = await client.call("org/agent@v1", {"key": "value"})

    # Parallel calls:
    a, b = await asyncio.gather(
        client.call("org/tool-a@v1", data_a),
        client.call("org/tool-b@v1", data_b),
    )

asyncio.run(main())
```
**JavaScript** (npm: `orchagent-sdk`):
```javascript
const { AgentClient } = require('orchagent-sdk');

async function main() {
  // In a tool agent (code_runtime):
  const client = new AgentClient(); // Reads ORCHAGENT_SERVICE_KEY from env
  const result = await client.call('org/agent@v1', { key: 'value' });

  // Parallel calls:
  const [a, b] = await Promise.all([
    client.call('org/tool-a@v1', dataA),
    client.call('org/tool-b@v1', dataB),
  ]);
}

main();
```
**Auto-propagated context:** call chain (cycle detection), deadline, max hops, billing org, downstream budget, orchestration mode.
**Orchestration mode:** Orchestrators with dependencies default to `"strict"` — bash is disabled in the managed loop and the agent must call at least one dependency before submitting results. Set `"orchestration_mode": "flexible"` in the manifest to keep bash available. Strict mode inherits downward in chains (strict parent forces strict children). Error codes: `STRICT_MODE_BASH_DISABLED`, `STRICT_MODE_DEPENDENCY_REQUIRED`.
**Publishing order:** Skills first, then tools, then agents (bottom-up).
---
## 8. orchagent.json Complete Field Reference
**Required:**
- **name** (string) — Agent name (lowercase, hyphens)
**Type and Mode:**
- **type** (string, default "agent") — prompt, tool, agent, or skill. Always set this explicitly.
- **run_mode** (string, default "on_demand") — on_demand or always_on
**Execution:**
- **entrypoint** (string, default "main.py") — File to execute. Use "main.js" for JavaScript agents
- **runtime** (object) — e.g. {"command": "python3 main.py"} or {"command": "node main.js"} for code_runtime
- **loop** (object) — Managed loop config (max_turns, custom_tools)
- **timeout_seconds** (int, default 300) — Max execution time in seconds
- **default_endpoint** (string, default "run") — Default API endpoint name
**LLM:**
- **supported_providers** (string array) — "anthropic", "openai", "gemini", "any"
- **default_models** (object) — Per-provider model IDs. Anthropic: `claude-sonnet-4-5-20250929`, `claude-haiku-4-5-20251001`, `claude-opus-4-6`. OpenAI: `gpt-4o`, `gpt-4o-mini`. Gemini: `gemini-2.5-pro`, `gemini-2.5-flash`
**Metadata:**
- **description** (string) — What the agent does
- **tags** (string array) — Discovery tags
**Bundling:**
- **bundle.include** (string array) — Glob patterns to include in zip
- **bundle.exclude** (string array) — Glob patterns to exclude
**Skills:**
- **default_skills** (string array) — Skills auto-attached on every call
- **skills_locked** (bool, default false) — Prevent callers from overriding skills
**Orchestration:**
- **manifest** (object) — Dependencies, max_hops, timeout_ms, orchestration_mode
- **manifest.orchestration_mode** (string, default "strict" when deps declared) — "strict" or "flexible". Strict disables bash and requires dependency calls. Inherits downward in chains.
- **callable** (bool, default true) — Can be called by other agents. Set to `false` to block agent-to-agent calls (enforced at runtime with 403)
- **sdk_compatible** (bool) — Auto-detected by CLI from requirements.txt. Do not set manually
**Distribution:**
- **source_url** (string) — Git URL for local execution
- **allow_local_download** (bool, default true) — Allow local execution by users. Use `orch publish --no-local-download` to disable
- **required_secrets** (string array) — Workspace secret names to inject as env vars at runtime. Defaults to `[]` if omitted. See Section 15 for the full flow
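Putting the fields together, a minimal manifest for an orchestrator-style agent might look like this (the agent name, dependency, description, and secret are illustrative placeholders, not required values):

```json
{
  "name": "repo-digest",
  "type": "agent",
  "run_mode": "on_demand",
  "timeout_seconds": 300,
  "supported_providers": ["anthropic"],
  "default_models": { "anthropic": "claude-sonnet-4-5-20250929" },
  "description": "Summarizes recent repository activity",
  "tags": ["github", "digest"],
  "manifest": { "dependencies": ["org/commit-fetcher@v1"] },
  "required_secrets": ["GITHUB_TOKEN"]
}
```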
---
## 9. Publishing Checklist
```bash
# Initialize a new agent
orch init --type tool # Python (default)
orch init --type tool --language javascript # JavaScript
orch init --template discord-js # JS Discord bot template
# Test locally
echo '{"text": "test"}' | python3 main.py # Python
echo '{"text": "test"}' | node main.js # JavaScript
# Publish
orch publish
# Publish with options
orch publish --no-local-download # Make server-only (local download is ON by default)
orch publish --docker # Custom Docker environment
orch publish --skills org/skill # Attach default skills
orch publish --skills-locked # Lock skills (immutable)
orch publish --verbose # List individual bundled files
orch publish --all # Publish all agents in monorepo (auto-orders by deps)
orch publish --all --dry-run # Preview ordering without publishing
```
**What happens on publish:**
1. CLI reads orchagent.json, prompt.md, schema.json
2. Code files bundled as zip (respecting bundle.include/exclude)
3. Uploaded to gateway and stored
4. Version auto-assigned (v1, v2, v3...)
5. Returns agent record + service key
**Versions:** Auto-increment. All versions remain accessible. Callers can pin: `org/agent@v2`.
---
## 10. Response Envelope
All agent responses follow this format:
**Success:**
```json
{
  "data": { "your": "result" },
  "warnings": [],
  "metadata": {
    "request_id": "req_...",
    "processing_time_ms": 1250,
    "execution_time_ms": 456,
    "usage": { "input_tokens": 100, "output_tokens": 50 }
  }
}
```
Note: `execution_time_ms` is included for tool and agent types. `usage` (token counts) is included for agent-type only. Prompt agents return `processing_time_ms` and `request_id`.
**Error:**
```json
{
  "error": {
    "code": "SANDBOX_ERROR",
    "message": "...",
    "is_retryable": false,
    "hint": "Optional suggestion for how to fix"
  },
  "metadata": { "request_id": "req_...", "processing_time_ms": 123 }
}
```
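Since every response is either a `data` envelope or an `error` envelope, callers can branch on a single key. A minimal client-side unwrap helper, as a sketch (the `AgentError` class is our own illustration, not part of the SDK):

```python
class AgentError(Exception):
    """Raised when an agent returns an error envelope."""
    def __init__(self, code, message, is_retryable=False, hint=None):
        super().__init__(f"{code}: {message}")
        self.code = code
        self.is_retryable = is_retryable
        self.hint = hint

def unwrap(envelope: dict) -> dict:
    """Return the 'data' payload, or raise AgentError for an error envelope."""
    if "error" in envelope:
        err = envelope["error"]
        raise AgentError(err["code"], err["message"],
                         err.get("is_retryable", False), err.get("hint"))
    return envelope["data"]
```

The `is_retryable` flag lets calling code decide whether to retry automatically or surface the `hint` to the user.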
**SSE Streaming** (agent-type by default; tool streaming is behind a feature flag; not available for prompt agents. Use the `stream=true` query param or the `Accept: text/event-stream` header):
```
event: progress
data: {"type": "tool_call", "turn": 1, "tool": "bash", ...}
event: result
data: {"data": {...}, "metadata": {...}}
event: error
data: {"error": {"code": "...", "message": "..."}, "metadata": {...}}
```
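A buffered response in this format can be split back into events with a few lines of parsing. This is a minimal sketch for the `event:`/`data:` pairs shown above; a production client should read the stream incrementally with an SSE library instead:

```python
import json

def parse_sse(body: str):
    """Split a buffered SSE body into (event_name, payload) pairs."""
    events = []
    name = None
    for line in body.splitlines():
        if line.startswith("event:"):
            name = line[len("event:"):].strip()
        elif line.startswith("data:") and name is not None:
            events.append((name, json.loads(line[len("data:"):].strip())))
            name = None
    return events
```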
---
## 11. Calling Agents via HTTP API
You can call any published agent directly over HTTP from your own app, a serverless function, or any other environment:
**Endpoint:** `POST https://api.orchagent.io/{org}/{agent}/{version}/run`
**Headers:**
- Authorization: Bearer YOUR_API_KEY
- Content-Type: application/json
- Accept: text/event-stream (optional, for SSE streaming — agent-type only)
**Query params:**
- `stream=true` (optional, alternative to Accept header for SSE streaming)
**Body:** JSON matching the agent's input schema.
**Example (Python):**
```python
import requests

headers = {"Authorization": "Bearer " + api_key}
payload = {"code": "def foo(): pass", "focus_areas": "security"}
response = requests.post(
    "https://api.orchagent.io/myorg/code-reviewer/v1/run",
    headers=headers,
    json=payload,
)
result = response.json()
# result == {"data": {...}, "metadata": {...}}
```
The response follows the standard envelope format (see Response Envelope section).
---
## 12. Debugging & Common Errors
**"peer closed connection"** — Sandbox expired. Usually means setup (pip install) consumed most of the timeout. Increase `timeout_seconds` or use a Dockerfile with pre-installed deps.
**Empty/null result** — Your code printed something other than JSON to stdout, or printed nothing. Check for stray `print()` statements.
**"DEPENDENCY_NOT_ALLOWED"** — Agent calling another agent not declared in `manifest.dependencies`.
**"DEPENDENCY_CYCLE"** — Circular call chain detected (A calls B calls A).
**Dependency failure errors** — When a dependency call fails, the error response includes a `failed_dependency` field with the dependency name and upstream error. Check this field to identify which dependency in a multi-agent chain failed.
**"MISSING_BILLING_ORG"** — Orchestrator didn't pass `ORCHAGENT_BILLING_ORG_ID` to sub-call. Use `AgentClient()` which auto-reads env vars.
**Module not found in sandbox (Python)** — Add to requirements.txt. Remember: `orchagent` (the CLI) is not `orchagent-sdk` (the Python SDK). Use `orchagent-sdk` in requirements.txt.
**Module not found in sandbox (JavaScript)** — Add to package.json `dependencies`. Include `package-lock.json` for reproducible installs (`npm ci` is used when lockfile is present). The orchestration SDK is `orchagent-sdk` on npm.
**"MISSING_SECRETS"** — Agent declares `required_secrets` but one or more don't exist in the workspace. Add them via web dashboard (Settings > Secrets). See Section 15.
**Timeout but code is fast** — For tool agents, pip install time consumes the timeout. For agent-type, the 120s setup buffer covers it. Either way, use `orch publish --docker` with a Dockerfile to pre-install heavy deps.
### Custom Docker Environments
Pre-install heavy dependencies to skip sandbox pip install time:
```dockerfile
FROM python:3.11-slim
RUN pip install numpy pandas scikit-learn --no-cache-dir
```
```bash
orch publish --docker
```
The Dockerfile is hashed for deduplication — same Dockerfile reuses the same environment.
---
## 13. Local Testing
```bash
# Tool agent: pipe JSON to stdin
echo '{"text": "hello", "count": 3}' | python3 main.py # Python
echo '{"text": "hello", "count": 3}' | node main.js # JavaScript
# With a file
echo '{"files": [{"path": "/tmp/test.pdf", "original_name": "test.pdf", "content_type": "application/pdf", "size_bytes": 1024}]}' | python3 main.py # Python
echo '{"files": [{"path": "/tmp/test.pdf", "original_name": "test.pdf", "content_type": "application/pdf", "size_bytes": 1024}]}' | node main.js # JavaScript
# Validate config + run test suite (auto-detects pytest/vitest)
orch test
# Watch mode — re-run on file changes
orch test --watch
# Validate then run the agent once with real input
orch test --run --data '{"text": "hello", "count": 3}'
# Diagnose setup issues
orch doctor
```
### Development Server
```bash
# Start dev server with hot-reload (default port 4900)
orch dev
# Custom port
orch dev --port 3001
# Verbose output (shows stderr, LLM provider info)
orch dev --verbose
# Test with orch run while dev server is running (in another terminal)
orch run . --local --data '{"text": "hello"}'
```
Supports all three engines (code_runtime, direct_llm, managed_loop). Watches files and auto-reloads on changes. Endpoints: POST /run (execute), GET /health (config info).
### Cost Tracking & Estimation
The platform tracks your underlying LLM provider costs (what Anthropic, OpenAI, or Google charge you per token — not orchagent platform fees) automatically for every run. How cost data is captured depends on the execution engine:
| Engine | How Costs Are Tracked | Requirements |
|--------|----------------------|--------------|
| `direct_llm` (prompt type) | Gateway makes the LLM call directly | None — fully automatic |
| `managed_loop` (agent type) | Platform-controlled loop routes LLM calls through the gateway proxy | None — fully automatic |
| `code_runtime` (tool / code agent) | LLM calls from your code are intercepted via proxy tokens injected into standard SDK env vars | Use standard SDKs (anthropic, openai, google-genai) that read from ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY |
**If your code agent shows $0.00 LLM cost**, check that:
1. You are using a standard provider SDK (not raw HTTP calls)
2. You are reading API keys from environment variables (not hardcoding them)
3. The env var names match the standard names (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GEMINI_API_KEY`)
4. Those keys are listed in `required_secrets` in your orchagent.json
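A small helper that fails loudly when a key is missing makes point 3 concrete. This is an illustrative sketch, not platform code; your code just needs to read the standard env var names:

```python
import os

# Standard env var names the platform's cost-tracking proxy looks for
STANDARD_KEY_NAMES = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "gemini": "GEMINI_API_KEY",
}

def get_provider_key(provider: str) -> str:
    """Read the provider key from its standard env var so the runtime
    proxy token (injected by the platform) is picked up automatically."""
    name = STANDARD_KEY_NAMES[provider]
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; declare it in required_secrets in orchagent.json"
        )
    return key
```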
```bash
# View execution trace timeline (per-call LLM costs, tokens, model)
orch trace <run-id>
# Show estimated cost based on last 30 days of runs
orch estimate org/my-agent
# JSON output for scripting
orch estimate org/my-agent --json
# Show estimate before running (prompts for confirmation)
orch run org/my-agent --data '{}' --estimate
```
### Debugging & Observability
```bash
# View execution trace timeline (LLM calls, tool calls, tokens, costs)
orch trace <run-id>
# Replay a previous run with the original input/config
orch replay <run-id>
# Replay with a reason (stored in audit log)
orch replay <run-id> --reason "testing fix"
# Visualize orchestration call graph (for multi-agent runs)
orch dag <run-id>
# Live mode — polls while run is active
orch dag <run-id> --live
# Compare two agent versions
orch diff org/my-agent@v1 v2
# Workspace performance metrics (success rates, latency, errors)
orch metrics
orch metrics --days 7 --agent my-agent
```
### Security Scanning
```bash
# Run vulnerability scan (prompt injection, persona roleplay, logic traps)
orch security test org/my-agent@latest
# Filter by severity
orch security test org/my-agent --severities critical high
# Output as markdown report
orch security test org/my-agent --output markdown --output-file report.md
```
Currently supports prompt-type agents. Tests 35 attack categories against the agent's prompt.
---
## 14. Scheduling
Run agents on a cron schedule or via webhooks. Any on-demand agent (prompt, tool, or code agent) can be scheduled.
### Creating a Schedule
```bash
# Cron schedule (every Monday at 9am UTC)
orch schedule create org/my-agent --cron "0 9 * * 1"
# With timezone
orch schedule create org/my-agent --cron "0 9 * * 1" --timezone "America/New_York"
# With input data (passed as JSON on stdin to the agent)
orch schedule create org/my-agent \
--cron "0 9 * * 1" \
--input '{"repos": ["owner/repo1", "owner/repo2"]}'
# With specific LLM provider
orch schedule create org/my-agent --cron "0 */6 * * *" --provider anthropic
# Pin to a specific version (disables auto-update on new publishes)
orch schedule create org/my-agent@v3 --cron "0 9 * * 1" --pin-version
# Webhook trigger (returns a unique URL instead of cron)
orch schedule create org/my-agent --webhook
```
### Cron Syntax
Standard 5-field cron expressions:
```
┌───────── minute (0-59)
│ ┌─────── hour (0-23)
│ │ ┌───── day of month (1-31)
│ │ │ ┌─── month (1-12)
│ │ │ │ ┌─ day of week (0-6, Sun=0)
│ │ │ │ │
* * * * *
```
Examples: `"0 9 * * 1"` (Monday 9am), `"0 */6 * * *"` (every 6h), `"30 2 * * *"` (daily 2:30am), `"0 9 1 * *"` (1st of month 9am).
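A rough validator for this 5-field format can catch malformed expressions before handing them to `orch schedule create`. Illustrative only: it accepts `*`, `*/n`, plain numbers, `a-b` ranges, and comma lists, not every cron extension:

```python
import re

# (min, max) for minute, hour, day-of-month, month, day-of-week
FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 6)]

def is_valid_cron(expr: str) -> bool:
    """Check a standard 5-field cron expression against each field's range."""
    fields = expr.split()
    if len(fields) != 5:
        return False
    for field, (lo, hi) in zip(fields, FIELD_RANGES):
        for part in field.split(","):
            if re.fullmatch(r"\*(/\d+)?", part):  # wildcard or step: * or */6
                continue
            m = re.fullmatch(r"(\d+)(?:-(\d+))?", part)  # number or range: 9 or 1-5
            if m is None:
                return False
            nums = [int(m.group(1))] + ([int(m.group(2))] if m.group(2) else [])
            if any(not lo <= n <= hi for n in nums):
                return False
    return True
```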
### Managing Schedules
```bash
orch schedule list # List all schedules
orch schedule list --agent my-agent # Filter by agent
orch schedule info <schedule-id> # Details + recent runs + events
orch schedule runs <schedule-id> # Run history
orch schedule update <id> --cron "0 10 * * 1" # Change cron expression
orch schedule delete <schedule-id> # Delete schedule
# All subcommands support --json for scripting
orch schedule create org/my-agent --cron "0 9 * * 1" --json
orch schedule list --json
```
### Manual Trigger
Test a scheduled agent without waiting for the cron:
```bash
orch schedule trigger <schedule-id>
# Override input for this run only
orch schedule trigger <schedule-id> --input '{"debug": true}'
```
### Failure Handling
- The platform tracks consecutive failures per schedule.
- After 3 consecutive failures (configurable), the schedule is **auto-disabled**.
- Auto-disabled schedules show `AUTO-DISABLED` in `orch schedule list`.
- Fix the issue, then re-enable: `orch schedule update SCHEDULE_ID --enable`.
### Full Lifecycle Example
```bash
# 1. Build and publish
orch publish
# 2. Add workspace secrets (if agent uses required_secrets)
# Via web dashboard: Settings > Secrets
# 3. Test with a manual run
orch run org/my-agent --data '{"repos": ["owner/repo"]}'
# 4. Create the schedule
orch schedule create org/my-agent \
--cron "0 9 * * 1" \
--timezone "America/New_York" \
--input '{"repos": ["owner/repo1", "owner/repo2"]}'
# 5. Monitor
orch schedule info <schedule-id>
orch schedule runs <schedule-id>
```
---
## 15. Workspace Secrets (required_secrets)
Secrets let your agent access API keys, tokens, and configuration at runtime without hardcoding them in code. All secrets live in a single **workspace vault** — one flat list of encrypted key-value pairs. No categories, no separate "LLM keys" section.
### How It Works
1. **Declare** in orchagent.json:
```json
{ "required_secrets": ["ANTHROPIC_API_KEY", "DISCORD_WEBHOOK_URL"] }
```
2. **Set values** in your workspace vault:
```bash
orch secrets set ANTHROPIC_API_KEY sk-ant-...
orch secrets set DISCORD_WEBHOOK_URL https://hooks.example.com/...
```
Or via web dashboard > **Settings > Secrets**.
3. **At runtime**, the gateway matches secret names, decrypts, and injects as environment variables:
```python
# Python
import os
api_key = os.environ["ANTHROPIC_API_KEY"]
```
```javascript
// JavaScript
const apiKey = process.env.ANTHROPIC_API_KEY;
```
### Publish-Time Scanning
The `required_secrets` field defaults to `[]` — if your agent genuinely needs no secrets, you can omit it. At publish time, `orch publish` scans your code for `os.environ` / `os.getenv` (Python) and `process.env` (JavaScript) references not listed in `required_secrets` and warns you:
```
⚠ Your code references environment variables not in required_secrets:
MY_KEY, OTHER_VAR
If these should be workspace secrets, add them to required_secrets
in orchagent.json so they're available in the sandbox at runtime.
(Platform-injected vars like LLM API keys are already excluded.)
```
### What NOT to Add to required_secrets
These are auto-injected by the gateway — adding them to `required_secrets` can overwrite the auto-injected values and break things:
- `ORCHAGENT_SERVICE_KEY` — auto-created for orchestrator agents. **Adding this breaks orchestration.**
- `ORCHAGENT_BILLING_ORG_ID`, `ORCHAGENT_REQUEST_ID`, `ORCHAGENT_ROOT_RUN_ID` — always auto-injected.
- `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` / `GEMINI_API_KEY` — for managed loop agents, these are auto-injected as proxy tokens when the caller provides BYOK keys. For code agents (`runtime.command`), **do** add these to `required_secrets` so the platform can discover your LLM keys, replace them with proxy tokens at runtime, and track costs automatically. Your code still reads them the same way (`os.environ["ANTHROPIC_API_KEY"]`) — the proxy is transparent to standard SDKs.
### Missing Secrets
If a declared secret doesn't exist in the workspace:
- **API/CLI runs:** 400 error with code `MISSING_SECRETS`: *"Agent requires secret(s) not found in workspace: MY_KEY. Add them in Settings > Secrets."*
- **Scheduled runs:** Run fails and counts toward the consecutive failure streak. After 3 failures, the schedule is auto-disabled.
- **Service deploys:** Deploy fails with an actionable error listing which secrets are missing.
### Secrets for Always-On Services
Service deploys **auto-read `required_secrets`** from the agent — no `--secret` flags needed:
```bash
# Agent declares required_secrets: ["DISCORD_BOT_TOKEN", "ANTHROPIC_API_KEY"]
# Just deploy — secrets are auto-resolved from the vault
orch service deploy org/my-bot
# Use --secret only for extras not declared on the agent
orch service deploy org/my-bot --secret MONITORING_TOKEN
# Manage service secrets after deploy
orch service secret add <service-id> NEW_SECRET # attach extra
orch service secret remove <service-id> OLD_SECRET # detach
```
On version updates and restarts, the platform re-merges `required_secrets` from the latest agent version, so new secrets are picked up automatically.
---
## 16. Calling the Gateway from Agent Code
Agent code running in a sandbox can call gateway HTTP endpoints directly — for example, the GitHub Activity Proxy or workspace APIs.
### Authentication
Add an API key via `required_secrets` and use it as a Bearer token:
**Python:**
```python
import os, httpx

api_key = os.environ["ORCHAGENT_API_KEY"]  # Set in required_secrets
gateway_url = "https://api.orchagent.io"
headers = {"Authorization": f"Bearer {api_key}"}

async with httpx.AsyncClient() as client:
    resp = await client.get(
        f"{gateway_url}/github/activity/repos/owner/repo/commits",
        headers=headers,
        params={"since": "2026-02-01T00:00:00Z"},
    )
    commits = resp.json()
```
**JavaScript:**
```javascript
const apiKey = process.env.ORCHAGENT_API_KEY; // Set in required_secrets
const gatewayUrl = 'https://api.orchagent.io';

const resp = await fetch(
  gatewayUrl + '/github/activity/repos/owner/repo/commits?since=2026-02-01T00:00:00Z',
  { headers: { Authorization: 'Bearer ' + apiKey } }
);
const commits = await resp.json();
```
**Important:** This pattern is for calling gateway HTTP endpoints directly. For calling other **agents**, use the orchagent SDK instead (see Section 7) — it handles auth, call chains, and deadlines automatically via `ORCHAGENT_SERVICE_KEY`.
---
## Quick Decision Tree
**"I want to wrap a prompt template"** → type: `prompt`
**"I want to run Python/JS code (called by other agents)"** → type: `tool`
**"I want an LLM that can use tools and reason"** → type: `agent` (managed loop)
**"I want custom code with its own LLM calls"** → type: `agent` + `runtime.command` (code agent — Section 3)
**"I want a long-running bot/service"** → type: `agent` + `run_mode: "always_on"` + `runtime.command`
**"I want to share knowledge/instructions"** → type: `skill`
**"I want agent A to call agent B"** → Add `manifest.dependencies` + use `orchagent-sdk`
**"I want to run an agent on a schedule"** → Publish, then `orch schedule create` (Section 14)
**"I want full LLM cost tracking with zero configuration"** → type: `prompt` or type: `agent` (managed loop). Code agents also get cost tracking when using standard SDKs — see Section 13.