Documentation
Everything you need to integrate with a21e and improve agent performance through better intent translation, context continuity, and execution control.
These docs are for solo builders, dev teams, and enterprise operators who want more value from every agent run. a21e acts as an Agent Performance Layer between user intent and base model execution.
The platform improves outcomes through five systems working together: intent translation, adaptive memory, secure key management, task-aware skill execution, and closed-loop learning. Start with one workflow and expand as your usage matures.
Who this is for
Different roles care about different things. Here’s what matters for each.
Running agents (everyone)
You want results, not reruns. Describe the outcome in plain language; a21e translates intent, preserves your context, and executes with the right skills. You get consistent outputs without repeating setup on every task.
Start with Getting Started or OpenAI Compatibility.
Operating at team and org scale
As usage expands, a21e helps teams keep context and standards aligned across people, repos, and projects. Enterprise teams add governance controls, auditability, and deployment boundaries without losing execution speed.
See Why a21e and API Reference for implementation details.
Why a21e
Intent Translation
Requests are normalized and routed through a typed pipeline so outputs match user goals more consistently.
Context Continuity
Memory keeps preferences and prior decisions active across runs so users stop repeating setup and constraints.
Skill-Aware Execution
Execution paths can activate task-specific capabilities so responses are more complete and operationally useful.
Adaptive memory
Four integrated pillars — Auto-Capture, Memory Health, Transparency, and Collaboration Intelligence — learn from every interaction so quality compounds over time.
Secure key storage
Save your LLM API keys once with AES-256-GCM encryption. Keys are resolved automatically on every run — no manual headers, no plaintext storage.
Closed-Loop Learning
Feedback and execution signals improve future routing, selection, and quality over time.
Governance and Trust
Enterprise controls support policy enforcement, auditability, and secure operation in regulated environments.
How It Works
Every request flows through a 6-stage pipeline:
Normalize
Validates and structures raw input into a canonical intent format. Handles auto-normalization and rejection.
Compile
Extracts structured fields: task, industry, action verb, constraints, and success criteria.
Route
Matches the compiled intent to the best prompts on the platform using semantic scoring.
Compose
Assembles matched prompts into an executable package with strategy (single, sequential, parallel).
Execute
Calls the LLM provider with composed prompts. Prompt content never leaves the server boundary.
Feedback
Execution signals (edit distance, reprompt count, test results) feed back into routing scores.
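For intuition, the six stages above can be sketched as a chain of functions. This is a simplified, hypothetical model for illustration only: the real pipeline runs server-side, and the function names, fields, and stub values below are assumptions, not part of the public API.

```python
# Illustrative sketch of the 6-stage pipeline (feedback omitted for brevity).
# All names, fields, and return values are assumptions for explanation only.

def normalize(raw: str) -> dict:
    # Validate and structure raw input into a canonical intent format.
    text = raw.strip()
    if not text:
        raise ValueError("rejected: empty input")
    return {"text": text}

def compile_intent(intent: dict) -> dict:
    # Extract structured fields (stubbed; real extraction is semantic).
    return {**intent, "task": intent["text"], "constraints": [], "success_criteria": []}

def route(compiled: dict) -> list[str]:
    # Match the compiled intent to prompts via semantic scoring (stubbed).
    return ["prompt-a", "prompt-b"]

def compose(prompt_ids: list[str]) -> dict:
    # Assemble matched prompts into an executable package with a strategy.
    strategy = "single" if len(prompt_ids) == 1 else "sequential"
    return {"package_id": "PACKAGE_UUID", "prompts": prompt_ids, "strategy": strategy}

def run_pipeline(raw: str) -> dict:
    return compose(route(compile_intent(normalize(raw))))

package = run_pipeline("Write unit tests for a React login form")
```

The composed package is what `package.execute` later runs against the LLM provider, and the execution signals from that run feed the Feedback stage.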
Getting Started
- Create an account at a21e.com/sign-up
- Go to Settings and create an API key
- Submit your first intent using the API
Submit an Intent
curl -X POST https://api.a21e.com/v1/rpc \
-H "Content-Type: application/json" \
-H "Authorization: Bearer a21e_YOUR_KEY" \
-d '{
"jsonrpc": "2.0",
"method": "2024-01/intent.submit",
"params": {
"input": "Write unit tests for a React login form with email validation"
},
"id": 1
}'
Execute the Package
curl -X POST https://api.a21e.com/v1/rpc \
-H "Content-Type: application/json" \
-H "Authorization: Bearer a21e_YOUR_KEY" \
-d '{
"jsonrpc": "2.0",
"method": "2024-01/package.execute",
"params": {
"package_id": "PACKAGE_UUID",
"input": "LoginForm component with email/password fields, uses React Hook Form"
},
"id": 2
}'
Provider API keys (BYOK)
What is BYOK?
BYOK stands for “Bring Your Own Key.” Think of it like using your own SIM card in a phone: a21e provides the orchestration (the phone), and your API key connects directly to the AI provider (the network). The provider bills you for tokens; a21e charges one credit per enhancement (each prompt, skill, or artifact used in a run).
What you need
- An a21e account (sign up free at a21e.com).
- An a21e API key — go to Settings → API key.
- An API key from your provider (see steps below for each one).
Step-by-step: OpenAI (GPT models)
- Go to platform.openai.com/api-keys and sign in (or create a free account).
- Click Create new secret key. Give it a name like “a21e” and click Create.
- Copy the key immediately — you won't be able to see it again. It looks like sk-proj-abc123...
- Once you've copied it, skip ahead to “Save your key” below.
Common mistake: OpenAI keys always start with sk-. If your key doesn't start with that, you may be looking at the wrong page.
Step-by-step: Anthropic (Claude models)
- Go to console.anthropic.com/settings/keys and sign in.
- Click Create Key, name it, and confirm.
- Copy the key. It starts with sk-ant-.
Step-by-step: Google (Gemini models)
- Go to aistudio.google.com/app/apikey and sign in with Google.
- Click Create API key. Accept Terms of Service if prompted.
- Copy the key. It starts with AIza.
Step-by-step: xAI (Grok models)
- Go to console.x.ai and sign in.
- Navigate to API keys and click Create API key.
- Copy the key and store it safely.
Save your key
Once you have your provider key, save it in a21e so it's used automatically on every request. Go to Settings → Provider keys, choose the provider, paste your key, and click Save key. Keys are encrypted with AES-256-GCM and never shown again after saving.
Three ways to use your key
A. Saved key (recommended)
Save your key in the Provider Key Manager. It's used automatically — no extra headers needed.
B. API headers (per-request)
Send X-A21E-Provider: openai and X-A21E-Provider-Key: sk-... with each request.
C. Composite key
Bundle both keys into one string: a21e:YOUR_KEY:byok:openai:sk-...
Quick reference
| Provider | Where to get a key | Key starts with | a21e value |
|---|---|---|---|
| OpenAI | platform.openai.com/api-keys | sk- | openai |
| Anthropic | console.anthropic.com/settings/keys | sk-ant- | anthropic |
| Google | aistudio.google.com/app/apikey | AIza | google |
| xAI | console.x.ai | (varies) | xai |
FAQ / Troubleshooting
My key doesn't work
Check that the key starts with the expected prefix (see table above). Make sure you haven't accidentally added spaces. You can use the “Test connection” button in the Provider Key Manager to verify.
Which provider should I choose?
OpenAI (GPT-4o) is the most widely compatible. If you already have a key from any provider, use that one. You can save keys from multiple providers.
Do I still need credits with BYOK?
Yes. BYOK plans still use a21e credits for prompt execution and orchestration. If you run out of credits, add credits or renew your plan to continue.
Who bills me for usage?
Your LLM provider bills you for tokens used. a21e bills credits for platform execution (prompt delivery, orchestration, memory, and skills).
Adaptive memory
Adaptive memory is a unified system that learns from every interaction. Instead of losing context between runs, it auto-captures what matters and keeps memory healthy over time. It solves ten common human-AI friction points through four integrated pillars.
1. Auto-Capture Engine
Extracts knowledge, detects patterns, and tracks corrections from your interactions. Preferences, project decisions, and style choices are captured automatically — no manual tagging required. Sensitivity classification blocks security credentials, legal-privileged material, and PII from auto-capture.
2. Memory Health System
Time-based decay prevents stale memories from polluting context. A contradiction detector flags conflicting entries. Health scoring and token telemetry ensure the most relevant, healthy memories are used each run.
3. Transparency Layer
Every memory-influenced run includes attribution: which memories were used, why they were selected, and how confident the system is. Progressive disclosure shows a badge summary for quick review and full debug details when you need them.
4. Collaboration Intelligence
Decision review triggers surface choices that need human sign-off. Scope conflict resolution prevents org-level and user-level memories from contradicting each other. Hard-enforced policies cannot be overridden; soft-guidance memories can be with a logged reason.
Memory scopes
User
Personal preferences and style that follow you across projects.
Organization
Shared team standards and decisions visible to all org members.
Project
Project-specific context scoped to a single workflow or package.
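To illustrate how scope conflicts might resolve, here is a hypothetical sketch. The precedence order and the hard/soft distinction below are assumptions drawn from the pillar descriptions above; the actual resolution logic runs server-side.

```python
# Hypothetical sketch of memory scope conflict resolution. The precedence
# order (project > organization > user) and the hard-policy rule are
# assumptions for illustration; the real logic runs server-side in a21e.

PRECEDENCE = {"project": 0, "organization": 1, "user": 2}  # lower wins (assumed)

def resolve(entries: list[dict]) -> dict:
    # entries: [{"scope": ..., "value": ..., "hard": bool}, ...]
    # Hard-enforced policies cannot be overridden; otherwise the most
    # specific scope wins.
    hard = [e for e in entries if e.get("hard")]
    if hard:
        return hard[0]
    return min(entries, key=lambda e: PRECEDENCE[e["scope"]])

winner = resolve([
    {"scope": "user", "value": "tabs", "hard": False},
    {"scope": "organization", "value": "spaces", "hard": True},
])
```

Here the hard-enforced org standard wins even though the user scope is more personal, mirroring the "hard-enforced policies cannot be overridden" rule above.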
API endpoints
The memory layer is managed automatically during execution. Memory analysis, contradiction detection, and health scoring run server-side. Programmatic memory management endpoints are coming soon.
API Reference
All methods use JSON-RPC 2.0 on POST /v1/rpc. Method names use the format 2024-01/domain.action.
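Every method below shares the same envelope, so a small client helper covers all of them. This sketch builds the JSON-RPC 2.0 request body for POST /v1/rpc; sending it (with your real key) is left to whichever HTTP client you prefer.

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC request ids must be unique per request

def rpc_envelope(method: str, params: dict) -> dict:
    # Build the JSON-RPC 2.0 envelope for POST /v1/rpc.
    # Method names use the "2024-01/domain.action" format.
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": next(_ids)}

req = rpc_envelope("2024-01/intent.submit",
                   {"input": "Write unit tests for a React login form"})
body = json.dumps(req)
```

POST `body` to https://api.a21e.com/v1/rpc with the headers `Content-Type: application/json` and `Authorization: Bearer a21e_YOUR_KEY`.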
Intent
2024-01/intent.submit: Submit a new intent for processing through the pipeline.
2024-01/intent.clarify: Provide clarification for intents with needs_clarification status.
2024-01/intent.get: Retrieve a specific intent by ID.
2024-01/intent.list: List your intents with optional filtering.
Package
2024-01/package.execute: Execute a composed prompt package. Response includes execution_id, outputs, credits_used (one credit per enhancement), and prompts_consumed (number of prompts in the package that were executed).
Feedback
2024-01/feedback.submit: Submit outcome feedback on executions.
Analytics
2024-01/analytics.creator: Dashboard metrics for prompt creators.
2024-01/analytics.creator.prompt: Analytics for a specific prompt.
Library
2024-01/library.add: Add a package to your library.
2024-01/library.add_last: Save your most recent successful execution.
2024-01/library.list: List saved library packages.
2024-01/library.remove: Remove a library entry.
Prompt Lab
2024-01/lab.generate_candidates: Generate candidate prompt improvements.
2024-01/lab.start_evaluation: Begin A/B evaluation of a candidate prompt.
2024-01/lab.get_evaluation: Fetch evaluation status and results.
2024-01/lab.promote: Promote a successful candidate to production.
2024-01/lab.rollback: Revert to the previous prompt version.
2024-01/lab.events: Query lab events and metrics.
OpenAI Compatibility
a21e exposes an OpenAI-compatible API, so any tool that speaks the OpenAI format (Cursor, Continue, custom agents) can use a21e as a backend.
Endpoints
GET /v1/models: Lists available “models” including library entries and (unless library-only mode is enabled) published prompts.
POST /v1/chat/completions: Accepts OpenAI-format chat requests. Supports streaming.
Model Names
| Model ID | Behavior |
|---|---|
| a21e-auto | Full intent pipeline (normalize, route, compose, execute) |
| a21e-prompt:slug | Execute a specific prompt by its published slug |
| a21e-pattern:id | Re-execute a saved library package |
If you enable Library-only mode in the app, routing and model discovery are restricted to prompts you saved in your library. Published catalog slugs are hidden from /v1/models, and execution of unsaved catalog prompts is blocked.
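The model IDs in the table follow a simple prefix scheme, so a client can compose them programmatically. A small illustrative helper (the slug and pattern id values below are placeholders, not real catalog entries):

```python
# Helpers for composing a21e model IDs per the table above.
# The slug and pattern id used in the example are placeholders.

def model_auto() -> str:
    return "a21e-auto"  # full intent pipeline

def model_prompt(slug: str) -> str:
    return f"a21e-prompt:{slug}"  # execute a specific published prompt

def model_pattern(pattern_id: str) -> str:
    return f"a21e-pattern:{pattern_id}"  # re-execute a saved library package

model_id = model_prompt("react-unit-tests")  # hypothetical slug
```

Pass the resulting string as the `model` field of an OpenAI-format chat completion request.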
Cursor Setup
- Open Cursor Settings → Models
- Add a new model: a21e-auto
- Set “Override OpenAI Base URL” to your a21e API URL (e.g. https://api.a21e.com/v1)
- Paste your a21e API key as the “API Key”
- Select a21e-auto as your active model
Authentication
API Key
All API keys use the a21e_ prefix. Pass your key via either method:
Authorization: Bearer a21e_YOUR_KEY
X-API-Key: a21e_YOUR_KEY
Bring Your Own Key (BYOK)
Use your own LLM provider key for execution by passing BYOK headers:
X-A21E-Execution-Mode: byok
X-A21E-Provider: openai | anthropic | google | xai
X-A21E-Provider-Key: sk-...
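The two header-based styles side by side, as request-header dicts you could hand to any HTTP client (the key values are placeholders):

```python
# Standard auth vs. per-request BYOK headers. Key values are placeholders.

# Standard: a saved provider key (if any) is resolved automatically.
standard_headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer a21e_YOUR_KEY",
}

# Per-request BYOK: route execution through your own provider key.
byok_headers = {
    **standard_headers,
    "X-A21E-Execution-Mode": "byok",
    "X-A21E-Provider": "openai",               # or anthropic | google | xai
    "X-A21E-Provider-Key": "sk-YOUR_OPENAI_KEY",
}
```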
Composite Key Format
For single-field clients (like Cursor) that only support one API key field, use the composite key format:
a21e:YOUR_A21E_KEY:byok:PROVIDER:PROVIDER_KEY
Example: a21e:a21e_abc123:byok:openai:sk-xyz789
The server parses this into the individual components automatically.
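As an illustration of that parse, here is a client-side sketch; the server's actual parser may differ, and you never need to do this yourself.

```python
def parse_composite_key(key: str) -> dict:
    # Split "a21e:A21E_KEY:byok:PROVIDER:PROVIDER_KEY" into its parts.
    # Illustrative only; the a21e server does this parsing for you.
    # maxsplit=4 keeps any ':' inside the provider key intact.
    tag, a21e_key, mode, provider, provider_key = key.split(":", 4)
    if tag != "a21e" or mode != "byok":
        raise ValueError("not a composite key")
    return {"a21e_key": a21e_key, "provider": provider, "provider_key": provider_key}

parts = parse_composite_key("a21e:a21e_abc123:byok:openai:sk-xyz789")
```

Note the fixed positions: the a21e key always comes second and the provider key always comes last, so provider keys containing colons still parse cleanly.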
Error Handling
Errors follow the JSON-RPC 2.0 error format:
| Code | Name | Description |
|---|---|---|
| -32700 | Parse error | Invalid JSON in request body |
| -32600 | Invalid request | Missing required JSON-RPC fields |
| -32601 | Method not found | Unknown RPC method name |
| -32602 | Invalid params | Parameter validation failed |
| -32603 | Internal error | Unexpected server error |
| -32000 | Intent rejected | Intent failed normalization |
| -32001 | Needs clarification | Intent requires more information |
| -32002 | No matching prompts | No prompts matched the intent |
| -32003 | Insufficient credits | Not enough credits for execution |
| -32004 | Package not found | Invalid or expired package ID |
| -32005 | Execution failed | LLM provider returned an error |
| -32006 | Originality violation | Content failed originality check |
No match, no charge
When we can't match your intent to any prompt (error code -32002, “No matching prompts”), we never charge. Credits are only deducted when enhancements are executed (package.execute). If intent.submit returns no package, you get a clear error with fulfillment: "no_match" and credits_charged: 0 in the error details. You only pay for enhancements that deliver a matched result.
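A minimal sketch of handling that case client-side. The exact location of the error details is an assumption (here, the JSON-RPC `error.data` field) based on the description above.

```python
# Handling a "no match" response. The error payload shape is assumed from
# the description above (details in the JSON-RPC error.data field).

def handle(response: dict) -> str:
    error = response.get("error")
    if error is None:
        return "executed"  # credits were charged for the enhancements run
    if error.get("code") == -32002:
        data = error.get("data", {})
        # Guaranteed: fulfillment "no_match" and credits_charged 0.
        return f"no match, charged {data.get('credits_charged', 0)} credits"
    return f"error {error['code']}: {error.get('message', '')}"

sample = {"jsonrpc": "2.0", "id": 1,
          "error": {"code": -32002, "message": "No matching prompts",
                    "data": {"fulfillment": "no_match", "credits_charged": 0}}}
```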
Intent Status Taxonomy
| Status | Meaning |
|---|---|
| accepted | Intent passed validation and a package was composed |
| auto_normalized | Input was ambiguous but auto-corrected; package was composed |
| needs_clarification | Intent requires more information; use intent.clarify to respond |
| rejected | Intent was invalid, harmful, or outside platform scope |
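The statuses above map to a simple client-side branch. An illustrative sketch (the `status` result field name is an assumption):

```python
# Illustrative handling of intent.submit statuses from the table above.
# The "status" field name on the result is an assumption.

def next_step(result: dict) -> str:
    status = result["status"]
    if status in ("accepted", "auto_normalized"):
        return "execute"   # call 2024-01/package.execute with the package_id
    if status == "needs_clarification":
        return "clarify"   # call 2024-01/intent.clarify with the intent_id
    if status == "rejected":
        return "stop"      # rephrase the request or abandon it
    raise ValueError(f"unknown status: {status}")
```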
Code Examples
Handle Clarification
# If intent.submit returns status "needs_clarification":
curl -X POST https://api.a21e.com/v1/rpc \
-H "Content-Type: application/json" \
-H "Authorization: Bearer a21e_YOUR_KEY" \
-d '{
"jsonrpc": "2.0",
"method": "2024-01/intent.clarify",
"params": {
"intent_id": "INTENT_UUID",
"clarification_input": "I want Jest tests with React Testing Library"
},
"id": 3
}'
Submit Feedback
curl -X POST https://api.a21e.com/v1/rpc \
-H "Content-Type: application/json" \
-H "Authorization: Bearer a21e_YOUR_KEY" \
-d '{
"jsonrpc": "2.0",
"method": "2024-01/feedback.submit",
"params": {
"type": "signal",
"execution_id": "EXECUTION_UUID",
"outcome": "success",
"signals": {
"edit_distance": 0.12,
"reprompt_count": 0,
"test_results": { "passed": 8, "failed": 0, "total": 8 }
}
},
"id": 4
}'
Use the OpenAI Shim
# Works with any OpenAI-compatible client
curl -X POST https://api.a21e.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer a21e_YOUR_KEY" \
-d '{
"model": "a21e-auto",
"messages": [
{ "role": "user", "content": "Refactor this function to use async/await" }
]
}'
BYOK with Composite Key
# Single-field composite key (useful for Cursor/IDEs)
curl -X POST https://api.a21e.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer a21e:a21e_YOUR_KEY:byok:openai:sk-YOUR_OPENAI_KEY" \
-d '{
"model": "a21e-auto",
"messages": [
{ "role": "user", "content": "Add error handling to this endpoint" }
]
}'
Next best step
Set up quickly, then run your first real task.