Documentation

Everything you need to integrate with a21e and improve agent performance through better intent translation, context continuity, and execution control.

These docs are for solo builders, dev teams, and enterprise operators who want more value from every agent run. a21e acts as an Agent Performance Layer between user intent and base model execution.

The platform improves outcomes through five systems working together: intent translation, adaptive memory, secure key management, task-aware skill execution, and closed-loop learning. Start with one workflow and expand as your usage matures.

Who this is for

Different roles care about different things. Here’s what matters for each.

Running agents (everyone)

You want results, not reruns. Describe the outcome in plain language; a21e translates intent, preserves your context, and executes with the right skills. You get consistent outputs without repeating setup on every task.

Start with Getting Started or OpenAI Compatibility.

Operating at team and org scale

As usage expands, a21e helps teams keep context and standards aligned across people, repos, and projects. Enterprise teams add governance controls, auditability, and deployment boundaries without losing execution speed.

See Why a21e and API Reference for implementation details.

Why a21e

Intent Translation

Requests are normalized and routed through a typed pipeline so outputs match user goals more consistently.

Context Continuity

Memory keeps preferences and prior decisions active across runs so users stop repeating setup and constraints.

Skill-Aware Execution

Execution paths can activate task-specific capabilities so responses are more complete and operationally useful.

Adaptive Memory

Four integrated pillars — Auto-Capture, Memory Health, Transparency, and Collaboration Intelligence — learn from every interaction so quality compounds over time.

Secure Key Storage

Save your LLM API keys once with AES-256-GCM encryption. Keys are resolved automatically on every run — no manual headers, no plaintext storage.

Closed-Loop Learning

Feedback and execution signals improve future routing, selection, and quality over time.

Governance and Trust

Enterprise controls support policy enforcement, auditability, and secure operation in regulated environments.

How It Works

Every request flows through a 6-stage pipeline:

Normalize
Compile
Route
Compose
Execute
Feedback

Normalize

Validates and structures raw input into a canonical intent format. Handles auto-normalization and rejection.

Compile

Extracts structured fields: task, industry, action verb, constraints, and success criteria.

Route

Matches the compiled intent to the best prompts on the platform using semantic scoring.

Compose

Assembles matched prompts into an executable package with strategy (single, sequential, parallel).

Execute

Calls the LLM provider with composed prompts. Prompt content never leaves the server boundary.

Feedback

Execution signals (edit distance, reprompt count, test results) feed back into routing scores.
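
In code terms, the Route and Feedback stages can be sketched together: score candidate prompts by semantic similarity, optionally nudged by past execution signals. The vectors, weights, and additive feedback bonus below are illustrative assumptions, not a21e's actual scoring:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def route_score(intent_vec, prompt_vec, feedback_bonus=0.0):
    # Hypothetical: semantic similarity nudged by past execution signals.
    return cosine(intent_vec, prompt_vec) + feedback_bonus

# Toy vectors: the intent is closer to prompt A than prompt B.
intent = [1.0, 0.2, 0.0]
prompt_a = [0.9, 0.3, 0.1]
prompt_b = [0.1, 0.8, 0.9]
best = max([("a", prompt_a), ("b", prompt_b)],
           key=lambda p: route_score(intent, p[1]))
print(best[0])  # prompt A wins on similarity
```

The feedback loop described above would, in this picture, raise or lower a prompt's bonus as edit-distance and test-result signals accumulate.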

Getting Started

  1. Create an account at a21e.com/sign-up
  2. Go to Settings and create an API key
  3. Submit your first intent using the API

Submit an Intent

```bash
curl -X POST https://api.a21e.com/v1/rpc \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer a21e_YOUR_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "method": "2024-01/intent.submit",
    "params": {
      "input": "Write unit tests for a React login form with email validation"
    },
    "id": 1
  }'
```

Execute the Package

```bash
curl -X POST https://api.a21e.com/v1/rpc \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer a21e_YOUR_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "method": "2024-01/package.execute",
    "params": {
      "package_id": "PACKAGE_UUID",
      "input": "LoginForm component with email/password fields, uses React Hook Form"
    },
    "id": 2
  }'
```

Provider API keys (BYOK)

What is BYOK?

BYOK stands for “Bring Your Own Key.” Think of it like using your own SIM card in a phone: a21e provides the orchestration (the phone), and your API key connects directly to the AI provider (the network). The provider bills you for tokens; a21e charges one credit per enhancement (each prompt, skill, or artifact used in a run).

What you need

  1. An a21e account (sign up free at a21e.com).
  2. An a21e API key — go to Settings → API key.
  3. An API key from your provider (see steps below for each one).

Step-by-step: OpenAI (GPT models)

  1. Go to platform.openai.com/api-keys and sign in (or create a free account).
  2. Click Create new secret key. Give it a name like “a21e” and click Create.
  3. Copy the key immediately — you won't be able to see it again. It looks like: sk-proj-abc123...
  4. That's it for OpenAI. Skip to “Save your key” below.

Common mistake: OpenAI keys always start with sk-. If your key doesn't start with that, you may be looking at the wrong page.

Step-by-step: Anthropic (Claude models)

  1. Go to console.anthropic.com/settings/keys and sign in.
  2. Click Create Key, name it, and confirm.
  3. Copy the key. It starts with sk-ant-.

Step-by-step: Google (Gemini models)

  1. Go to aistudio.google.com/app/apikey and sign in with Google.
  2. Click Create API key. Accept Terms of Service if prompted.
  3. Copy the key. It starts with AIza.

Step-by-step: xAI (Grok models)

  1. Go to console.x.ai and sign in.
  2. Navigate to API keys and click Create API key.
  3. Copy the key and store it safely.

Save your key

Once you have your provider key, save it in a21e so it's used automatically on every request. Go to Settings → Provider keys, choose the provider, paste your key, and click Save key. Keys are encrypted with AES-256-GCM and never shown again after saving.

Three ways to use your key

A. Saved key (recommended)

Save your key in the Provider Key Manager. It's used automatically — no extra headers needed.

B. API headers (per-request)

Send X-A21E-Provider: openai and X-A21E-Provider-Key: sk-... with each request.

C. Composite key

Bundle both keys into one string: a21e:YOUR_KEY:byok:openai:sk-...

Quick reference

| Provider | Where to get a key | Key starts with | a21e value |
| --- | --- | --- | --- |
| OpenAI | platform.openai.com/api-keys | sk- | openai |
| Anthropic | console.anthropic.com/settings/keys | sk-ant- | anthropic |
| Google | aistudio.google.com/app/apikey | AIza | google |
| xAI | console.x.ai | (varies) | xai |

FAQ / Troubleshooting

My key doesn't work

Check that the key starts with the expected prefix (see table above). Make sure you haven't accidentally added spaces. You can use the “Test connection” button in the Provider Key Manager to verify.

Which provider should I choose?

OpenAI (GPT-4o) is the most widely compatible. If you already have a key from any provider, use that one. You can save keys from multiple providers.

Do I still need credits with BYOK?

Yes. BYOK plans still use a21e credits for prompt execution and orchestration. If you run out of credits, add credits or renew your plan to continue.

Who bills me for usage?

Your LLM provider bills you for tokens used. a21e bills credits for platform execution (prompt delivery, orchestration, memory, and skills).

Adaptive memory

Adaptive memory is a unified system that learns from every interaction. Instead of losing context between runs, it auto-captures what matters and keeps memory healthy over time. It solves ten common human-AI friction points through four integrated pillars.

1. Auto-Capture Engine

Extracts knowledge, detects patterns, and tracks corrections from your interactions. Preferences, project decisions, and style choices are captured automatically — no manual tagging required. Sensitivity classification blocks security credentials, legal-privileged material, and PII from auto-capture.

2. Memory Health System

Time-based decay prevents stale memories from polluting context. A contradiction detector flags conflicting entries. Health scoring and token telemetry ensure the most relevant, healthy memories are used each run.
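
The decay behavior can be pictured as an exponential half-life. The half-life, the staleness cutoff, and the `health_score` function below are invented for illustration; the real schedule runs server-side:

```python
HALF_LIFE_DAYS = 30.0   # assumed; the actual decay schedule is not documented
STALE_THRESHOLD = 0.25  # assumed cutoff below which a memory is deprioritized

def health_score(age_days: float, base_relevance: float = 1.0) -> float:
    # Exponential decay: the score halves every HALF_LIFE_DAYS.
    return base_relevance * 0.5 ** (age_days / HALF_LIFE_DAYS)

fresh = health_score(0)    # 1.0
month = health_score(30)   # 0.5
stale = health_score(90)   # 0.125, below the assumed threshold
print(fresh, month, stale < STALE_THRESHOLD)
```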

3. Transparency Layer

Every memory-influenced run includes attribution: which memories were used, why they were selected, and how confident the system is. Progressive disclosure shows a badge summary for quick review and full debug details when you need them.

4. Collaboration Intelligence

Decision review triggers surface choices that need human sign-off. Scope conflict resolution prevents org-level and user-level memories from contradicting each other. Hard-enforced policies cannot be overridden; soft-guidance memories can be with a logged reason.

Memory scopes

User

Personal preferences and style that follow you across projects.

Organization

Shared team standards and decisions visible to all org members.

Project

Project-specific context scoped to a single workflow or package.

API endpoints

The memory layer is managed automatically during execution. Memory analysis, contradiction detection, and health scoring run server-side. Programmatic memory management endpoints are coming soon.

API Reference

All methods use JSON-RPC 2.0 on POST /v1/rpc. Method names use the format 2024-01/domain.action.

Intent

2024-01/intent.submit

Submit a new intent for processing through the pipeline.

  • input (string, required): The task description (5-10,000 chars)
  • primary_intent (object): Structured intent with task, industry, action_verb, object_noun
  • secondary_intents (array): Additional intents with relationship (parallel, sequential, conditional)
  • constraints (object): Format, max_length, audience, language constraints
  • scope (object): Limit to specific prompt_ids, tags, or set_name
  • risk_tolerance (enum): low, moderate, high
  • time_pressure (enum): urgent, normal, relaxed

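
A minimal sketch of assembling an intent.submit request body from the parameters above (the helper `build_intent_submit` is illustrative, not part of any SDK):

```python
import json

def build_intent_submit(input_text, constraints=None, risk_tolerance=None, request_id=1):
    # Assemble a JSON-RPC 2.0 envelope for 2024-01/intent.submit,
    # omitting optional fields that were not set.
    params = {"input": input_text}
    if constraints:
        params["constraints"] = constraints
    if risk_tolerance:
        params["risk_tolerance"] = risk_tolerance
    return {"jsonrpc": "2.0", "method": "2024-01/intent.submit",
            "params": params, "id": request_id}

body = build_intent_submit(
    "Write unit tests for a React login form",
    constraints={"format": "markdown", "language": "en"},
    risk_tolerance="low",
)
print(json.dumps(body, indent=2))
```

POST the resulting JSON to /v1/rpc with your a21e key in the Authorization header.
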
2024-01/intent.clarify

Provide clarification for intents with needs_clarification status.

  • intent_id (uuid, required): The intent requiring clarification
  • clarification_input (string, required): Clarification response (1-10,000 chars)

2024-01/intent.get

Retrieve a specific intent by ID.

  • intent_id (uuid, required): The intent to fetch

2024-01/intent.list

List your intents with optional filtering.

  • limit (integer): 1-100, default 20
  • offset (integer): Default 0
  • status_filter (enum): accepted, auto_normalized, needs_clarification, rejected

Package

2024-01/package.execute

Execute a composed prompt package. Response includes execution_id, outputs, credits_used (one credit per enhancement), and prompts_consumed (number of prompts in the package that were executed).

  • package_id (uuid, required): The package to execute
  • input (string, required): Execution input (1-100,000 chars)
  • model (string): LLM model override (e.g. gpt-4o, claude-sonnet-4-20250514)
  • provider_api_key (string): BYOK provider API key

Feedback

2024-01/feedback.submit

Submit outcome feedback on executions.

  • type (enum, required): explicit or signal
  • execution_id (uuid): Execution reference
  • package_id (uuid): Package reference
  • outcome (enum): success, failure, partial
  • reason (string): Human-readable reason
  • signals (object): edit_distance, reprompt_count, time_to_accept_ms, test_results
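
A sketch of assembling params for feedback.submit. The helper `build_feedback` and its range check on edit_distance are assumptions for illustration, not documented validation rules:

```python
def build_feedback(execution_id, outcome, signals):
    # Assemble params for 2024-01/feedback.submit with type "signal".
    # The range check below is an illustrative assumption.
    if outcome not in ("success", "failure", "partial"):
        raise ValueError("outcome must be success, failure, or partial")
    if "edit_distance" in signals and not 0.0 <= signals["edit_distance"] <= 1.0:
        raise ValueError("edit_distance assumed to be a 0-1 ratio")
    return {"type": "signal", "execution_id": execution_id,
            "outcome": outcome, "signals": signals}

params = build_feedback(
    "00000000-0000-0000-0000-000000000000",
    "success",
    {"edit_distance": 0.12, "reprompt_count": 0},
)
```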

Analytics

2024-01/analytics.creator

Dashboard metrics for prompt creators.

  • period_start (string): ISO date start
  • period_end (string): ISO date end

2024-01/analytics.creator.prompt

Analytics for a specific prompt.

  • prompt_id (uuid, required): Prompt to analyze
  • period_start (string): ISO date start
  • period_end (string): ISO date end

Library

2024-01/library.add

Add a package to your library.

  • package_id (uuid, required): Package to save
  • label (string): Custom label (max 200 chars)

2024-01/library.add_last

Save your most recent successful execution.

  • label (string): Custom label (max 200 chars)

2024-01/library.list

List saved library packages.

  • limit (integer): 1-100, default 50
  • offset (integer): Default 0

2024-01/library.remove

Remove a library entry.

  • entry_id (uuid): Library entry ID
  • package_id (uuid): Package ID (alternative)

Prompt Lab

2024-01/lab.generate_candidates

Generate candidate prompt improvements.

  • prompt_id (uuid, required): Prompt to improve
  • intent_category (enum, required): refactor, add-feature-tests, security-audit
  • failure_clusters (string[]): Failure patterns to address

2024-01/lab.start_evaluation

Begin A/B evaluation of a candidate prompt.

  • prompt_id (uuid, required): Prompt being evaluated
  • candidate_version_id (uuid, required): Candidate version to test
  • intent_category (enum, required): refactor, add-feature-tests, security-audit
  • canary_percent (number): Traffic split, 0.01-0.20 (default 0.10)
  • min_executions (integer): Minimum test runs before decision

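
To see what a canary_percent split means in practice, here is a hash-based bucketing sketch. The bucketing scheme is an illustrative assumption; a21e's actual traffic splitting happens server-side:

```python
import hashlib

def in_canary(execution_key: str, canary_percent: float = 0.10) -> bool:
    # Deterministic traffic split: hash the execution key into [0, 1)
    # and compare against canary_percent.
    digest = hashlib.sha256(execution_key.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < canary_percent

# Roughly canary_percent of keys land in the canary arm.
hits = sum(in_canary(f"exec-{i}", 0.10) for i in range(10_000))
print(hits)
```
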
2024-01/lab.get_evaluation

Fetch evaluation status and results.

  • evaluation_id (uuid, required): Evaluation to check

2024-01/lab.promote

Promote a successful candidate to production.

  • evaluation_id (uuid, required): Evaluation with positive results

2024-01/lab.rollback

Revert to the previous prompt version.

  • evaluation_id (uuid, required): Evaluation to roll back
  • reason (string): Rollback justification (max 1000 chars)

2024-01/lab.events

Query lab events and metrics.

  • prompt_id (uuid): Filter by prompt
  • intent_category (enum): refactor, add-feature-tests, security-audit
  • limit (integer): 1-100
  • offset (integer): Default 0

OpenAI Compatibility

a21e exposes an OpenAI-compatible API, so any tool that speaks the OpenAI format (Cursor, Continue, custom agents) can use a21e as a backend.

Endpoints

GET /v1/models

Lists available “models” including library entries and (unless library-only mode is enabled) published prompts.

POST /v1/chat/completions

Accepts OpenAI-format chat requests. Supports streaming.

Model Names

| Model ID | Behavior |
| --- | --- |
| a21e-auto | Full intent pipeline (normalize, route, compose, execute) |
| a21e-prompt:slug | Execute a specific prompt by its published slug |
| a21e-pattern:id | Re-execute a saved library package |

If you enable Library-only mode in the app, routing and model discovery are restricted to prompts you saved in your library. Published catalog slugs are hidden from /v1/models, and execution of unsaved catalog prompts is blocked.
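
Client-side handling of the three model ID forms can be sketched as follows (`parse_model_id` is a hypothetical helper, not part of the API):

```python
def parse_model_id(model: str):
    # Split an a21e model ID into (mode, argument).
    # "a21e-auto" runs the full pipeline; the prefixed forms carry
    # a published slug or a saved library package ID.
    if model == "a21e-auto":
        return ("auto", None)
    if model.startswith("a21e-prompt:"):
        return ("prompt", model.removeprefix("a21e-prompt:"))
    if model.startswith("a21e-pattern:"):
        return ("pattern", model.removeprefix("a21e-pattern:"))
    raise ValueError(f"unknown a21e model ID: {model}")

print(parse_model_id("a21e-prompt:react-unit-tests"))
```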

Cursor Setup

  1. Open Cursor Settings → Models
  2. Add a new model: a21e-auto
  3. Set “Override OpenAI Base URL” to your a21e API URL (e.g. https://api.a21e.com/v1)
  4. Paste your a21e API key as the “API Key”
  5. Select a21e-auto as your active model

Authentication

API Key

All API keys use the a21e_ prefix. Pass your key via either method:

  • Authorization: Bearer a21e_YOUR_KEY
  • X-API-Key: a21e_YOUR_KEY

Bring Your Own Key (BYOK)

Use your own LLM provider key for execution by passing BYOK headers:

  • X-A21E-Execution-Mode: byok
  • X-A21E-Provider: openai | anthropic | google | xai
  • X-A21E-Provider-Key: sk-...

Composite Key Format

For single-field clients (like Cursor) that only support one API key field, use the composite key format:

a21e:YOUR_A21E_KEY:byok:PROVIDER:PROVIDER_KEY

Example: a21e:a21e_abc123:byok:openai:sk-xyz789

The server parses this into the individual components automatically.
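
A client-side sketch of splitting a composite key into its parts, mirroring the format above (the server's actual parser may differ):

```python
def parse_composite_key(composite: str):
    # Split "a21e:A21E_KEY:byok:PROVIDER:PROVIDER_KEY" into parts.
    # maxsplit=4 keeps any colons inside the provider key intact.
    parts = composite.split(":", 4)
    if len(parts) != 5 or parts[0] != "a21e" or parts[2] != "byok":
        raise ValueError("not a valid composite key")
    _, a21e_key, _, provider, provider_key = parts
    return {"a21e_key": a21e_key, "provider": provider,
            "provider_key": provider_key}

print(parse_composite_key("a21e:a21e_abc123:byok:openai:sk-xyz789"))
```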

Error Handling

Errors follow the JSON-RPC 2.0 error format:

| Code | Name | Description |
| --- | --- | --- |
| -32700 | Parse error | Invalid JSON in request body |
| -32600 | Invalid request | Missing required JSON-RPC fields |
| -32601 | Method not found | Unknown RPC method name |
| -32602 | Invalid params | Parameter validation failed |
| -32603 | Internal error | Unexpected server error |
| -32000 | Intent rejected | Intent failed normalization |
| -32001 | Needs clarification | Intent requires more information |
| -32002 | No matching prompts | No prompts matched the intent |
| -32003 | Insufficient credits | Not enough credits for execution |
| -32004 | Package not found | Invalid or expired package ID |
| -32005 | Execution failed | LLM provider returned an error |
| -32006 | Originality violation | Content failed originality check |
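
A sketch of dispatching on these error codes; the retry classification below is an illustrative assumption, not a21e guidance:

```python
# Assumed groupings: transient failures worth retrying vs. errors
# that need user input before the request can succeed.
RETRYABLE = {-32603, -32005}          # internal error, provider failure
NEEDS_USER_INPUT = {-32001, -32003}   # clarification, insufficient credits

def classify_error(response: dict) -> str:
    err = response.get("error")
    if err is None:
        return "ok"
    code = err["code"]
    if code in RETRYABLE:
        return "retry"
    if code in NEEDS_USER_INPUT:
        return "ask_user"
    return "fail"

resp = {"jsonrpc": "2.0", "id": 1,
        "error": {"code": -32001, "message": "Needs clarification"}}
print(classify_error(resp))  # ask_user
```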

No match, no charge

When we can't match your intent to any prompt (error code -32002, “No matching prompts”), we never charge. Credits are only deducted when enhancements are executed (package.execute). If intent.submit returns no package, you get a clear error with fulfillment: "no_match" and credits_charged: 0 in the error details. You only pay for enhancements that deliver a matched result.

Intent Status Taxonomy

| Status | Meaning |
| --- | --- |
| accepted | Intent passed validation and a package was composed |
| auto_normalized | Input was ambiguous but auto-corrected; package was composed |
| needs_clarification | Intent requires more information; use intent.clarify to respond |
| rejected | Intent was invalid, harmful, or outside platform scope |
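
A sketch mapping each status to a client's likely next call (`next_action` is a hypothetical helper that just restates the taxonomy above):

```python
def next_action(status: str) -> str:
    # Map an intent status to the client's next step.
    return {
        "accepted": "package.execute",
        "auto_normalized": "package.execute",   # review the normalization first
        "needs_clarification": "intent.clarify",
        "rejected": "revise_and_resubmit",
    }[status]

print(next_action("needs_clarification"))  # intent.clarify
```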

Code Examples

Handle Clarification

```bash
# If intent.submit returns status "needs_clarification":
curl -X POST https://api.a21e.com/v1/rpc \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer a21e_YOUR_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "method": "2024-01/intent.clarify",
    "params": {
      "intent_id": "INTENT_UUID",
      "clarification_input": "I want Jest tests with React Testing Library"
    },
    "id": 3
  }'
```

Submit Feedback

```bash
curl -X POST https://api.a21e.com/v1/rpc \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer a21e_YOUR_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "method": "2024-01/feedback.submit",
    "params": {
      "type": "signal",
      "execution_id": "EXECUTION_UUID",
      "outcome": "success",
      "signals": {
        "edit_distance": 0.12,
        "reprompt_count": 0,
        "test_results": { "passed": 8, "failed": 0, "total": 8 }
      }
    },
    "id": 4
  }'
```

Use the OpenAI Shim

```bash
# Works with any OpenAI-compatible client
curl -X POST https://api.a21e.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer a21e_YOUR_KEY" \
  -d '{
    "model": "a21e-auto",
    "messages": [
      { "role": "user", "content": "Refactor this function to use async/await" }
    ]
  }'
```

BYOK with Composite Key

```bash
# Single-field composite key (useful for Cursor/IDEs)
curl -X POST https://api.a21e.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer a21e:a21e_YOUR_KEY:byok:openai:sk-YOUR_OPENAI_KEY" \
  -d '{
    "model": "a21e-auto",
    "messages": [
      { "role": "user", "content": "Add error handling to this endpoint" }
    ]
  }'
```

Next best step

Set up quickly, then run your first real task.