Secret
Secrets store credentials — API keys, tokens, service account keys — used by ModelProviders, Providers, and Agents. They are encrypted at rest. By default the agent itself never sees the value — Pai injects it into the sidecar or gateway server-side. Secrets can also be mounted as environment variables on the agent when the agent code genuinely needs them (see Exposing a secret to the agent below for the security trade-off).
pai add secret <name> --from-literal KEY=VALUE
pai get secrets
pai describe secret <name>
pai delete secret <name>
Creating a secret
# Single value
pai add secret gemini-key --from-literal api-key=AIzaSy...
# Multiple values (e.g. AWS credentials)
pai add secret aws-creds \
  --from-literal access_key_id=AKIAIOSFODNN7EXAMPLE \
  --from-literal secret_access_key=wJalrXUtnFEMI...
Listing secrets
pai get secrets
# NAME         KEYS                               AGE
# gemini-key   api-key                            2d
# github-pat   token                              1d
# aws-creds    access_key_id, secret_access_key   5h
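To inspect a single secret, use describe. Like the listing, it reports key names and metadata only; values are never returned by the API.
pai describe secret gemini-key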
Referencing a secret
Secrets are referenced by name from ModelProviders, Providers, and Agents. In the first two cases the value is injected server-side (inside the gateway or sidecar) — the agent container never holds the value.
In a ModelProvider — the gateway uses this key to call the LLM:
spec:
  apiKeySecretRef:
    name: gemini-key   # secret name
    key: api-key       # key within the secret
In a Provider — the sidecar proxy injects this credential into outbound HTTPS requests:
spec:
  auth:
    secretRef: github-pat   # secret name
    secretKey: token        # key within the secret (default: token)
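For reference, the github-pat secret used above is created with the same CLI flow shown earlier; the token value here is a placeholder:
pai add secret github-pat --from-literal token=ghp_...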
Agents cannot read the secret value in either case — Pai does the injection on their behalf.
Exposing a secret to the agent
Some agents legitimately need a secret value at runtime — for example, signing a Slack webhook URL, encrypting a local database, or passing an API key to a library Pai doesn't proxy. Use spec.env[].secretRef on the Agent to mount a Secret as an environment variable:
spec:
  env:
    - name: SLACK_WEBHOOK
      secretRef:
        name: slack-credentials
        key: webhook-url
| Field | Description |
|---|---|
| name | Environment variable name inside the agent container |
| secretRef.name | Secret to read from |
| secretRef.key | Key within the Secret |
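The slack-credentials secret referenced above is created like any other secret; the webhook URL below is a placeholder:
pai add secret slack-credentials --from-literal webhook-url=https://hooks.slack.com/services/...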
Unlike the ModelProvider and Provider paths, an env-mounted Secret is visible to the agent. The value lives in /proc/<pid>/environ and is readable by any process the agent runs. A compromised agent can exfiltrate it.
Reserve this path for secrets the agent genuinely needs to read itself. For anything Pai can proxy — LLM API keys, GitHub PATs, AWS credentials, Telegram bot tokens — use a ModelProvider or Provider instead so the value never reaches the agent container.
Common secret shapes
| Use case | Keys |
|---|---|
| Anthropic / Gemini / OpenAI API key | api-key |
| GitHub Personal Access Token | token |
| Telegram bot token | token |
| AWS credentials | access_key_id, secret_access_key |
| Azure service principal | client_id, client_secret |
| GCP service account | key.json (full JSON content) |
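For the key.json shape, the entire service-account file becomes a single value. One way to load it with the documented --from-literal flag is to substitute the file contents from the shell (the secret name and file path here are illustrative):
pai add secret gcp-sa --from-literal key.json="$(cat service-account.json)"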
Deleting a secret
pai delete secret gemini-key
Deleting a secret that is still referenced by a ModelProvider, Provider, or Agent causes credential injection to fail for that resource. Delete the referencing resources first, or update them to use a different secret.
Security
- Secrets are stored encrypted in the platform.
- When referenced from a ModelProvider (apiKeySecretRef), the value is loaded by the gateway and used to call the LLM — the agent never sees it.
- When referenced from a Provider (auth.secretRef), the value is mounted into the sidecar proxy and injected into outbound requests — the agent never sees it.
- When referenced from an Agent (env[].secretRef), the value is exposed to the agent container as an environment variable. A compromised agent can read it — use only when the agent genuinely needs the value. See Exposing a secret to the agent.
- pai get secrets shows key names only — values are never returned by the API.