# OpenAI

Use GPT and o-series models from OpenAI as the LLM for your agents.
## Get an API key

Sign up at platform.openai.com and create an API key under **API Keys**.
## Setup

```shell
pai create model-provider openai \
  --provider openai \
  --api-key sk-...
```
This stores your API key securely and creates a ModelProvider that covers every model under your subscription.
Verify:

```shell
pai get model-providers
# NAME     PROVIDER   ENDPOINT   MAX/DAY   LAST USED   AGE
# openai   openai     —          —         —           5s
```
## Supported models
| Model | Reference | Best for |
|---|---|---|
| GPT-4o | openai/gpt-4o | Best overall — multimodal, fast |
| GPT-4o mini | openai/gpt-4o-mini | Fast, cost-efficient |
| o1 | openai/o1 | Deep reasoning |
| o3-mini | openai/o3-mini | Fast reasoning |
## Use in an agent

```yaml
spec:
  models:
    - openai/gpt-4o
```
Multiple models — first is primary, rest are fallbacks:

```yaml
spec:
  models:
    - openai/gpt-4o       # primary
    - openai/gpt-4o-mini  # fallback if the primary fails or is over budget
```
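Pai resolves the fallback order server-side, but the semantics of an ordered `spec.models` list can be sketched as follows (a minimal sketch; `call_model` is a hypothetical stand-in for one gateway request, not a Pai API):

```python
def complete_with_fallback(models, call_model, prompt):
    """Try each model in order; return the first successful response.

    `models` is the ordered list from spec.models. `call_model` is any
    callable that raises on failure (provider error, budget 429, etc.).
    """
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except Exception as err:
            last_error = err  # remember why this model failed, try the next
    raise RuntimeError(f"all models failed: {last_error}")
```

The primary is only skipped when a call actually fails, so cheaper fallbacks do not change normal routing.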
## Token budgets

Cap how many tokens this subscription burns per day across every agent — a hard safety net against runaway spend.
```yaml
apiVersion: pai.io/v1
kind: ModelProvider
metadata:
  name: openai
spec:
  provider: openai
  apiKeySecretRef:
    name: openai-key
    key: api-key
  maxTokensPerDay: 5000000     # daily cap shared across all agents
  maxTokensPerRequest: 128000  # per-request context-window limit
```
When the daily cap is hit, the gateway returns HTTP 429 until midnight UTC. Agents that list another provider in `spec.models` automatically fail over to it.
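Clients calling through the gateway see the same 429. Backing off and retrying is the usual pattern, though for a daily cap the retry only succeeds once the window resets, so failing over is usually the better option. A minimal retry sketch (`send` is a hypothetical callable performing one HTTP request and returning a status/body pair):

```python
import time

def call_with_backoff(send, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry on HTTP 429 with exponential backoff.

    `send` performs one request and returns (status, body); it is injected
    so the sketch stays transport-agnostic.
    """
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 429:
            return status, body
        if attempt < max_retries:
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    return status, body
```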
## Expose via the LLM Gateway

Set `externalAccess.enabled: true` to let developers outside the cluster — laptops, CI, scripts — route their own LLM traffic through this provider. The OpenAI API key stays inside Pai; clients authenticate with a Pai AccessKey instead.
```yaml
spec:
  externalAccess:
    enabled: true
    maxTokensPerDay: 2000000  # separate budget for external usage
```
Once enabled, developers connect with two commands:

```shell
pai login https://api.pairun.dev --access-key pak_...
eval $(pai gateway env)
# OPENAI_BASE_URL is set; OpenAI SDK / LangChain / CrewAI now route through Pai
```
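After `eval $(pai gateway env)`, SDKs that honor `OPENAI_BASE_URL` need no code changes. A sketch of the resulting endpoint resolution (the exact variables `pai gateway env` exports, and the gateway URL shape, are assumptions here; see the LLM Gateway page for the real list):

```python
import os

def resolve_endpoint(path="/chat/completions"):
    """Build the URL an OpenAI-compatible SDK would hit for chat completions,
    honoring OPENAI_BASE_URL the way the official SDK does."""
    base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    return base.rstrip("/") + path
```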
See LLM Gateway for the full onboarding flow, AccessKey management, and per-developer rate limits.
## Access control

Narrow which models agents may call on this provider with `allowedModels` / `deniedModels`, or attach prompt-injection guards. See Security controls on the Model page for the full field list.
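Pai evaluates these lists at the gateway, but the semantics can be sketched as follows (assumptions, not confirmed by this page: an explicit deny wins over an allow, and an empty or absent `allowedModels` list permits every model on the provider):

```python
def model_permitted(model, allowed=None, denied=None):
    """Return True if `model` may be called under the assumed
    allowedModels/deniedModels semantics: denials win, and an
    empty/absent allow list permits all models."""
    if denied and model in denied:
        return False
    if allowed:  # non-empty allow list acts as a whitelist
        return model in allowed
    return True
```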