Quickstart

Run your first AI agent on Pai in under 5 minutes -- no container image to build, no Dockerfile.

Prerequisites

  • pai CLI installed and logged in (see Getting Started)
  • An API key for any supported LLM provider (Anthropic, OpenAI, Google Gemini, OpenRouter, or any OpenAI-compatible endpoint like NVIDIA NIM or vLLM)

Step 1: Store your API key

A ModelProvider holds your LLM credentials. You only need to create it once per API subscription — all agents in the namespace can then reference models from it. Pick the provider you have a key for:

pai create model-provider anthropic \
  --provider anthropic \
  --api-key YOUR_ANTHROPIC_API_KEY

Models are referenced as anthropic/claude-sonnet-4-6, anthropic/claude-haiku-4-5, etc. Get a key at console.anthropic.com.

Multiple providers

You can create more than one ModelProvider in the same namespace (e.g. Anthropic for production and NVIDIA NIM for experiments). Agents pick which one to use by prefixing the model name with the ModelProvider name: anthropic/claude-sonnet-4-6 or nvidia/z-ai/glm4.7.
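As a sketch, a second ModelProvider pointing at an OpenAI-compatible endpoint might look like the following. The --provider value, the --endpoint flag, and the provider name nvidia are illustrative assumptions, not confirmed flags -- check `pai create model-provider --help` for the exact options:

```shell
# Hypothetical: register NVIDIA NIM as an OpenAI-compatible provider
# (flag names are assumptions for illustration)
pai create model-provider nvidia \
  --provider openai-compatible \
  --api-key YOUR_NVIDIA_API_KEY \
  --endpoint https://integrate.api.nvidia.com/v1
```

Agents would then reference its models with the nvidia/ prefix, as in nvidia/z-ai/glm4.7.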

Verify:

pai get model-providers
# NAME        PROVIDER    ENDPOINT   MAX/DAY   LAST USED   AGE
# ──────────────────────────────────────────────────────────────
# anthropic   anthropic                        —           5s

The LAST USED column shows how recently each provider served an LLM call (— means it hasn't been used yet). Handy for spotting stale providers you can clean up.
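If a provider has sat unused, you can remove it. Assuming pai follows the kubectl-style verbs used elsewhere in this guide (create, apply, get, logs), deletion would plausibly look like this -- old-provider is a hypothetical name:

```shell
# Assumed kubectl-style delete verb; verify with `pai --help`
pai delete model-provider old-provider
```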

Step 2: Create an Agent

An Agent describes what the agent can do -- which model to use, what tools it has, and its system prompt. No container image needed; the Pai harness provides the runtime.

pai apply -f - <<'EOF'
apiVersion: pai.io/v1
kind: Agent
metadata:
  name: my-agent
spec:
  models:
  - anthropic/claude-sonnet-4-6
  system: |
    You are a helpful assistant. When asked to write files,
    save them under /tmp/.
EOF
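Since spec.models is a list, an agent can reference models from more than one provider by prefix (see Multiple providers above). A sketch, assuming a second ModelProvider named nvidia exists:

```yaml
spec:
  models:
  - anthropic/claude-sonnet-4-6   # served by the "anthropic" ModelProvider
  - nvidia/z-ai/glm4.7            # assumes a ModelProvider named "nvidia"
```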

Step 3: Run a task

pai run hello --agent my-agent --task "Write a haiku about Kubernetes and save it to /tmp/haiku.txt"

You'll see the agent's reasoning and tool calls stream in real time. When it finishes, check the output:

pai logs hello

That's it -- no Dockerfile, no Deployment manifests, no infrastructure to manage.

What's next?

  • Chat with a running agent -- pai chat hello to send follow-up messages
  • Long-running service agents -- deploy a persistent agent with a custom image using pai apply -f agent.yaml. See the Agent reference
  • Add external services -- give your agent access to GitHub, Telegram, AWS and more with Providers
  • Browser automation -- connect a local Chrome browser with the CDP relay