OpenClaw Agent

This walkthrough deploys a full-featured OpenClaw agent with multiple LLM models and external service integrations. By the end, you will have an agent that can:

  • Use both Gemini and Claude for LLM reasoning
  • Create pull requests and read issues on GitHub
  • Receive and respond to messages via Telegram
  • Be accessed through a public HTTPS URL

Prerequisites

  • Pai installed on your cluster (see Getting Started)
  • A Gemini API key
  • An Anthropic API key
  • A GitHub Personal Access Token with repo scope
  • A Telegram bot token (see Telegram Integration)

Step 1: Create secrets

Store all credentials as Pai-managed secrets:

# LLM API keys
pai add secret gemini-key --from-literal api-key=YOUR_GEMINI_API_KEY
pai add secret anthropic-key --from-literal api-key=YOUR_ANTHROPIC_API_KEY

# GitHub PAT
pai add secret github-pat --from-literal token=ghp_YOUR_GITHUB_PAT

# Telegram bot token
pai add secret telegram-token --from-literal token=YOUR_TELEGRAM_BOT_TOKEN

Verify your secrets:

pai get secrets

# NAME             KEYS      AGE
# gemini-key       api-key   5s
# anthropic-key    api-key   5s
# github-pat       token     4s
# telegram-token   token     4s

Step 2: Add model providers

Create a ModelProvider for each LLM subscription, one per API key. Every model offered by that provider becomes available to any agent that references it as provider/model-id.

# Google -- fast, high-volume models (Gemini family)
pai create model-provider google \
--provider gemini \
--api-key-secret gemini-key \
--max-tokens-day 1000000

# Anthropic -- high-quality reasoning (Claude family)
pai create model-provider anthropic \
--provider anthropic \
--api-key-secret anthropic-key \
--max-tokens-day 500000

Verify:

pai get model-providers

# NAME        PROVIDER    MAX/DAY   AGE
# google      gemini      1000000   10s
# anthropic   anthropic   500000    8s

Step 3: Add services

Create Services for GitHub and Telegram:

# GitHub -- allow PR creation and issue reading
pai add service github-writer \
--provider github \
--secret-token github-pat \
--repos "your-org/your-repo"

# Telegram -- allow bot messaging
pai add service telegram-bot \
--provider telegram \
--secret-token telegram-token

Verify:

pai get services

# NAME            PROVIDER   HOST               AUTH        AGE
# github-writer   github     api.github.com     pat         5s
# telegram-bot    telegram   api.telegram.org   bot-token   4s

Step 4: Deploy the agent

Create a file called openclaw-agent.yaml:

apiVersion: pai.io/v1
kind: Agent
metadata:
  name: openclaw
spec:
  type: service
  image: ghcr.io/pai-platform/openclaw:latest
  runAsUser: 1000
  models:
    - google/gemini-2.5-flash
    - anthropic/claude-sonnet-4-6
  providers:
    - github-writer
    - telegram-bot
  inbound:
    port: 3000
    allowCIDRs:
      - "0.0.0.0/0"
  volumes:
    - name: workspace
      mountPath: /home/node/workspace
      size: "5Gi"
  configFiles:
    - path: /home/node/.openclaw/openclaw.json
      content: |
        {
          "channels": ["web", "telegram"],
          "defaultModel": "gemini-flash",
          "models": {
            "gemini-flash": {
              "displayName": "Gemini Flash",
              "description": "Fast responses, high volume"
            },
            "claude-sonnet": {
              "displayName": "Claude Sonnet",
              "description": "Deep reasoning, high quality"
            }
          }
        }
  resources:
    requests:
      cpu: "500m"
      memory: "512Mi"
    limits:
      cpu: "2"
      memory: "2Gi"

Deploy:

pai create -f openclaw-agent.yaml

Step 5: Wait for the agent to start

Watch the status until it reaches Running:

pai status openclaw

# Name: openclaw
# Namespace: pai-system
# Status: Running
# URL: https://k9m2x4.pairun.dev
# Image: ghcr.io/pai-platform/openclaw:latest
#
# Models:
# - gemini-flash (gemini / gemini-2.5-flash) -- 0 / 1,000,000 tokens today
# - claude-sonnet (anthropic / claude-sonnet-4-6) -- 0 / 500,000 tokens today
#
# Services:
# - github-writer (github / api.github.com) -- 0 requests today
# - telegram-bot (telegram / api.telegram.org) -- 0 requests today

Step 6: Access the web UI

Open the OpenClaw web interface:

pai dashboard openclaw

This opens https://k9m2x4.pairun.dev in your browser. You will see the OpenClaw chat interface with a model selector showing both Gemini Flash and Claude Sonnet.

Step 7: Configure Telegram

To pair Telegram with your agent:

  1. Open Telegram and find your bot (the one you created with BotFather)
  2. Send /start to the bot
  3. The bot is now connected to your OpenClaw agent via the Telegram Provider

Messages sent to the bot are routed through the Pai sidecar proxy, which injects the bot token and forwards to the Telegram API. See the Telegram Integration guide for detailed setup.
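To make the token-injection flow concrete, here is a minimal sketch of what an outbound Telegram call looks like from inside the agent container. It assumes the sidecar rewrites the tokenless URL into the standard Bot API path (/bot&lt;token&gt;/sendMessage); the URL shape and `build_send_message` helper are illustrative, not part of OpenClaw or Pai.

```python
import json
import urllib.request

# Sketch only: inside the agent container, requests to api.telegram.org
# carry NO bot token -- the Pai sidecar proxy injects it before forwarding.
# The tokenless URL below is an assumption about the proxy's rewrite.
def build_send_message(chat_id, text):
    body = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(
        "https://api.telegram.org/sendMessage",  # token segment added by the sidecar (assumption)
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_send_message(12345, "Hello from OpenClaw")
print(req.full_url)
```

The agent code never sees the bot token; only the sidecar, which reads it from the telegram-token secret, can attach it.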

Step 8: Test GitHub integration

Ask the agent to interact with your repository through the chat UI:

List the open issues in your-org/your-repo

The agent will call the GitHub API through the sidecar proxy, which:

  1. Intercepts the HTTPS request to api.github.com
  2. Resolves the action to issues:read
  3. Checks the policy (allowed)
  4. Checks the scope (repository is in the allowed list)
  5. Injects the GitHub PAT as a Bearer token
  6. Forwards the request to GitHub
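The policy and scope checks (steps 2-4) can be sketched as follows. This is an illustration of the logic, not Pai's actual source; ALLOWED_ACTIONS and ALLOWED_REPOS are hypothetical stand-ins for the policy attached to the github-writer service.

```python
# Hypothetical policy derived from the github-writer service definition:
# issue reading and PR creation are allowed, scoped to one repository.
ALLOWED_ACTIONS = {"issues:read", "pulls:write"}
ALLOWED_REPOS = {"your-org/your-repo"}

def authorize(action, repo):
    """Return True only if both the action policy and the repo scope pass."""
    return action in ALLOWED_ACTIONS and repo in ALLOWED_REPOS

print(authorize("issues:read", "your-org/your-repo"))  # allowed
print(authorize("issues:read", "some-other/repo"))     # denied: repo out of scope
```

Only when both checks pass does the sidecar inject the PAT and forward the request; otherwise the call is rejected before it leaves the pod.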

You can verify the activity in the logs:

pai logs openclaw --tail 20

Updating the agent

To change the configuration, edit openclaw-agent.yaml and apply:

pai apply -f openclaw-agent.yaml

Pai will perform a rolling update of the deployment, preserving the hostname and URL.
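For example, to raise the agent's memory ceiling, change the limits block in openclaw-agent.yaml (the 4Gi value here is illustrative) and re-apply:

```yaml
  resources:
    requests:
      cpu: "500m"
      memory: "512Mi"
    limits:
      cpu: "2"
      memory: "4Gi"   # raised from 2Gi
```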

Monitoring

Check token consumption and service usage:

# Overall status
pai status openclaw

# Recent events
pai events openclaw

# Live logs
pai logs openclaw

Cleanup

To remove the agent and all associated resources:

pai delete agent openclaw
pai delete model-provider google
pai delete model-provider anthropic
pai delete service github-writer
pai delete service telegram-bot
pai delete secret gemini-key
pai delete secret anthropic-key
pai delete secret github-pat
pai delete secret telegram-token