OpenClaw on Kubernetes : A Step-by-Step Guide
So I wanted my own OpenClaw running at home, on my own server, not depending on some cloud service. OpenClaw is basically a multi-agent infrastructure that you can talk to via Telegram, the browser, or whatever you want. Here is how I did it on my Rancher/RKE2 cluster.
My Setup
Before starting, here is what I have:
- Rancher v2.13.3
- RKE2 cluster (4 nodes)
- kubectl + kustomize
- A storage class called local-path
- An Anthropic API key (you get it from console.anthropic.com)
How OpenClaw is meant to be deployed
The official OpenClaw repo (github.com/openclaw/openclaw) has a scripts/k8s/ folder with everything you need:
scripts/k8s/
├── deploy.sh            # creates namespace + secret, deploys via kustomize
├── create-kind.sh       # local Kind cluster for testing
└── manifests/
    ├── kustomization.yaml
    ├── configmap.yaml   # openclaw.json + AGENTS.md
    ├── deployment.yaml
    ├── pvc.yaml
    └── service.yaml
You can run ./scripts/k8s/deploy.sh and it handles everything. I took these manifests, adapted them for my Rancher cluster (different storage class, my config, my AGENTS.md), and kept them in my own git repo.
The manifests use Kustomize instead of just running kubectl apply -f on each file separately. The reason is simple: with Kustomize you have one kustomization.yaml that lists all your files, so kubectl apply -k ./folder/ applies everything in one shot in the right order. It also makes it easy to manage overlays later (dev vs prod configs for example) without duplicating files. It is built into kubectl so no extra tool to install.
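To make the overlay idea concrete, here is a sketch of what a prod overlay could look like. The directory layout and patch file name are my own example, not something from the OpenClaw repo:

```yaml
# overlays/prod/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openclaw
resources:
  - ../../base            # the shared manifests
patches:
  - path: pvc-size.yaml   # e.g. bump storage or swap the storage class for prod
```

Then `kubectl apply -k overlays/prod/` applies the base plus the prod-specific patches, with no duplicated YAML.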
Step 1 — Create the Namespace
kubectl create namespace openclaw
Step 2 — Create the Secrets
You need two things in the secret: your Anthropic API key, and a gateway token (used to authenticate the web UI).
Important: never put secrets in git. Always create them directly from the command line.
# Generate a random gateway token
GATEWAY_TOKEN=$(openssl rand -hex 24)
kubectl create secret generic openclaw-secrets \
  --from-literal=ANTHROPIC_API_KEY="sk-ant-XXXXXXX" \
  --from-literal=OPENCLAW_GATEWAY_TOKEN="$GATEWAY_TOKEN" \
  -n openclaw
# Save the token — you need it to access the UI
echo "Your token: $GATEWAY_TOKEN"
Key names must be exact — ANTHROPIC_API_KEY not anthropicApiKey.
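If you're curious what `openssl rand -hex 24` actually gives you: 24 random bytes rendered as 48 hex characters, which is plenty of entropy for a gateway token.

```shell
# 24 random bytes -> 48 lowercase hex characters
TOKEN=$(openssl rand -hex 24)
echo "${#TOKEN}"   # prints: 48
```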
Step 3 — Create the PVC
Use your cluster's default storage class. I use Longhorn (replicated, survives node failures). Do not use local-path — if the PVC gets deleted, data is gone permanently.
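For extra protection against accidental deletion, most provisioners (Longhorn included) let you set reclaimPolicy: Retain on the StorageClass, so the underlying volume survives even if the PVC is deleted. A sketch; the class name is my own and parameters vary by provisioner:

```yaml
# Example StorageClass with Retain (hypothetical name, check your provisioner's docs)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-retain
provisioner: driver.longhorn.io
reclaimPolicy: Retain        # the PV is kept when the PVC is deleted
volumeBindingMode: Immediate
```

With Retain, deleting the PVC leaves the PV in a Released state, and the data can be recovered manually instead of being wiped.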
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openclaw
  namespace: openclaw
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
Step 4 — Create the Config Files
OpenClaw uses a ConfigMap for two things: openclaw.json (the gateway config) and AGENTS.md (instructions for the agent).
Create configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: openclaw-config
  labels:
    app: openclaw
data:
  openclaw.json: |
    {
      "agents": {
        "defaults": {
          "model": {
            "primary": "anthropic/claude-haiku-4-5-20251001"
          }
        }
      },
      "gateway": {
        "mode": "local",
        "bind": "loopback",
        "port": 18789,
        "auth": {
          "mode": "token"
        },
        "controlUi": {
          "enabled": true,
          "allowedOrigins": ["http://localhost:18789"]
        }
      }
    }
  AGENTS.md: |
    # OpenClaw Assistant

    You are a helpful assistant running on a personal Kubernetes cluster.
    Be direct and concise.
LLM Choice
This is a personal assistant running 24/7. Every message costs tokens. Here is the rough pricing (early 2026):
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| claude-haiku-4-5 | ~$0.80 | ~$4 |
| claude-sonnet-4 | ~$3 | ~$15 |
| claude-opus-4 | ~$15 | ~$75 |
Haiku is roughly 4x cheaper than Sonnet. For daily tasks it is more than enough. Switch to Sonnet per-session when you need deeper reasoning.
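To put that in perspective, a back-of-the-envelope monthly estimate. The usage numbers are my own assumptions, not measurements:

```shell
# Assumed usage: 200 messages/day, ~1000 input + ~500 output tokens each
in_tok=$((200 * 1000 * 30))   # input tokens per month:  6,000,000
out_tok=$((200 * 500 * 30))   # output tokens per month: 3,000,000

# Haiku pricing: $0.80 per 1M input, $4 per 1M output
awk -v i="$in_tok" -v o="$out_tok" \
    'BEGIN { printf "~$%.2f/month\n", i/1e6 * 0.80 + o/1e6 * 4 }'
# prints: ~$16.80/month
```

At Sonnet prices ($3 / $15) the same usage lands around $63/month, which is why Haiku as the default makes sense for an always-on assistant.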
About allowedOrigins
The gateway needs to know which browser origins can connect. Since I access via kubectl port-forward on localhost:18789, I set exactly that. If you access from a different host or over HTTPS/Tailscale, update this accordingly.
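For example, if you reached the UI through a Tailscale hostname instead of port-forward, the config would list that origin as well. The hostname here is made up; I have not tested this exact setup:

```json
"controlUi": {
  "enabled": true,
  "allowedOrigins": [
    "http://localhost:18789",
    "https://openclaw.your-tailnet.ts.net"
  ]
}
```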
Also useful to know:
- Pass the token as #token=... in the URL fragment (not ?token=); fragments are not sent to the server, so it's more secure
- The gatewayUrl is saved in localStorage after first load
- For HTTPS/TLS use wss:// not ws://
Step 5 — Create the Deployment and Service
deployment.yaml
A few things worth knowing about this file:
- Init container (init-config): runs before the main container. It copies openclaw.json and AGENTS.md from the ConfigMap into the PVC. This happens on every restart, so the ConfigMap is always the source of truth for config.
- Image: pinned to 2026.4.11, not latest. Always pin to a release.
- readOnlyRootFilesystem: true: the container filesystem is read-only for security. That's why we mount /tmp separately as an emptyDir; npm needs a writable cache somewhere.
- npm_config_cache=/tmp/.npm-cache: fixes npm permission errors inside the pod.
- No CPU/memory limits: let it use what it needs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openclaw
  labels:
    app: openclaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openclaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: openclaw
    spec:
      automountServiceAccountToken: false
      securityContext:
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      initContainers:
        - name: init-config
          image: busybox:1.37
          command:
            - sh
            - -c
            - |
              cp /config/openclaw.json /home/node/.openclaw/openclaw.json
              mkdir -p /home/node/.openclaw/workspace
              cp /config/AGENTS.md /home/node/.openclaw/workspace/AGENTS.md
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
          resources:
            requests:
              memory: 32Mi
              cpu: 50m
            limits:
              memory: 64Mi
              cpu: 100m
          volumeMounts:
            - name: openclaw-home
              mountPath: /home/node/.openclaw
            - name: config
              mountPath: /config
      containers:
        - name: gateway
          image: ghcr.io/openclaw/openclaw:2026.4.11
          command: [node, /app/dist/index.js, gateway, run]
          ports:
            - containerPort: 18789
          env:
            - name: HOME
              value: /home/node
            - name: OPENCLAW_CONFIG_DIR
              value: /home/node/.openclaw
            - name: NODE_ENV
              value: production
            - name: npm_config_cache
              value: /tmp/.npm-cache
            - name: OPENCLAW_GATEWAY_TOKEN
              valueFrom:
                secretKeyRef:
                  name: openclaw-secrets
                  key: OPENCLAW_GATEWAY_TOKEN
            - name: ANTHROPIC_API_KEY
              valueFrom:
                secretKeyRef:
                  name: openclaw-secrets
                  key: ANTHROPIC_API_KEY
                  optional: true
          resources:
            requests:
              memory: 512Mi
              cpu: 250m
          livenessProbe:
            exec:
              command: [node, -e, "require('http').get('http://127.0.0.1:18789/healthz', r => process.exit(r.statusCode < 400 ? 0 : 1)).on('error', () => process.exit(1))"]
            initialDelaySeconds: 60
            periodSeconds: 30
          readinessProbe:
            exec:
              command: [node, -e, "require('http').get('http://127.0.0.1:18789/readyz', r => process.exit(r.statusCode < 400 ? 0 : 1)).on('error', () => process.exit(1))"]
            initialDelaySeconds: 15
            periodSeconds: 10
          volumeMounts:
            - name: openclaw-home
              mountPath: /home/node/.openclaw
            - name: tmp-volume
              mountPath: /tmp
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
            runAsGroup: 1000
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: [ALL]
      volumes:
        - name: openclaw-home
          persistentVolumeClaim:
            claimName: openclaw
        - name: config
          configMap:
            name: openclaw-config
        - name: tmp-volume
          emptyDir: {}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: openclaw
  labels:
    app: openclaw
spec:
  type: ClusterIP
  selector:
    app: openclaw
  ports:
    - port: 18789
      targetPort: 18789
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openclaw
resources:
  - pvc.yaml
  - configmap.yaml
  - deployment.yaml
  - service.yaml
Apply everything
kubectl apply -k ./kustomize/
Step 6 — Access the UI
kubectl port-forward svc/openclaw 18789:18789 -n openclaw
Then open: http://localhost:18789#token=YOUR_TOKEN
Get your token anytime:
Linux/Mac:
kubectl get secret openclaw-secrets -n openclaw \
  -o jsonpath='{.data.OPENCLAW_GATEWAY_TOKEN}' | base64 -d
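The base64 -d at the end just decodes; Kubernetes stores every secret value base64-encoded, which is encoding, not encryption. A quick round trip with a made-up value shows what's going on:

```shell
printf 'my-token' | base64           # prints: bXktdG9rZW4=  (what kubectl stores)
printf 'bXktdG9rZW4=' | base64 -d    # prints: my-token
```

This is why "never put secrets in git" applies even to Secret manifests: anyone who can read the YAML can decode the values.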
Final Thoughts
It took me a few hours, mostly because I started with the wrong Helm chart. Once I switched to the official Kustomize manifests it was much cleaner. Now I have my own AI agent running 24/7 at home, connected to Telegram, with persistent memory. Pretty cool for automating things and having a real assistant that remembers context between conversations.
If you have questions feel free to reach out.