This guide installs only the Chamber agent on an existing Kubernetes cluster. To deploy a complete GPU-ready cluster (VPC, Kubernetes, GPU autoscaling, GPU drivers, and the Chamber agent), see Cluster Deployment.
Prerequisites
Before installing, ensure you have:
Kubernetes cluster (1.24+)
Verify with `kubectl version`. The agent supports Kubernetes 1.24 and later.
Helm 3.x installed
Verify with `helm version`. Install from helm.sh if needed.
GPU nodes with NVIDIA drivers
Nodes must have NVIDIA drivers installed. Verify with `nvidia-smi` on GPU nodes.
NVIDIA device plugin running
The NVIDIA device plugin must be deployed. Verify:
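A typical check looks like the following. The DaemonSet name and namespace assume the standard upstream deployment (`nvidia-device-plugin-daemonset` in `kube-system`); adjust them if your cluster uses the GPU Operator or a different install:

```shell
# Confirm the device plugin DaemonSet is running
# (name/namespace assume the standard upstream deployment)
kubectl get daemonset -n kube-system nvidia-device-plugin-daemonset

# Confirm GPU nodes advertise the nvidia.com/gpu resource
kubectl get nodes -o custom-columns=NAME:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu
```

Every GPU node should report a non-zero value in the GPUS column.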
DCGM-Exporter running
DCGM-Exporter is required for GPU metrics collection. Verify it is running; if it is not installed, deploy it via the NVIDIA GPU Operator or standalone.
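Both steps are sketched below. The Helm repo URL and chart name follow NVIDIA's published dcgm-exporter chart, and the label selector is the chart's default; verify both against your environment:

```shell
# Verify DCGM-Exporter pods are running (label may vary by install method)
kubectl get pods -A -l app.kubernetes.io/name=dcgm-exporter

# Standalone install via NVIDIA's Helm chart
helm repo add gpu-helm-charts https://nvidia.github.io/dcgm-exporter/helm-charts
helm repo update
helm install dcgm-exporter gpu-helm-charts/dcgm-exporter
```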
Chamber cluster token
Get this from the Chamber dashboard (see below).
Quick Install
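A minimal install sketch follows. The chart repository URL, chart name, namespace, and value key below are illustrative assumptions, not confirmed by this guide — substitute the exact commands shown in the Chamber dashboard:

```shell
# Hypothetical repo URL, chart name, and value key — confirm against
# the install instructions in the Chamber dashboard
helm repo add chamber https://charts.usechamber.io
helm repo update
helm install chamber-agent chamber/chamber-agent \
  --namespace chamber-system \
  --create-namespace \
  --set clusterToken=<YOUR_CLUSTER_TOKEN>
```

Replace `<YOUR_CLUSTER_TOKEN>` with the token from the Chamber dashboard (see Getting a Cluster Token below).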
Getting a Cluster Token
Log in to Chamber
Go to app.usechamber.io and sign in.
Verifying Installation
Check Pod Status
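Assuming the agent was installed into a `chamber-system` namespace (an assumption; use whatever namespace you installed into):

```shell
# List agent pods; they should reach Running with all containers ready
kubectl get pods -n chamber-system
```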
Check Logs
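The Deployment and namespace names below are assumptions based on a default install; adjust them to match your release:

```shell
# Tail recent agent logs and look for successful control plane connection
kubectl logs -n chamber-system deployment/chamber-agent --tail=50
```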
Check Version
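One way to read the running agent version is from the Deployment's image tag (names again assumed from a default install):

```shell
# Print the agent container image, whose tag carries the version
kubectl get deployment -n chamber-system chamber-agent \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```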
Verify in Dashboard
Within 30 seconds of installation:
- Go to app.usechamber.io
- Navigate to Capacity Pools
- Your cluster should appear with the correct GPU count
If GPUs show as 0, verify the NVIDIA device plugin is running and nodes have `nvidia.com/gpu` resources.

Network Requirements
The agent requires outbound HTTPS access to:

| Host | Port | Purpose |
|---|---|---|
| controlplane-api.usechamber.io | 443 | Control plane communication |

