Chamber provides Terraform modules that deploy complete GPU-ready Kubernetes clusters on AWS and Google Cloud. A single `terraform apply` creates the entire infrastructure stack, from networking to GPU workload orchestration.
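As a minimal sketch, invoking one of the modules might look like the following. The module name, source address, and input variables are illustrative assumptions; consult the module's README for its actual interface:

```hcl
# Hypothetical module invocation -- the source address and inputs here are
# placeholders, not the published interface.
module "chamber_cluster" {
  source = "..." # Chamber's AWS or Google Cloud module

  cluster_name = "gpu-cluster"
  region       = "us-east-1"
}
```

Running `terraform apply` against a configuration like this is what triggers the full stack deployment described below.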

What Gets Deployed

Each module provisions the full stack your team needs to run GPU workloads with Chamber: networking, a managed Kubernetes cluster, GPU node autoscaling, GPU drivers, GPU time-sharing, and the Chamber agent.

Terraform Modules vs. Helm Agent

Chamber offers two ways to connect a cluster. Choose based on your starting point:
| | Terraform Modules | Helm Agent Only |
|---|---|---|
| What it creates | Full infrastructure stack | Chamber agent only |
| Prerequisites | Cloud provider account + Terraform | Existing Kubernetes cluster with GPUs |
| Time to deploy | ~15-20 minutes | ~30 seconds |
| GPU autoscaling | Included (Karpenter) | Not included |
| GPU drivers | Included (NVIDIA GPU Operator) | Must be pre-installed |
| Best for | New clusters, greenfield deployments | Existing clusters |
Already have a Kubernetes cluster with GPU nodes? Skip the Terraform modules and install the Chamber agent directly via Helm.
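A Helm install might look like the sketch below. The repository URL, chart name, and value keys are illustrative assumptions; use the names from Chamber's install documentation:

```shell
# Hypothetical commands -- repo URL, chart name, and value keys are
# placeholders, not Chamber's actual published chart.
helm repo add chamber <CHAMBER_HELM_REPO_URL>
helm install chamber-agent chamber/agent \
  --namespace chamber --create-namespace \
  --set apiKey=<YOUR_API_KEY>
```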

Choose Your Cloud

Terraform modules are available for AWS and for Google Cloud; both deploy the same stack.

Components

Both modules deploy the same core components:
Karpenter automatically provisions and deprovisions GPU nodes based on workload demand. When a GPU job is submitted, Karpenter launches the right instance type. When nodes sit idle, Karpenter consolidates or terminates them to reduce costs.
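For illustration, a Karpenter NodePool restricted to GPU instance families might look like this. The Chamber modules ship their own configuration, so treat the names and requirements below as assumptions (the `nodeClassRef` section is also omitted for brevity):

```yaml
# Illustrative Karpenter NodePool (v1 API), trimmed to the GPU-relevant
# parts; not the configuration the Chamber modules actually install.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gpu
spec:
  template:
    spec:
      requirements:
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["g", "p"]  # GPU instance families on AWS
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized  # reclaim idle nodes
```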
The NVIDIA GPU Operator installs GPU drivers, the device plugin (enabling nvidia.com/gpu resource requests), DCGM-Exporter (GPU metrics for Chamber dashboards), and GPU Feature Discovery (node labeling by GPU type).
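Once the device plugin is running, pods request GPUs through the standard `nvidia.com/gpu` resource. A smoke-test pod might look like the following (the CUDA image tag is an illustrative choice):

```yaml
# Runs nvidia-smi once on a node with an allocatable GPU.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # request one whole GPU
```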
The KAI Scheduler enables GPU time-sharing, allowing multiple containers to share a single GPU. This improves utilization and reduces costs for workloads that don’t need a full GPU.
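A time-sharing pod might be declared as sketched below. The `gpu-fraction` annotation and `kai-scheduler` scheduler name reflect recent KAI Scheduler releases, and KAI deployments typically also require queue assignment; verify all of these against the version your cluster runs:

```yaml
# Sketch of fractional GPU sharing via the KAI Scheduler -- annotation and
# scheduler names are assumptions to check against your KAI version.
apiVersion: v1
kind: Pod
metadata:
  name: half-gpu-job
  annotations:
    gpu-fraction: "0.5"   # share one GPU with another half-GPU pod
spec:
  schedulerName: kai-scheduler
  containers:
    - name: worker
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["sleep", "infinity"]
```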
The Chamber Agent connects your cluster to the Chamber control plane, reporting capacity, managing workloads, and collecting GPU metrics.