This guide walks you through setting up Chamber for your organization, from signup to submitting your first workload.
## Prerequisites
- A Kubernetes cluster with GPU nodes
- `kubectl` access to your cluster
- `helm` v3 installed
Don’t have a GPU Kubernetes cluster yet? Use our Terraform modules to deploy a GPU-ready cluster on AWS or GCP in ~15 minutes.
## Step 1: Create Your Organization
### Create organization
After signing in, you’ll be prompted to create an organization. This is your top-level tenant in Chamber.
### Invite team members
Navigate to Settings > Members to invite colleagues to your organization.
## Step 2: Connect Your Cluster
Install the Chamber agent in your Kubernetes cluster to sync workload state with the control plane.
```shell
# Get your cluster token from the Chamber dashboard:
# Settings > Security > API Tokens -> New Token

# Install the agent
helm install chamber-agent oci://public.ecr.aws/chamber/chamber-agent-chart \
  --version 0.8.9 \
  --namespace chamber-system \
  --create-namespace \
  --set saas.url="wss://controlplane-api.usechamber.io/agent" \
  --set saas.token="<YOUR_TOKEN>" \
  --set saas.clusterId="your-gpu-cluster"
```
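To keep the token out of your shell history, the same settings can be passed in a values file instead of `--set` flags. A sketch, assuming the chart accepts the same keys shown above:

```yaml
# values.yaml — mirrors the --set flags above
saas:
  url: "wss://controlplane-api.usechamber.io/agent"
  token: "<YOUR_TOKEN>"
  clusterId: "your-gpu-cluster"
```

Then install with `helm install chamber-agent oci://public.ecr.aws/chamber/chamber-agent-chart --version 0.8.9 --namespace chamber-system --create-namespace -f values.yaml`.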
Verify the agent is running:
```shell
kubectl get pods -n chamber-system -l app.kubernetes.io/name=chamber-agent
```
## Step 3: Create Your First Team
Teams represent groups or projects in your organization. They form a hierarchy for organizing capacity allocation.
### Navigate to Teams
In the Chamber dashboard, click Teams in the sidebar.
### Create root team
Click Create Team and enter:
- Name: e.g., “ML Platform”
- Description: Brief description of the team/project
### Add projects (optional)
Create sub-teams under your root team for individual projects or groups.
## Step 4: Allocate Capacity
Reserve GPU capacity for your team from your connected cluster.
### View capacity pools
Navigate to Capacity Pools to see your connected clusters.
### Create reservation
Click on a pool, then Create Reservation:
- Select the team to receive capacity
- Specify the number of GPUs to reserve
Start with a small reservation and adjust based on utilization metrics. You can always increase or redistribute capacity later.
## Step 5: Submit a Workload
With capacity reserved, your team can now submit GPU workloads.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: training-workload
  labels:
    chamber.io/team: ml-platform          # Links workload to team
    chamber.io/workload-class: reserved   # Use reserved capacity
spec:
  template:
    spec:
      containers:
        - name: trainer
          image: your-training-image:latest
          resources:
            limits:
              nvidia.com/gpu: 4
      restartPolicy: Never
```
Workloads without the `chamber.io/workload-class` label default to `elastic` and will use idle capacity.
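For comparison, an elastic workload sets the label to `elastic` explicitly (or omits it, since that is the default). A minimal sketch, with hypothetical name and image values:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: dev-experiment
  labels:
    chamber.io/team: ml-platform
    chamber.io/workload-class: elastic   # Run on idle capacity
spec:
  template:
    spec:
      containers:
        - name: trainer
          image: your-training-image:latest
          resources:
            limits:
              nvidia.com/gpu: 1
      restartPolicy: Never
```

Submit either manifest with `kubectl apply -f <file>.yaml`.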
## Next Steps