
Chamber CLI

Go from Python code to running GPU workload in one command. The Chamber CLI eliminates the complexity of containerization, registry management, and Kubernetes — so ML engineers and data scientists can focus on what matters: training models.
No Docker expertise required. Chamber auto-detects your project, generates optimized Dockerfiles, handles registry authentication, and submits your workload, with interactive guidance at every step.

AI Assistant — Ask Your Infrastructure Anything

Query your GPU infrastructure in natural language, directly from the terminal:
chamber chat "how many GPUs are running right now?"
Start an interactive conversation for back-and-forth troubleshooting, capacity planning, or workload analysis:
chamber chat

The Fastest Path to GPU Training

chamber run ./my-training-project --gpus 4 --team my-team
That’s it. One command. Chamber handles everything else:
1. Detects your project: automatically identifies PyTorch, TensorFlow, or JAX, and finds your entrypoint and requirements.
2. Generates an optimized Dockerfile: creates a GPU-optimized container with CUDA, cuDNN, and your dependencies.
3. Guides you through setup: missing Docker? No registry configured? Chamber walks you through each step interactively.
4. Builds, pushes, and submits: handles authentication, builds your image, pushes to your registry, and submits to Chamber.

Quick Start

Install and run your first workload

Interactive Setup — No Prior Configuration Needed

Chamber CLI guides you through everything. Don’t have Docker? It’ll help you install it. No registry configured? It’ll walk you through setting one up:
$ chamber run ./my-training --gpus 4 --team abc123

No container registry configured

Chamber needs a container registry to store your Docker images.
You can use Google Artifact Registry, AWS ECR, or any Docker-compatible registry.

Select your registry type:

  [1] Google Artifact Registry (recommended for GCP users)
      Example: us-central1-docker.pkg.dev/my-project/ml-images

  [2] AWS ECR (recommended for AWS users)
      Example: 123456789012.dkr.ecr.us-east-1.amazonaws.com/ml-images

  [3] Other Docker registry

Select an option [1]: 1

Google Artifact Registry Setup

Enter your GAR registry URL: us-central1-docker.pkg.dev/my-project/ml-images

Save as default registry? (y/n) [y]: y
Default registry saved to ~/.chamber/config.json

[detect] Analyzing project structure...
[detect] Framework: PyTorch
[detect] Entrypoint: train.py
[setup] Checking prerequisites...

First-time users: just run chamber run on your project. The CLI guides you through every step, from installing Docker to configuring your registry to submitting your workload.

Automatic Prerequisite Detection

Missing a tool? Chamber detects it and offers to help:
$ chamber run ./my-project --gpus 4 --team abc123

⚠ Docker is not installed
  Docker is required to build and push container images.

Installation options:

  [1] Quick install (recommended)
      brew install --cask docker
  [2] Open installation guide in browser
  [3] Show manual installation instructions
  [4] Skip (continue anyway)

Select an option [1]:
The same guided experience applies to:
  • Docker — Required for building images
  • AWS CLI — For ECR authentication
  • gcloud CLI — For Google Artifact Registry authentication
  • Azure CLI — For ACR authentication

Why ML Engineers Love Chamber CLI

Zero Config Start

Run chamber run and follow the prompts. No YAML files, no Docker knowledge needed.

Smart Defaults

Auto-detects frameworks, entrypoints, and optimal GPU configurations.

One-Time Setup

Configure once, run forever. Settings are saved for future use.

Preview First

Use --dry-run to see exactly what will be generated before building.

Works Everywhere

Single binary with no dependencies. SSH-friendly for remote workstations.

Scriptable

JSON output mode for CI/CD pipelines and automation.
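As a sketch of what scripting against JSON output could look like, the snippet below sums GPUs across running workloads. The JSON schema shown (and the idea of piping it from a workload-listing command) is an assumption for illustration; consult the CLI's own help output for the actual flags and fields.

```python
import json

# Hypothetical JSON output from a workload-listing command;
# the schema here is an assumption, for illustration only.
sample = """
[
  {"id": "wl-001", "status": "Running", "gpus": 4},
  {"id": "wl-002", "status": "Succeeded", "gpus": 8}
]
"""

def running_gpus(payload: str) -> int:
    """Sum GPUs across workloads still in the Running state."""
    workloads = json.loads(payload)
    return sum(w["gpus"] for w in workloads if w["status"] == "Running")

print(running_gpus(sample))  # prints 4
```

In a CI pipeline, the same parsing logic could gate a deployment step on available capacity or fail the build if a workload did not reach a terminal state.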

Quick Command Reference

  What you want to do            Command
  Ask a question                 chamber chat "what GPUs are available?"
  Start a conversation           chamber chat
  Submit a training workload     chamber run ./my-project --gpus 4 --team <id>
  Preview before submitting      chamber run ./my-project --gpus 4 --dry-run
  List your workloads            chamber workloads list
  Check GPU capacity             chamber capacity
  View workload status           chamber workloads get <id>
  Cancel a workload              chamber workloads cancel <id>

System Requirements

  • macOS (Intel or Apple Silicon) or Linux (x86_64 or ARM64)
  • A Chamber account
For auto-containerization (chamber run):
  • Docker — Chamber will help you install it
  • Cloud CLI (gcloud/aws/az) — Chamber will help you install it

Get Started

Install Chamber CLI and run your first workload