## Features

- One-Liner Submissions - Auto-containerize and submit workloads without Docker or Kubernetes expertise
- Workload Submission - Submit GPU workloads with full control over resources, priority, and scheduling
- Monitoring - Track workload status, retrieve GPU metrics, and get workload statistics
- Capacity Management - Query available GPU capacity, budgets, and manage allocations
- Distributed Training - Support for multi-node distributed training with gang and elastic scheduling
- Teams & Templates - Manage teams and use workload templates for consistent configurations
## Quick Examples

### One-Liner: Auto-Containerize & Submit
The fastest way to get your training code running on GPUs. No Dockerfile or Kubernetes knowledge required.

> Install with `pip install chamber-sdk[run]` to use this feature. See the Auto-Containerize & Run guide for details.
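A minimal sketch of what a one-liner submission could look like (the `run` helper and its parameter names are illustrative assumptions, not the SDK's confirmed API):

```python
# Illustrative only: the function and parameter names are assumptions.
from chamber_sdk import run

# Auto-containerize the current project and submit it; the framework
# (PyTorch, TensorFlow, JAX) and a suitable base image are auto-detected.
workload = run("train.py", gpus=1, registry="prod")
print(workload.id, workload.status)
```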
### Standard: Full Control

For production workloads where you need complete control.
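A hedged sketch of the standard path (the `ChamberClient` class, method, and field names below are assumptions for illustration, not the SDK's confirmed API):

```python
# Illustrative only: class, method, and parameter names are assumptions.
from chamber_sdk import ChamberClient

client = ChamberClient(token="YOUR_TOKEN")
workload = client.submit_workload(
    image="registry.example.com/team/trainer:latest",  # pre-built image
    gpu_type="a100",
    gpu_count=8,
    priority="high",       # scheduling priority
    team="ml-platform",    # charge against this team's allocation
)
```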
## API Endpoint

The SDK connects to the Chamber API at `https://api.usechamber.io/v1` by default. You can override this for custom deployments.
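For example, a client wrapper might read an override from the environment before falling back to the default (the `CHAMBER_API_URL` variable name is an assumption for illustration, not a confirmed SDK setting):

```python
import os

# Documented default endpoint; override it for custom deployments.
DEFAULT_API_URL = "https://api.usechamber.io/v1"

# CHAMBER_API_URL is an illustrative variable name, not a confirmed setting.
base_url = os.environ.get("CHAMBER_API_URL", DEFAULT_API_URL)
```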
## Requirements

- Python 3.8 or higher
- `requests` library (installed automatically)
## Security

The SDK is designed with security as a priority:

- Input Validation - All user inputs are validated before being passed to external commands or APIs
- Credential Protection - Tokens and credentials are never logged or exposed in error messages
- Safe Command Execution - Shell injection is prevented through strict input sanitization
- Path Traversal Protection - File operations are restricted to intended directories
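As an illustration of what such a path-traversal guard can look like (a minimal standalone sketch, not the SDK's actual implementation):

```python
import os

def resolve_within(base_dir: str, user_path: str) -> str:
    """Resolve user_path relative to base_dir, rejecting escapes like '../'."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    # If target escaped base, their common path collapses to a shorter prefix.
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path escapes {base_dir!r}: {user_path!r}")
    return target
```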
## Supported Container Registries

Push container images to your preferred registry with automatic authentication:

| Registry | Auto-Auth | Auto-Create Repo |
|---|---|---|
| Google Artifact Registry | ✅ via gcloud CLI | ✅ |
| AWS ECR | ✅ via AWS CLI | ✅ |
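The automatic authentication corresponds roughly to what you would otherwise run by hand with each cloud's CLI (the region, account, and repository values below are placeholders):

```shell
# Google Artifact Registry: route Docker pushes through gcloud credentials
gcloud auth configure-docker us-central1-docker.pkg.dev

# AWS ECR: log Docker in with a short-lived token
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
```

With auto-auth enabled, the SDK is meant to take care of these steps for you.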
## What’s New
- Seamless Multi-Cloud Registries - Full support for Google Artifact Registry and AWS ECR with auto-authentication
- Named Registries - Configure registries once, use by name (e.g., `registry="prod"`)
- Auto-Containerize & Run - Submit workloads in one line without Docker/K8s expertise
- Framework Detection - Auto-detect PyTorch, TensorFlow, JAX and select optimal base images
- Distributed Training - Auto-detect DeepSpeed, Accelerate, Ray and configure appropriately
- Workload Search - Advanced filtering with full-text search
- Aggregations - Get workload counts by status, GPU type, team, and more
- Teams API - List, create, and manage teams
- Templates - Use predefined workload templates

