AI Assistant

Get answers without leaving your terminal. The chamber chat command connects you to Chamber’s AI assistant — ask about workload status, GPU utilization, capacity planning, and more in natural language.
chamber chat "how many GPUs are running right now?"
AI Chat requires browser-based authentication. Run chamber login (without --token) to authenticate. API tokens are not supported for chat.

Modes

Quick Question

Pass your question as an argument for a single response:
chamber chat "what's the status of my latest training job?"
chamber chat "which teams have the highest GPU utilization this week?"

Interactive Conversation

Run chamber chat without arguments to start a multi-turn conversation:
$ chamber chat

Chamber AI Assistant
Type /help for commands, /quit to exit

> what jobs are currently running?

You have 3 jobs running:
 gpt-finetune-v2 8x H100, running for 2h 15m
 embedding-train 4x A100, running for 45m
 llm-eval 2x H100, running for 12m

> how much capacity is left in the H100 pool?

The us-west-h100 pool has 16 of 64 GPUs available (75% utilized).
Your team has 12 GPU-hours remaining in this billing period.

> /quit
Interactive commands:

| Command | Action |
| --- | --- |
| /quit or /exit | End the session |
| /new | Start a new conversation |
| /help | Show available commands |
| Esc or Ctrl+C | Cancel the current response |

Named Conversations

Use -c to name a conversation and resume it later:
# Start a named conversation
chamber chat -c debug-job "my training job failed, can you help?"

# Come back later and continue where you left off
chamber chat -c debug-job "I tried your suggestion, still seeing OOM errors"

Pipe Input

Pipe logs, config files, or other data into chat for analysis:
# Analyze logs
kubectl logs my-training-pod | chamber chat "what errors are in these logs?"

# Review a config
cat .chamber.yaml | chamber chat "is this config optimized for distributed training?"
When stdout is piped (e.g., chamber chat "question" | pbcopy), streaming is automatically disabled and the full response is buffered before output.
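This is the standard TTY check most CLIs perform on stdout. A minimal shell sketch of the pattern (illustrative only, not Chamber's actual implementation):

```shell
#!/bin/sh
# Illustrative sketch of the stdout check behind automatic stream-disabling.
# Not Chamber's actual code -- just the usual [ -t 1 ] pattern a CLI applies.
if [ -t 1 ]; then
  echo "stdout is a terminal: stream the response as it arrives"
else
  echo "stdout is piped: buffer the full response, then print it"
fi
```

Because the check is on stdout (file descriptor 1), redirecting to a file (`> report.txt`) disables streaming the same way piping does.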

Image Attachments

Attach an image for visual analysis:
chamber chat -i screenshot.png "what does this GPU utilization graph tell me?"
Supported formats: PNG, JPG, GIF, WebP.

List Conversations

View your recent chat history:
chamber chat --list

What You Can Ask

The AI assistant has access to your organization’s live infrastructure data. Example questions:

Workloads

  • “What jobs are running right now?”
  • “Why did my training job fail?”
  • “Show me jobs submitted by my team this week”
  • “What’s the average queue wait time?”

Capacity

  • “How much GPU capacity is available?”
  • “When does our H100 reservation expire?”
  • “Which pools have spare capacity?”
  • “Are we at risk of hitting our budget?”

Utilization

  • “What’s our GPU utilization this week?”
  • “Which teams are underutilizing their allocations?”
  • “Show me utilization trends for the last 30 days”
  • “Are there any idle GPUs I can use?”

Optimization

  • “How can I improve my training job’s GPU efficiency?”
  • “Should I use reserved or elastic for this workload?”
  • “What’s the best GPU type for my use case?”
  • “Help me right-size my resource requests”
The assistant also understands Chamber CLI commands and can suggest the right command for what you’re trying to do — and execute it for you with a single confirmation.

Command Execution

When the AI assistant determines that a CLI command can fulfill your request, it will suggest the command and ask for confirmation before running it locally:
$ chamber chat "run my training script in ./demo on 4 H100s"

Chamber AI Assistant
Thinking...

I'll set that up for you.

  ┌ Suggested command:
  │  chamber run ./demo --gpus 4 --gpu-type H100 --team abc123

  │  Run training in ./demo on 4x H100 GPUs
  └ Execute this command? [Y/n]:
Press Enter or type y to execute the command. Type n to skip it. This works in all modes — quick questions, interactive conversations, and piped input. A few things to know:
  • Confirmation required — The assistant always asks before executing. You stay in control.
  • Local execution — Commands run on your machine using the same chamber binary you’re already using.
  • Non-TTY mode — When input is piped (non-interactive), suggested commands are displayed but not executed, so scripts remain safe.
  • Ctrl+C — You can cancel a running command at any time.
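The confirm-then-execute flow can be sketched in plain shell. Everything here (the function name, the prompt handling) is a hypothetical illustration of the gate described above, not Chamber's actual implementation:

```shell
#!/bin/sh
# Hypothetical sketch of the Y/n confirmation gate (illustration only;
# the real assistant builds the suggested command from your request).
confirm_and_run() {
  suggested="$1"
  printf 'Execute this command? [Y/n]: '
  read -r answer
  case "$answer" in
    [nN]*) echo "Skipped." ;;        # typing n skips the command
    *)     sh -c "$suggested" ;;     # Enter or y executes it locally
  esac
}

# Pressing Enter (an empty answer) accepts the default and runs the command:
echo "" | confirm_and_run 'echo "command executed"'
```

Note the default branch: an empty answer falls through to execution, matching the `[Y/n]` convention where the capitalized option is chosen on Enter.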
Try asking the assistant to perform actions: “cancel workload abc123”, “submit a job with 8 H100s”, or “list my team’s running jobs”. It will translate your request into the right CLI command.

Command Reference

chamber chat [message] [flags]
| Flag | Description |
| --- | --- |
| -c, --conversation | Named conversation thread (resume later with the same name) |
| -m, --model | AI model selection |
| -i, --image | Attach an image file (PNG, JPG, GIF, WebP) |
| --no-stream | Disable streaming (buffer full response before output) |
| --raw | Disable markdown rendering in terminal |
| -l, --list | List recent conversations |

Output

When running in a terminal, responses are rendered with formatted markdown including syntax highlighting, bold/italic text, and clickable links. Use --raw to disable formatting if needed. When piped to another command, streaming is disabled automatically and raw text is output:
# Copy response to clipboard
chamber chat "list my running jobs" | pbcopy

# Save to file
chamber chat "summarize this week's utilization" > report.txt