📣 From $2.95/Hr H100, H200, B200s, and B300s: train, fine-tune, and scale ML models affordably, without having to DIY the infrastructure   📣 Run Saturn Cloud on AWS, GCP, Azure, Nebius, Crusoe, or on-prem.

Run Claude Code on a Cloud GPU in 10 Minutes – No Root Workarounds Required

How to get Claude Code running in fully autonomous mode on an H100 on Saturn Cloud from sign-up to first agent output, with working commands.


Running Claude Code in autonomous mode on a cloud GPU is a common source of friction. Most GPU cloud providers provision instances with default root shell access; however, Claude Code’s --dangerously-skip-permissions flag, which enables non-interactive execution by suppressing confirmation prompts, can’t be invoked with root privileges.

On most platforms, satisfying these requirements involves manual administrative overhead: provisioning a non-privileged user, injecting public SSH keys for authentication, and delegating specific sudo permissions. This introduces configuration latency before the autonomous agent can be deployed.

On Saturn Cloud, you skip that entirely. Managed environments launch as a non-root user with CUDA, drivers, and Python pre-configured. Claude Code runs in autonomous mode immediately after installation. This guide covers the full setup from scratch: sign-up to a running agent in under 10 minutes.

What you need

  • A Saturn Cloud account
  • An Anthropic API key
  • About 10 minutes

Step 1: Launch a GPU workspace on Saturn Cloud

Log into your Saturn Cloud dashboard and go to Resources → New Python Server. Select your GPU instance type. For most Claude Code workloads, an H100 is the right choice – it has 80 GB of VRAM and handles large context windows comfortably. For heavier ML tasks, such as fine-tuning 70B models, you may want an H200 (141 GB HBM3e).

GPU     VRAM            Rate            Best for
H100    80 GB HBM3      From $2.95/hr   Claude Code agents, most LLM workloads
H200    141 GB HBM3e    From $2.95/hr   Large model fine-tuning alongside agent tasks

Give your resource a name, set the disk size to at least 30 GB, and click Create. The workspace will be initialized and ready for access within 1 to 2 minutes.

Step 2: Open a terminal

From your resource card, click JupyterLab or VS Code to open your development environment. Both are available on every Saturn Cloud resource.

Open a terminal. In JupyterLab: File → New → Terminal. In VS Code: Terminal → New Terminal.

Verify your environment:

nvidia-smi

You should see your GPU listed with its driver and CUDA versions. If you’re on an H100, you’ll see 80 GB of VRAM available.

python --version

Python 3.10+ is pre-installed. You’re ready to proceed.

Step 3: Install Claude Code

Claude Code installs via npm. Node.js is available in the Saturn Cloud environment. Run:

npm install -g @anthropic-ai/claude-code

Verify the installation:

claude --version

You should see the Claude Code version number. If npm isn’t found, install it first:

curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
npm install -g @anthropic-ai/claude-code

Step 4: Set your Anthropic API key

Claude Code needs your Anthropic API key to authenticate. Set it as an environment variable:

export ANTHROPIC_API_KEY=your_api_key_here

To make this persist across sessions, add it to your shell profile. Saturn Cloud’s secrets manager is the cleaner option; add your key once in the dashboard under Secrets, then attach it to your resource environment variables so it’s available automatically on every launch.
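If you do go the shell-profile route, here is a minimal sketch (assuming a bash shell and ~/.bashrc; your_api_key_here is a placeholder for your real key):

```shell
# Append the export line to ~/.bashrc if it isn't already there,
# so new terminal sessions pick up the key automatically.
# Replace your_api_key_here with your real key.
PROFILE="$HOME/.bashrc"
grep -q 'ANTHROPIC_API_KEY' "$PROFILE" 2>/dev/null || \
  echo 'export ANTHROPIC_API_KEY=your_api_key_here' >> "$PROFILE"
```

Open a new terminal (or run source ~/.bashrc) for the change to take effect. The secrets-manager route keeps the key out of dotfiles entirely, which is why it's the better default.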

Verify Claude Code can authenticate:

claude -p "test"

Step 5: Run Claude Code in autonomous mode

This is the step that requires a non-root user on most GPU cloud platforms. On Saturn Cloud, your environment already runs as a non-root user, so you can go straight to autonomous mode:

claude --dangerously-skip-permissions

The --dangerously-skip-permissions flag lets Claude Code execute commands, write files, install packages, and run code without asking for confirmation on each step. This is what makes it genuinely autonomous and useful for long-running tasks you want to run overnight or while you’re away from the machine.
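Since root privileges are exactly what breaks this flag on other platforms, it's worth confirming your session qualifies before kicking off a long run. A quick check (plain POSIX shell, nothing Saturn-specific):

```shell
# Claude Code won't run --dangerously-skip-permissions as root,
# so verify the current user before starting a long task.
if [ "$(id -u)" -eq 0 ]; then
  echo "running as root: autonomous mode will be blocked"
else
  echo "running as $(whoami): autonomous mode is available"
fi
```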

Step 6: Give Claude Code a task

Once Claude Code is running in autonomous mode, give it a task. The agent will plan, execute, and iterate without further input from you. A few examples to get started:

Fine-tune a model

Fine-tune Llama 3 8B on this dataset: [your dataset path].
Use QLoRA with the unsloth library. Save checkpoints to /outputs.
Run for 3 epochs and report final loss.

Run a benchmark

Download the MMLU benchmark dataset and evaluate
meta-llama/Llama-3.1-8B-Instruct on it using lm-evaluation-harness.
Report accuracy per subject area and overall.

Set up an inference endpoint

Install vLLM and serve meta-llama/Llama-3.1-8B-Instruct
on port 8000 with an OpenAI-compatible API.
Test the endpoint with a sample request before finishing.

ML research

Implement and compare three learning rate schedulers
(cosine, linear warmup, polynomial decay) on a GPT-2 training run.
Log results to a CSV and plot the training curves.

Claude Code will read your prompt, install any required packages, write the code, run it, handle errors, and iterate until the task is complete. For long tasks, it’s worth running inside tmux so the session persists if you close your terminal.

Step 7: Keep your session alive with tmux

For tasks that run longer than your terminal session, use tmux so the agent keeps running even if you disconnect:

tmux new -s claude

Start Claude Code inside the tmux session. To detach (leave it running in the background): Ctrl+B, then D. To reattach later:

tmux attach -t claude

If you’re using JupyterLab, the terminal tab will stay active as long as your browser session is open. VS Code’s integrated terminal also persists. For overnight runs, tmux is the most reliable option.

What does it cost?

Saturn Cloud H100 instances via Nebius run at $2.95/hr. An overnight run of 8 hours costs roughly $24. The resource shuts down automatically after an idle period – configurable in your resource settings – so you’re not paying for a GPU sitting idle after your agent finishes.

Session length          GPU        Approx. cost
2 hours                 1x H100    ~$6
8 hours (overnight)     1x H100    ~$24
24 hours                1x H100    ~$71
8 hours (overnight)     1x H200    ~$24
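The estimates above are straight multiplication against the hourly rate; a quick sanity check at the $2.95/hr H100 price:

```shell
# Approximate session cost: hours x $2.95/hr (H100 rate)
for hours in 2 8 24; do
  awk -v h="$hours" 'BEGIN { printf "%2d hours: $%.2f\n", h, h * 2.95 }'
done
# prints 5.90, 23.60, and 70.80 -- i.e. roughly $6, $24, and $71
```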

What to run next

Claude Code on an H100 is a capable setup for a wide range of ML engineering tasks. A few directions worth exploring from here:

  • Multi-GPU agent tasks: Saturn Cloud supports multi-GPU instances. Point Claude Code at a distributed training task using FSDP or DeepSpeed and let it configure and run the job. See the FSDP vs DDP vs DeepSpeed guide for the training strategy decisions.
  • Fine-tuning Llama 3: Claude Code can install Unsloth, set up QLoRA, and run a fine-tuning job end-to-end. See the Llama 3 fine-tuning guide for the manual version of the same workflow.
  • Deploying a NIM inference endpoint: Ask Claude Code to pull an NVIDIA NIM container and serve it with an OpenAI-compatible API. See the NVIDIA NIM on Saturn Cloud guide for the manual setup.
  • Persistent experiments: Saturn Cloud mounts /outputs as persistent storage. Tell Claude Code to save all checkpoints, logs, and results there so they survive after the instance shuts down.
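As a sketch of that last point, you can prepare the layout yourself before starting the agent (the home-directory fallback is an assumption for machines without the /outputs mount):

```shell
# Prepare a persistent results layout before starting the agent.
# /outputs is Saturn Cloud's persistent mount; fall back to a
# home directory when it isn't available (e.g. testing locally).
OUT=/outputs
[ -d "$OUT" ] && [ -w "$OUT" ] || OUT="$HOME/outputs"
mkdir -p "$OUT/checkpoints" "$OUT/logs" "$OUT/results"
echo "saving results under $OUT"
```

Then tell the agent in its prompt to write checkpoints, logs, and results under that directory.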

The combination of a managed non-root environment and instant GPU access is what makes Saturn Cloud the fastest path to Claude Code running autonomously in the cloud. No user setup, no driver configuration, no environment debugging – just a working GPU and an agent ready to run.

The Saturn Cloud quickstart has everything you need to get your first resource running. H100 and H200 instances are available from $2.95/hr.

Keep reading

Related articles

  • Saturn Cloud vs AWS SageMaker for LLM Training (Apr 3, 2026)
  • Running NVIDIA NIM on Saturn Cloud (Apr 1, 2026)
  • How to Fine-Tune Llama 3 on GPU Clusters (Mar 31, 2026)