GPU Cloud Comparison: 17 Neoclouds for AI in 2025

If you’re running AI training workloads and hitting GPU availability limits or cost walls on AWS, GCP, or Azure, a wave of specialized GPU cloud providers (often called “neoclouds”) offers an alternative. These providers focus exclusively on GPU infrastructure, often with simpler pricing, immediate availability, and hardware optimized for AI workloads.
This guide compares 17 GPU cloud providers across the dimensions that matter for production AI training: GPU pricing, InfiniBand networking, storage options, and platform capabilities. We focus on what’s publicly documented, noting where information requires sales contact.
Need a platform layer for your GPU cluster?
Saturn Cloud adds what neoclouds don't provide: dev environments, distributed training orchestration, job scheduling, and cost allocation. Works on Nebius, Crusoe, CoreWeave, or your own infrastructure. Chat with an engineer to learn more →
The Neocloud Landscape
The term “neocloud” refers to cloud providers primarily offering GPU-as-a-Service (GPUaaS). Unlike hyperscalers with broad service portfolios, neoclouds focus on delivering GPU compute with high-speed interconnects for AI and HPC workloads.
Roughly 10 to 15 neoclouds currently operate at meaningful scale in the US, with footprints growing across Europe, the Middle East, and Asia.
Why consider neoclouds over AWS, GCP, or Azure?
The hyperscaler GPU experience involves quota requests, waitlists, and premium pricing:
| Provider | H100 80GB | Availability |
|---|---|---|
| AWS | $6.88/hr (p5.48xlarge) | Quota approval required, multi-week waitlists common |
| Azure | $12.29/hr (ND96isr H100 v5) | Quota requests, capacity constraints |
| GCP | $11.06/hr (a3-highgpu-8g) | Limited regions, quota approval process |
| SF Compute | $1.45-1.50/hr | Self-service signup, provision in minutes |
Even AWS, after cutting prices 44% in June 2025, is still 4.7x more expensive than SF Compute. GCP and Azure are 7-8x more expensive.
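To make the per-GPU rates concrete, here is a rough back-of-the-envelope calculation in Python for a one-week, 64-GPU H100 run, assuming the rates in the table above are per GPU-hour and stay constant for the whole run:

```python
# Rough cost of a 7-day, 64-GPU H100 run at the per-GPU on-demand
# rates quoted above (assumed constant for the whole run).
hours = 7 * 24          # 168 hours
gpus = 64
rates_per_gpu_hr = {
    "AWS (p5.48xlarge)": 6.88,
    "Azure (ND96isr H100 v5)": 12.29,
    "GCP (a3-highgpu-8g)": 11.06,
    "SF Compute": 1.45,
}
for provider, rate in rates_per_gpu_hr.items():
    print(f"{provider}: ${hours * gpus * rate:,.0f}")
# Roughly: AWS ~$74k, Azure ~$132k, GCP ~$119k, SF Compute ~$16k
```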
SF Compute’s marketplace model offers a unique advantage: flexible time-based reservations. You can book GPU clusters for arbitrary windows (7 days starting next Tuesday, 2 weeks in March) at guaranteed pricing, with infrastructure provisioned from vetted partners to your specifications. This eliminates the need for long-term contracts while avoiding spot instance interruptions.
Beyond cost, neoclouds eliminate the friction of quota approvals. On AWS, requesting H100 quota often requires a support ticket explaining your use case, with approval taking days to weeks. GCP and Azure have similar processes. Neoclouds typically offer self-service access: sign up, add payment, deploy GPUs in minutes.
Infrastructure is also optimized differently. Neoclouds treat InfiniBand as standard: 400Gb/s per GPU for multi-node training. Hyperscalers charge premium tiers for similar networking (AWS EFA, GCP GPUDirect), and availability varies by region.
Market Segmentation
The GPU cloud market has fragmented into distinct tiers:
| Tier | Description | Known Examples |
|---|---|---|
| Bespoke Wholesale | Multi-year buildouts for frontier labs (not publicly priced) | FluidStack, Crusoe (Stargate), CoreWeave; most providers do bespoke deals |
| Sales-Gated Cloud | Standardized infrastructure, approval required | CoreWeave, TensorWave, Nscale (training) |
| Self-Service Neoclouds | On-demand, transparent pricing | Nebius, Lambda, Crusoe, Vultr, Hyperstack, DataCrunch/Verda, RunPod, OVHcloud, Voltage Park, GMI Cloud, Hot Aisle |
| Marketplaces | Aggregated supply, variable quality, lowest prices | Vast.ai, SF Compute, RunPod (Community), FluidStack (marketplace) |
Many providers span tiers. FluidStack operates both wholesale (62% of revenue) and marketplace (38%). RunPod offers Secure Cloud (neocloud) and Community Cloud (marketplace). The bespoke tier is likely larger than publicly known, as these deals are rarely disclosed. This guide focuses on publicly available offerings (Tiers 2-4).
GPU Hardware & Pricing
Start here: price is the most visible differentiator and determines your baseline cost. H100 pricing varies more than 4x across providers ($1.45-6.16/hr), so this table alone can eliminate half your options. Budget-conscious teams should look at SF Compute ($1.45/hr), Vast.ai (from $1.49/hr), and Hyperstack ($1.90/hr). Enterprise buyers typically choose from Nebius, CoreWeave, Crusoe, or Lambda.
On-Demand GPU Pricing
| Provider | H100 | H200 | B200 | GB200 | Source |
|---|---|---|---|---|---|
| CoreWeave | PCIe $4.25 / SXM $6.16 | ~$6.30/hr | $8.60/hr | $10.50/hr | Link |
| Crusoe | $3.90/hr | $4.29/hr | Contact | Contact | Link |
| DataCrunch/Verda | $2.29/hr | $2.99/hr | $3.79/hr | – | Link |
| FluidStack | $2.10/hr | $2.30/hr | Contact | Contact | Link |
| GMI Cloud | $2.10/hr | From $2.50/hr | Pre-order | Pre-order | Link |
| Hot Aisle | – | – | – | – | N/A |
| Hyperstack | PCIe $1.90 / SXM $2.40 | $3.50/hr | Contact | Contact | Link |
| Lambda | PCIe $2.49 / SXM $2.99 | – | $4.99/hr | – | Link |
| Nebius | $2.95/hr | $3.50/hr | $5.50/hr | Pre-order | Link |
| Nscale | Contact | Contact | – | Contact | Link |
| OVHcloud | $2.99/hr | – | – | – | Link |
| RunPod | PCIe $1.99-2.39 / SXM $2.69-2.99 | $3.59/hr | $5.19-5.98/hr | – | Link |
| SF Compute | $1.45-1.50/hr | Contact | – | – | Link |
| TensorWave | – | – | – | – | N/A |
| Vast.ai | $1.49-1.87/hr | Varies | Varies | – | Link |
| Voltage Park | From $1.99/hr | Contact | Contact | Contact | Link |
| Vultr | $2.99/hr | Contact | Contact* | – | Link |
*Vultr’s B200 is available via 36-month reserved commitment at $2.89/hr ($23.12/hr for 8x B200 HGX system); on-demand pricing not published.
AMD GPU Availability
AMD GPUs typically cost 20-50% less than comparable NVIDIA GPUs and are less supply-constrained. If your workload runs on ROCm (PyTorch and JAX have good support), AMD is worth considering; a quick portability check follows the table below. Vultr offers one of the cheapest MI300X options at $1.85/hr with managed Kubernetes. Hot Aisle and TensorWave are AMD-only specialists.
| Provider | MI300X Price | MI325X Price | MI355X Price | Source |
|---|---|---|---|---|
| Crusoe | $3.45/hr | – | Contact | Link |
| Hot Aisle | $1.99/hr | – | Pre-order | Link |
| Nscale | Pre-order | – | – | Link |
| TensorWave | Sold out | $1.95/hr | $2.85/hr | Link |
| Vultr | $1.85/hr | $2.00/hr | $2.59/hr (on-demand) / $2.29/hr (36-month) | Link |
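One practical note on portability: PyTorch's ROCm builds reuse the `torch.cuda` API, so most CUDA-targeted training code runs on MI300-class GPUs without changes. A minimal sanity check (a sketch; device names and counts depend on the instance type you rent):

```python
import torch

# On a ROCm build of PyTorch, torch.cuda.* maps to AMD GPUs,
# so the same device checks and .to("cuda") calls work unchanged.
print("GPU available:", torch.cuda.is_available())
print("Device count:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("Device 0:", torch.cuda.get_device_name(0))  # e.g. an MI300X on AMD instances
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)
```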
Training Infrastructure
If you’re training models across 16+ GPUs (distributed training), this section determines whether you’ll actually get good performance. Single-GPU workloads or inference can skip ahead; standard networking and local NVMe storage are fine. For multi-node training, network bandwidth between GPUs and shared filesystem throughput are often bigger bottlenecks than GPU speed.
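For context on why the interconnect matters: multi-node jobs typically launch one process per GPU and synchronize gradients over NCCL (or RCCL on AMD), and that gradient all-reduce is what actually exercises the InfiniBand or RoCE fabric. A minimal distributed-init sketch, assuming a standard PyTorch setup launched with `torchrun` (the model and tensor sizes are placeholders):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")   # NCCL/RCCL rides on IB or RoCE when available
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()    # placeholder model
    model = DDP(model, device_ids=[local_rank])   # gradient all-reduce crosses the fabric

    x = torch.randn(32, 4096, device="cuda")
    loss = model(x).sum()
    loss.backward()                               # this all-reduce is where bandwidth bites
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched across two 8-GPU nodes with something like `torchrun --nnodes=2 --nproc_per_node=8 --rdzv_endpoint=<head-node>:29500 train.py`, the all-reduce during `backward()` is the step that either saturates or is starved by the interconnect.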
Infrastructure Ownership Models
Ownership model predicts pricing stability and support quality. Owners (Crusoe, CoreWeave, Nebius) control their hardware and can offer consistent SLAs. Aggregators (Vast.ai, SF Compute) provide cheaper pricing but variable quality. Owner (colo) is a middle ground: they own the GPUs but rent data center space. See Market Segmentation for how this maps to business model tiers.
| Provider | Model | Description | Source |
|---|---|---|---|
| CoreWeave | Owner | Acquired NEST DC ($322M); 250K+ GPUs across 32 DCs | Link |
| Crusoe | Owner | Vertically integrated; manufactures own modular DCs via Easter-Owens Electric acquisition | Link |
| DataCrunch/Verda | Owner (colo) | Owns GPUs; operates in Iceland and Finland | Link |
| FluidStack | Owner + Aggregator | 62% Private Cloud (custom-built for enterprises like Anthropic, Meta), 38% Marketplace; $10B debt financing from Macquarie | Link |
| GMI Cloud | Owner (colo) | Owns GPU hardware; offshoot of Realtek/GMI Technology | Link |
| Hot Aisle | Owner (colo) | Owns AMD GPUs; colocation at Switch Pyramid Tier 5 DC in Grand Rapids, MI | Link |
| Hyperstack | Owner (colo) | Owns GPU hardware; colocation partnerships | Link |
| Lambda | Owner (colo) | Owns GPU hardware; colocation in SF and Texas; NVIDIA leaseback partnership | Link |
| Nebius | Owner + Colo | Owns DCs in Finland; colocation in US and other regions | Link |
| Nscale | Owner | Owns data centers in Norway (Glomfjord, Stargate Norway JV with Aker) | Link |
| OVHcloud | Owner | Fully vertically integrated; designs/manufactures servers, builds/manages own DCs | Link |
| RunPod | Owner + Aggregator | Secure Cloud (Tier 3/4 partners) + Community Cloud (aggregated third-party hosts) | Link |
| SF Compute | Aggregator | Two-sided marketplace connecting GPU cloud providers | Link |
| TensorWave | Owner (colo) | Owns AMD GPU hardware; colocation across US data centers | Link |
| Vast.ai | Aggregator | Pure marketplace connecting 10K+ GPUs from individuals to datacenters | Link |
| Voltage Park | Owner (colo) | Owns H100 GPU hardware; colocation in Texas, Virginia, Washington | Link |
| Vultr | Colo | Operates across 32 global colocation facilities (Digital Realty, Equinix, QTS partnerships) | Link |
InfiniBand and High-Speed Networking
InfiniBand only matters for multi-node distributed training (16+ GPUs). If you’re training on 1-8 GPUs or running inference, skip this table. Providers using RoCE (TensorWave, Hot Aisle, Nscale) offer RDMA over standard Ethernet with lower cost but potentially higher tail latency under network congestion.
Note: This table describes publicly available cloud offerings. Bespoke wholesale buildouts can support arbitrary configurations. “Not documented” indicates information not publicly available.
| Provider | InfiniBand | Speed (per GPU) | Availability | Topology | Source |
|---|---|---|---|---|---|
| CoreWeave | Yes | 400Gb/s (Quantum-2) | H100/H200 clusters | Non-blocking fat-tree (rail-optimized) | Link |
| Crusoe | Yes | 400Gb/s | H100/H200 instances | Rail-optimized | Link |
| DataCrunch/Verda | Yes | 400Gb/s (NDR) | Instant clusters | Rail-optimized | Link |
| FluidStack | Yes | 400Gb/s | Dedicated clusters | Not documented | Link |
| GMI Cloud | Yes | 400Gb/s | H100/H200 clusters | Not documented | Link |
| Hot Aisle | RoCE only | 400Gb Ethernet | All nodes | Dell/Broadcom | Link |
| Hyperstack | Supercloud only | 400Gb/s (Quantum-2) | H100/H200 SXM | Not documented | Link |
| Lambda | Clusters only | 400Gb/s (Quantum-2) | 1-Click Clusters | Rail-optimized | Link |
| Nebius | Yes | 400Gb/s (Quantum-2) | All GPU nodes | Not documented | Link |
| Nscale | RoCE only | 400Gb Ethernet | All nodes | Nokia 7220 IXR | Link |
| OVHcloud | No | 25Gb Ethernet (Public) / 50-100Gb (Bare Metal) | Public Cloud GPU / Bare Metal | vRack OLA | Link |
| RunPod | Clusters only | 200-400Gb/s | Instant Clusters | Not documented | Link |
| SF Compute | K8s only | 400Gb/s | K8s clusters only | Not documented | Link |
| TensorWave | RoCE only | 400Gb Ethernet | All nodes | Aviz ONES fabric | Link |
| Vast.ai | No | Varies by host | Marketplace | Varies by host | Link |
| Voltage Park | Yes | 400Gb/s | IB tier ($2.49/hr) | Not documented | Link |
| Vultr | Yes | 400Gb/s (Quantum-2) | H100/H200 clusters | Non-blocking | Link |
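Before committing to long runs on a new provider, it is worth confirming that NCCL is actually using the RDMA fabric rather than falling back to plain TCP. One low-effort check (a sketch; run it under `torchrun` as in the earlier example) is to enable NCCL's debug logging and look for `NET/IB` versus `NET/Socket` lines in the output:

```python
import os
# Must be set before the first collective so NCCL logs its transport selection.
os.environ.setdefault("NCCL_DEBUG", "INFO")
os.environ.setdefault("NCCL_DEBUG_SUBSYS", "INIT,NET")

import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# A single all-reduce forces transport setup; the INFO logs will show
# "NET/IB" (InfiniBand/RoCE) or "NET/Socket" (TCP fallback) per rank.
t = torch.ones(1, device="cuda")
dist.all_reduce(t)
dist.destroy_process_group()
```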
Storage Options
Storage matters for two scenarios: (1) multi-node training requiring shared filesystems so all GPUs access the same data, and (2) checkpointing large models to object storage. For single-node work, local NVMe is usually sufficient. This table describes publicly available offerings; bespoke buildouts integrate customer-specified storage.
Key differentiator: VAST Data integration (Lambda, Voltage Park, CoreWeave, GMI Cloud) provides high-performance shared storage with S3 compatibility. Several providers (Crusoe, FluidStack, TensorWave) lack native object storage, so you’ll need to run MinIO or similar yourself.
| Provider | Block Storage | Object Storage | Shared FS | Technology | Source |
|---|---|---|---|---|---|
| CoreWeave | Yes | S3 Hot $0.06 / Warm $0.03 / Cold $0.015 | $0.07/GB/mo | VAST, WEKA, DDN | Link |
| Crusoe | $0.08/GB/mo | – | $0.07/GB/mo | Lightbits | Link |
| DataCrunch/Verda | $0.05-0.20/GB/mo | Coming soon | $0.20/GB/mo | NVMe SFS | Link |
| FluidStack | Filesystem only | – | Not documented | Not documented | Link |
| GMI Cloud | Integrated | VAST S3 | VAST NFS | VAST Data, GPUDirect | Link |
| Hot Aisle | Not documented | – | – | Not documented | Link |
| Hyperstack | ~$0.07/GB/mo | In development | WEKA (Supercloud) | NVMe | Link |
| Lambda | – | S3 adapter only | $0.20/GB/mo | VAST Data | Link |
| Nebius | $0.05-0.12/GB/mo | S3 Standard $0.0147 / Enhanced $0.11 | $0.08/GB/mo | NFS | Link |
| Nscale | Not documented | Not documented | “Parallel FS” | Not documented | Link |
| OVHcloud | $0.022/GB/mo | S3 + egress | $120-150/TB/mo | NetApp | Link |
| RunPod | $0.10/GB/mo | S3 (5 DCs) | $0.05-0.07/GB/mo | Network volumes | Link |
| SF Compute | Local NVMe only | – | – | 1.5TB+ per node | Link |
| TensorWave | Local only | – | Not documented | Not documented | Link |
| Vast.ai | Per-host | – | – | Varies | Link |
| Voltage Park | Local NVMe | VAST S3 | VAST NFS | VAST Data | Link |
| Vultr | $0.10/GB/mo | S3 $0.018-0.10/GB/mo | $0.10/GB/mo | NVMe-backed | Link |
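Where a provider exposes S3-compatible object storage (VAST S3, or a self-hosted MinIO on providers without a native offering), checkpoint upload is the same boto3 call you would use on AWS, just pointed at a different endpoint. A sketch, with hypothetical endpoint, bucket, and credential values:

```python
import boto3
import torch

# Endpoint, bucket, and credentials are placeholders; substitute your
# provider's S3-compatible endpoint (or your own MinIO deployment) and keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example-neocloud.com",
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

model = torch.nn.Linear(10, 10)               # placeholder model
torch.save(model.state_dict(), "checkpoint.pt")
s3.upload_file("checkpoint.pt", "training-checkpoints", "run-001/step-1000.pt")
```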
Storage Performance
If you’re loading large datasets (100GB+) to GPUs for training, shared filesystem throughput determines how long you wait. Nebius and Lambda both achieve 11-12 GB/s per 8-GPU node with VAST Data, significantly faster than hyperscaler equivalents (AWS EFS maxes at 1.5 GB/s). Most providers don’t publish these specs.
| Provider | Shared FS Throughput | Notes | Source |
|---|---|---|---|
| DataCrunch/Verda | 2000 MB/s continuous (NVMe SFS) | Per volume | Link |
| Lambda | ~11 GB/s per mount (VAST) | With nconnect=32 and 100Gb NIC | Link |
| Nebius | 12 GB/s read / 8 GB/s write | Per 8-GPU VM | Link |
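Since most providers don't publish these numbers, it is worth measuring throughput yourself from a GPU node before a long run. A rough single-client read check (a sketch; the path is a placeholder for wherever the shared filesystem is mounted, and real results depend on page caching, mount options such as `nconnect`, and how many clients read in parallel):

```python
import os
import time

def read_throughput_gbs(path, block_size=64 * 1024 * 1024):
    """Stream a large file once and report GB/s for this single client."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(block_size):
            pass
    elapsed = time.perf_counter() - start
    return size / elapsed / 1e9

# Placeholder path: point at a multi-GB file on the shared filesystem mount.
print(f"{read_throughput_gbs('/mnt/shared/dataset.shard-0000.tar'):.1f} GB/s")
```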
Orchestration & Platform
How you’ll actually run workloads matters as much as the hardware. But there’s an important distinction: infrastructure orchestration (Kubernetes, Slurm) vs. the platform layer.
Neoclouds provide Kubernetes or Slurm to schedule containers or jobs on GPU nodes. That’s infrastructure orchestrationβit gets your code running on hardware. But production AI teams need more: hosted dev environments where data scientists can iterate, distributed training orchestration that handles multi-node configurations, parallel job scheduling with automatic retries, and cost allocation by user and project.
Most neoclouds stop at infrastructure. The platform layer (the operational tooling that makes GPU infrastructure actually usable for teams) is what you build on top, or what Saturn Cloud provides out of the box.
Kubernetes and Orchestration
Most production AI workloads run on Kubernetes. Support varies from fully managed Kubernetes to bring-your-own orchestration.
| Provider | Managed K8s | Slurm | Autoscaling | Notes | Source |
|---|---|---|---|---|---|
| CoreWeave | Yes (CKS) | SUNK | Yes | Bare-metal K8s, no hypervisor | Link |
| Crusoe | Yes (CMK) | Yes | Yes | Run:ai integration | Link |
| DataCrunch/Verda | – | Yes | – | Slurm on clusters | Link |
| FluidStack | – | – | – | Atlas platform | Link |
| GMI Cloud | Yes (Cluster Engine) | – | Yes | K8s-based orchestration | Link |
| Hot Aisle | – | – | – | Bare-metal focus | Link |
| Hyperstack | Yes (On-Demand K8s) | Not documented | – | API-driven K8s clusters | Link |
| Lambda | Yes (1-Click Clusters) | Available | – | Managed K8s and Slurm | Link |
| Nebius | Yes | Managed + Soperator | Yes | First Slurm Kubernetes operator | Link |
| Nscale | Yes (NKS) | Yes | – | Limited docs | Link |
| OVHcloud | Yes | – | Yes | Standard managed K8s | Link |
| RunPod | – | – | Yes | Serverless focus | Link |
| SF Compute | Yes | – | – | Managed K8s per zone | Link |
| TensorWave | – | Yes | – | Pyxis/Enroot containers | Link |
| Vast.ai | – | – | – | Container-based | Link |
| Voltage Park | Add-on | – | – | Helm/Rook-Ceph guides | Link |
| Vultr | Yes (VKE) | – | Yes | Standard managed K8s | Link |
Nebius’s Soperator is the first open-source Kubernetes operator for running Slurm clusters. CoreWeave’s SUNK supports 32,000+ GPU jobs.
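As a concrete example of what "infrastructure orchestration" means in practice, requesting a GPU on any of the managed Kubernetes offerings above comes down to a pod spec with an `nvidia.com/gpu` resource limit. A minimal sketch using the official Kubernetes Python client (the image and namespace are placeholders; AMD-based clusters use `amd.com/gpu` instead):

```python
from kubernetes import client, config

config.load_kube_config()  # uses the kubeconfig your provider gives you

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvcr.io/nvidia/pytorch:24.07-py3",   # placeholder image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}           # one GPU from the node
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

If the pod schedules and `nvidia-smi` prints a device, the GPU device plugin and scheduling path are working; everything beyond that (dev environments, retries, cost allocation) is the platform layer discussed above.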
The Platform Layer: Saturn Cloud
This is where Saturn Cloud fits: it layers hosted dev environments, distributed training orchestration, parallel job scheduling, and cost allocation by user and project on top of the GPU infrastructure above, whether that runs on Nebius, Crusoe, CoreWeave, or your own hardware.
Operational Considerations
These tables cover costs and capabilities that aren’t obvious until you deploy to production. Egress fees can add 20-40% to bills at hyperscalers, but most neoclouds offer free egress. For inference deployments, you need load balancers and VPCs; providers without these force you to run HAProxy/nginx on VMs and manually configure networking.
Egress Pricing
Good news: free egress is now standard among neoclouds, unlike hyperscalers where moving data out costs $0.08-0.12/GB. If you’re serving inference traffic or moving large datasets between clouds, this matters. Exceptions: Vultr ($0.01/GB after 2TB free) and OVHcloud ($0.011/GB). Vast.ai varies by host and can be expensive.
| Provider | Egress Cost | Notes | Source |
|---|---|---|---|
| CoreWeave | Free | Zero egress, ingress, and I/O operations | Link |
| Crusoe | Free | Zero data transfer fees | Link |
| DataCrunch/Verda | Not documented | | Link |
| FluidStack | Free | Zero egress/ingress | Link |
| GMI Cloud | Not documented | | Link |
| Hot Aisle | Not documented | | Link |
| Hyperstack | Free | Zero bandwidth charges | Link |
| Lambda | Free | Zero egress | Link |
| Nebius | Compute free | S3 Standard $0.015/GB egress; S3 Enhanced free egress | Link |
| Nscale | Not documented | | Link |
| OVHcloud | Compute free | Object Storage $0.011/GB egress | Link |
| RunPod | Free | Zero data transfer | Link |
| SF Compute | Free | No ingress/egress fees | Link |
| TensorWave | Not documented | Claims “no hidden costs” | Link |
| Vast.ai | Varies | Per-host, can be $20+/TB | Link |
| Voltage Park | Free | No hidden costs | Link |
| Vultr | $0.01/GB | 2TB/month free, then $0.01/GB | Link |
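To put egress fees in perspective, here is the arithmetic for moving a moderately large artifact set between clouds (sizes are illustrative; rates taken from the ranges above):

```python
# Cost of moving 20 TB of checkpoints/datasets out, at representative rates.
tb_moved = 20
gb_moved = tb_moved * 1000
for label, per_gb in [("Hyperscaler ($0.09/GB)", 0.09),
                      ("Vultr beyond the free tier ($0.01/GB)", 0.01),
                      ("Typical neocloud (free)", 0.0)]:
    print(f"{label}: ${gb_moved * per_gb:,.0f}")
# Hyperscaler: $1,800; Vultr: $200; free-egress neocloud: $0
```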
Network Services
Production inference deployments need load balancers (to distribute traffic across GPU instances) and VPCs (for network isolation). Only Nebius, CoreWeave, Crusoe, Vultr, and OVHcloud provide both. Marketplace providers (Vast.ai, RunPod) use port mapping instead of dedicated IPs, which complicates production deployments. If you’re only doing training, you can skip this table.
| Provider | Load Balancer | VPC/Private Network | VPN/Peering | Public IPs | Source |
|---|---|---|---|---|---|
| CoreWeave | Yes (K8s LB) | Yes (VPC) | Direct Connect (Equinix, Megaport) | Yes + BYOIP | Link |
| Crusoe | Yes | Yes (VPC) | Yes (global backbone) | Yes | Link |
| DataCrunch/Verda | – | Not documented | – | Not documented | Link |
| FluidStack | – | Not documented | – | Not documented | Link |
| GMI Cloud | – | Yes (VPC) | – | Yes (Elastic IPs) | Link |
| Hot Aisle | – | – | – | Yes | Link |
| Hyperstack | Not documented | Yes (VPC) | Not documented | Yes | Link |
| Lambda | Not documented | Yes (private network) | Not documented | Yes | Link |
| Nebius | Yes (K8s LB) | Yes | – | Yes | Link |
| Nscale | – | – | – | Not documented | Link |
| OVHcloud | Yes (L4/L7, Octavia) | Yes (vRack) | OVHcloud Connect | Yes (Floating IPs) | Link |
| RunPod | Serverless only | Global networking (Pod-to-Pod) | – | Shared (port mapping) | Link |
| SF Compute | – | – | – | Not documented | Link |
| TensorWave | – | – | – | Not documented | Link |
| Vast.ai | – | – | – | Shared (port mapping) | Link |
| Voltage Park | Not documented | Yes (VPC) | Not documented | Not documented | Link |
| Vultr | Yes (L4, $10/mo) | Yes (VPC 2.0) | – | Yes | Link |
Developer Experience & Enterprise Readiness
These tables determine how quickly you can get started and whether the provider meets enterprise requirements. If you’re managing infrastructure-as-code, official Terraform providers (Nebius, CoreWeave, Crusoe, Vultr, OVHcloud) save significant automation work. Self-service access means you can provision GPUs in minutes rather than days of sales conversations. Compliance certifications (SOC 2, ISO 27001, HIPAA) gate enterprise adoption; if you need them, filter to providers with documented certs.
Terraform and API Support
Terraform providers and APIs enable infrastructure-as-code. Official Terraform providers (Nebius, CoreWeave, Crusoe, Vultr, OVHcloud) are actively maintained with full resource coverage. Providers with only REST APIs require custom Terraform provider development or manual scripting.
| Provider | Terraform Provider | API | CLI | Source |
|---|---|---|---|---|
| CoreWeave | Official | Yes | Yes | Link |
| Crusoe | Official | REST | Yes | Link |
| DataCrunch/Verda | – | REST | – | Link |
| FluidStack | – | REST | – | Link |
| GMI Cloud | – | REST | – | Link |
| Hot Aisle | – | REST | – | Link |
| Hyperstack | Community | REST | – | Link |
| Lambda | Community | REST | Yes | Link |
| Nebius | Official | Yes | Yes | Link |
| Nscale | Community | REST | Yes | Link |
| OVHcloud | Official | REST | Yes | Link |
| RunPod | Community | GraphQL | Yes | Link |
| SF Compute | – | Yes | Yes | Link |
| TensorWave | – | REST | – | Link |
| Vast.ai | Community | REST | Yes | Link |
| Voltage Park | – | REST | – | Link |
| Vultr | Official | REST | Yes | Link |
Self-Service Access
Self-service means you can sign up and provision GPUs immediately. Sales-gated (CoreWeave, TensorWave) means you submit a request and wait for organizational approval, which can take days.
| Provider | Tier | Access Model | Notes | Source |
|---|---|---|---|---|
| CoreWeave | Sales-Gated | Sales-gated | Requires organizational approval from sales team | Link |
| Crusoe | Neocloud | Self-service | Sign up via console, larger deployments contact sales | Link |
| DataCrunch/Verda | Neocloud | Self-service | Order GPU instances in minutes via dashboard or API | Link |
| FluidStack | Neocloud + Marketplace | Self-service | Sign up at auth.fluidstack.io, launch in under 5 minutes | Link |
| GMI Cloud | Neocloud | Self-service | Sign up, launch instances in 5-15 minutes via console/API | Link |
| Hot Aisle | Neocloud | Self-service | SSH-based signup, credit card, no contracts | Link |
| Hyperstack | Neocloud | Self-service | Instant access, one-click deployment | Link |
| Lambda | Neocloud | Self-service | Create account and launch GPUs in minutes, pay-as-you-go | Link |
| Nebius | Neocloud | Self-service | Sign up, add $25+, deploy up to 32 GPUs immediately | Link |
| Nscale | Sales-Gated | Hybrid | Self-service for inference only; training clusters require sales | Link |
| OVHcloud | Neocloud | Self-service | Create account, $200 free credit for first project | Link |
| RunPod | Neocloud + Marketplace | Self-service | Deploy GPUs in under a minute, no rate limits | Link |
| SF Compute | Marketplace | Self-service | Sign up to buy, larger deployments contact sales | Link |
| TensorWave | Sales-Gated | Sales-gated | Contact sales/solutions engineers to get started | Link |
| Vast.ai | Marketplace | Self-service | $5 minimum to start, per-second billing | Link |
| Voltage Park | Neocloud | Self-service | On-demand GPUs available, reserved capacity contact sales | Link |
| Vultr | Neocloud | Self-service | Free account signup, provision via portal/API/CLI | Link |
Compliance and Enterprise Features
Enterprise buyers often require SOC 2 Type II (security controls audit) and ISO 27001 (information security management). HIPAA is required for healthcare data. If your procurement process gates on specific certifications, this table eliminates providers quickly. Also check regionsβEuropean teams may need EU data centers for data residency requirements.
| Provider | Compliance | SSO/SAML | Regions | Source |
|---|---|---|---|---|
| CoreWeave | SOC 2, ISO 27001 | SAML/OIDC/SCIM | US, UK, Spain, Sweden, Norway | Security |
| Crusoe | SOC 2 Type II | Not documented | US (TX, VA), Iceland, Norway (soon) | Link |
| DataCrunch/Verda | ISO 27001 | – | EU (Finland, Iceland) | Link |
| FluidStack | – | – | Not documented | Link |
| GMI Cloud | SOC 2 Type 1, ISO 27001 | – | Not documented | Link |
| Hot Aisle | SOC 2 Type II, HIPAA | – | US (MI) | Link |
| Hyperstack | – | – | Europe, North America | Link |
| Lambda | SOC 2 Type II | Not documented | Not documented | Link |
| Nebius | SOC 2 Type II, HIPAA, ISO 27001 | Yes | US, EU (Finland, France, Iceland) | Regions, Trust Center |
| Nscale | – | – | Norway | Link |
| OVHcloud | SOC 2, ISO 27001, PCI DSS, HDS, SecNumCloud | Not documented | Global (46 DCs) | Infrastructure, Certifications |
| RunPod | SOC 2 Type II | – | Multiple | Link |
| SF Compute | – | – | Not documented | Link |
| TensorWave | – | – | Not documented | Link |
| Vast.ai | – | – | Varies by host | Link |
| Voltage Park | SOC 2 Type II, ISO 27001, HIPAA | – | US (WA, TX, VA, UT) | Infrastructure, Security |
| Vultr | SOC 2 (HIPAA), ISO 27001, PCI DSS | – | 32 global locations | Locations, Compliance |
Conclusion
The GPU neocloud market has matured significantly. Free egress is now standard, 400Gb/s InfiniBand is table stakes for serious providers, and H100 pricing has compressed to roughly $1.50-4/hr (vs $6-12/hr on hyperscalers).
For production AI training, Nebius, CoreWeave, and Crusoe offer the most complete platforms. For cost-sensitive experimentation, Vast.ai and SF Compute provide the lowest prices. For European data sovereignty, Nebius and DataCrunch/Verda combine EU data centers with enterprise compliance and 100% renewable energy.
The main remaining gap is documentation transparency. Most providers require sales conversations for pricing on reserved capacity, large clusters, and enterprise features. As the market matures, expect more self-service options and published pricing for these categories.
Picked a provider?
Saturn Cloud adds the platform layer that neoclouds don't provide: hosted dev environments, distributed training orchestration, parallel job scheduling, and cost allocation by user and project. Works on any cloud or bare-metal. Chat with an engineer to learn more →
Last updated: December 2025. Pricing and features change frequently. Verify current offerings on provider websites before making decisions.