GPU Cloud Comparison: 17 Neoclouds for AI in 2025

A technical comparison of GPU cloud providers beyond AWS, GCP, and Azure, covering pricing, InfiniBand networking, storage options, and platform maturity for AI workloads.

If you’re running AI training workloads and hitting GPU availability limits or cost walls on AWS, GCP, or Azure, a wave of specialized GPU cloud providers (often called “neoclouds”) offers an alternative. These providers focus exclusively on GPU infrastructure, often with simpler pricing, immediate availability, and hardware optimized for AI workloads.

This guide compares 17 GPU cloud providers across the dimensions that matter for production AI training: GPU pricing, InfiniBand networking, storage options, and platform capabilities. We focus on what’s publicly documented, noting where information requires sales contact.

The Neocloud Landscape

The term “neocloud” refers to cloud providers primarily offering GPU-as-a-Service (GPUaaS). Unlike hyperscalers with broad service portfolios, neoclouds focus on delivering GPU compute with high-speed interconnects for AI and HPC workloads.

Roughly 10 to 15 neoclouds currently operate at meaningful scale in the US, with footprints growing across Europe, the Middle East, and Asia.

Why consider neoclouds over AWS, GCP, or Azure?

The hyperscaler GPU experience involves quota requests, waitlists, and premium pricing:

| Provider | H100 80GB (per GPU) | Availability |
|---|---|---|
| AWS | $6.88/hr (p5.48xlarge) | Quota approval required, multi-week waitlists common |
| Azure | $12.29/hr (ND96isr H100 v5) | Quota requests, capacity constraints |
| GCP | $11.06/hr (a3-highgpu-8g) | Limited regions, quota approval process |
| SF Compute | $1.45-1.50/hr | Self-service signup, provision in minutes |

Even AWS, after cutting prices 44% in June 2025, is still 4.7x more expensive than SF Compute. GCP and Azure are 7-8x more expensive.
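
A quick check on those multipliers, using the per-GPU rates from the table (node-hour figures assume an 8-GPU node):

```python
# Per-GPU hourly rates from the table above; node cost assumes 8 GPUs per node.
rates = {"AWS": 6.88, "Azure": 12.29, "GCP": 11.06, "SF Compute": 1.45}
baseline = rates["SF Compute"]
for provider, rate in rates.items():
    print(f"{provider:>10}: ${rate:5.2f}/GPU-hr  ${rate * 8:6.2f}/node-hr  "
          f"{rate / baseline:.1f}x SF Compute")
```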

SF Compute’s marketplace model offers a unique advantage: flexible time-based reservations. You can book GPU clusters for arbitrary windows (7 days starting next Tuesday, 2 weeks in March) at guaranteed pricing, with infrastructure provisioned from vetted partners to your specifications. This eliminates the need for long-term contracts while avoiding spot instance interruptions.

Beyond cost, neoclouds eliminate the friction of quota approvals. On AWS, requesting H100 quota often requires a support ticket explaining your use case, with approval taking days to weeks. GCP and Azure have similar processes. Neoclouds typically offer self-service access: sign up, add payment, deploy GPUs in minutes.

Infrastructure is also optimized differently. Neoclouds treat InfiniBand as standard: 400Gb/s per GPU for multi-node training. Hyperscalers charge premium tiers for similar networking (AWS EFA, GCP GPUDirect), and availability varies by region.

Market Segmentation

The GPU cloud market has fragmented into distinct tiers:

| Tier | Description | Known Examples |
|---|---|---|
| Bespoke Wholesale | Multi-year buildouts for frontier labs (not publicly priced) | FluidStack, Crusoe (Stargate), CoreWeave; most providers do bespoke deals |
| Sales-Gated Cloud | Standardized infrastructure, approval required | CoreWeave, TensorWave, Nscale (training) |
| Self-Service Neoclouds | On-demand, transparent pricing | Nebius, Lambda, Crusoe, Vultr, Hyperstack, DataCrunch/Verda, RunPod, OVHcloud, Voltage Park, GMI Cloud, Hot Aisle |
| Marketplaces | Aggregated supply, variable quality, lowest prices | Vast.ai, SF Compute, RunPod (Community), FluidStack (marketplace) |

Many providers span tiers. FluidStack operates both wholesale (62% of revenue) and marketplace (38%). RunPod offers Secure Cloud (neocloud) and Community Cloud (marketplace). The bespoke tier is likely larger than publicly known, as these deals are rarely disclosed. This guide focuses on publicly available offerings (Tiers 2-4).

GPU Hardware & Pricing

Start here: price is the most visible differentiator and determines your baseline cost. H100 pricing varies roughly 4x across providers ($1.45-6.16/hr), so this table alone can eliminate half your options. Budget-conscious teams should look at SF Compute ($1.45/hr), Vast.ai ($1.74/hr), and Hyperstack ($1.90/hr). Enterprise buyers typically choose from Nebius, CoreWeave, Crusoe, or Lambda.

On-Demand GPU Pricing

| Provider | H100 | H200 | B200 | GB200 | Source |
|---|---|---|---|---|---|
| CoreWeave | PCIe $4.25 / SXM $6.16 | ~$6.30/hr | $8.60/hr | $10.50/hr | Link |
| Crusoe | $3.90/hr | $4.29/hr | Contact | Contact | Link |
| DataCrunch/Verda | $2.29/hr | $2.99/hr | $3.79/hr | — | Link |
| FluidStack | $2.10/hr | $2.30/hr | Contact | Contact | Link |
| GMI Cloud | $2.10/hr | From $2.50/hr | Pre-order | Pre-order | Link |
| Hot Aisle | — | — | — | — | N/A |
| Hyperstack | PCIe $1.90 / SXM $2.40 | $3.50/hr | Contact | Contact | Link |
| Lambda | PCIe $2.49 / SXM $2.99 | — | $4.99/hr | — | Link |
| Nebius | $2.95/hr | $3.50/hr | $5.50/hr | Pre-order | Link |
| Nscale | Contact | Contact | — | Contact | Link |
| OVHcloud | $2.99/hr | — | — | — | Link |
| RunPod | PCIe $1.99-2.39 / SXM $2.69-2.99 | $3.59/hr | $5.19-5.98/hr | — | Link |
| SF Compute | $1.45-1.50/hr | Contact | — | — | Link |
| TensorWave | — | — | — | — | N/A |
| Vast.ai | $1.49-1.87/hr | Varies | Varies | — | Link |
| Voltage Park | From $1.99/hr | Contact | Contact | Contact | Link |
| Vultr | $2.99/hr | Contact | Contact* | — | Link |

*Vultr’s B200 is available via 36-month reserved commitment at $2.89/hr ($23.12/hr for 8x B200 HGX system); on-demand pricing not published.

AMD GPU Availability

AMD GPUs are typically 20-50% cheaper than NVIDIA equivalents and are less supply-constrained. If your workload runs on ROCm (PyTorch and JAX have good support), AMD is worth considering. Vultr offers one of the cheapest MI300X options at $1.85/hr with managed Kubernetes. Hot Aisle and TensorWave are AMD-only specialists.
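
Before committing to AMD capacity, it’s worth a quick check that your stack actually sees the hardware. A minimal sketch with a ROCm build of PyTorch, which reuses the torch.cuda API so most CUDA-path code runs unchanged:

```python
# Sanity check that a ROCm build of PyTorch can see AMD GPUs.
import torch

print("ROCm (HIP) build:", torch.version.hip is not None)
print("GPUs visible:", torch.cuda.device_count())
if torch.cuda.is_available():
    # On ROCm, torch.cuda.* calls route to HIP; device 0 would be e.g. an MI300X.
    print("Device 0:", torch.cuda.get_device_name(0))
```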

| Provider | MI300X Price | MI325X Price | MI355X Price | Source |
|---|---|---|---|---|
| Crusoe | $3.45/hr | — | Contact | Link |
| Hot Aisle | $1.99/hr | — | Pre-order | Link |
| Nscale | Pre-order | — | — | Link |
| TensorWave | Sold out | $1.95/hr | $2.85/hr | Link |
| Vultr | $1.85/hr | $2.00/hr | $2.59/hr (on-demand) / $2.29/hr (36-month) | Link |

Training Infrastructure

If you’re training models across 16+ GPUs (distributed training), this section determines whether you’ll actually get good performance. Single-GPU workloads or inference can skip ahead; standard networking and local NVMe storage are fine. For multi-node training, network bandwidth between GPUs and shared filesystem throughput are often bigger bottlenecks than raw GPU speed.

Infrastructure Ownership Models

Ownership model predicts pricing stability and support quality. Owners (Crusoe, CoreWeave, Nebius) control their hardware and can offer consistent SLAs. Aggregators (Vast.ai, SF Compute) provide cheaper pricing but variable quality. Owner (colo) is middle ground: they own GPUs but rent data center space. See Market Segmentation for how this maps to business model tiers.

| Provider | Model | Description | Source |
|---|---|---|---|
| CoreWeave | Owner | Acquired NEST DC ($322M); 250K+ GPUs across 32 DCs | Link |
| Crusoe | Owner | Vertically integrated; manufactures own modular DCs via Easter-Owens Electric acquisition | Link |
| DataCrunch/Verda | Owner (colo) | Owns GPUs; operates in Iceland and Finland | Link |
| FluidStack | Owner + Aggregator | 62% Private Cloud (custom-built for enterprises like Anthropic, Meta), 38% Marketplace; $10B debt financing from Macquarie | Link |
| GMI Cloud | Owner (colo) | Owns GPU hardware; offshoot of Realtek/GMI Technology | Link |
| Hot Aisle | Owner (colo) | Owns AMD GPUs; colocation at Switch Pyramid Tier 5 DC in Grand Rapids, MI | Link |
| Hyperstack | Owner (colo) | Owns GPU hardware; colocation partnerships | Link |
| Lambda | Owner (colo) | Owns GPU hardware; colocation in SF and Texas; NVIDIA leaseback partnership | Link |
| Nebius | Owner + Colo | Owns DCs in Finland; colocation in US and other regions | Link |
| Nscale | Owner | Owns data centers in Norway (Glomfjord, Stargate Norway JV with Aker) | Link |
| OVHcloud | Owner | Fully vertically integrated; designs/manufactures servers, builds/manages own DCs | Link |
| RunPod | Owner + Aggregator | Secure Cloud (Tier 3/4 partners) + Community Cloud (aggregated third-party hosts) | Link |
| SF Compute | Aggregator | Two-sided marketplace connecting GPU cloud providers | Link |
| TensorWave | Owner (colo) | Owns AMD GPU hardware; colocation across US data centers | Link |
| Vast.ai | Aggregator | Pure marketplace connecting 10K+ GPUs from individuals to datacenters | Link |
| Voltage Park | Owner (colo) | Owns H100 GPU hardware; colocation in Texas, Virginia, Washington | Link |
| Vultr | Colo | Operates across 32 global colocation facilities (Digital Realty, Equinix, QTS partnerships) | Link |

InfiniBand and High-Speed Networking

InfiniBand only matters for multi-node distributed training (16+ GPUs). If you’re training on 1-8 GPUs or running inference, skip this table. Providers using RoCE (TensorWave, Hot Aisle, Nscale) offer RDMA over standard Ethernet with lower cost but potentially higher tail latency under network congestion.

Note: This table describes publicly available cloud offerings. Bespoke wholesale buildouts can support arbitrary configurations. “Not documented” indicates information not publicly available.

| Provider | InfiniBand | Speed (per GPU) | Availability | Topology | Source |
|---|---|---|---|---|---|
| CoreWeave | Yes | 400Gb/s (Quantum-2) | H100/H200 clusters | Non-blocking fat-tree (rail-optimized) | Link |
| Crusoe | Yes | 400Gb/s | H100/H200 instances | Rail-optimized | Link |
| DataCrunch/Verda | Yes | 400Gb/s (NDR) | Instant clusters | Rail-optimized | Link |
| FluidStack | Yes | 400Gb/s | Dedicated clusters | Not documented | Link |
| GMI Cloud | Yes | 400Gb/s | H100/H200 clusters | Not documented | Link |
| Hot Aisle | RoCE only | 400Gb Ethernet | All nodes | Dell/Broadcom | Link |
| Hyperstack | Supercloud only | 400Gb/s (Quantum-2) | H100/H200 SXM | Not documented | Link |
| Lambda | Clusters only | 400Gb/s (Quantum-2) | 1-Click Clusters | Rail-optimized | Link |
| Nebius | Yes | 400Gb/s (Quantum-2) | All GPU nodes | Not documented | Link |
| Nscale | RoCE only | 400Gb Ethernet | All nodes | Nokia 7220 IXR | Link |
| OVHcloud | No | 25Gb Ethernet (Public) / 50-100Gb (Bare Metal) | Public Cloud GPU / Bare Metal | vRack OLA | Link |
| RunPod | Clusters only | 200-400Gb/s | Instant Clusters | Not documented | Link |
| SF Compute | K8s only | 400Gb/s | K8s clusters only | Not documented | Link |
| TensorWave | RoCE only | 400Gb Ethernet | All nodes | Aviz ONES fabric | Link |
| Vast.ai | No | Varies by host | Marketplace | Varies by host | Link |
| Voltage Park | Yes | 400Gb/s | IB tier ($2.49/hr) | Not documented | Link |
| Vultr | Yes | 400Gb/s (Quantum-2) | H100/H200 clusters | Non-blocking | Link |
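
Whatever fabric a provider uses, verify that multi-node collectives actually run over RDMA before committing to a long training run. A minimal sketch, assuming a cluster launched with torchrun; the environment variables are standard NCCL/PyTorch settings, not provider-specific:

```python
# check_fabric.py: run one all-reduce and let NCCL log which transport it picked.
import os
import torch
import torch.distributed as dist

# Ask NCCL to report transport selection; rank 0's log should show "NET/IB"
# (InfiniBand or RoCE) rather than "NET/Socket" (plain TCP).
os.environ.setdefault("NCCL_DEBUG", "INFO")
os.environ.setdefault("NCCL_DEBUG_SUBSYS", "INIT,NET")

dist.init_process_group(backend="nccl")   # torchrun supplies rank/world-size env vars
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

x = torch.ones(1 << 24, device="cuda")    # ~64 MB tensor
dist.all_reduce(x)                        # first collective triggers transport setup
torch.cuda.synchronize()

if dist.get_rank() == 0:
    print("all-reduce completed across", dist.get_world_size(), "GPUs")
dist.destroy_process_group()
```

Launch it with torchrun on two or more nodes; if the log shows sockets rather than IB, fix the fabric configuration before scaling up.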

Storage Options

Storage matters for two scenarios: (1) multi-node training requiring shared filesystems so all GPUs access the same data, and (2) checkpointing large models to object storage. For single-node work, local NVMe is usually sufficient. This table describes publicly available offerings; bespoke buildouts integrate customer-specified storage.

Key differentiator: VAST Data integration (Lambda, Voltage Park, CoreWeave, GMI Cloud) provides high-performance shared storage with S3 compatibility. Several providers (Crusoe, FluidStack, TensorWave) lack native object storage, so you’ll need to run MinIO or similar yourself.
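
For the checkpointing case, the common pattern is to write to local or shared disk and mirror to object storage. A minimal sketch using boto3 against an S3-compatible endpoint; the endpoint URL, bucket, paths, and credentials are placeholders, and on providers without native object storage the endpoint would be a MinIO deployment you run yourself:

```python
# Save a checkpoint to shared storage, then mirror it to S3-compatible object storage.
import os
import boto3
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)                      # stand-in for your training model
ckpt_path = "/shared/checkpoints/step_001000.pt"   # shared-filesystem path (placeholder)
os.makedirs(os.path.dirname(ckpt_path), exist_ok=True)
torch.save({"step": 1000, "model": model.state_dict()}, ckpt_path)

# Works against any S3-compatible endpoint: provider object storage, VAST S3, or MinIO.
s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.com",   # placeholder endpoint
    aws_access_key_id=os.environ["S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["S3_SECRET_KEY"],
)
s3.upload_file(ckpt_path, "my-checkpoints", "run-42/step_001000.pt")
```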

| Provider | Block Storage | Object Storage | Shared FS | Technology | Source |
|---|---|---|---|---|---|
| CoreWeave | Yes | S3 Hot $0.06 / Warm $0.03 / Cold $0.015 | $0.07/GB/mo | VAST, WEKA, DDN | Link |
| Crusoe | $0.08/GB/mo | — | $0.07/GB/mo | Lightbits | Link |
| DataCrunch/Verda | $0.05-0.20/GB/mo | Coming soon | $0.20/GB/mo | NVMe SFS | Link |
| FluidStack | Filesystem only | — | Not documented | Not documented | Link |
| GMI Cloud | Integrated | VAST S3 | VAST NFS | VAST Data, GPUDirect | Link |
| Hot Aisle | Not documented | — | — | Not documented | Link |
| Hyperstack | ~$0.07/GB/mo | In development | WEKA (Supercloud) | NVMe | Link |
| Lambda | — | S3 adapter only | $0.20/GB/mo | VAST Data | Link |
| Nebius | $0.05-0.12/GB/mo | S3 Standard $0.0147 / Enhanced $0.11 | $0.08/GB/mo | NFS | Link |
| Nscale | Not documented | Not documented | “Parallel FS” | Not documented | Link |
| OVHcloud | $0.022/GB/mo | S3 + egress | $120-150/TB/mo | NetApp | Link |
| RunPod | $0.10/GB/mo | S3 (5 DCs) | $0.05-0.07/GB/mo | Network volumes | Link |
| SF Compute | Local NVMe only | — | — | 1.5TB+ per node | Link |
| TensorWave | Local only | — | Not documented | Not documented | Link |
| Vast.ai | Per-host | — | — | Varies | Link |
| Voltage Park | Local NVMe | VAST S3 | VAST NFS | VAST Data | Link |
| Vultr | $0.10/GB/mo | S3 $0.018-0.10/GB/mo | $0.10/GB/mo | NVMe-backed | Link |

Storage Performance

If you’re loading large datasets (100GB+) to GPUs for training, shared filesystem throughput determines how long you wait. Nebius and Lambda both achieve 11-12 GB/s per 8-GPU node with VAST Data, significantly faster than hyperscaler equivalents (AWS EFS maxes at 1.5 GB/s). Most providers don’t publish these specs.

| Provider | Shared FS Throughput | Notes | Source |
|---|---|---|---|
| DataCrunch/Verda | 2000 MB/s continuous (NVMe SFS) | Per volume | Link |
| Lambda | ~11 GB/s per mount (VAST) | With nconnect=32 and 100Gb NIC | Link |
| Nebius | 12 GB/s read, 8 GB/s write | Per 8-GPU VM | Link |
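
A rough back-of-envelope using the figures above: how long one node waits to stream a 500 GB dataset (or reload a large checkpoint) from shared storage at each throughput:

```python
# Streaming time for a 500 GB dataset at the per-node throughputs quoted above.
dataset_gb = 500
throughputs_gb_s = {
    "Nebius / Lambda (VAST, ~11.5 GB/s)": 11.5,
    "DataCrunch NVMe SFS (2 GB/s)": 2.0,
    "AWS EFS ceiling cited above (1.5 GB/s)": 1.5,
}
for name, gb_per_s in throughputs_gb_s.items():
    print(f"{name}: {dataset_gb / gb_per_s / 60:.1f} min")
```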

Orchestration & Platform

How you’ll actually run workloads matters as much as the hardware. But there’s an important distinction: infrastructure orchestration (Kubernetes, Slurm) vs. the platform layer.

Neoclouds provide Kubernetes or Slurm to schedule containers or jobs on GPU nodes. That’s infrastructure orchestration: it gets your code running on hardware. But production AI teams need more: hosted dev environments where data scientists can iterate, distributed training orchestration that handles multi-node configurations, parallel job scheduling with automatic retries, and cost allocation by user and project.

Most neoclouds stop at infrastructure. The platform layer (the operational tooling that makes GPU infrastructure actually usable for teams) is what you build on top, or what Saturn Cloud provides out of the box.

Kubernetes and Orchestration

Most production AI workloads run on Kubernetes. Support varies from fully managed Kubernetes to bring-your-own orchestration.

| Provider | Managed K8s | Slurm | Autoscaling | Notes | Source |
|---|---|---|---|---|---|
| CoreWeave | Yes (CKS) | SUNK | Yes | Bare-metal K8s, no hypervisor | Link |
| Crusoe | Yes (CMK) | Yes | Yes | Run:ai integration | Link |
| DataCrunch/Verda | — | Yes | — | Slurm on clusters | Link |
| FluidStack | — | — | — | Atlas platform | Link |
| GMI Cloud | Yes (Cluster Engine) | — | Yes | K8s-based orchestration | Link |
| Hot Aisle | — | — | — | Bare-metal focus | Link |
| Hyperstack | Yes (On-Demand K8s) | Not documented | — | API-driven K8s clusters | Link |
| Lambda | Yes (1-Click Clusters) | Available | — | Managed K8s and Slurm | Link |
| Nebius | Yes | Managed + Soperator | Yes | First Slurm Kubernetes operator | Link |
| Nscale | Yes (NKS) | Yes | — | Limited docs | Link |
| OVHcloud | Yes | — | Yes | Standard managed K8s | Link |
| RunPod | — | — | Yes | Serverless focus | Link |
| SF Compute | Yes | — | — | Managed K8s per zone | Link |
| TensorWave | — | Yes | — | Pyxis/Enroot containers | Link |
| Vast.ai | — | — | — | Container-based | Link |
| Voltage Park | Add-on | — | — | Helm/Rook-Ceph guides | Link |
| Vultr | Yes (VKE) | — | Yes | Standard managed K8s | Link |

Nebius’s Soperator is the first open-source Kubernetes operator for running Slurm clusters. CoreWeave’s SUNK supports jobs spanning 32,000+ GPUs.
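
Submitting a GPU workload looks much the same on any of the managed Kubernetes offerings above. A minimal sketch with the official Kubernetes Python client; the image, namespace, and GPU count are placeholders, and the nvidia.com/gpu resource assumes the NVIDIA device plugin is installed, which managed GPU Kubernetes offerings typically handle for you:

```python
# Launch a single-node training pod that requests a full 8-GPU node.
from kubernetes import client, config

config.load_kube_config()   # uses the kubeconfig your provider gives you

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.07-py3",   # placeholder image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "8"}           # one full H100/H200 node
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```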

Operational Considerations

These tables cover costs and capabilities that aren’t obvious until you deploy to production. Egress fees can add 20-40% to bills at hyperscalers, but most neoclouds offer free egress. For inference deployments, you need load balancers and VPCs; providers without these force you to run HAProxy/nginx on VMs and manually configure networking.

Egress Pricing

Good news: free egress is now standard among neoclouds, unlike hyperscalers where moving data out costs $0.08-0.12/GB. If you’re serving inference traffic or moving large datasets between clouds, this matters. Exceptions: Vultr ($0.01/GB after 2TB free) and OVHcloud ($0.011/GB). Vast.ai varies by host and can be expensive.
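
In dollar terms, here is a quick estimate for moving 10 TB out (serving model downloads or syncing datasets between clouds) at the rates quoted above:

```python
# Egress cost for 10 TB of outbound traffic at the rates quoted in this section.
gb_out = 10 * 1000
hyperscaler = gb_out * 0.09              # within the $0.08-0.12/GB range
vultr = max(0, gb_out - 2000) * 0.01     # 2 TB/month free, then $0.01/GB
neocloud = 0.0                           # free egress is the neocloud norm
print(f"Hyperscaler: ${hyperscaler:,.0f}  Vultr: ${vultr:,.0f}  Typical neocloud: ${neocloud:,.0f}")
```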

| Provider | Egress Cost | Notes | Source |
|---|---|---|---|
| CoreWeave | Free | Zero egress, ingress, and I/O operations | Link |
| Crusoe | Free | Zero data transfer fees | Link |
| DataCrunch/Verda | Not documented | — | Link |
| FluidStack | Free | Zero egress/ingress | Link |
| GMI Cloud | Not documented | — | Link |
| Hot Aisle | Not documented | — | Link |
| Hyperstack | Free | Zero bandwidth charges | Link |
| Lambda | Free | Zero egress | Link |
| Nebius | Compute free | S3 Standard $0.015/GB egress; S3 Enhanced free egress | Link |
| Nscale | Not documented | — | Link |
| OVHcloud | Compute free | Object Storage $0.011/GB egress | Link |
| RunPod | Free | Zero data transfer | Link |
| SF Compute | Free | No ingress/egress fees | Link |
| TensorWave | Not documented | Claims “no hidden costs” | Link |
| Vast.ai | Varies | Per-host, can be $20+/TB | Link |
| Voltage Park | Free | No hidden costs | Link |
| Vultr | $0.01/GB | 2TB/month free, then $0.01/GB | Link |

Network Services

Production inference deployments need load balancers (to distribute traffic across GPU instances) and VPCs (for network isolation). Only Nebius, CoreWeave, Crusoe, Vultr, and OVHcloud provide both. Marketplace providers (Vast.ai, RunPod) use port mapping instead of dedicated IPs, which complicates production deployments. If you’re only doing training, you can skip this table.

| Provider | Load Balancer | VPC/Private Network | VPN/Peering | Public IPs | Source |
|---|---|---|---|---|---|
| CoreWeave | Yes (K8s LB) | Yes (VPC) | Direct Connect (Equinix, Megaport) | Yes + BYOIP | Link |
| Crusoe | Yes | Yes (VPC) | Yes (global backbone) | Yes | Link |
| DataCrunch/Verda | — | Not documented | — | Not documented | Link |
| FluidStack | — | Not documented | — | Not documented | Link |
| GMI Cloud | — | Yes (VPC) | — | Yes (Elastic IPs) | Link |
| Hot Aisle | — | — | — | Yes | Link |
| Hyperstack | Not documented | Yes (VPC) | Not documented | Yes | Link |
| Lambda | Not documented | Yes (private network) | Not documented | Yes | Link |
| Nebius | Yes (K8s LB) | Yes | — | Yes | Link |
| Nscale | — | — | — | Not documented | Link |
| OVHcloud | Yes (L4/L7, Octavia) | Yes (vRack) | OVHcloud Connect | Yes (Floating IPs) | Link |
| RunPod | Serverless only | Global networking (Pod-to-Pod) | — | Shared (port mapping) | Link |
| SF Compute | — | — | — | Not documented | Link |
| TensorWave | — | — | — | Not documented | Link |
| Vast.ai | — | — | — | Shared (port mapping) | Link |
| Voltage Park | Not documented | Yes (VPC) | Not documented | Not documented | Link |
| Vultr | Yes (L4, $10/mo) | Yes (VPC 2.0) | — | Yes | Link |

Developer Experience & Enterprise Readiness

These tables determine how quickly you can get started and whether the provider meets enterprise requirements. If you’re managing infrastructure-as-code, official Terraform providers (Nebius, CoreWeave, Crusoe, Vultr, OVHcloud) save significant automation work. Self-service access means you can provision GPUs in minutes vs days of sales conversations. Compliance certifications (SOC 2, ISO 27001, HIPAA) gate enterprise adoption; if you need them, filter to providers with documented certs.

Terraform and API Support

Terraform providers and APIs enable infrastructure-as-code. Official Terraform providers (Nebius, CoreWeave, Crusoe, Vultr, OVHcloud) are actively maintained with full resource coverage. Providers with only REST APIs require custom Terraform provider development or manual scripting.

| Provider | Terraform Provider | API | CLI | Source |
|---|---|---|---|---|
| CoreWeave | Official | Yes | Yes | Link |
| Crusoe | Official | REST | Yes | Link |
| DataCrunch/Verda | — | REST | — | Link |
| FluidStack | — | REST | — | Link |
| GMI Cloud | — | REST | — | Link |
| Hot Aisle | — | REST | — | Link |
| Hyperstack | Community | REST | — | Link |
| Lambda | Community | REST | Yes | Link |
| Nebius | Official | Yes | Yes | Link |
| Nscale | Community | REST | Yes | Link |
| OVHcloud | Official | REST | Yes | Link |
| RunPod | Community | GraphQL | Yes | Link |
| SF Compute | — | Yes | Yes | Link |
| TensorWave | — | REST | — | Link |
| Vast.ai | Community | REST | Yes | Link |
| Voltage Park | — | REST | — | Link |
| Vultr | Official | REST | Yes | Link |
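
For the REST-only providers, “manual scripting” usually means a thin wrapper like the sketch below. Everything in it is hypothetical: the base URL, endpoint paths, field names, and response shape are illustrative only, not any specific provider’s API, so substitute the details from your provider’s API reference:

```python
# Hypothetical provisioning script against a REST-only GPU cloud API.
import os
import requests

API = "https://api.example-neocloud.com/v1"                      # placeholder base URL
HEADERS = {"Authorization": f"Bearer {os.environ['NEOCLOUD_API_KEY']}"}

# Request an 8x H100 instance (all fields are illustrative).
resp = requests.post(
    f"{API}/instances",
    headers=HEADERS,
    json={"gpu_type": "H100-SXM", "gpu_count": 8, "region": "us-east"},
    timeout=30,
)
resp.raise_for_status()
instance_id = resp.json()["id"]

# Check its state; a real script would poll until running and tear down after the job.
state = requests.get(f"{API}/instances/{instance_id}", headers=HEADERS, timeout=30).json()
print(instance_id, state.get("status"))
```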

Self-Service Access

Self-service means you can sign up and provision GPUs immediately. Sales-gated (CoreWeave, TensorWave) means you submit a request and wait for organizational approval, which can take days.

| Provider | Tier | Access Model | Notes | Source |
|---|---|---|---|---|
| CoreWeave | Sales-Gated | Sales-gated | Requires organizational approval from sales team | Link |
| Crusoe | Neocloud | Self-service | Sign up via console, larger deployments contact sales | Link |
| DataCrunch/Verda | Neocloud | Self-service | Order GPU instances in minutes via dashboard or API | Link |
| FluidStack | Neocloud + Marketplace | Self-service | Sign up at auth.fluidstack.io, launch in under 5 minutes | Link |
| GMI Cloud | Neocloud | Self-service | Sign up, launch instances in 5-15 minutes via console/API | Link |
| Hot Aisle | Neocloud | Self-service | SSH-based signup, credit card, no contracts | Link |
| Hyperstack | Neocloud | Self-service | Instant access, one-click deployment | Link |
| Lambda | Neocloud | Self-service | Create account and launch GPUs in minutes, pay-as-you-go | Link |
| Nebius | Neocloud | Self-service | Sign up, add $25+, deploy up to 32 GPUs immediately | Link |
| Nscale | Sales-Gated | Hybrid | Self-service for inference only; training clusters require sales | Link |
| OVHcloud | Neocloud | Self-service | Create account, $200 free credit for first project | Link |
| RunPod | Neocloud + Marketplace | Self-service | Deploy GPUs in under a minute, no rate limits | Link |
| SF Compute | Marketplace | Self-service | Sign up to buy, larger deployments contact sales | Link |
| TensorWave | Sales-Gated | Sales-gated | Contact sales/solutions engineers to get started | Link |
| Vast.ai | Marketplace | Self-service | $5 minimum to start, per-second billing | Link |
| Voltage Park | Neocloud | Self-service | On-demand GPUs available, reserved capacity contact sales | Link |
| Vultr | Neocloud | Self-service | Free account signup, provision via portal/API/CLI | Link |

Compliance and Enterprise Features

Enterprise buyers often require SOC 2 Type II (security controls audit) and ISO 27001 (information security management). HIPAA is required for healthcare data. If your procurement process gates on specific certifications, this table eliminates providers quickly. Also check regions: European teams may need EU data centers for data residency requirements.

| Provider | Compliance | SSO/SAML | Regions | Source |
|---|---|---|---|---|
| CoreWeave | SOC 2, ISO 27001 | SAML/OIDC/SCIM | US, UK, Spain, Sweden, Norway | Security |
| Crusoe | SOC 2 Type II | Not documented | US (TX, VA), Iceland, Norway (soon) | Link |
| DataCrunch/Verda | ISO 27001 | — | EU (Finland, Iceland) | Link |
| FluidStack | — | — | Not documented | Link |
| GMI Cloud | SOC 2 Type 1, ISO 27001 | — | Not documented | Link |
| Hot Aisle | SOC 2 Type II, HIPAA | — | US (MI) | Link |
| Hyperstack | — | — | Europe, North America | Link |
| Lambda | SOC 2 Type II | Not documented | Not documented | Link |
| Nebius | SOC 2 Type II, HIPAA, ISO 27001 | Yes | US, EU (Finland, France, Iceland) | Regions, Trust Center |
| Nscale | — | — | Norway | Link |
| OVHcloud | SOC 2, ISO 27001, PCI DSS, HDS, SecNumCloud | Not documented | Global (46 DCs) | Infrastructure, Certifications |
| RunPod | SOC 2 Type II | — | Multiple | Link |
| SF Compute | — | — | Not documented | Link |
| TensorWave | — | — | Not documented | Link |
| Vast.ai | — | — | Varies by host | Link |
| Voltage Park | SOC 2 Type II, ISO 27001, HIPAA | — | US (WA, TX, VA, UT) | Infrastructure, Security |
| Vultr | SOC 2 (HIPAA), ISO 27001, PCI DSS | — | 32 global locations | Locations, Compliance |

Conclusion

The GPU neocloud market has matured significantly. Free egress is now standard, 400Gb/s InfiniBand is table stakes for serious providers, and pricing has compressed to $2-4/hr for H100s (vs $6-12/hr on hyperscalers).

For production AI training, Nebius, CoreWeave, and Crusoe offer the most complete platforms. For cost-sensitive experimentation, Vast.ai and SF Compute provide the lowest prices. For European data sovereignty, Nebius and DataCrunch/Verda combine EU data centers with enterprise compliance and 100% renewable energy.

The main remaining gap is documentation transparency. Most providers require sales conversations for pricing on reserved capacity, large clusters, and enterprise features. As the market matures, expect more self-service options and published pricing for these categories.


Last updated: December 2025. Pricing and features change frequently. Verify current offerings on provider websites before making decisions.