Hopper GPU

NVIDIA H100 SXM5 80GB

80GB HBM3 | 2.0 PFLOPS FP16 | 3.4 TB/s bandwidth

From $2.20/hr

LLM training · Model fine-tuning · HPC · Large-scale inference

Specifications

VRAM: 80 GB HBM3
FP16: 2.0 PFLOPS
FP8: 4.0 PFLOPS
FP4: N/A
Memory BW: 3.4 TB/s
TDP: 700W
Interconnect: NVLink 4
Architecture: Hopper
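The 80 GB of VRAM is the hard limit on what fits on a single card. A rough back-of-the-envelope check (an illustrative sketch, not vendor guidance; the overhead factor is an assumption) is weights = parameter count × bytes per parameter, plus headroom for activations and KV cache:

```python
def fits_in_vram(params_billion: float, bytes_per_param: float,
                 vram_gb: float = 80.0, overhead: float = 1.2) -> bool:
    """Return True if model weights (plus ~20% assumed overhead for
    activations/KV cache) fit in the given VRAM budget."""
    needed_gb = params_billion * bytes_per_param * overhead
    return needed_gb <= vram_gb

# A 30B model in FP16 (2 bytes/param) needs ~72 GB -> fits on one H100,
# while a 70B model in FP16 needs ~168 GB -> requires multiple GPUs.
print(fits_in_vram(30, 2.0))  # True
print(fits_in_vram(70, 2.0))  # False
```

By the same estimate, a 70B model at FP8 (1 byte/param) still lands around 84 GB, which is why 8x configurations appear in the pricing table below for large-model work.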

Cloud GPU Pricing (10 offers)

| Provider | Instance Type | vCPUs | RAM | Price/hr | Price/mo | Spot Price | Availability |
|---|---|---|---|---|---|---|---|
| Vast.ai (cheapest) | h100_sxm5_80gb | 16 | 128 GB | $2.20/hr | $1,606.00/mo | $1.65 (−25%) | Available |
| Lambda Cloud | gpu_1x_h100_sxm5 | 26 | 200 GB | $2.49/hr | $1,817.70/mo | — | Available |
| RunPod | h100-sxm-80gb | 20 | 200 GB | $2.69/hr | $1,963.70/mo | $1.89 (−29.7%) | Available |
| CoreWeave | h100-sxm-1x | 36 | 360 GB | $2.79/hr | $2,036.70/mo | — | Available |
| Google Cloud Platform | a3-highgpu-8g | 208 | 1872 GB | $3.37/hr | $2,460.10/mo | $1.35 (−59.9%) | Available |
| Amazon Web Services | p5.48xlarge | 192 | 2048 GB | $3.60/hr | $2,628.00/mo | $1.44 (−60%) | Available |
| Microsoft Azure | Standard_ND96isr_H100_v5 | 96 | 1900 GB | $3.68/hr | $2,686.40/mo | $1.47 (−60.1%) | Available |
| Amazon Web Services | p5.48xlarge | 192 | 2048 GB | $3.96/hr | $2,890.80/mo | $1.58 (−60.1%) | Available |
| Lambda Cloud | gpu_8x_h100_sxm5 (8x GPU) | 208 | 1800 GB | $19.92/hr | $14,541.60/mo | — | Available |
| Amazon Web Services | p5.48xlarge (8x GPU) | 192 | 2048 GB | $28.80/hr | $21,024.00/mo | $11.52 (−60%) | Available |
Last updated 57h ago
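The listed monthly prices are consistent with a 730-hour month, and each spot discount is the percentage saved relative to the on-demand rate. A minimal sketch reproducing the table's figures (the 730-hour convention is inferred from the numbers above, not stated by any provider):

```python
HOURS_PER_MONTH = 730  # the listed Price/mo values equal Price/hr x 730

def monthly_price(hourly: float) -> float:
    """Monthly cost from an hourly rate, assuming a 730-hour month."""
    return round(hourly * HOURS_PER_MONTH, 2)

def spot_discount_pct(on_demand: float, spot: float) -> float:
    """Percentage saved by the spot rate relative to on-demand."""
    return round((1 - spot / on_demand) * 100, 1)

print(monthly_price(2.20))            # 1606.0 -> matches Vast.ai's $1,606.00/mo
print(spot_discount_pct(2.20, 1.65))  # 25.0   -> the -25% shown for Vast.ai
print(spot_discount_pct(2.69, 1.89))  # 29.7   -> the -29.7% shown for RunPod
```

The same arithmetic scales to multi-GPU rows: the 8x AWS offer at $28.80/hr is exactly eight times the $3.60/hr single-GPU rate.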

Frequently Asked Questions

What is the cheapest NVIDIA H100 SXM5 80GB cloud provider?
The cheapest NVIDIA H100 SXM5 80GB is available on Vast.ai at $2.20/hr (h100_sxm5_80gb).
How much does NVIDIA H100 SXM5 80GB cost per hour?
NVIDIA H100 SXM5 80GB cloud GPU pricing ranges from $2.20/hr to $28.80/hr depending on the provider and configuration.
What are the specs of NVIDIA H100 SXM5 80GB?
NVIDIA H100 SXM5 80GB features 80GB HBM3 memory, 2.0 PFLOPS FP16 performance, 3.4 TB/s memory bandwidth, and 700W TDP. Architecture: Hopper.
Is NVIDIA H100 SXM5 80GB good for AI training?
Yes, the NVIDIA H100 SXM5 80GB is well-suited for AI training workloads. Key use cases include LLM training, model fine-tuning, HPC, and large-scale inference.