Hopper GPU
NVIDIA H100 SXM5 80GB
80GB HBM3 | 2.0 PFLOPS FP16 | 3.4 TB/s bandwidth
From $2.20 /hr
Use cases: LLM training · Model fine-tuning · HPC · Large-scale inference
Specifications
| Spec | Value |
|---|---|
| VRAM | 80 GB HBM3 |
| FP16 | 2.0 PFLOPS |
| FP8 | 4.0 PFLOPS |
| FP4 | N/A |
| Memory BW | 3.4 TB/s |
| TDP | 700W |
| Interconnect | NVLink 4 |
| Architecture | Hopper |
Cloud GPU Pricing (10 offers)
| Provider | Instance Type | vCPUs | RAM | Price/hr | Price/mo | Spot Price | Availability | Action |
|---|---|---|---|---|---|---|---|---|
| Vast.ai (Cheapest) | h100_sxm5_80gb | 16 | 128 GB | $2.20/hr | $1,606.00/mo | $1.65 (-25%) | Available | Rent on Vast.ai |
| Lambda Cloud | gpu_1x_h100_sxm5 | 26 | 200 GB | $2.49/hr | $1,817.70/mo | -- | Available | Deploy on Lambda |
| RunPod | h100-sxm-80gb | 20 | 200 GB | $2.69/hr | $1,963.70/mo | $1.89 (-29.7%) | Available | Deploy on RunPod |
| CoreWeave | h100-sxm-1x | 36 | 360 GB | $2.79/hr | $2,036.70/mo | -- | Available | Deploy on CoreWeave |
| Google Cloud Platform | a3-highgpu-8g | 208 | 1872 GB | $3.37/hr | $2,460.10/mo | $1.35 (-59.9%) | Available | Deploy on Google Cloud |
| Amazon Web Services | p5.48xlarge | 192 | 2048 GB | $3.60/hr | $2,628.00/mo | $1.44 (-60%) | Available | Deploy on AWS |
| Microsoft Azure | Standard_ND96isr_H100_v5 | 96 | 1900 GB | $3.68/hr | $2,686.40/mo | $1.47 (-60.1%) | Available | Deploy on Azure |
| Amazon Web Services | p5.48xlarge | 192 | 2048 GB | $3.96/hr | $2,890.80/mo | $1.58 (-60.1%) | Available | Deploy on AWS |
| Lambda Cloud | gpu_8x_h100_sxm5 (8x GPU) | 208 | 1800 GB | $19.92/hr | $14,541.60/mo | -- | Available | Deploy on Lambda |
| Amazon Web Services | p5.48xlarge (8x GPU) | 192 | 2048 GB | $28.80/hr | $21,024.00/mo | $11.52 (-60%) | Available | Deploy on AWS |
Last updated 57h ago
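The monthly and spot figures in the table appear to follow a simple rule: monthly price = hourly rate × 730 hours, and spot price = hourly rate × (1 − discount). A minimal sketch checking that, assuming the 730-hour month (inferred from the table, not stated by any provider):

```python
# Verify the table's monthly and spot prices.
# ASSUMPTION: monthly price = hourly rate * 730 hours
# (inferred from the listed figures, not a provider-documented rule).

HOURS_PER_MONTH = 730

def monthly_price(hourly: float) -> float:
    """On-demand monthly cost from the hourly rate."""
    return round(hourly * HOURS_PER_MONTH, 2)

def spot_price(hourly: float, discount_pct: float) -> float:
    """Spot hourly rate given the listed percentage discount."""
    return round(hourly * (1 - discount_pct / 100), 2)

print(monthly_price(2.20))    # Vast.ai on-demand -> 1606.0
print(spot_price(2.20, 25))   # Vast.ai spot -> 1.65
print(monthly_price(28.80))   # AWS 8x GPU -> 21024.0
```

The numbers reproduce the table exactly (e.g. $2.20/hr × 730 = $1,606.00/mo), which is why the 730-hour assumption looks safe.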
Frequently Asked Questions
What is the cheapest NVIDIA H100 SXM5 80GB cloud provider?
The cheapest NVIDIA H100 SXM5 80GB is available on Vast.ai at $2.20/hr (h100_sxm5_80gb).
How much does NVIDIA H100 SXM5 80GB cost per hour?
NVIDIA H100 SXM5 80GB cloud GPU pricing ranges from $2.20/hr (single GPU) to $28.80/hr (8x-GPU instances), depending on the provider and configuration.
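For a fair comparison across single- and multi-GPU offers, it helps to normalize to cost per GPU-hour. A quick sketch using prices from the table above:

```python
# Normalize multi-GPU instance pricing to per-GPU cost so 8x offers
# can be compared against single-GPU offers. Prices come from the
# pricing table on this page.

offers = [
    ("Vast.ai h100_sxm5_80gb",    2.20, 1),
    ("Lambda gpu_8x_h100_sxm5",  19.92, 8),
    ("AWS p5.48xlarge (8x GPU)", 28.80, 8),
]

per_gpu = {name: round(price / gpus, 2) for name, price, gpus in offers}

for name, cost in sorted(per_gpu.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:.2f} per GPU-hour")
```

Note that the 8x offers come out at $2.49 and $3.60 per GPU-hour, matching the providers' own 1x rates in the table, so there is no bulk discount baked into these particular listings.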
What are the specs of NVIDIA H100 SXM5 80GB?
NVIDIA H100 SXM5 80GB features 80 GB of HBM3 memory, 2.0 PFLOPS of FP16 performance, 3.4 TB/s of memory bandwidth, and a 700W TDP. It is built on the Hopper architecture.
Is NVIDIA H100 SXM5 80GB good for AI training?
Yes, NVIDIA H100 SXM5 80GB is well-suited for AI training workloads. Key use cases include LLM training, model fine-tuning, HPC, and large-scale inference.