Hopper GPU

NVIDIA H100 PCIe 80GB

80GB HBM3 | 1.5 PFLOPS FP16 | 2.0 TB/s bandwidth

From $1.68/hr

Inference · Fine-tuning · HPC · AI workloads

Specifications

| Spec | Value |
|---|---|
| VRAM | 80 GB HBM3 |
| FP16 | 1.5 PFLOPS |
| FP8 | 3.0 PFLOPS |
| FP4 | N/A |
| Memory bandwidth | 2.0 TB/s |
| TDP | 350 W |
| Interconnect | PCIe Gen5 |
| Architecture | Hopper |
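A quick way to read the 80 GB VRAM figure is to estimate the largest model that fits for inference. The sketch below uses an illustrative 20% overhead reservation for activations and KV cache; both the overhead fraction and the helper name are assumptions for illustration, not H100-specific figures.

```python
def max_params_billion(vram_gb: float, bytes_per_param: float, overhead: float = 0.2) -> float:
    """Rough upper bound on model size (in billions of parameters) that fits
    in GPU memory for inference.

    Reserves `overhead` (a fraction of VRAM, an illustrative assumption)
    for activations and KV cache.
    """
    usable_bytes = vram_gb * 1e9 * (1 - overhead)
    return usable_bytes / bytes_per_param / 1e9

# 80 GB card under these assumptions:
print(round(max_params_billion(80, 2)))  # FP16 (2 bytes/param) -> ~32B params
print(round(max_params_billion(80, 1)))  # FP8  (1 byte/param)  -> ~64B params
```

This is why an 80 GB card is commonly paired with FP8 for serving larger models: halving bytes per parameter roughly doubles the model size that fits.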

Cloud GPU Pricing (5 offers)

| Provider | Instance Type | vCPUs | RAM | Price/hr | Price/mo | Spot Price | Availability |
|---|---|---|---|---|---|---|---|
| CoreWeave (cheapest) | h100-pcie-1x | 24 | 240 GB | $1.68/hr | $1,226.40/mo | -- | Available |
| Vast.ai | h100_pcie_80gb | 12 | 96 GB | $1.80/hr | $1,314.00/mo | $1.35 (-25%) | Available |
| Lambda Cloud | gpu_1x_h100_pcie | 26 | 200 GB | $1.99/hr | $1,452.70/mo | -- | Available |
| RunPod | h100-pcie-80gb | 16 | 125 GB | $2.09/hr | $1,525.70/mo | $1.46 (-30.1%) | Available |
| Amazon Web Services | p5n.24xlarge | 96 | 768 GB | $2.50/hr | $1,825.00/mo | $1.25 (-50%) | Available |
Last updated 57h ago
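The monthly prices above are consistent with a 730-hour month (365 × 24 / 12), and the spot percentages are the discount off the on-demand hourly rate. A minimal sketch of that arithmetic, assuming the 730-hour convention holds for all providers listed:

```python
HOURS_PER_MONTH = 730  # 365 * 24 / 12; matches the Price/mo column above

def monthly_price(hourly: float) -> float:
    """On-demand monthly price from the hourly rate."""
    return round(hourly * HOURS_PER_MONTH, 2)

def spot_discount_pct(on_demand: float, spot: float) -> float:
    """Spot discount as a percentage off the on-demand hourly rate."""
    return round((1 - spot / on_demand) * 100, 1)

print(monthly_price(1.68))            # 1226.4 -> CoreWeave's $1,226.40/mo
print(spot_discount_pct(2.09, 1.46))  # 30.1   -> RunPod's -30.1%
```

Note that monthly figures here are simply hourly × 730; reserved or committed-use pricing, where offered, is negotiated separately.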


Frequently Asked Questions

What is the cheapest NVIDIA H100 PCIe 80GB cloud provider?
The cheapest NVIDIA H100 PCIe 80GB is available on CoreWeave at $1.68/hr (h100-pcie-1x).
How much does NVIDIA H100 PCIe 80GB cost per hour?
NVIDIA H100 PCIe 80GB cloud GPU pricing ranges from $1.68/hr to $2.50/hr depending on the provider and configuration.
What are the specs of NVIDIA H100 PCIe 80GB?
NVIDIA H100 PCIe 80GB features 80GB HBM3 memory, 1.5 PFLOPS FP16 performance, 2.0 TB/s memory bandwidth, and 350W TDP. Architecture: Hopper.
Is NVIDIA H100 PCIe 80GB good for AI training?
NVIDIA H100 PCIe 80GB is primarily targeted at inference, fine-tuning, HPC, and general AI workloads. For large-scale multi-GPU training, consider higher-tier GPUs.
