H100 SXM5 80GB vs H100 PCIe 80GB

Compare NVIDIA H100 SXM5 80GB and NVIDIA H100 PCIe 80GB specs, performance, and cloud pricing

H100 SXM5 80GB: 80 GB, from $2.20/hr

H100 PCIe 80GB: 80 GB, from $1.68/hr

Architecture: Hopper vs Hopper

FP16 Gap: 1.3x (H100 SXM5 80GB leads)

Specification | H100 SXM5 80GB | H100 PCIe 80GB
VRAM | 80 GB | 80 GB
VRAM Type | HBM3 | HBM3
FP16 Compute | 2.0 PFLOPS | 1.5 PFLOPS
FP8 Compute | 4.0 PFLOPS | 3.0 PFLOPS
Memory Bandwidth | 3.4 TB/s | 2.0 TB/s
TDP | 700 W | 350 W
Interconnect | NVLink 4 | PCIe Gen5
Architecture | Hopper | Hopper

Price Comparison

Metric | H100 SXM5 80GB | H100 PCIe 80GB
Cheapest On-Demand | $2.20/hr | $1.68/hr
Cheapest Spot | $1.35/hr | $1.25/hr
Providers Available | 7 | 5
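The headline comparison figures can be sanity-checked with a quick script (a minimal sketch: the TFLOPS and on-demand prices are copied from the tables above, and the per-dollar values are simple ratios, not benchmark results):

```python
# Reproduce the derived comparison numbers from the spec and price tables.
specs = {
    "H100 SXM5 80GB": {"fp16_tflops": 2000, "on_demand": 2.20},
    "H100 PCIe 80GB": {"fp16_tflops": 1500, "on_demand": 1.68},
}

# FP16 gap between the two cards (the "1.3x" figure above).
gap = specs["H100 SXM5 80GB"]["fp16_tflops"] / specs["H100 PCIe 80GB"]["fp16_tflops"]

# FP16 TFLOPS per dollar-hour at the cheapest on-demand price.
value = {name: s["fp16_tflops"] / s["on_demand"] for name, s in specs.items()}

print(f"FP16 gap: {gap:.2f}x")  # → 1.33x
for name, v in value.items():
    print(f"{name}: {v:.0f} TFLOPS per $/hr")
```

At these prices the PCIe card works out to roughly 893 FP16 TFLOPS per $/hr versus about 909 for the SXM5, so the value verdict is sensitive to which provider's price you use.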

Verdict

Best for Training

NVIDIA H100 SXM5 80GB

2.0 PFLOPS FP16 with 80GB VRAM

Best Value

NVIDIA H100 PCIe 80GB

≈893 FP16 TFLOPS per $/hr at $1.68/hr

Best for Inference

NVIDIA H100 SXM5 80GB

4.0 PFLOPS FP8 (2.0 PFLOPS FP16)

Use-Case Recommendations

Large-Scale Training

Training LLMs and large multi-modal models

Winner

H100 SXM5 80GB

2.0 PFLOPS FP16 with 80GB HBM3 provides the best training throughput.

Inference at Scale

Deploying models in production for real-time inference

Winner

H100 SXM5 80GB

4.0 PFLOPS FP8 gives superior inference throughput.

Budget-Conscious Workloads

Getting the best performance per dollar

Winner

H100 PCIe 80GB

Starting at $1.68/hr delivers the best TFLOPS per dollar.
