H200 SXM 141GB vs H100 PCIe 80GB
Compare NVIDIA H200 SXM 141GB and NVIDIA H100 PCIe 80GB specs, performance, and cloud pricing
- H200 SXM 141GB: 141 GB HBM3e, from $3.49/hr
- H100 PCIe 80GB: 80 GB HBM2e, from $1.68/hr
- Architecture: Hopper vs Hopper (same generation)
- FP16 gap: ~1.3x, H200 SXM 141GB leads (989.5 vs 756 dense Tensor TFLOPS)
| Specification | H200 SXM 141GB | H100 PCIe 80GB |
|---|---|---|
| VRAM | 141 GB | 80 GB |
| VRAM Type | HBM3e | HBM2e |
| FP16 Tensor (dense) | 989.5 TFLOPS | 756 TFLOPS |
| FP8 Tensor (dense) | 1,979 TFLOPS | 1,513 TFLOPS |
| Memory Bandwidth | 4.8 TB/s | 2.0 TB/s |
| TDP | 700W | 350W |
| Interconnect | NVLink 4 (900 GB/s) | PCIe Gen5 (128 GB/s) |
| Architecture | Hopper | Hopper |
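The Tensor Core figures above are dense; NVIDIA's 2:4 structured sparsity roughly doubles peak throughput on both parts. To see what the 141 GB vs 80 GB capacity gap means in practice, here is a minimal Python sketch checking whether a model's weights alone fit in each card's VRAM. The model sizes and the 20% runtime overhead factor are illustrative assumptions, not measurements:

```python
# Rough VRAM check: model weights only, ignoring KV cache and activations.
# The 20% overhead factor for buffers/fragmentation is an assumption.
GPUS = {"H200 SXM": 141, "H100 PCIe": 80}  # VRAM in GB, from the spec table

def weights_gb(params_billion: float, bytes_per_param: int) -> float:
    """Approximate weight footprint in GB, with ~20% overhead."""
    return params_billion * bytes_per_param * 1.2

for params in (13, 70):                      # illustrative model sizes
    for fmt, nbytes in (("FP16", 2), ("FP8", 1)):
        need = weights_gb(params, nbytes)
        verdict = {g: "fits" if need <= cap else "no" for g, cap in GPUS.items()}
        print(f"{params}B @ {fmt}: ~{need:.0f} GB -> {verdict}")
```

Under these assumptions, a 70B model quantized to FP8 (~84 GB) fits on a single H200 but needs two H100 PCIe cards.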
Price Comparison
| Metric | H200 SXM 141GB | H100 PCIe 80GB |
|---|---|---|
| Cheapest On-Demand | $3.49/hr | $1.68/hr |
| Cheapest Spot | $2.52/hr | $1.25/hr |
| Providers Available | 4 | 5 |
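The verdict below leans on performance per dollar, which follows directly from the two tables. A worked sketch using the dense FP16 figures and the cheapest on-demand rates (spot rates would scale both results roughly the same way):

```python
# Perf per dollar from the spec and price tables above (dense FP16).
cards = {
    "H200 SXM 141GB": {"fp16_tflops": 989.5, "usd_per_hr": 3.49},
    "H100 PCIe 80GB": {"fp16_tflops": 756.0, "usd_per_hr": 1.68},
}
for name, c in cards.items():
    print(f"{name}: {c['fp16_tflops'] / c['usd_per_hr']:.0f} TFLOPS per $/hr")
# -> H200 SXM 141GB: 284 TFLOPS per $/hr
# -> H100 PCIe 80GB: 450 TFLOPS per $/hr
```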
Verdict
Best for Training
NVIDIA H200 SXM 141GB
989.5 dense FP16 TFLOPS with 141 GB of HBM3e
Best Value
NVIDIA H100 PCIe 80GB
450 dense FP16 TFLOPS per $/hr (vs 284 for the H200)
Best for Inference
NVIDIA H200 SXM 141GB
1,979 dense FP8 TFLOPS plus 4.8 TB/s of memory bandwidth
Use-Case Recommendations
Large-Scale Training
Training LLMs and large multi-modal models
Winner
H200 SXM 141GB
989.5 dense FP16 TFLOPS, 141 GB of HBM3e, and NVLink 4 provide the best training throughput and headroom for larger models and batches.
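For a sense of why capacity matters in training, a rough sketch of per-GPU memory under mixed-precision Adam, which needs about 16 bytes per parameter (FP16 weights and gradients, an FP32 master copy, and two FP32 moment tensors). Activations are ignored and the model sizes are illustrative:

```python
import math

# ~16 bytes/param for mixed-precision Adam: 2 (fp16 weights) + 2 (fp16 grads)
# + 4 (fp32 master weights) + 8 (two fp32 Adam moments). Activations ignored.
BYTES_PER_PARAM = 16

def min_gpus(params_billion: float, vram_gb: float) -> int:
    """Smallest GPU count whose pooled VRAM holds weights + optimizer state."""
    return math.ceil(params_billion * BYTES_PER_PARAM / vram_gb)

for params in (7, 13, 70):  # illustrative model sizes in billions
    print(f"{params}B: {min_gpus(params, 141)}x H200 vs {min_gpus(params, 80)}x H100 PCIe")
# -> 7B: 1x H200 vs 2x H100 PCIe
# -> 13B: 2x H200 vs 3x H100 PCIe
# -> 70B: 8x H200 vs 14x H100 PCIe
```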
Inference at Scale
Deploying models in production for real-time inference
Winner
H200 SXM 141GB
4.8 TB/s of HBM3e bandwidth and 1,979 dense FP8 TFLOPS give superior serving throughput, since LLM decoding is usually memory-bound.
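The bandwidth claim can be sanity-checked with a back-of-envelope ceiling: at batch size 1, each generated token streams roughly all model weights from VRAM once, so peak tokens/s is bounded by bandwidth divided by model size. A sketch assuming a hypothetical 70 GB of FP8 weights:

```python
# Batch-1 decode ceiling: tokens/s <= memory bandwidth / bytes of weights
# streamed per token. The 70 GB FP8 model size is an assumption.
MODEL_GB = 70
BANDWIDTH_GBPS = {"H200 SXM": 4800, "H100 PCIe": 2000}  # from the spec table

for name, bw in BANDWIDTH_GBPS.items():
    print(f"{name}: ~{bw / MODEL_GB:.0f} tokens/s ceiling at batch 1")
# -> H200 SXM: ~69 tokens/s; H100 PCIe: ~29 tokens/s
```

Larger batches shift the bottleneck toward compute, but the 2.4x bandwidth gap is why the H200 leads here.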
Budget-Conscious Workloads
Getting the best performance per dollar
Winner
H100 PCIe 80GB
Starting at $1.68/hr on-demand ($1.25/hr spot), it delivers the most TFLOPS per dollar.
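To translate the hourly rates into a budget, a quick monthly-cost sketch at the cheapest rates from the price table (730 hours/month of continuous use is an assumption):

```python
# Monthly cost at the cheapest rates from the price table (~730 hr/month).
HOURS_PER_MONTH = 730
rates = {
    "H200 SXM 141GB": {"on_demand": 3.49, "spot": 2.52},
    "H100 PCIe 80GB": {"on_demand": 1.68, "spot": 1.25},
}
for name, r in rates.items():
    od = r["on_demand"] * HOURS_PER_MONTH
    sp = r["spot"] * HOURS_PER_MONTH
    print(f"{name}: ${od:,.0f}/mo on-demand, ${sp:,.0f}/mo spot")
# -> H200 SXM 141GB: $2,548/mo on-demand, $1,840/mo spot
# -> H100 PCIe 80GB: $1,226/mo on-demand, $912/mo spot
```

Spot capacity can be reclaimed at any time, but for interruptible jobs it cuts either bill by roughly a quarter.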