H200 SXM 141GB vs A100 PCIe 80GB

Compare NVIDIA H200 SXM 141GB and NVIDIA A100 PCIe 80GB specs, performance, and cloud pricing

H200 SXM 141GB

141GB

From $3.49/hr

A100 PCIe 80GB

80GB

From $1.05/hr

Architecture

Hopper

vs Ampere

FP16 Gap

1.6x

H200 SXM 141GB leads

Specification        H200 SXM 141GB    A100 PCIe 80GB
VRAM                 141 GB            80 GB
VRAM Type            HBM3e             HBM2e
FP16                 989.5 TFLOPS      624 TFLOPS
FP8                  2.0 PFLOPS        N/A
Memory Bandwidth     4.8 TB/s          2.0 TB/s
TDP                  700W              300W
Interconnect         NVLink 4          PCIe Gen4
Architecture         Hopper            Ampere
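The headline gaps follow directly from the peak numbers in the table. A minimal sketch of the arithmetic, using only the vendor peak figures above (not measured throughput):

```python
# Spec-table values; vendor peak figures, not benchmarked numbers.
h200 = {"fp16_tflops": 989.5, "mem_bw_tbs": 4.8, "vram_gb": 141}
a100 = {"fp16_tflops": 624.0, "mem_bw_tbs": 2.0, "vram_gb": 80}

fp16_gap = h200["fp16_tflops"] / a100["fp16_tflops"]  # ~1.59x, the "1.6x FP16 gap"
bw_gap = h200["mem_bw_tbs"] / a100["mem_bw_tbs"]      # 2.4x memory bandwidth
vram_gap = h200["vram_gb"] / a100["vram_gb"]          # ~1.76x VRAM capacity

print(f"FP16: {fp16_gap:.2f}x, bandwidth: {bw_gap:.1f}x, VRAM: {vram_gap:.2f}x")
```

Note that the bandwidth gap (2.4x) is larger than the compute gap (1.6x), which matters for bandwidth-bound workloads such as LLM inference.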

Price Comparison

Metric                H200 SXM 141GB    A100 PCIe 80GB
Cheapest On-Demand    $3.49/hr          $1.05/hr
Cheapest Spot         $2.52/hr          $0.74/hr
Providers Available   4                 3
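The "best value" verdict below comes from dividing peak FP16 throughput by the cheapest on-demand rate. A quick sketch of that calculation, using the prices listed above:

```python
# Price efficiency: peak FP16 TFLOPS per $/hr at the cheapest
# on-demand rates from the table above.
gpus = {
    "H200 SXM 141GB": (989.5, 3.49),  # (peak FP16 TFLOPS, $/hr)
    "A100 PCIe 80GB": (624.0, 1.05),
}
for name, (tflops, price) in gpus.items():
    print(f"{name}: {tflops / price:.0f} TFLOPS per $/hr")
```

The A100 comes out at roughly 594 TFLOPS per $/hr versus about 284 for the H200, so on paper it offers around twice the compute per dollar.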

Verdict

Best for Training

NVIDIA H200 SXM 141GB

989.5 TFLOPS FP16 with 141GB VRAM

Best Value

NVIDIA A100 PCIe 80GB

594 FP16 TFLOPS per $/hr

Best for Inference

NVIDIA H200 SXM 141GB

2.0 PFLOPS FP8

Use-Case Recommendations

Large-Scale Training

Training LLMs and large multi-modal models

Winner

H200 SXM 141GB

989.5 TFLOPS FP16 with 141GB HBM3e provides the best training throughput.
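Beyond raw throughput, the 141 GB of VRAM determines how large a model fits on a single card. A back-of-envelope sketch (my simplification, not from the source page) assuming FP16 weights at 2 bytes per parameter and ignoring activations, KV cache, and framework overhead:

```python
# Rough single-GPU capacity estimate: FP16 weights only
# (2 bytes/param); activations and KV cache are ignored,
# so real headroom is smaller.
BYTES_PER_PARAM_FP16 = 2

def max_params_billions(vram_gb: float) -> float:
    """Largest FP16 model (in billions of params) that fits in VRAM."""
    return vram_gb * 1e9 / BYTES_PER_PARAM_FP16 / 1e9

print(max_params_billions(141))  # H200: ~70B params
print(max_params_billions(80))   # A100: ~40B params
```

By this crude measure the H200 can hold a ~70B-parameter model in FP16 where the A100 tops out near 40B, which is why the larger VRAM matters as much as the FLOPS for LLM work.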

Inference at Scale

Deploying models in production for real-time inference

Winner

H200 SXM 141GB

2.0 PFLOPS of FP8 compute, plus 2.4x the memory bandwidth, gives superior inference throughput.

Budget-Conscious Workloads

Getting the best performance per dollar

Winner

A100 PCIe 80GB

Starting at $1.05/hr, the A100 delivers roughly twice the FP16 TFLOPS per dollar of the H200.
