B200 SXM 192GB vs A100 SXM4 80GB

Compare NVIDIA B200 SXM 192GB and NVIDIA A100 SXM4 80GB specs, performance, and cloud pricing

B200 SXM 192GB

192GB

From $6.50/hr

A100 SXM4 80GB

80GB

From $1.10/hr

Architecture

Blackwell

vs Ampere

FP16 Gap

7.2x

B200 SXM 192GB leads

| Specification | B200 SXM 192GB | A100 SXM4 80GB |
| --- | --- | --- |
| VRAM | 192 GB | 80 GB |
| VRAM Type | HBM3e | HBM2e |
| FP16 | 4.5 PFLOPS | 624 TFLOPS |
| FP8 | 9.0 PFLOPS | Not supported |
| Memory Bandwidth | 8.0 TB/s | 2.0 TB/s |
| TDP | 1000 W | 400 W |
| Interconnect | NVLink 5 | NVLink 3 |
| Architecture | Blackwell | Ampere |
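The headline ratios quoted on this page follow directly from the spec table. A minimal sketch, using only the figures above (vendor peak numbers assumed):

```python
# Spec-sheet figures from the comparison table above.
b200 = {"fp16_tflops": 4500, "bandwidth_tbs": 8.0, "vram_gb": 192}
a100 = {"fp16_tflops": 624, "bandwidth_tbs": 2.0, "vram_gb": 80}

fp16_gap = b200["fp16_tflops"] / a100["fp16_tflops"]   # ~7.2x
bw_gap = b200["bandwidth_tbs"] / a100["bandwidth_tbs"]  # 4.0x
vram_gap = b200["vram_gb"] / a100["vram_gb"]            # 2.4x

print(f"FP16 gap: {fp16_gap:.1f}x, bandwidth: {bw_gap:.1f}x, VRAM: {vram_gap:.1f}x")
```

The 7.2x "FP16 Gap" shown at the top of the page is exactly this ratio of peak FP16 throughput.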

Price Comparison

| Metric | B200 SXM 192GB | A100 SXM4 80GB |
| --- | --- | --- |
| Cheapest On-Demand | $6.50/hr | $1.10/hr |
| Cheapest Spot | $4.32/hr | $0.76/hr |
| Providers Available | 4 | 6 |
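Spot capacity trades interruptibility for price; using the table rates, the discount works out to roughly a third off on-demand for both cards. A quick sketch:

```python
# On-demand vs. cheapest spot prices from the table above ($/hr).
prices = {
    "B200 SXM 192GB": {"on_demand": 6.50, "spot": 4.32},
    "A100 SXM4 80GB": {"on_demand": 1.10, "spot": 0.76},
}

for gpu, p in prices.items():
    discount = 1 - p["spot"] / p["on_demand"]  # ~34% (B200), ~31% (A100)
    print(f"{gpu}: spot saves {discount:.0%} vs on-demand")
```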

Verdict

Best for Training

NVIDIA B200 SXM 192GB

4.5 PFLOPS FP16 with 192GB VRAM

Best Value

NVIDIA B200 SXM 192GB

692 FP16 TFLOPS per $/hr

Best for Inference

NVIDIA B200 SXM 192GB

9.0 PFLOPS FP8 throughput
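The "Best Value" figure is simply peak FP16 throughput divided by the cheapest on-demand rate. A sketch using the numbers from the tables above:

```python
def tflops_per_dollar(tflops: float, price_per_hr: float) -> float:
    """Peak TFLOPS bought per $1/hr of on-demand spend."""
    return tflops / price_per_hr

b200_value = tflops_per_dollar(4500, 6.50)  # ~692
a100_value = tflops_per_dollar(624, 1.10)   # ~567
print(f"B200: {b200_value:.0f} TFLOPS per $/hr, A100: {a100_value:.0f}")
```

By this metric the B200 wins despite costing nearly six times as much per hour, because its FP16 throughput is over seven times higher.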

Use-Case Recommendations

Large-Scale Training

Training LLMs and large multi-modal models

Winner

B200 SXM 192GB

4.5 PFLOPS FP16 with 192GB HBM3e provides the best training throughput.

Inference at Scale

Deploying models in production for real-time inference

Winner

B200 SXM 192GB

9.0 PFLOPS of FP8 compute gives superior inference throughput.

Budget-Conscious Workloads

Getting the best performance per dollar

Winner

B200 SXM 192GB

Despite the higher $6.50/hr starting price, the B200 delivers more FP16 TFLOPS per dollar than the A100.

Learn More