B200 SXM 192GB vs A100 PCIe 40GB

Compare NVIDIA B200 SXM 192GB and NVIDIA A100 PCIe 40GB specs, performance, and cloud pricing

B200 SXM 192GB: 192GB VRAM, from $6.50/hr

A100 PCIe 40GB: 40GB VRAM, from $0.85/hr

Architecture: Blackwell vs Ampere

FP16 Gap: 7.2x (B200 SXM 192GB leads)

Specification | B200 SXM 192GB | A100 PCIe 40GB
VRAM | 192 GB | 40 GB
VRAM Type | HBM3e | HBM2
FP16 Performance | 4.5 PFLOPS | 624 TFLOPS
FP8 Performance | 9.0 PFLOPS | N/A (not supported)
Memory Bandwidth | 8.0 TB/s | 1.6 TB/s
TDP | 1000 W | 250 W
Interconnect | NVLink 5 | PCIe Gen4
Architecture | Blackwell | Ampere
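
The 7.2x FP16 gap quoted in the summary above falls straight out of these figures. A minimal sketch of the headline ratios (Python, using only the numbers listed in the table):

```python
# Ratios derived from the spec table above (figures as listed on this page).
b200 = {"fp16_tflops": 4500, "vram_gb": 192, "bandwidth_tbs": 8.0}
a100 = {"fp16_tflops": 624, "vram_gb": 40, "bandwidth_tbs": 1.6}

fp16_gap = b200["fp16_tflops"] / a100["fp16_tflops"]          # ~7.2x
vram_gap = b200["vram_gb"] / a100["vram_gb"]                   # 4.8x
bandwidth_gap = b200["bandwidth_tbs"] / a100["bandwidth_tbs"]  # 5.0x

print(f"FP16: {fp16_gap:.1f}x | VRAM: {vram_gap:.1f}x | Bandwidth: {bandwidth_gap:.1f}x")
```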

Price Comparison

Metric | B200 SXM 192GB | A100 PCIe 40GB
Cheapest On-Demand | $6.50/hr | $0.85/hr
Cheapest Spot | $4.32/hr | $0.48/hr
Providers Available | 4 | 4
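
To put the hourly rates in context, here is a small sketch comparing total cost for a fixed-length job at on-demand versus spot pricing; the 100-hour duration is a hypothetical assumption for illustration, not a figure from this page:

```python
# Hourly rates from the price table above; the 100-hour job length is an
# assumed value used only to illustrate the scale of the cost difference.
rates = {
    "B200 SXM 192GB": {"on_demand": 6.50, "spot": 4.32},
    "A100 PCIe 40GB": {"on_demand": 0.85, "spot": 0.48},
}
job_hours = 100  # hypothetical workload duration

for gpu, r in rates.items():
    spot_discount = 1 - r["spot"] / r["on_demand"]
    print(f"{gpu}: ${r['on_demand'] * job_hours:,.2f} on-demand, "
          f"${r['spot'] * job_hours:,.2f} spot ({spot_discount:.0%} cheaper)")
```

Note that this compares equal wall-clock hours on each card; for a fixed amount of compute the B200 finishes far sooner, which the performance-per-dollar sketch in the recommendations below accounts for.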

Verdict

Best for Training: NVIDIA B200 SXM 192GB (4.5 PFLOPS FP16 with 192GB VRAM)

Best Value: NVIDIA A100 PCIe 40GB (734 TFLOPS per $/hr on-demand)

Best for Inference: NVIDIA B200 SXM 192GB (9.0 PFLOPS FP8)

Use-Case Recommendations

Large-Scale Training: training LLMs and large multimodal models

Winner: B200 SXM 192GB. 4.5 PFLOPS FP16 with 192GB of HBM3e provides the best training throughput.

Inference at Scale: deploying models in production for real-time inference

Winner: B200 SXM 192GB. 9.0 PFLOPS of FP8 compute gives superior inference throughput.

Budget-Conscious Workloads: getting the best performance per dollar

Winner: A100 PCIe 40GB. Starting at $0.85/hr, it delivers the best TFLOPS per dollar.
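
The "Best Value" verdict and the budget recommendation above both reduce to performance per dollar. A minimal sketch that reproduces the 734 TFLOPS per $/hr figure from the FP16 throughput and on-demand rates listed on this page (it assumes effective throughput scales with peak TFLOPS, which real workloads only approximate):

```python
# Performance per dollar from the figures on this page. Assumes effective
# throughput scales with peak FP16 TFLOPS; memory- or communication-bound
# workloads will deviate from this.
gpus = {
    "B200 SXM 192GB": {"fp16_tflops": 4500, "on_demand_per_hr": 6.50},
    "A100 PCIe 40GB": {"fp16_tflops": 624, "on_demand_per_hr": 0.85},
}

for name, g in gpus.items():
    value = g["fp16_tflops"] / g["on_demand_per_hr"]
    print(f"{name}: {value:.0f} TFLOPS per $/hr")

# The A100 PCIe 40GB comes out around 734 TFLOPS per $/hr versus roughly 692
# for the B200, which is why the budget pick goes to the A100 even though the
# B200 wins on absolute speed and memory capacity.
```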
