# B200 SXM 192GB vs H200 SXM 141GB

Compare NVIDIA B200 SXM 192GB and NVIDIA H200 SXM 141GB specs, performance, and cloud pricing.
- **B200 SXM 192GB**: 192 GB VRAM, from $6.50/hr
- **H200 SXM 141GB**: 141 GB VRAM, from $3.49/hr
- **Architecture**: Blackwell vs Hopper
- **FP16 gap**: ~4.5x, B200 SXM 192GB leads
| Specification | B200 SXM 192GB | H200 SXM 141GB |
|---|---|---|
| VRAM | 192 GB | 141 GB |
| VRAM Type | HBM3e | HBM3e |
| FP16 Compute | 4.5 PFLOPS | 989.5 TFLOPS |
| FP8 Compute | 9.0 PFLOPS | 2.0 PFLOPS |
| Memory Bandwidth | 8.0 TB/s | 4.8 TB/s |
| TDP | 1000W | 700W |
| Interconnect | NVLink 5 | NVLink 4 |
| Architecture | Blackwell | Hopper |
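The headline ratios follow directly from the spec table. A minimal sketch, using only the table's numbers (PFLOPS converted to TFLOPS for a common unit):

```python
# Spec-table values: FP16 tensor throughput (TFLOPS), memory bandwidth (TB/s), VRAM (GB).
b200 = {"fp16_tflops": 4500.0, "bw_tbs": 8.0, "vram_gb": 192}
h200 = {"fp16_tflops": 989.5, "bw_tbs": 4.8, "vram_gb": 141}

fp16_gap = b200["fp16_tflops"] / h200["fp16_tflops"]  # ~4.55x (the "4.5x" headline)
bw_gap = b200["bw_tbs"] / h200["bw_tbs"]              # ~1.67x
vram_gap = b200["vram_gb"] / h200["vram_gb"]          # ~1.36x

print(f"FP16: {fp16_gap:.2f}x, bandwidth: {bw_gap:.2f}x, VRAM: {vram_gap:.2f}x")
```

Note the compute gap (~4.5x) is much larger than the memory-bandwidth gap (~1.67x), so bandwidth-bound workloads will see a smaller real-world speedup than the FP16 headline suggests.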
## Price Comparison
| Metric | B200 SXM 192GB | H200 SXM 141GB |
|---|---|---|
| Cheapest On-Demand | $6.50/hr | $3.49/hr |
| Cheapest Spot | $4.32/hr | $2.52/hr |
| Providers Available | 4 | 4 |
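Combining the two tables gives the performance-per-dollar figure cited below. A quick sketch using the cheapest on-demand prices and FP16 throughput:

```python
# FP16 throughput (TFLOPS) and cheapest on-demand price ($/hr) from the tables above.
fp16_tflops = {"B200": 4500.0, "H200": 989.5}
price_hr = {"B200": 6.50, "H200": 3.49}

for gpu in fp16_tflops:
    value = fp16_tflops[gpu] / price_hr[gpu]
    print(f"{gpu}: {value:.0f} FP16 TFLOPS per $/hr")
# B200: ~692, H200: ~284 — the B200's compute lead outweighs its price premium.
```

By this metric the B200 wins despite costing nearly twice as much per hour; if your workload is memory-bandwidth-bound rather than compute-bound, the picture is closer.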
## Verdict

- **Best for Training**: NVIDIA B200 SXM 192GB (4.5 PFLOPS FP16 with 192 GB VRAM)
- **Best Value**: NVIDIA B200 SXM 192GB (~692 FP16 TFLOPS per $/hr)
- **Best for Inference**: NVIDIA B200 SXM 192GB (9.0 PFLOPS FP8)
## Use-Case Recommendations

- **Large-Scale Training** (training LLMs and large multi-modal models). Winner: B200 SXM 192GB. 4.5 PFLOPS FP16 with 192 GB of HBM3e provides the best training throughput.
- **Inference at Scale** (deploying models in production for real-time inference). Winner: B200 SXM 192GB. 9.0 PFLOPS FP8 gives superior inference throughput.
- **Budget-Conscious Workloads** (getting the best performance per dollar). Winner: B200 SXM 192GB. Even at $6.50/hr, it delivers the most FP16 TFLOPS per dollar (~692 vs ~284 for the H200).
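For inference sizing, the VRAM difference matters as much as the compute gap. A rough sketch of the largest dense model whose weights fit on a single card, assuming a hypothetical 20% reservation for KV cache and activations (real headroom varies widely by workload):

```python
def max_params_b(vram_gb: float, bytes_per_param: float, overhead: float = 0.20) -> float:
    """Rough weights-only capacity in billions of parameters.

    `overhead` is an assumed fraction of VRAM reserved for KV cache and
    activations, not a measured figure.
    """
    usable_gb = vram_gb * (1 - overhead)
    # GB divided by bytes-per-param yields billions of params (1 GB = 1e9 bytes).
    return usable_gb / bytes_per_param

for name, vram in [("B200 SXM 192GB", 192), ("H200 SXM 141GB", 141)]:
    print(f"{name}: ~{max_params_b(vram, 2):.0f}B params at FP16, "
          f"~{max_params_b(vram, 1):.0f}B at FP8")
```

Under these assumptions the B200's 192 GB fits roughly a 77B-parameter model at FP16 (~154B at FP8) versus roughly 56B (~113B) for the H200, before accounting for batch size or context length.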