H200 SXM 141GB vs L40S

Compare NVIDIA H200 SXM 141GB and NVIDIA L40S specs, performance, and cloud pricing

H200 SXM 141GB

141GB

From $3.49/hr

L40S

48GB

From $0.820/hr

Architecture

Hopper

vs Ada Lovelace

FP16 Gap

2.7x

H200 SXM 141GB leads

| Specification | H200 SXM 141GB | L40S |
| --- | --- | --- |
| VRAM | 141 GB | 48 GB |
| VRAM Type | HBM3e | GDDR6 |
| FP16 Compute | 989.5 TFLOPS | 366.5 TFLOPS |
| FP8 Compute | 2.0 PFLOPS | 733 TFLOPS |
| Memory Bandwidth | 4.8 TB/s | 864 GB/s |
| TDP | 700W | 350W |
| Interconnect | NVLink 4 | PCIe Gen4 |
| Architecture | Hopper | Ada Lovelace |
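The headline gaps follow directly from the table above; a quick sketch (values copied from the table) recomputes them:

```python
# Spec values from the comparison table above
h200 = {"fp16_tflops": 989.5, "bandwidth_tbs": 4.8, "vram_gb": 141}
l40s = {"fp16_tflops": 366.5, "bandwidth_tbs": 0.864, "vram_gb": 48}

fp16_gap = h200["fp16_tflops"] / l40s["fp16_tflops"]    # ~2.7x
bw_gap = h200["bandwidth_tbs"] / l40s["bandwidth_tbs"]  # ~5.6x
vram_gap = h200["vram_gb"] / l40s["vram_gb"]            # ~2.9x
print(f"FP16 gap: {fp16_gap:.1f}x, bandwidth: {bw_gap:.1f}x, VRAM: {vram_gap:.1f}x")
```

Note that the memory-bandwidth gap (5.6x) is roughly twice the compute gap (2.7x), which matters for bandwidth-bound workloads such as LLM inference at small batch sizes.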

Price Comparison

| Metric | H200 SXM 141GB | L40S |
| --- | --- | --- |
| Cheapest On-Demand | $3.49/hr | $0.820/hr |
| Cheapest Spot | $2.52/hr | $0.440/hr |
| Providers Available | 4 | 5 |
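Dividing each card's FP16 throughput by its hourly rate gives the price-performance behind the verdicts below; a minimal sketch using the prices and TFLOPS listed above:

```python
# FP16 TFLOPS per $/hr, using the cheapest rates from the price table
gpus = {
    "H200 SXM 141GB": {"fp16": 989.5, "on_demand": 3.49, "spot": 2.52},
    "L40S":           {"fp16": 366.5, "on_demand": 0.82, "spot": 0.44},
}
for name, g in gpus.items():
    od = g["fp16"] / g["on_demand"]  # L40S: ~447, H200: ~284
    sp = g["fp16"] / g["spot"]       # L40S: ~833, H200: ~393
    print(f"{name}: {od:.0f} TFLOPS per $/hr on-demand, {sp:.0f} spot")
```

By this metric the L40S leads on both on-demand and spot pricing, even though the H200 wins every raw-throughput row.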

Verdict

Best for Training

NVIDIA H200 SXM 141GB

989.5 TFLOPS FP16 with 141GB VRAM

Best Value

NVIDIA L40S

~447 FP16 TFLOPS per $/hr on-demand

Best for Inference

NVIDIA H200 SXM 141GB

2.0 PFLOPS FP8 throughput

Use-Case Recommendations

Large-Scale Training

Training LLMs and large multi-modal models

Winner

H200 SXM 141GB

989.5 TFLOPS FP16 with 141GB HBM3e provides the best training throughput.

Inference at Scale

Deploying models in production for real-time inference

Winner

H200 SXM 141GB

2.0 PFLOPS of FP8 compute gives superior inference throughput.

Budget-Conscious Workloads

Getting the best performance per dollar

Winner

L40S

Starting at $0.820/hr, the L40S delivers the best TFLOPS per dollar.
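The cost-versus-speed trade-off can be made concrete with a worked example. The sketch below prices a hypothetical fixed-size FP16 job (the 10,000 TFLOP-hour figure is illustrative, and it assumes ideal peak-rate utilization, which real workloads won't reach):

```python
# Hypothetical job: 10,000 TFLOP-hours of FP16 work (illustrative size).
# Assumes peak throughput; real-world utilization will be lower on both GPUs.
job_tflop_hours = 10_000

for name, tflops, price in [("H200 SXM 141GB", 989.5, 3.49),
                            ("L40S", 366.5, 0.82)]:
    hours = job_tflop_hours / tflops   # wall-clock GPU-hours
    cost = hours * price               # total on-demand cost
    print(f"{name}: {hours:.1f} GPU-hours, ${cost:.2f}")
```

Under these assumptions the L40S finishes the same work for roughly 1.6x less money, while the H200 finishes it about 2.7x sooner, so the right pick depends on whether budget or wall-clock time dominates.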
