A100 PCIe 40GB vs L40S
Compare NVIDIA A100 PCIe 40GB and NVIDIA L40S specs, performance, and cloud pricing.

| | A100 PCIe 40GB | L40S |
|---|---|---|
| VRAM | 40 GB | 48 GB |
| Starting Price | From $0.850/hr | From $0.820/hr |
| Architecture | Ampere | Ada Lovelace |
| FP16 Compute | 1.7x lead | baseline |
| Specification | A100 PCIe 40GB | L40S |
|---|---|---|
| VRAM | 40 GB | 48 GB |
| VRAM Type | HBM2e | GDDR6 |
| FP16 TFLOPS | 624 TFLOPS | 366.5 TFLOPS |
| FP8 TFLOPS | N/A | 733 TFLOPS |
| Memory Bandwidth | 1.6 TB/s | 864 GB/s |
| TDP | 250W | 350W |
| Interconnect | PCIe Gen4 | PCIe Gen4 |
| Architecture | Ampere | Ada Lovelace |
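The headline 1.7x FP16 gap follows directly from the table's figures. A quick check in Python (assuming, as the table appears to, sparsity-enabled Tensor Core ratings):

```python
# Verify the FP16 gap from the spec table above.
a100_fp16 = 624.0   # A100 PCIe 40GB FP16 Tensor TFLOPS (per the table)
l40s_fp16 = 366.5   # L40S FP16 Tensor TFLOPS (per the table)

gap = a100_fp16 / l40s_fp16
print(f"FP16 gap: {gap:.1f}x in favor of the A100")  # FP16 gap: 1.7x in favor of the A100
```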
Price Comparison
| Metric | A100 PCIe 40GB | L40S |
|---|---|---|
| Cheapest On-Demand | $0.850/hr | $0.820/hr |
| Cheapest Spot | $0.480/hr | $0.440/hr |
| Providers Available | 4 | 5 |
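Combining the price table with the FP16 figures from the spec table gives a simple performance-per-dollar view, which is where the "best value" verdict below comes from. A minimal sketch using the cheapest on-demand rates:

```python
# Performance per dollar: FP16 TFLOPS divided by cheapest on-demand rate.
gpus = {
    "A100 PCIe 40GB": {"fp16_tflops": 624.0, "usd_per_hr": 0.850},
    "L40S":           {"fp16_tflops": 366.5, "usd_per_hr": 0.820},
}

for name, g in gpus.items():
    ratio = g["fp16_tflops"] / g["usd_per_hr"]
    print(f"{name}: {ratio:.0f} FP16 TFLOPS per $/hr")
```

The L40S is marginally cheaper per hour, but the A100's higher raw throughput gives it roughly 734 TFLOPS per $/hr versus roughly 447 for the L40S.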
Verdict
| Category | Pick | Why |
|---|---|---|
| Best for Training | NVIDIA A100 PCIe 40GB | 624 TFLOPS FP16 with 40GB VRAM |
| Best Value | NVIDIA A100 PCIe 40GB | ~734 FP16 TFLOPS per $/hr |
| Best for Inference | NVIDIA L40S | 733 TFLOPS FP8 |
Use-Case Recommendations

Large-Scale Training: training LLMs and large multi-modal models.
Winner: A100 PCIe 40GB. 624 TFLOPS FP16 paired with 40GB of HBM2e provides the best training throughput.

Inference at Scale: deploying models in production for real-time inference.
Winner: L40S. 733 TFLOPS of FP8 compute gives superior inference throughput.

Budget-Conscious Workloads: getting the best performance per dollar.
Winner: A100 PCIe 40GB. Although it costs slightly more at $0.850/hr, its higher FP16 throughput works out to ~734 TFLOPS per $/hr versus ~447 for the L40S.
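The three recommendations above can be collapsed into a small lookup helper. This is a hypothetical sketch, not an official API; the workload category names are illustrative:

```python
# Hypothetical helper encoding the use-case verdicts above.
# The category keys ("training", "inference", "budget") are illustrative.
def recommend_gpu(workload: str) -> str:
    picks = {
        "training": "A100 PCIe 40GB",   # 624 TFLOPS FP16, 40GB HBM2e
        "inference": "L40S",            # 733 TFLOPS FP8
        "budget": "A100 PCIe 40GB",     # ~734 FP16 TFLOPS per $/hr
    }
    if workload not in picks:
        raise ValueError(f"unknown workload: {workload!r}")
    return picks[workload]

print(recommend_gpu("inference"))  # L40S
```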