# L40S vs RTX 4090

Compare NVIDIA L40S and NVIDIA RTX 4090 specs, performance, and cloud pricing.

- **L40S** — 48 GB VRAM, from $0.820/hr
- **RTX 4090** — 24 GB VRAM, from $0.370/hr
- **Architecture** — Ada Lovelace vs Ada Lovelace
- **FP16 gap** — 4.4x, L40S leads

## Specifications
| Specification | L40S | RTX 4090 |
|---|---|---|
| VRAM | 48 GB | 24 GB |
| VRAM Type | GDDR6 | GDDR6X |
| FP16 TFLOPS | 366.5 TFLOPS | 83 TFLOPS |
| FP8 TFLOPS | 733 TFLOPS | 166 TFLOPS |
| Memory Bandwidth | 864 GB/s | 1,008 GB/s |
| TDP | 350W | 450W |
| Interconnect | PCIe Gen4 | PCIe Gen4 (no NVLink on either) |
| Architecture | Ada Lovelace | Ada Lovelace |
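The practical consequence of the VRAM gap can be sketched with a common rule of thumb (an assumption, not a vendor figure): FP16 inference needs roughly 2 bytes per parameter, plus headroom for activations and KV cache.

```python
# Rough sizing sketch, assuming ~2 bytes/param at FP16 and ~20% runtime overhead.
# These ratios are rules of thumb, not measured values.
def max_params_billions(vram_gb: float, bytes_per_param: float = 2.0,
                        overhead: float = 0.2) -> float:
    """Largest model (billions of parameters) that plausibly fits in VRAM."""
    usable_gb = vram_gb * (1.0 - overhead)   # reserve headroom for activations/KV cache
    return usable_gb / bytes_per_param       # GB / (bytes per param) = billions of params

for name, vram in [("L40S", 48), ("RTX 4090", 24)]:
    print(f"{name}: ~{max_params_billions(vram):.0f}B params at FP16")
# → L40S: ~19B params at FP16
# → RTX 4090: ~10B params at FP16
```

Under these assumptions, the 48 GB card comfortably serves models that simply do not fit on the 24 GB card without quantization or offloading.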
## Price Comparison
| Metric | L40S | RTX 4090 |
|---|---|---|
| Cheapest On-Demand | $0.820/hr | $0.370/hr |
| Cheapest Spot | $0.440/hr | $0.280/hr |
| Providers Available | 5 | 3 |
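The spot discounts in the table above differ noticeably between the two cards; a quick calculation makes the gap explicit:

```python
# Spot vs. on-demand savings, computed from the price table above ($/hr).
prices = {
    "L40S":     (0.820, 0.440),   # (on-demand, spot)
    "RTX 4090": (0.370, 0.280),
}

for gpu, (on_demand, spot) in prices.items():
    saving_pct = (on_demand - spot) / on_demand * 100
    print(f"{gpu}: spot saves {saving_pct:.0f}%")
# → L40S: spot saves 46%
# → RTX 4090: spot saves 24%
```

Spot capacity nearly halves the L40S rate, while the RTX 4090 discount is more modest.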
## Verdict

- **Best for Training:** NVIDIA L40S — 366.5 TFLOPS FP16 with 48 GB VRAM
- **Best Value:** NVIDIA L40S — 447 TFLOPS per $/hr
- **Best for Inference:** NVIDIA L40S — 733 TFLOPS FP8
## Use-Case Recommendations
### Large-Scale Training

Training LLMs and large multi-modal models.

**Winner: L40S** — 366.5 TFLOPS FP16 with 48 GB of GDDR6 provides the best training throughput.
### Inference at Scale

Deploying models in production for real-time inference.

**Winner: L40S** — 733 TFLOPS FP8 gives superior inference throughput.
### Budget-Conscious Workloads

Getting the best performance per dollar.

**Winner: L40S** — At $0.820/hr, the L40S delivers roughly 447 FP16 TFLOPS per dollar-hour, versus about 224 for the RTX 4090 at $0.370/hr.
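The performance-per-dollar figures above follow directly from the spec and price tables:

```python
# FP16 TFLOPS per $/hr, reproducing the "best value" verdict from the
# document's own numbers (spec table + cheapest on-demand prices).
specs = {
    "L40S":     (366.5, 0.820),   # (FP16 TFLOPS, cheapest on-demand $/hr)
    "RTX 4090": (83.0,  0.370),
}

for gpu, (tflops, price) in specs.items():
    print(f"{gpu}: {tflops / price:.0f} TFLOPS per $/hr")
# → L40S: 447 TFLOPS per $/hr
# → RTX 4090: 224 TFLOPS per $/hr
```

Despite its higher hourly rate, the L40S comes out roughly 2x ahead on FP16 throughput per dollar.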