A100 SXM4 80GB vs L40S

Compare NVIDIA A100 SXM4 80GB and NVIDIA L40S specs, performance, and cloud pricing

A100 SXM4 80GB

80GB

From $1.10/hr

L40S

48GB

From $0.82/hr

Architecture

Ampere

vs Ada Lovelace

FP16 Gap

1.7x

A100 SXM4 80GB leads

| Specification | A100 SXM4 80GB | L40S |
| --- | --- | --- |
| VRAM | 80 GB | 48 GB |
| VRAM Type | HBM2e | GDDR6 |
| FP16 TFLOPS | 624 TFLOPS | 366.5 TFLOPS |
| FP8 TFLOPS | N/A | 733 TFLOPS |
| Memory Bandwidth | 2.0 TB/s | 864 GB/s |
| TDP | 400W | 350W |
| Interconnect | NVLink 3 | PCIe Gen4 |
| Architecture | Ampere | Ada Lovelace |
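The "1.7x" FP16 gap quoted above falls straight out of the spec table; a minimal sketch of the arithmetic:

```python
# FP16 throughput from the spec table (TFLOPS, tensor cores).
a100_fp16 = 624.0   # A100 SXM4 80GB
l40s_fp16 = 366.5   # L40S

# Ratio of A100 to L40S FP16 throughput.
gap = a100_fp16 / l40s_fp16
print(f"FP16 gap: {gap:.1f}x")  # → FP16 gap: 1.7x
```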

Price Comparison

| Metric | A100 SXM4 80GB | L40S |
| --- | --- | --- |
| Cheapest On-Demand | $1.10/hr | $0.82/hr |
| Cheapest Spot | $0.76/hr | $0.44/hr |
| Providers Available | 6 | 5 |
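Spot pricing cuts the hourly rate substantially for both cards. A quick sketch of the discount implied by the table above:

```python
# Cheapest on-demand vs spot rates ($/hr) from the price table.
prices = {
    "A100 SXM4 80GB": {"on_demand": 1.10, "spot": 0.76},
    "L40S": {"on_demand": 0.82, "spot": 0.44},
}

for gpu, p in prices.items():
    # Fractional saving of spot relative to on-demand.
    discount = 1 - p["spot"] / p["on_demand"]
    print(f"{gpu}: spot is {discount:.0%} cheaper than on-demand")
```

The L40S sees the larger spot discount (roughly 46% vs roughly 31% for the A100), which can matter for interruptible workloads.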

Verdict

Best for Training

NVIDIA A100 SXM4 80GB

624 TFLOPS FP16 with 80GB VRAM

Best Value

NVIDIA A100 SXM4 80GB

567 TFLOPS per $/hr

Best for Inference

NVIDIA L40S

733 TFLOPS FP8

Use-Case Recommendations

Large-Scale Training

Training LLMs and large multi-modal models

Winner

A100 SXM4 80GB

624 TFLOPS FP16 with 80GB HBM2e provides the best training throughput.

Inference at Scale

Deploying models in production for real-time inference

Winner

L40S

733 TFLOPS FP8 gives superior inference throughput.

Budget-Conscious Workloads

Getting the best performance per dollar

Winner

A100 SXM4 80GB

At $1.10/hr, the A100's 624 FP16 TFLOPS works out to roughly 567 TFLOPS per dollar-hour, ahead of the L40S's roughly 447.
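The performance-per-dollar figures behind this verdict follow directly from the spec and price tables; a minimal sketch:

```python
# (FP16 TFLOPS, cheapest on-demand $/hr) from the tables above.
gpus = {
    "A100 SXM4 80GB": (624.0, 1.10),
    "L40S": (366.5, 0.82),
}

for name, (tflops, price) in gpus.items():
    # FP16 throughput delivered per dollar of hourly spend.
    print(f"{name}: {tflops / price:.0f} TFLOPS per $/hr")
# → A100 SXM4 80GB: 567 TFLOPS per $/hr
# → L40S: 447 TFLOPS per $/hr
```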
