A100 PCIe 40GB vs RTX 4090

Compare NVIDIA A100 PCIe 40GB and NVIDIA RTX 4090 specs, performance, and cloud pricing

A100 PCIe 40GB: 40 GB VRAM, from $0.850/hr

RTX 4090: 24 GB VRAM, from $0.370/hr

Architecture: Ampere vs Ada Lovelace

FP16 Gap: 7.5x (A100 PCIe 40GB leads)

| Specification | A100 PCIe 40GB | RTX 4090 |
| --- | --- | --- |
| VRAM | 40 GB | 24 GB |
| VRAM Type | HBM2e | GDDR6X |
| FP16 TFLOPS | 624 TFLOPS | 83 TFLOPS |
| FP8 TFLOPS | N/A | 166 TFLOPS |
| Memory Bandwidth | 1.6 TB/s | 1.0 TB/s |
| TDP | 250W | 450W |
| Interconnect | PCIe Gen4 | None |
| Architecture | Ampere | Ada Lovelace |
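The VRAM row is often the deciding spec: a model's weights alone set a floor on memory. As a rough sketch (our own helper, not a vendor tool; real usage adds activations, KV cache, and framework overhead on top of weights):

```python
# Rough VRAM floor for holding a model's weights at a given precision.
# Illustrative only: actual serving needs extra memory beyond the weights.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def weights_gb(params_billion: float, dtype: str = "fp16") -> float:
    """GB needed just to store the weights at the given precision."""
    return params_billion * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

# VRAM capacities from the spec table above.
gpus = {"A100 PCIe 40GB": 40, "RTX 4090": 24}

for model_b in (7, 13, 34):
    need = weights_gb(model_b, "fp16")
    fits = [name for name, vram in gpus.items() if vram >= need]
    print(f"{model_b}B fp16 weights ~ {need:.0f} GB -> fits: {fits or 'neither'}")
```

By this estimate a 7B model's fp16 weights (~14 GB) fit on either card, a 13B model (~26 GB) only on the A100's 40 GB, and a 34B model (~68 GB) on neither without quantization or multi-GPU sharding.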

Price Comparison

| Metric | A100 PCIe 40GB | RTX 4090 |
| --- | --- | --- |
| Cheapest On-Demand | $0.850/hr | $0.370/hr |
| Cheapest Spot | $0.480/hr | $0.280/hr |
| Providers Available | 4 | 3 |
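The value figures quoted in the verdict below can be reproduced directly from these two tables. A quick sketch (the dictionary layout is ours; all numbers come from the tables above):

```python
# FP16 TFLOPS per on-demand $/hr, plus spot discount, from the tables above.
specs = {
    "A100 PCIe 40GB": {"fp16_tflops": 624, "on_demand": 0.850, "spot": 0.480},
    "RTX 4090":       {"fp16_tflops": 83,  "on_demand": 0.370, "spot": 0.280},
}

for name, s in specs.items():
    per_dollar = s["fp16_tflops"] / s["on_demand"]       # throughput per $/hr
    spot_discount = 1 - s["spot"] / s["on_demand"]       # spot vs on-demand
    print(f"{name}: {per_dollar:.0f} TFLOPS per $/hr, spot saves {spot_discount:.0%}")
```

This gives roughly 734 TFLOPS per $/hr for the A100 versus 224 for the RTX 4090, with spot pricing saving about 44% and 24% respectively.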

Verdict

Best for Training: NVIDIA A100 PCIe 40GB (624 TFLOPS FP16 with 40GB VRAM)

Best Value: NVIDIA A100 PCIe 40GB (734 TFLOPS per $/hr on-demand)

Best for Inference: NVIDIA A100 PCIe 40GB (624 TFLOPS FP16)

Use-Case Recommendations

Large-Scale Training (training LLMs and large multi-modal models)

Winner: A100 PCIe 40GB. 624 TFLOPS FP16 with 40GB of HBM2e provides the best training throughput.

Inference at Scale (deploying models in production for real-time inference)

Winner: A100 PCIe 40GB. 624 TFLOPS FP16 gives superior inference throughput.

Budget-Conscious Workloads (getting the best performance per dollar)

Winner: A100 PCIe 40GB. At $0.850/hr on-demand it delivers about 734 FP16 TFLOPS per $/hr, versus roughly 224 for the RTX 4090 at $0.370/hr.
