NVIDIA A100 PCIe 80GB
80GB HBM2e | 624 TFLOPS FP16 | 2.0 TB/s bandwidth
From $1.05/hr
Use cases: Inference | Fine-tuning | HPC | Data analytics
Specifications

| Spec | Value |
|---|---|
| VRAM | 80 GB HBM2e |
| FP16 | 624 TFLOPS (with sparsity) |
| FP8 | N/A |
| FP4 | N/A |
| Memory BW | 2.0 TB/s |
| TDP | 300W |
| Interconnect | PCIe Gen4 |
| Architecture | Ampere |
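A common first question against these specs is whether a given model fits in the 80 GB of VRAM. The sketch below is a rough back-of-the-envelope check, assuming FP16 weights dominate memory use and a hypothetical overhead factor for KV cache, activations, and CUDA context (neither number comes from this page):

```python
def fits_in_vram(params_billions: float, vram_gb: float = 80.0,
                 bytes_per_param: int = 2, overhead: float = 1.2) -> bool:
    """Rough check: do FP16 weights (2 bytes/param) plus an assumed
    20% overhead fit in the given VRAM? 1B params ~ 2 GB at FP16."""
    needed_gb = params_billions * bytes_per_param * overhead
    return needed_gb <= vram_gb

print(fits_in_vram(13))  # 13B model: 13 * 2 * 1.2 = 31.2 GB -> True
print(fits_in_vram(70))  # 70B model: 70 * 2 * 1.2 = 168 GB  -> False
```

By this estimate a 13B model serves comfortably on a single card, while a 70B model would need quantization or multiple GPUs.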
Cloud GPU Pricing (3 offers)
| Provider | Instance Type | vCPUs | RAM | Price/hr | Price/mo | Spot Price | Availability | Action |
|---|---|---|---|---|---|---|---|---|
| Vast.ai (cheapest) | a100_pcie_80gb | 12 | 96 GB | $1.05/hr | $766.50/mo | $0.790 (-24.8%) | Available | Rent on Vast.ai |
| RunPod | a100-pcie-80gb | 16 | 125 GB | $1.44/hr | $1,051.20/mo | $1.01 (-29.9%) | Available | Deploy on RunPod |
| Amazon Web Services | p4d.24xlarge | 96 | 1152 GB | $1.85/hr | $1,350.50/mo | $0.740 (-60%) | Available | Deploy on AWS |
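The derived columns in the table follow two simple formulas: the Price/mo column appears to be the hourly rate times 730 hours, and the spot percentage is the discount relative to the on-demand rate. A minimal sketch reproducing the Vast.ai and AWS rows:

```python
HOURS_PER_MONTH = 730  # convention the Price/mo column appears to use

def monthly(hourly: float) -> float:
    """Monthly price from an hourly rate."""
    return round(hourly * HOURS_PER_MONTH, 2)

def spot_discount(on_demand: float, spot: float) -> float:
    """Percent saved by the spot price versus on-demand."""
    return round((1 - spot / on_demand) * 100, 1)

print(monthly(1.05))               # 766.5  -> matches the Vast.ai row
print(spot_discount(1.05, 0.79))   # 24.8   -> matches the Vast.ai spot discount
print(spot_discount(1.85, 0.74))   # 60.0   -> matches the AWS spot discount
```

Spot prices carry interruption risk, so the 60% AWS discount only pays off for workloads that checkpoint and resume cleanly.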
Last updated 57h ago
Frequently Asked Questions
What is the cheapest NVIDIA A100 PCIe 80GB cloud provider?
The cheapest NVIDIA A100 PCIe 80GB is available on Vast.ai at $1.05/hr (a100_pcie_80gb).
How much does NVIDIA A100 PCIe 80GB cost per hour?
NVIDIA A100 PCIe 80GB cloud GPU pricing ranges from $1.05/hr to $1.85/hr depending on the provider and configuration.
What are the specs of NVIDIA A100 PCIe 80GB?
NVIDIA A100 PCIe 80GB features 80GB HBM2e memory, 624 TFLOPS FP16 performance, 2.0 TB/s memory bandwidth, and 300W TDP. Architecture: Ampere.
Is NVIDIA A100 PCIe 80GB good for AI training?
NVIDIA A100 PCIe 80GB is primarily designed for: Inference, Fine-tuning, HPC, Data analytics. For large-scale training, consider higher-tier GPUs.
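The "cheapest provider" answer above is just a minimum over the offer list. A small sketch of that selection, using hypothetical records mirroring the pricing table (field names are illustrative, not an API):

```python
# Offer records mirroring the pricing table above (illustrative only).
offers = [
    {"provider": "Vast.ai", "instance": "a100_pcie_80gb", "price_hr": 1.05},
    {"provider": "RunPod", "instance": "a100-pcie-80gb", "price_hr": 1.44},
    {"provider": "Amazon Web Services", "instance": "p4d.24xlarge", "price_hr": 1.85},
]

# Pick the lowest on-demand hourly rate.
cheapest = min(offers, key=lambda o: o["price_hr"])
print(f'{cheapest["provider"]} at ${cheapest["price_hr"]:.2f}/hr')  # Vast.ai at $1.05/hr
```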
Related GPUs

| GPU | Memory | Architecture | From |
|---|---|---|---|
| H100 PCIe 80GB | 80GB HBM3 | Hopper | $1.68/hr |
| A100 SXM4 80GB | 80GB HBM2e | Ampere | $1.10/hr |
| A100 PCIe 40GB | 40GB HBM2e | Ampere | $0.850/hr |
| L40S | 48GB GDDR6 | Ada Lovelace | $0.820/hr |
| A10G | 24GB GDDR6 | Ampere | $0.540/hr |
| RTX A6000 | 48GB GDDR6 | Ampere | $0.520/hr |