Ampere GPU

NVIDIA A100 PCIe 80GB

80GB HBM2e | 624 TFLOPS FP16 | 2.0 TB/s bandwidth

From $1.05/hr

Use cases: Inference, Fine-tuning, HPC, Data analytics

Specifications

VRAM: 80 GB HBM2e
FP16: 624 TFLOPS
FP8: N/A
FP4: N/A
Memory BW: 2.0 TB/s
TDP: 300 W
Interconnect: PCIe Gen4
Architecture: Ampere

Cloud GPU Pricing (3 offers)

Provider | Instance Type | vCPUs | RAM | Price/hr | Price/mo | Spot Price | Availability
Vast.ai (cheapest) | a100_pcie_80gb | 12 | 96 GB | $1.05/hr | $766.50/mo | $0.79 (-24.8%) | Available
RunPod | a100-pcie-80gb | 16 | 125 GB | $1.44/hr | $1,051.20/mo | $1.01 (-29.9%) | Available
Amazon Web Services | p4d.24xlarge | 96 | 1152 GB | $1.85/hr | $1,350.50/mo | $0.74 (-60%) | Available
Last updated 57h ago
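The derived columns in the table follow from simple arithmetic. A minimal sketch, assuming the listing uses a 730-hour month and defines the spot discount as the percentage below the on-demand rate (the function names are illustrative, not from the listing):

```python
# Reproduce the table's derived figures.
# Assumption: Price/mo = Price/hr * 730 hours; spot discount = 1 - spot/on-demand.

def monthly_price(hourly: float, hours_per_month: int = 730) -> float:
    """Convert an hourly rate to the monthly figure shown in the table."""
    return round(hourly * hours_per_month, 2)

def spot_discount(on_demand: float, spot: float) -> float:
    """Spot discount as a percentage below the on-demand rate."""
    return round((1 - spot / on_demand) * 100, 1)

# Vast.ai row: $1.05/hr -> $766.50/mo; spot $0.79 is 24.8% off
print(monthly_price(1.05))        # 766.5
print(spot_discount(1.05, 0.79))  # 24.8
```

The same functions reproduce the RunPod ($1.44/hr → $1,051.20/mo) and AWS ($1.85/hr → $1,350.50/mo) rows.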


Frequently Asked Questions

What is the cheapest NVIDIA A100 PCIe 80GB cloud provider?
The cheapest NVIDIA A100 PCIe 80GB is available on Vast.ai at $1.05/hr (a100_pcie_80gb).
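The cheapest-provider answer is just a minimum over the offer list. A minimal sketch, with the rates taken from the table above (the tuple structure itself is illustrative):

```python
# Offers as (provider, instance_type, price_per_hour), from the pricing table.
offers = [
    ("Vast.ai", "a100_pcie_80gb", 1.05),
    ("RunPod", "a100-pcie-80gb", 1.44),
    ("Amazon Web Services", "p4d.24xlarge", 1.85),
]

# Pick the offer with the lowest hourly rate.
cheapest = min(offers, key=lambda o: o[2])
print(f"{cheapest[0]} at ${cheapest[2]:.2f}/hr ({cheapest[1]})")
# -> Vast.ai at $1.05/hr (a100_pcie_80gb)
```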
How much does NVIDIA A100 PCIe 80GB cost per hour?
NVIDIA A100 PCIe 80GB cloud GPU pricing ranges from $1.05/hr to $1.85/hr depending on the provider and configuration.
What are the specs of NVIDIA A100 PCIe 80GB?
NVIDIA A100 PCIe 80GB features 80GB HBM2e memory, 624 TFLOPS FP16 performance, 2.0 TB/s memory bandwidth, and 300W TDP. Architecture: Ampere.
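A quick way to reason about the 80 GB figure: FP16 weights take roughly 2 bytes per parameter, so weight footprint alone bounds what fits on one card. A rough rule-of-thumb sketch, not from the listing (it ignores activations, KV cache, and framework overhead):

```python
# Assumption: FP16/BF16 weights occupy ~2 bytes per parameter (1 GB = 1e9 bytes).

def fp16_weight_gb(params_billion: float) -> float:
    """Approximate FP16 weight footprint in GB: params * 2 bytes."""
    return params_billion * 2.0

print(fp16_weight_gb(13))  # 26.0  -> fits comfortably in 80 GB
print(fp16_weight_gb(70))  # 140.0 -> needs multiple A100s
```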
Is NVIDIA A100 PCIe 80GB good for AI training?
The NVIDIA A100 PCIe 80GB is primarily suited to inference, fine-tuning, HPC, and data analytics workloads. For large-scale training, consider higher-tier GPUs.
