Ampere GPU

NVIDIA A100 PCIe 40GB

40GB HBM2e | 624 TFLOPS FP16 (Tensor Core, with sparsity) | 1.6 TB/s memory bandwidth

From $0.850/hr

Use cases: Inference, Training (smaller models), Data analytics

Specifications

VRAM: 40 GB HBM2e
FP16: 624 TFLOPS (with sparsity)
FP8: N/A
FP4: N/A
Memory BW: 1.6 TB/s
TDP: 250W
Interconnect: PCIe Gen4
Architecture: Ampere

Cloud GPU Pricing (4 offers)

| Provider | Instance Type | vCPUs | RAM | Price/hr | Price/mo | Spot Price | Availability |
|---|---|---|---|---|---|---|---|
| Vast.ai (cheapest) | a100_pcie_40gb | 8 | 64 GB | $0.850 | $620.50 | $0.640 (-24.7%) | Available |
| RunPod | a100-pcie-40gb | 16 | 125 GB | $0.890 | $649.70 | $0.620 (-30.3%) | Available |
| Lambda Cloud | gpu_1x_a100_pcie | 14 | 100 GB | $0.990 | $722.70 | n/a | Available |
| Amazon Web Services | p4.8xlarge | 32 | 384 GB | $1.20 | $876.00 | $0.480 (-60%) | Available |
Last updated 57h ago


Frequently Asked Questions

What is the cheapest NVIDIA A100 PCIe 40GB cloud provider?
The cheapest NVIDIA A100 PCIe 40GB is available on Vast.ai at $0.850/hr (a100_pcie_40gb).
How much does NVIDIA A100 PCIe 40GB cost per hour?
NVIDIA A100 PCIe 40GB cloud GPU pricing ranges from $0.850/hr to $1.20/hr depending on the provider and configuration.
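The monthly and spot figures above follow directly from the hourly rates. A minimal sketch, assuming the listings bill a month as 730 hours (the published $/mo values are consistent with that assumption; the helper names are illustrative, not from any provider API):

```python
# Reproduce the Price/mo and spot-discount figures from the pricing table.
# Assumption: 730 billable hours per month (matches the listed $/mo values).
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float) -> float:
    """On-demand cost of running the GPU for a full month."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

def spot_discount_pct(on_demand: float, spot: float) -> float:
    """Percentage saved by running on spot instead of on-demand."""
    return round((on_demand - spot) / on_demand * 100, 1)

print(monthly_cost(0.850))             # Vast.ai on-demand -> 620.5
print(spot_discount_pct(1.20, 0.480))  # AWS spot discount -> 60.0
```

Running the same two helpers over every row reproduces each Price/mo and spot-discount percentage in the table.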
What are the specs of NVIDIA A100 PCIe 40GB?
NVIDIA A100 PCIe 40GB features 40GB HBM2e memory, 624 TFLOPS FP16 performance, 1.6 TB/s memory bandwidth, and 250W TDP. Architecture: Ampere.
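A quick way to relate the 40 GB figure to real workloads is a back-of-the-envelope VRAM check. A minimal sketch, assuming FP16 weights at 2 bytes per parameter and a ~20% overhead factor for activations and KV cache (a common rule of thumb, not a figure from this page; the function name is illustrative):

```python
# Rough check: does a model of a given size fit in the A100's 40 GB VRAM?
# Assumptions: FP16 weights (2 bytes/param) and ~20% runtime overhead for
# activations / KV cache. Both are rule-of-thumb values, not measured here.
def fits_in_vram(params_billions: float, vram_gb: float = 40.0,
                 bytes_per_param: int = 2, overhead: float = 1.2) -> bool:
    needed_gb = params_billions * bytes_per_param * overhead
    return needed_gb <= vram_gb

print(fits_in_vram(13))  # 13B params at FP16 -> ~31 GB, fits
print(fits_in_vram(30))  # 30B params at FP16 -> ~72 GB, does not fit
```

By this estimate, a single 40 GB card comfortably serves models up to roughly the 13B class in FP16, while larger models need quantization or multiple GPUs.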
Is NVIDIA A100 PCIe 40GB good for AI training?
Yes, the NVIDIA A100 PCIe 40GB is well-suited for AI training, particularly for smaller models that fit within its 40 GB of VRAM. Its key use cases are inference, training (smaller models), and data analytics.
