Ampere GPU
NVIDIA A100 PCIe 40GB
40GB HBM2 | 624 TFLOPS FP16 (Tensor, with sparsity) | 1.6 TB/s bandwidth
From $0.850/hr
Use cases: Inference · Training (smaller models) · Data analytics
Specifications

| Spec | Value |
|---|---|
| VRAM | 40 GB HBM2 |
| FP16 (Tensor, with sparsity) | 624 TFLOPS |
| FP8 | N/A |
| FP4 | N/A |
| Memory bandwidth | 1.6 TB/s |
| TDP | 250 W |
| Interconnect | PCIe Gen4 |
| Architecture | Ampere |
Cloud GPU Pricing (4 offers)
| Provider | Instance Type | vCPUs | RAM | Price/hr | Price/mo | Spot Price | Availability | Action |
|---|---|---|---|---|---|---|---|---|
| Vast.ai (cheapest) | a100_pcie_40gb | 8 | 64 GB | $0.850/hr | $620.50/mo | $0.640 (-24.7%) | Available | Rent on Vast.ai |
| RunPod | a100-pcie-40gb | 16 | 125 GB | $0.890/hr | $649.70/mo | $0.620 (-30.3%) | Available | Deploy on RunPod |
| Lambda Cloud | gpu_1x_a100_pcie | 14 | 100 GB | $0.990/hr | $722.70/mo | -- | Available | Deploy on Lambda |
| Amazon Web Services | p4.8xlarge | 32 | 384 GB | $1.20/hr | $876.00/mo | $0.480 (-60.0%) | Available | Deploy on AWS |
Last updated 57h ago
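The per-month and spot-discount columns follow directly from the hourly rates. A quick sanity check in Python, assuming the 730-hour billing month that the monthly figures imply (the provider names and rates are taken from the table above):

```python
# Recompute the monthly price and spot discount for each offer in the table.
# Assumes a 730-hour billing month (365 * 24 / 12), which matches the
# published monthly figures.

HOURS_PER_MONTH = 730

offers = {
    "Vast.ai":      {"hourly": 0.850, "spot": 0.640},
    "RunPod":       {"hourly": 0.890, "spot": 0.620},
    "Lambda Cloud": {"hourly": 0.990, "spot": None},  # no spot pricing listed
    "AWS":          {"hourly": 1.20,  "spot": 0.480},
}

for name, o in offers.items():
    monthly = o["hourly"] * HOURS_PER_MONTH
    line = f"{name:12s} ${o['hourly']:.3f}/hr  ${monthly:.2f}/mo"
    if o["spot"] is not None:
        discount = (1 - o["spot"] / o["hourly"]) * 100
        line += f"  spot ${o['spot']:.3f} (-{discount:.1f}%)"
    print(line)
```

Running this reproduces the table's figures, e.g. $0.850/hr × 730 h = $620.50/mo for Vast.ai, and a 60.0% spot discount for AWS.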
Frequently Asked Questions
What is the cheapest NVIDIA A100 PCIe 40GB cloud provider?
The cheapest NVIDIA A100 PCIe 40GB is available on Vast.ai at $0.850/hr (a100_pcie_40gb).
How much does NVIDIA A100 PCIe 40GB cost per hour?
NVIDIA A100 PCIe 40GB cloud GPU pricing ranges from $0.850/hr to $1.20/hr depending on the provider and configuration.
What are the specs of NVIDIA A100 PCIe 40GB?
NVIDIA A100 PCIe 40GB features 40GB of HBM2 memory, 624 TFLOPS FP16 Tensor performance (with sparsity), 1.6 TB/s memory bandwidth, and a 250W TDP. Architecture: Ampere.
Is NVIDIA A100 PCIe 40GB good for AI training?
Yes, the NVIDIA A100 PCIe 40GB is well-suited for training smaller models, as well as for inference and data analytics workloads.
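One way to gauge the "smaller models" caveat is memory: mixed-precision Adam training is often estimated at roughly 16 bytes per parameter (fp16 weights and gradients plus fp32 master weights and two optimizer moments), before counting activations. That figure is a common rule of thumb, not something stated on this page; a back-of-the-envelope sketch:

```python
# Back-of-the-envelope: how large a model fits for training in 40 GB of VRAM?
# Assumes ~16 bytes/parameter for mixed-precision Adam (fp16 weights + grads,
# fp32 master copy + two fp32 optimizer moments). Activation memory, which can
# be substantial, is deliberately not counted here.

VRAM_BYTES = 40e9        # 40 GB, from the spec sheet above
BYTES_PER_PARAM = 16     # rule-of-thumb assumption, not an exact figure

max_params = VRAM_BYTES / BYTES_PER_PARAM
print(f"~{max_params / 1e9:.1f}B parameters before activation memory")
```

This lands around 2.5B parameters in the optimistic case, which is why the card is listed for training smaller models; larger models need gradient checkpointing, sharding across GPUs, or offloading.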