# NVIDIA H200 SXM 141GB (Hopper)

141 GB HBM3e | 989.5 TFLOPS FP16 | 4.8 TB/s memory bandwidth

From $3.49/hr

Use cases: LLM training, large model inference, HPC, generative AI
## Specifications

| Spec | Value |
|---|---|
| VRAM | 141 GB HBM3e |
| FP16 | 989.5 TFLOPS |
| FP8 | 2.0 PFLOPS |
| FP4 | N/A |
| Memory bandwidth | 4.8 TB/s |
| TDP | 700 W |
| Interconnect | NVLink 4 |
| Architecture | Hopper |
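A quick way to read the VRAM figure: at FP16, model weights take 2 bytes per parameter, so 141 GB bounds the model size that fits on a single card. The sketch below is a rough back-of-the-envelope estimate, not a vendor sizing guide; the 20% headroom for activations and KV cache is an assumption, not a figure from this page.

```python
# Rough single-GPU inference sizing for the H200's 141 GB of VRAM.
# Assumptions (illustrative): FP16 weights at 2 bytes/parameter, and
# ~20% of VRAM reserved for activations and KV cache.
VRAM_GB = 141
BYTES_PER_PARAM_FP16 = 2
OVERHEAD_FRACTION = 0.20  # assumed headroom, not from the spec sheet

usable_bytes = VRAM_GB * 1e9 * (1 - OVERHEAD_FRACTION)
max_params_billion = usable_bytes / BYTES_PER_PARAM_FP16 / 1e9
print(f"~{max_params_billion:.0f}B parameters at FP16")  # ~56B
```

With FP8 weights (1 byte/parameter) the same headroom assumption roughly doubles the bound.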
## Cloud GPU Pricing (4 offers)
| Provider | Instance Type | vCPUs | RAM | Price/hr | Price/mo | Spot Price | Availability | Action |
|---|---|---|---|---|---|---|---|---|
| Lambda Cloud (cheapest) | gpu_1x_h200 | 48 | 480 GB | $3.49/hr | $2,547.70/mo | -- | Available | Deploy on Lambda |
| CoreWeave | h200-sxm-1x | 48 | 480 GB | $3.85/hr | $2,810.50/mo | -- | Available | Deploy on CoreWeave |
| Google Cloud Platform | a3-ultragpu-1g | 96 | 680 GB | $4.20/hr | $3,066.00/mo | $2.52/hr (-40%) | Available | Deploy on Google Cloud |
| Amazon Web Services | p5e.24xlarge | 96 | 768 GB | $4.50/hr | $3,285.00/mo | $2.70/hr (-40%) | Available | Deploy on AWS |
Last updated 57h ago
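The table's monthly and spot figures are derived from the hourly rates: each monthly price equals the hourly rate times 730 (the average hours in a month), and the spot prices correspond to the listed 40% discount. The snippet below just reproduces that arithmetic; the `offers` dict and 730-hour convention are assumptions inferred from the table, not an API of any provider.

```python
# Reproduce the derived prices in the table from the hourly rates.
# Monthly = hourly * 730 (average hours per month, inferred from the
# listed figures); spot = hourly * (1 - 0.40) for the rows that offer it.
HOURS_PER_MONTH = 730
SPOT_DISCOUNT = 0.40

offers = {  # provider -> on-demand $/hr, from the table above
    "Lambda Cloud": 3.49,
    "CoreWeave": 3.85,
    "Google Cloud Platform": 4.20,
    "Amazon Web Services": 4.50,
}

for provider, hourly in offers.items():
    monthly = hourly * HOURS_PER_MONTH
    spot = hourly * (1 - SPOT_DISCOUNT)
    print(f"{provider}: ${monthly:,.2f}/mo, spot ${spot:.2f}/hr")
```

For example, Lambda's $3.49/hr works out to $3.49 × 730 = $2,547.70/mo, matching the table.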
## Frequently Asked Questions

**What is the cheapest NVIDIA H200 SXM 141GB cloud provider?**

The cheapest NVIDIA H200 SXM 141GB offer is on Lambda Cloud at $3.49/hr (instance type gpu_1x_h200).

**How much does the NVIDIA H200 SXM 141GB cost per hour?**

NVIDIA H200 SXM 141GB cloud pricing ranges from $3.49/hr to $4.50/hr depending on the provider and configuration.

**What are the specs of the NVIDIA H200 SXM 141GB?**

The NVIDIA H200 SXM 141GB features 141 GB of HBM3e memory, 989.5 TFLOPS of FP16 performance, 4.8 TB/s of memory bandwidth, and a 700 W TDP, built on the Hopper architecture.

**Is the NVIDIA H200 SXM 141GB good for AI training?**

Yes. The NVIDIA H200 SXM 141GB is well suited to AI training workloads; key use cases include LLM training, large model inference, HPC, and generative AI.