Blackwell GPU

NVIDIA B200 SXM 192GB

192GB HBM3e | 4.5 PFLOPS FP16 | 8.0 TB/s bandwidth

From $6.50 /hr

Use cases: LLM training, large-scale inference, HPC, multi-modal AI

Specifications

VRAM: 192 GB HBM3e
FP16: 4.5 PFLOPS
FP8: 9.0 PFLOPS
FP4: 18.0 PFLOPS
Memory bandwidth: 8.0 TB/s
TDP: 1000 W
Interconnect: NVLink 5
Architecture: Blackwell
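The throughput figures above double with each halving of precision (4.5 → 9.0 → 18.0 PFLOPS). A minimal sketch of that relationship, using only the figures from the spec list (the function name is illustrative):

```python
# Peak throughput figures from the spec list above (PFLOPS at FP16).
FP16_PFLOPS = 4.5

def peak_pflops(bits: int, fp16: float = FP16_PFLOPS) -> float:
    """Peak throughput at a given precision, assuming throughput
    doubles each time the bit width halves (16 -> 8 -> 4),
    as the listed FP16/FP8/FP4 numbers indicate."""
    return fp16 * (16 / bits)

print(peak_pflops(16))  # 4.5
print(peak_pflops(8))   # 9.0
print(peak_pflops(4))   # 18.0
```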

Cloud GPU Pricing (4 offers)

| Provider | Instance Type | vCPUs | RAM | Price/hr | Price/mo | Spot Price | Availability |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CoreWeave (cheapest) | b200-sxm-1x | 64 | 512 GB | $6.50/hr | $4,745.00/mo | -- | Limited |
| Lambda Cloud | gpu_1x_b200 | 64 | 512 GB | $6.80/hr | $4,964.00/mo | -- | Available |
| Amazon Web Services | p6.24xlarge | 96 | 768 GB | $7.20/hr | $5,256.00/mo | $4.32/hr (-40%) | Limited |
| Google Cloud Platform | a4-highgpu-1g | 96 | 680 GB | $7.50/hr | $5,475.00/mo | $4.50/hr (-40%) | Waitlist |
Last updated 57h ago
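The Price/mo column appears to follow the common 730-hour billing month, and the listed spot prices are a flat 40% off on-demand. A minimal sketch verifying that arithmetic against the table (the constant and function names are illustrative, not any provider's API):

```python
# Hourly on-demand rates taken directly from the pricing table above.
HOURS_PER_MONTH = 730  # common cloud-billing convention: 730 h/month

ON_DEMAND = {
    "CoreWeave": 6.50,
    "Lambda Cloud": 6.80,
    "Amazon Web Services": 7.20,
    "Google Cloud Platform": 7.50,
}

def monthly_cost(hourly: float, hours: int = HOURS_PER_MONTH) -> float:
    """On-demand monthly cost at a given hourly rate."""
    return round(hourly * hours, 2)

def spot_price(hourly: float, discount: float = 0.40) -> float:
    """Spot price at a flat percentage discount off on-demand."""
    return round(hourly * (1 - discount), 2)

for provider, rate in ON_DEMAND.items():
    print(f"{provider}: ${monthly_cost(rate):,.2f}/mo")
# CoreWeave comes out at $4,745.00/mo, matching the Price/mo column.
```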


Frequently Asked Questions

What is the cheapest NVIDIA B200 SXM 192GB cloud provider?
CoreWeave offers the cheapest NVIDIA B200 SXM 192GB at $6.50/hr (instance type b200-sxm-1x).
How much does NVIDIA B200 SXM 192GB cost per hour?
NVIDIA B200 SXM 192GB cloud GPU pricing ranges from $6.50/hr to $7.50/hr depending on the provider and configuration.
What are the specs of NVIDIA B200 SXM 192GB?
NVIDIA B200 SXM 192GB features 192GB HBM3e memory, 4.5 PFLOPS FP16 performance, 8.0 TB/s memory bandwidth, and 1000W TDP. Architecture: Blackwell.
Is NVIDIA B200 SXM 192GB good for AI training?
Yes. NVIDIA B200 SXM 192GB is well suited to AI training workloads; key use cases include LLM training, large-scale inference, HPC, and multi-modal AI.
