The V100 is a high-performance data center GPU. Featuring 32GB of ultra-fast HBM2 memory, it is engineered for the most demanding workloads: AI model training, large language models (LLMs), and complex scientific computing.
Recommended Scenarios
Deep Learning Training
High-Performance Computing (HPC)
Matrix Operations (see the sketch after this list)
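To make the deep learning training and matrix operation scenarios concrete, here is a minimal sketch, assuming a Python environment with a CUDA-enabled PyTorch build; the matrix size, the timing approach, and the printed device name are illustrative assumptions, not part of the product specification.

```python
# Minimal sketch: run a half-precision matrix multiply on the GPU.
# Assumptions: PyTorch with CUDA support is installed; the 8192x8192 size is illustrative.
import time

import torch

assert torch.cuda.is_available(), "No CUDA-capable GPU visible to PyTorch"
print(torch.cuda.get_device_name(0))  # e.g. "Tesla V100-SXM2-32GB"

# FP16 matrix multiplies of this shape are dispatched to the Volta Tensor Cores.
a = torch.randn(8192, 8192, device="cuda", dtype=torch.float16)
b = torch.randn(8192, 8192, device="cuda", dtype=torch.float16)

torch.cuda.synchronize()  # make sure allocation and initialization have finished
start = time.perf_counter()
c = a @ b
torch.cuda.synchronize()  # wait for the kernel to complete before reading the clock
print(f"8192x8192 FP16 matmul took {time.perf_counter() - start:.4f} s")
```

Half-precision matrix multiplies like this are what the Tensor Cores accelerate, which is where the FP16 figure in the specification table below comes from.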
Specifications
Architecture: Volta
VRAM Capacity: 32GB HBM2
Memory Bandwidth: 900 GB/s
CUDA Cores: 5,120
FP16 Tensor Core Performance: 125 TFLOPS
Power (TDP): 300W
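The 125 TFLOPS FP16 figure is the V100's Tensor Core throughput, which deep learning frameworks typically reach through mixed-precision training. Below is a minimal sketch assuming PyTorch's automatic mixed precision (torch.autocast plus GradScaler); the model, batch size, and hyperparameters are placeholder assumptions rather than a recommended configuration.

```python
# Minimal mixed-precision training sketch targeting the V100's FP16 Tensor Cores.
# Assumptions: PyTorch >= 1.10 with CUDA; the model and hyperparameters are placeholders.
import torch
from torch import nn

device = "cuda"
model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss so FP16 gradients do not underflow

inputs = torch.randn(256, 4096, device=device)          # dummy batch
targets = torch.randint(0, 10, (256,), device=device)   # dummy labels

for step in range(10):
    optimizer.zero_grad(set_to_none=True)
    # The forward pass runs in FP16 where safe, so its matmuls hit the Tensor Cores.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```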