The A100 is a high-performance data center GPU. With 80 GB of ultra-fast HBM2e memory, it is engineered for demanding AI model training, large language model (LLM) workloads, and complex scientific computing.
Recommended Scenarios
- Scientific computing
- BERT training
- Large-scale data analytics
Specifications
- Architecture: Ampere
- VRAM Capacity: 80 GB HBM2e
- Memory Bandwidth: 1,935 GB/s
- CUDA Cores: 6,912
- FP16 Performance: 624 TFLOPS (Tensor Core, with sparsity)
- Power (TDP): 400 W
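One practical use of the bandwidth and FP16 figures above is estimating the card's roofline "ridge point": roughly how many floating-point operations a kernel must perform per byte of memory traffic before it becomes compute-bound rather than bandwidth-bound. A minimal sketch using the spec-sheet numbers (peak figures only; real kernels achieve less):

```python
# Roofline ridge-point estimate from the spec figures above.
fp16_tflops = 624        # peak FP16 Tensor Core throughput (TFLOPS, per the table)
bandwidth_gbs = 1935     # HBM2e memory bandwidth (GB/s, per the table)

peak_flops = fp16_tflops * 1e12   # FLOP/s
peak_bytes = bandwidth_gbs * 1e9  # bytes/s

# A kernel needs roughly this many FLOPs per byte moved from memory
# to saturate the compute units instead of the memory bus.
ridge_point = peak_flops / peak_bytes
print(f"ridge point ≈ {ridge_point:.0f} FLOP/byte")
```

Workloads with arithmetic intensity well below this ratio (for example, element-wise operations) are limited by the 1,935 GB/s of memory bandwidth, while dense matrix multiplications in LLM training can approach the compute peak.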