The A30 is a high-performance inference GPU built on the NVIDIA Ampere architecture. Featuring 24GB of ultra-fast HBM2 memory, it is engineered for mainstream AI inference (including large language models), mainstream model training, and scientific computing (HPC) workloads.
Recommended Scenarios
Mainstream AI Inference
HPC
Media Processing
Architecture: Ampere
VRAM Capacity: 24GB HBM2
Memory Bandwidth: 933 GB/s
CUDA Cores: 3584
FP16 Tensor Perf.: 165 TFLOPS (330 TFLOPS with structured sparsity)
Power (TDP): 165W
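As a quick sanity check against the specifications above, the following is a minimal sketch (an illustrative example, not part of the product documentation) that queries the CUDA runtime for the properties of each visible device. It assumes the CUDA Toolkit and an NVIDIA driver are installed; compile it with a command along the lines of nvcc query_device.cu -o query_device and run it on the target node. On an A30 the reported memory capacity and SM count should line up with the table.

```cuda
// Query device properties via the CUDA runtime and print the figures
// that correspond to the spec table above.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA device found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // On an A30, expect roughly 24 GB of HBM2 and 56 SMs
        // (56 SMs x 64 FP32 lanes = 3584 CUDA cores).
        std::printf("Device %d: %s\n", i, prop.name);
        std::printf("  Global memory : %.1f GB\n", prop.totalGlobalMem / 1e9);
        std::printf("  SM count      : %d\n", prop.multiProcessorCount);
        std::printf("  Memory bus    : %d-bit\n", prop.memoryBusWidth);
    }
    return 0;
}
```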