AI Analysis
The NVIDIA H200 is the first GPU to feature HBM3e memory, delivering 141 GB of capacity at 4.8 TB/s of bandwidth. This allows significantly larger models to be held in a single GPU's memory, reducing the complexity of multi-GPU orchestration. It is particularly effective for high-throughput LLM inference. We track H200 availability across leading cloud providers to help you secure this cutting-edge hardware.
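To make the "larger models in a single GPU" claim concrete, here is a back-of-the-envelope sketch (not an official sizing tool) of how many model parameters fit in 141 GB at common inference precisions. It counts weights only; a real deployment also needs headroom for the KV cache and activations.

```python
# Rough estimate: parameters that fit in the H200's 141 GB of HBM3e,
# counting model weights only (no KV cache or activation memory).
HBM_BYTES = 141e9  # 141 GB, using decimal gigabytes

BYTES_PER_PARAM = {
    "fp16": 2.0,   # half precision
    "fp8": 1.0,    # 8-bit floating point
    "int4": 0.5,   # 4-bit quantized weights
}

for precision, size in BYTES_PER_PARAM.items():
    params_billions = HBM_BYTES / size / 1e9
    print(f"{precision}: ~{params_billions:.1f}B parameters")
```

Under these assumptions, a ~70B-parameter model fits in a single H200 at fp16 without sharding, which is the practical payoff of the larger memory pool.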
The H200 is classified as a flagship-tier GPU (tier inferred).
Recommended Training/Usage Scenarios
Deep learning training
Model inference
Video encoding/decoding
Architecture
Hopper