Nvidia's H200 GPU Has Mind-Blowing Performance!
Nvidia Launches Next-Generation AI Processor H200
The new H200 GPU is an upgraded version of the current H100. It is the first GPU to use the HBM3e memory specification, which raises memory bandwidth from 3.35 TB/s on the H100 to 4.8 TB/s and lifts total memory capacity from the previous generation's 80 GB to 141 GB. Compared with the A100, bandwidth is 2.4 times higher and capacity has nearly doubled. In performance terms, the H200 is 60% to 90% faster than the H100, and inference speed on Llama 2 (a 70-billion-parameter LLM) nearly doubles.
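The cited ratios can be sanity-checked with quick arithmetic. In this sketch, the H200 and H100 figures come from the article itself; the A100 baseline of roughly 2.0 TB/s bandwidth and 80 GB capacity (the 80 GB SXM variant) is an assumption based on commonly quoted specs:

```python
# Verify the generation-over-generation ratios quoted above.
# H200/H100 numbers are from the article; the A100 baseline
# (~2.0 TB/s, 80 GB) is an assumed commonly-quoted spec.
h200_bw, h100_bw, a100_bw = 4.8, 3.35, 2.0    # memory bandwidth, TB/s
h200_mem, a100_mem = 141, 80                  # memory capacity, GB

print(f"Bandwidth vs H100: {h200_bw / h100_bw:.2f}x")  # ~1.43x
print(f"Bandwidth vs A100: {h200_bw / a100_bw:.2f}x")  # ~2.40x
print(f"Capacity vs A100:  {h200_mem / a100_mem:.2f}x")  # ~1.76x, "almost doubled"
```

Under these assumed A100 figures, the arithmetic matches the article's claims of a 2.4x bandwidth increase and a near-doubling of capacity relative to the A100.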
The H200 is expected to launch in the second quarter of 2024, supplied mainly to third-party computing providers such as CoreWeave, Lambda, and Vultr, as well as cloud vendors including Amazon, Google, Microsoft, and Oracle. It will compete with AMD's MI300X GPU. Although competition in the AI market remains fierce, Nvidia still holds the leading position in AI chips, with advantages in both hardware and its software ecosystem.
The main concern for the company's performance is the restriction on sales in the Chinese market. To comply with US export regulations, Nvidia has launched three chips specifically for that market: the HGX H20, L20 PCIe, and L2 PCIe. These chips offer lower performance, and pricing has not yet been announced; their main advantage remains Nvidia's mature software ecosystem. An export license is still required, and it is uncertain whether these three chips can satisfy both regulatory requirements and market demand.
Disclaimer: Community is offered by Moomoo Technologies Inc. and is for educational purposes only.