AI chip giants gather at Computex 2024: Will it bring new opportunities?

Nvidia: Steadily Ahead

Carter West · Jun 4 05:22
Last Sunday at the Computex conference in Taiwan, both Nvidia and AMD pulled out all the stops, showcasing their latest advancements in AI technology. The keynotes were packed with dense, hard-to-parse content, so let's summarize them in the simplest terms.
**Nvidia: Steadily Ahead**
1. **Energy Efficiency**: Jensen Huang emphasized that pairing GPUs with CPUs dramatically accelerates computational tasks, with substantial advantages in cost and energy consumption. He claimed Nvidia's technology can cut costs by up to 98% and reduce energy usage by 97%, figures highly appealing to investors. Nvidia's CUDA libraries also provide ready-to-use building blocks that speed up program development (see the GPU-vs-CPU sketch after this list).
2. **Blackwell Chip in Production**: Huang demonstrated the production version of the Blackwell chip, previously shown as a prototype at GTC. Compared to the Pascal chip from eight years ago, the Blackwell chip reduces the energy consumed for training the GPT-4 model (with 1.8 trillion parameters and 8 trillion tokens) to 1/350th.
3. **Exceeding Moore's Law; Rubin Platform in 2026**:
  - Annual product updates are planned.
  - The Blackwell Ultra, an upgraded version based on the Blackwell architecture, is set for 2025.
  - In 2026, the new Rubin platform will be launched, followed by Rubin Ultra in 2027.
  - Rubin Ultra GPU will integrate 12 HBM4 chips. According to wccftech, Rubin GPUs will adopt a 4x reticle design, utilize TSMC's 3nm process, and employ CoWoS-L packaging technology.
  - Confirmation of the next-gen CPU platform, Vera CPU, launching in 2026.
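To make the GPU-acceleration point in item 1 concrete, here is a minimal sketch (mine, not from the keynote) that times the same matrix multiply on the CPU with NumPy and on an Nvidia GPU with CuPy, which calls into CUDA's cuBLAS library under the hood. It assumes CuPy is installed and a CUDA-capable GPU is available; the actual speedup depends entirely on the hardware.

```python
# Rough illustration of drop-in GPU acceleration via CUDA libraries (through CuPy).
# Assumptions: CuPy installed, an Nvidia GPU with CUDA available.
import time
import numpy as np
import cupy as cp

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

# CPU baseline with NumPy
t0 = time.perf_counter()
c_cpu = a_cpu @ b_cpu
cpu_s = time.perf_counter() - t0

# GPU version: copy the data once, then run the cuBLAS-backed matmul
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
cp.cuda.Stream.null.synchronize()
t0 = time.perf_counter()
c_gpu = a_gpu @ b_gpu
cp.cuda.Stream.null.synchronize()  # wait for the kernel to finish before stopping the timer
gpu_s = time.perf_counter() - t0

print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s  speedup: {cpu_s / gpu_s:.1f}x")
```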
Taken together, from data center networking to memory and data transmission, continuous upgrades are essential to handle ever-growing data volumes. Nvidia's holistic approach to hardware upgrades, including rapid GPU iteration and advances in manufacturing, packaging, and precision, combined with improvements to its CPUs, NVLink, and Spectrum-X interconnects, has delivered a roughly thousand-fold increase in AI computing power with the Blackwell architecture compared to Pascal eight years ago, surpassing Moore's Law.
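As a quick sanity check on that comparison (my own arithmetic, not Nvidia's), Moore's Law in its common "doubling every two years" form would predict only about a 16x gain over eight years, versus the roughly 1,000x cited for Pascal-to-Blackwell:

```python
# Compare the claimed Pascal-to-Blackwell gain with a Moore's Law baseline.
# Assumption: "Moore's Law" here means performance doubling every ~2 years.
years = 8
moores_law_gain = 2 ** (years / 2)   # 2^4 = 16x over eight years
claimed_gain = 1000                  # the ~1000x figure cited for Blackwell vs Pascal

print(f"Moore's Law baseline: {moores_law_gain:.0f}x")
print(f"Claimed gain: {claimed_gain}x ({claimed_gain / moores_law_gain:.0f}x beyond the baseline)")
```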
**AMD: The Persistent Contender**
AMD CEO Lisa Su announced the MI325X, an upgraded version of the MI300X AI chip, scheduled for release in Q4 with larger memory capacity and faster data throughput. She underscored that AMD servers can run models with up to 1 trillion parameters, double the capacity of Nvidia's H200 servers.
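For a rough sense of why memory capacity is the headline here (back-of-the-envelope numbers of my own, not AMD's), a 1-trillion-parameter model stored in 16-bit precision needs about 2 TB just for its weights, which has to be spread across the server's accelerators:

```python
# Back-of-the-envelope: memory needed just to hold the weights of a 1-trillion-parameter model.
# Assumptions (mine, for illustration): 16-bit (2-byte) weights, 8 accelerators per server.
params = 1_000_000_000_000        # 1 trillion parameters
bytes_per_param = 2               # FP16/BF16 weights
gpus_per_server = 8

total_tb = params * bytes_per_param / 1e12
per_gpu_gb = params * bytes_per_param / gpus_per_server / 1e9
print(f"Weights alone: {total_tb:.1f} TB total, ~{per_gpu_gb:.0f} GB of HBM per accelerator")
# -> 2.0 TB total, ~250 GB per accelerator, before counting KV cache and activations
```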
She previewed the MI350 series for 2025 with enhanced AI inference capabilities and the MI400 series in 2026, adopting a new architecture.
Post-conference, Nvidia's stock price rose by 4%, while AMD's announcements did not elicit a similar investor response. This disparity reflects the market's higher confidence in Nvidia's leadership position, which remains unchallenged thus far.
Nvidia's pace of innovation solidifies its dominance in AI data centers and pushes customers to keep upgrading, which supports revenue growth and pleases investors. The main risks are government export controls and the possibility that new platforms cannibalize sales of existing products. So far, however, demand for Hopper chips has reportedly increased since the Blackwell launch, suggesting minimal disruption from the new introductions.