
AMD Releases AI Chip to Rival Nvidia, Expects Market to Reach $500 Billion Within Four Years

cls.cn ·  17:01

① The newly launched MI325X, like the MI300X, is based on the CDNA 3 architecture, with an upgrade to its HBM3e memory capacity; ② AMD says the MI355X, due next year, will bring a major architectural upgrade and a further increase in memory capacity; ③ Lisa Su again made a bold prediction, forecasting that the overall AI accelerator market will reach $500 billion in 2028.

Cailian Press, Oct. 11 (Editor Shi Zhengcheng) In the AI computing market, AMD, which has long lived in Nvidia's shadow, held an artificial-intelligence event on Thursday to launch a series of new products, headlined by the MI325X accelerator. Market reception was lukewarm, however, and AMD's stock price took a sharp dive during the presentation.


As the most closely watched product of the event, the MI325X is, like the earlier MI300X, built on the CDNA 3 architecture with a broadly similar design, so it is best seen as a mid-cycle upgrade. It carries 256GB of HBM3e memory, with memory bandwidth of up to 6TB/s. AMD expects production of the chip to begin in the fourth quarter, with supply through partner server makers in the first quarter of next year.


In AMD's positioning, its AI accelerators are most competitive in use cases where AI models generate content or run inference, rather than in training models on massive datasets. Part of the reason is the larger pool of high-bandwidth memory AMD builds onto the chip, which lets it outperform some Nvidia chips. For comparison, Nvidia's latest B200 carries 192GB of HBM3e (two compute dies, each paired with four 24GB memory stacks), but its memory bandwidth reaches 8TB/s.
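The trade-off the paragraph describes can be put as simple ratios. This is a back-of-the-envelope sketch using only the figures cited above, not a benchmark:

```python
# Memory specs as reported in the article (capacity in GB, bandwidth in TB/s).
mi325x = {"hbm3e_gb": 256, "bandwidth_tbs": 6.0}  # AMD MI325X
b200 = {"hbm3e_gb": 192, "bandwidth_tbs": 8.0}    # Nvidia B200

capacity_ratio = mi325x["hbm3e_gb"] / b200["hbm3e_gb"]
bandwidth_ratio = mi325x["bandwidth_tbs"] / b200["bandwidth_tbs"]

print(f"MI325X capacity vs B200:  {capacity_ratio:.2f}x")   # ~1.33x more HBM3e
print(f"MI325X bandwidth vs B200: {bandwidth_ratio:.2f}x")  # 0.75x, i.e. 25% less
```

In other words, AMD's pitch rests on roughly a third more on-package memory, while conceding a quarter of the memory bandwidth.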

AMD CEO Lisa Su emphasized at the event: “What you can see is that the MI325X can deliver up to 40% higher performance than Nvidia's H200 when running Llama 3.1.”

According to AMD's official materials, the MI325X's spec advantage translates into 1.3 times the H200's peak theoretical FP16 (16-bit floating point) and FP8 compute performance.

Beyond the MI325X, AMD also dangled a bigger promise before the market: next year the company will launch the MI350 series GPUs on the CDNA 4 architecture. In addition to raising HBM3e capacity further to 288GB and moving to a 3nm process, the claimed performance gains are striking; FP16 and FP8 throughput, for instance, is said to be 80% higher than the just-released MI325X. The company went further, saying the MI350 series will deliver 35 times the inference performance of its CDNA 3 accelerators.


AMD expects platforms built on the MI355X GPU to launch in the second half of next year, joining the MI325X in competing head-on with Nvidia's Blackwell-architecture products.


Lisa Su also said on Thursday that the market for data-center AI accelerators will grow to $500 billion by 2028, up from $45 billion in 2023. In her previous forecast, she had put the market at $400 billion by 2027.
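The forecast implies a steep compound annual growth rate. A quick check, using only the two figures quoted above:

```python
# Implied CAGR behind the forecast: $45B (2023) growing to $500B (2028).
start, end, years = 45e9, 500e9, 2028 - 2023

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 62% per year
```

That is, the prediction assumes the market compounds at roughly 62% annually for five straight years.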

It is worth noting that most industry analysts put Nvidia's share of the AI chip market above 90%, which is why the chip leader can command gross margins of 75%. The same consideration shows up in the two companies' share prices: after Thursday's event, AMD's year-to-date gain (red line) narrowed to under 20%, while Nvidia's (green line) stood near 180%.


(AMD and Nvidia year-to-date stock performance; source: TradingView)

Taking a swipe at Intel along the way

For AMD, most of its current data-center revenue still comes from CPU sales, and in real-world deployments GPUs must be paired with CPUs in any case.

In the June-quarter earnings report, AMD's data-center sales doubled year-on-year to $2.8 billion, but AI chips accounted for only $1 billion of that. The company put its share of the data-center CPU market at about 34%, still trailing Intel's Xeon line.

As the challenger in data-center CPUs, AMD also released its fifth-generation EPYC “Turin” series server CPUs on Thursday, ranging from the 8-core 9015 ($527) up to the 192-core 9965 ($14,831). AMD stressed that on many workloads the EPYC 9965 performs “several times better” than Intel's flagship server CPU, the Xeon 8592+.


(Lisa Su presents the “Turin” series server CPUs; source: AMD)

At the event, AMD brought Kevin Salvadori, Meta's vice president of infrastructure and engineering, on stage; he revealed that Meta has deployed more than 1.5 million EPYC CPUs.
