Citigroup: OpenAI Set to Become Broadcom's Fourth Largest ASIC Customer, with New AI Chips Expected in H2 2025
According to Citigroup, OpenAI is poised to emerge as Broadcom's fourth-largest Application-Specific Integrated Circuit (ASIC) customer, following Google, Meta, and ByteDance, with estimated deliveries scheduled for the latter half of 2025.
From GPUs to ASICs: Is OpenAI Reshaping the Chip Market?
A noteworthy question arises: Will this collaboration with Broadcom pose a threat to OpenAI's long-time partner, NVIDIA?
Currently, NVIDIA dominates the general-purpose GPU market, holding nearly 70% of the AI compute market. Meanwhile, Broadcom and Marvell are the primary players in the ASIC space, collectively holding over 60% of the ASIC market.
ASICs sacrifice versatility for exceptional performance in specific scenarios, whereas general-purpose compute cards offer flexibility but may lag in performance for niche applications.
In practice, compute-card customers' needs vary significantly: cloud providers prioritize elastic computing, while enterprises may focus on clustered compute power. ASICs, tailored to specific requirements, align more closely with these unique use cases than standard compute cards.
Cloud and hyperscale giants like Alphabet (GOOG), Meta Platforms (META), Microsoft (MSFT), and Amazon (AMZN) are leading the ASIC trend, as evidenced by Google's TPU, Meta's MTIA, Microsoft's Maia, and Amazon's Trainium2.
It's worth noting, however, that ASICs can carry a higher cost than general-purpose compute cards: Morgan Stanley estimates that the Total Cost of Ownership (TCO) of NVIDIA's GB200 is 44% lower than TPUv5's and 30% lower than Trainium2's.
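To make the percentages concrete, here is a small illustrative calculation (normalized units, not dollar figures) showing what "44% lower TCO" implies about the rival chips' relative cost:

```python
# Illustrative only: relative TCO implied by the Morgan Stanley percentages
# quoted above. Values are normalized to GB200 = 1, not actual dollar costs.

def implied_rival_tco(gb200_tco: float, pct_lower: float) -> float:
    """If GB200's TCO is `pct_lower` (e.g. 0.44) below a rival's,
    then the rival's TCO is gb200_tco / (1 - pct_lower)."""
    return gb200_tco / (1.0 - pct_lower)

gb200 = 1.0                                  # normalize GB200's TCO to 1
tpu_v5 = implied_rival_tco(gb200, 0.44)      # TPUv5 relative to GB200
trainium2 = implied_rival_tco(gb200, 0.30)   # Trainium2 relative to GB200

print(f"TPUv5 ≈ {tpu_v5:.2f}x, Trainium2 ≈ {trainium2:.2f}x the TCO of GB200")
```

In other words, taking the estimates at face value, TPUv5's TCO comes out to roughly 1.8x and Trainium2's to roughly 1.4x that of GB200.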
Broadcom's management anticipates a potential AI market for the company ranging from $30 billion to $50 billion annually over the next 3-5 years, significantly exceeding its fiscal year 2024 target of $11 billion.
Broadcom's primary focus is on ASIC chips tailored for large consumer AI platform customers like Google, Meta, ByteDance, and now potentially OpenAI. Management believes that these AI giants will continue to invest heavily in xPU clusters, scaling from clusters of 100,000 xPUs this year to clusters of 1 million xPUs over the next 3-5 years, with no signs of demand slowdown in sight.
With each cluster carrying a capital expenditure of $40 billion, Broadcom estimates that its serviceable share, encompassing compute and networking (excluding power), could account for $25 billion of that expenditure. This translates into a potential AI market for Broadcom of $30 billion to $50 billion annually over the next 3-5 years.
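The back-of-envelope math behind that framing can be sketched as follows, using the article's own figures (these are the note's stated assumptions, not company guidance):

```python
# Rough back-of-envelope using the figures quoted above: a ~$40B capex
# per million-xPU cluster, of which ~$25B (compute + networking, ex-power)
# is serviceable by Broadcom. All values in billions of US dollars.

capex_per_cluster_bn = 40.0   # stated capex per xPU cluster
serviceable_bn = 25.0         # compute + networking share, excluding power

# Fraction of each cluster's capex that Broadcom could address
share = serviceable_bn / capex_per_cluster_bn
print(f"Broadcom-serviceable share of cluster capex ≈ {share:.1%}")
```

On these assumptions, Broadcom would address roughly 62.5% of each cluster's capex, which is how the note arrives at a $30-50 billion annual opportunity across multiple customers ramping over 3-5 years.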
Disclaimer: Community is offered by Moomoo Technologies Inc. and is for educational purposes only.