💡 Cerebras Systems aims for an IPO at a valuation of up to $8 billion with its innovative AI chip $CBRS.
Cerebras Systems Inc. filed for an IPO (Initial Public Offering) with the U.S. Securities and Exchange Commission (SEC) on October 1, 2024. Wielding its giant wafer-scale AI chip and targeting a valuation of up to $8 billion, the company plans to list on Nasdaq under the ticker symbol $CBRS.
1️⃣ Innovative technology of Cerebras: WSE-3
Cerebras' main product is a huge AI chip called WSE-3 (Wafer Scale Engine 3), which uses an entire single silicon wafer.
Its specifications are remarkable:
- Number of transistors: 4 trillion
- Number of cores: approximately 900,000
- On-chip memory: 44GB SRAM
These specifications dwarf the largest commercial GPUs, with roughly 50 times more cores and 880 times more on-chip memory. The WSE-3 is manufactured on TSMC's 5nm process, a significant performance improvement over its predecessor, the WSE-2 (7nm process).
Notably, the WSE-3 can accommodate even the largest existing AI models on a single chip. This enables efficient AI training without complex distributed-processing schemes such as model parallelism or tensor parallelism.
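As a rough back-of-the-envelope check (author's arithmetic, not a vendor figure): at 2 bytes per fp16 weight, the 44 GB of on-chip SRAM alone could hold a model of about 22 billion parameters; models beyond that rely on streaming weights in from external memory, as described in the cluster section.

```python
# Illustrative capacity estimate: how many fp16 parameters fit in the
# WSE-3's 44 GB of on-chip SRAM? (Ignores activations, optimizer state,
# and any streaming of weights from external memory.)
SRAM_BYTES = 44 * 10**9   # 44 GB on-chip SRAM, from the spec list above
BYTES_PER_PARAM = 2       # fp16 / bf16 weight
max_params = SRAM_BYTES // BYTES_PER_PARAM
print(f"~{max_params / 1e9:.0f}B parameters")  # → ~22B parameters
```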
2️⃣ Cluster Configuration and Scalability
Cerebras sells the CS-3, a compute appliance built around the WSE-3. This mini-fridge-sized system can be clustered in configurations of up to 2,048 units, a significant expansion from the previous-generation CS-2's limit of 192.
The core of the cluster configuration consists of two components, MemoryX and SwarmX:
- MemoryX: holds the weights for the entire model in large-capacity DRAM
- SwarmX: a switch fabric that efficiently broadcasts weights to the nodes and aggregates the gradients they return
This architecture achieves near-linear performance scaling as the cluster grows, because it sharply reduces the communication overhead that commonly bottlenecks GPU clusters.
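The data flow above can be sketched in a few lines of plain Python. This is a purely conceptual simulation: the class and method names are hypothetical stand-ins, not the Cerebras API. MemoryX holds all the weights, and SwarmX broadcasts each layer to every node and sums the gradients coming back.

```python
# Conceptual sketch (NOT Cerebras code) of the weight-streaming idea:
# MemoryX keeps the full weight set off-chip, SwarmX fans each layer's
# weights out to every compute node and reduces the gradients back.

class MemoryX:
    """Holds the entire model's weights in (simulated) DRAM."""
    def __init__(self, layer_weights):
        self.weights = layer_weights  # layer name -> list of floats

class SwarmX:
    """Fans weights out to nodes and reduces gradients back."""
    def broadcast(self, weights, nodes):
        return [list(weights) for _ in nodes]  # one copy per node

    def reduce(self, per_node_grads):
        # elementwise sum of gradients across nodes
        return [sum(g) for g in zip(*per_node_grads)]

def train_step(memx, swarmx, nodes):
    aggregated = {}
    for layer, w in memx.weights.items():
        copies = swarmx.broadcast(w, nodes)
        # each node computes a (fake) gradient on its own data shard
        grads = [[wi * shard for wi in c] for c, shard in zip(copies, nodes)]
        aggregated[layer] = swarmx.reduce(grads)
    return aggregated

nodes = [1, 2, 3]                      # stand-ins for 3 CS-3 systems
memx = MemoryX({"layer0": [1.0, 2.0]})
print(train_step(memx, SwarmX(), nodes))  # → {'layer0': [6.0, 12.0]}
```

Because weights flow one way and gradients flow back through a reduction tree, each node runs the same single-device program regardless of cluster size, which is the basis of the linear-scaling claim above.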
3️⃣ A new 'performance' indicator for AI training
Cerebras redefines AI-training 'performance' to include not just raw computing speed but also the following elements:
- Pure computing power: for example, cutting Llama-model training time from 1 month on a GPU cluster to 1 day
- Programmability: code written for a single machine can run on a 2,048-system cluster without modification
- Shorter development cycles: the total time from model development to the completion of training
By this definition of 'performance,' the Cerebras system has demonstrated more than 35 times the efficiency of conventional GPU clusters. For example, the UAE company G42 trained a 13-billion-parameter natural language model, which had taken 68 days on a GPU cluster, in just 2 days on a cluster of 64 CS-2 systems.
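A quick sanity check on the quoted G42 figures (author's arithmetic, not a vendor number): the wall-clock speedup works out to 34x, the same order of magnitude as the broader 35x efficiency claim.

```python
# Wall-clock speedup implied by the G42 example: 68 days on a GPU
# cluster vs 2 days on a 64-system CS-2 cluster.
gpu_days, cs2_days = 68, 2
speedup = gpu_days / cs2_days
print(f"{speedup:.0f}x faster")  # → 34x faster
```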
4️⃣ Comparison with GPUs: technical advantages
Cerebras' WSE is specifically designed for AI workloads and has the following advantages compared to general-purpose GPUs:
- Adaptability to sparse computation: many AI models are inherently sparse, and the WSE can handle this sparsity efficiently
- Memory bandwidth: large-capacity on-chip SRAM removes the memory-access bottleneck
- Reduced communication overhead: the entire model can be processed on a single chip, eliminating inter-chip communication
- Ease of programming: no complex distributed-processing code is required; development is possible with standard Python and PyTorch
These advantages give the Cerebras system the potential to demonstrate overwhelming efficiency, especially in the training of large language models (LLMs).
Of particular note is the potential for Cerebras to advance research in 'Sparse AI'. The WSE architecture is well suited for the development of more efficient sparse models, which could lead to a breakthrough in next-generation AI development.
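To see why sparsity-aware hardware pays off, here is a minimal illustrative sketch in plain Python (not Cerebras code): a dense dot product always performs N multiplies, while a sparse-aware one touches only the nonzero weights, so the work shrinks in proportion to how much of the model is pruned.

```python
# Illustrative sketch of sparse vs dense computation: both produce the
# same result, but the sparse-aware version skips zero weights.
def dense_dot(w, x):
    # multiply every weight, zero or not
    return sum(wi * xi for wi, xi in zip(w, x)), len(w)  # (value, #multiplies)

def sparse_dot(w, x):
    # multiply only the nonzero weights
    nz = [(wi, xi) for wi, xi in zip(w, x) if wi != 0.0]
    return sum(wi * xi for wi, xi in nz), len(nz)

w = [0.0, 3.0, 0.0, 0.0, 2.0, 0.0]  # 2/3 of weights pruned to zero
x = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
d_val, d_ops = dense_dot(w, x)
s_val, s_ops = sparse_dot(w, x)
print(d_val == s_val, d_ops, s_ops)  # → True 6 2
```

GPUs generally execute the dense version regardless of the zeros; hardware that can skip them converts model sparsity directly into saved compute.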
5️⃣ Financial situation and IPO details
Cerebras' growth rate is astounding. Revenue more than tripled from 2022 to 2023 (to $78.7 million), and reached $136.4 million in the first half of 2024 (January-June), roughly 15.7 times the figure for the same period a year earlier. Challenges remain, however: 87% of revenue in the first half of 2024 came from the UAE AI company G42, a high customer-concentration risk.
In the IPO, the company aims to raise between $750 million and $1 billion, which would provide a significant source of funding to accelerate its growth strategy.
📍 Summary and Future Outlook
Cerebras Systems $CBRS has the potential to pioneer a new era of AI computation with innovative hardware and a unique approach. However, there are many challenges to overcome, such as expanding the customer base and optimizing manufacturing costs. While it presents a significant investment opportunity riding on the rapid growth of the AI chip market, careful attention must be paid to technical risks and changes in the competitive environment.
Cerebras Systems is driving a paradigm shift in AI training. By prying open a market dominated by NVIDIA $NVDA, it could become the "second act" of the AI revolution. Its progress, including the IPO, is worth watching.
Disclaimer: Community is offered by Moomoo Technologies Inc. and is for educational purposes only.
Kimihiko OP : Cerebras, a competitor of Nvidia, has applied for an IPO in the United States. The emerging artificial intelligence company Cerebras Systems applied for a new stock listing in the United States on Monday; the fundraising terms and scale have not yet been disclosed. The company, headquartered in Sunnyvale, California, has applied to list Class A common stock under the symbol 'CBRS' on the Nasdaq Global Market. Citigroup Global Markets and Barclays Capital will serve as joint lead managers for the offering. Cerebras is one of several startups seeking to capitalize on the sharp increase in demand for AI semiconductors in a market currently dominated by technology giant Nvidia (NASDAQ: NVDA). The company, which designs processors for AI training and inference, claims to have solved a decades-old problem in the computer industry: building chips the size of an entire silicon wafer. In late August, Cerebras announced an AI inference solution that it claims is 20 times faster than Nvidia (NVDA) GPU-based hyperscale clouds. The company stated in its IPO application that it has experienced rapid growth, with revenue of $78.7 million and $24.6 million for 2023 and 2022, respectively, representing 220% year-over-year growth. For the six months ended June 30, 2024 and June 30, 2023, it generated revenues of $100.7 million and $8.7 million, respectively, and incurred losses of $66.6 million and $77.8 million for the same periods. Cerebras raised $200 million in a Series F funding round in 2021 at a valuation exceeding $4 billion.
ミツ5963 : I wonder if it will become a trillion dollars in 10 years~
Kimihiko OP ミツ5963 : Let's buy a little.
ミツ5963 : I will buy it if you sell it.
人生が含み損 : It feels like a brute-force approach to semiconductors, but isn't it basically just a bunch of small chips lined up?
It seems like various semiconductor manufacturers will emerge like bamboo shoots after the rain in the future.
I can't buy them all, hmm