
Analysis of Tesla's pre-market review chart on Monday, August 5th.

It is a disaster for the stock, but also a golden opportunity and a once-in-a-lifetime chance.
The key bearish coordinate points have been entered into the analysis and investment trading system...
Dojo: Musk's "self-driving" bet, which Tesla, as a publicly listed company, is systematically building and operating.
The core of the Dojo plan is Tesla's proprietary D1 chip, which means Tesla may not need to rely on Nvidia's chips in the future and can obtain a large amount of computing power at low cost. Dojo 1 is expected to achieve online training equivalent to about 8,000 H100s by the end of this year.
The importance of the Dojo supercomputer to Tesla is growing by the day.
For Musk, Dojo is not only a supercomputer used by Tesla to train autonomous driving models in the cloud, but it has actually become the cornerstone of Musk's AI business empire.
Previously, Goldman Sachs even compared Dojo to "Tesla's AWS" and believed that it would become the biggest value driver for Tesla's future market cap.
In Elon Musk's grand AI blueprint, what role does Dojo play? On Saturday morning local time, TechCrunch reporter Rebecca Bellan published an in-depth article titled "Tesla Dojo: Elon Musk's big plan to build an AI supercomputer, explained", which starts with Dojo and explains Musk's AI plan in detail.
The following are the highlights of the article:
1. Tesla's reliance on a supercomputer stems mainly from its pure-vision approach (relying solely on cameras, rather than lidar and radar, to capture data).
2. Tesla's goal is to achieve a combination of "half Tesla AI hardware, half Nvidia/other" in the next 18 months, with "other" potentially being AMD chips.
3. The core of the Dojo plan is Tesla's proprietary D1 chip, which means Tesla may not have to rely on Nvidia's chips in the future and can obtain a large amount of computing power at a low cost.
4. The Dojo chip is Tesla's insurance policy and could bring dividends.
5. Dojo's total computing power is expected to reach 100 exaflops by October of this year, equivalent to the computing power of about 320,500 Nvidia A100 GPUs. Dojo 1 is also expected to achieve online training equivalent to about 8,000 H100s by the end of this year.
The full text of the article is as follows:
For years, Elon Musk has been talking about Dojo - the artificial intelligence supercomputer that will be the cornerstone of Tesla's AI ambitions. This project is very important to Musk, who recently stated that as Tesla prepares to unveil its robotaxi in October, the company's AI team will "double down" to push the Dojo project forward.
But what exactly is Dojo? Why is it so critical to Tesla's long-term strategy?
In short: Dojo is Tesla's custom-built supercomputer, designed to train its "Full Self-Driving" neural networks. The build-out of Dojo is closely tied to Tesla's goal of achieving full self-driving and bringing a robotaxi to market. FSD is currently available on approximately 2 million Tesla vehicles; it can perform some automated driving tasks but still requires an attentive human in the driver's seat.
Tesla's robotaxi unveiling, originally scheduled for August, has been delayed to October, but both Musk's public statements and sources inside Tesla tell us that the goal of autonomous driving has not gone away.
Tesla seems to be preparing to make a huge investment in AI and Dojo to achieve this feat.
The story behind Tesla's Dojo
Musk does not want Tesla to be just an auto manufacturer, or just a provider of solar panels and energy storage systems. Instead, he wants Tesla to be an AI company, a company that cracks the code of self-driving cars by mimicking human perception.
Most other companies developing self-driving car technology rely on a combination of sensors to perceive the world (such as lidar, radar, and cameras) and high-definition maps to locate vehicles. Tesla believes it can solely rely on cameras to capture visual data, then use advanced neural networks to process this data and quickly decide how the car should behave.
As Tesla's former AI chief Andrej Karpathy said at the company's first AI Day in 2021, the company is essentially attempting to "build a synthetic organism from scratch". (Tesla has been teasing Dojo since 2019, but officially announced it on AI Day.)
Companies like Alphabet's Waymo have already commercialized Level 4 self-driving cars (which SAE defines as a system that can drive itself under specific conditions without human intervention) through more traditional sensor and machine-learning approaches. Tesla, however, has yet to produce a hands-free self-driving system.
About 1.8 million people have paid for Tesla's FSD, currently priced at $8,000 after peaking at $15,000. The pitch is that AI software trained by Dojo will eventually be pushed out to Tesla customers via software updates. The scale of FSD also means Tesla has been able to collect millions of miles of video footage to train the system, and the more data Tesla collects, the closer the automaker gets to achieving true full self-driving.
However, some industry experts argue that the approach of simply inputting more data into models and expecting them to get smarter may have limitations.
"First, there is an economic constraint: doing so will quickly become prohibitively expensive," Anand Raghunathan, Purdue University's Silicon Valley professor of electrical and computer engineering, told TechCrunch. He added, "Some argue we may actually run out of meaningful data to train models on. More data doesn't necessarily mean more information, so it depends on whether that data contains useful information to create a better model, and whether the training process can actually distill that information into a better model."
Raghunathan said that despite these concerns, there will clearly be more data, at least in the short term. More data means more computing power is needed to store and process it in order to train Tesla's AI models. That is where the Dojo supercomputer comes in.
What is a supercomputer?
Dojo is the supercomputer system Tesla designed for artificial intelligence, and specifically for training Full Self-Driving (FSD). The name is a nod to the dojo, the hall where martial arts are practiced.
A supercomputer is made up of thousands of smaller computers called nodes, each with its own CPU (central processing unit) and GPU (graphics processing unit). The CPU manages the node as a whole, while the GPU handles work that can be divided into many parts and processed in parallel. GPUs are crucial for machine-learning operations, such as the simulations used to train FSD. They also power large language models, which is why the rise of generative AI has made Nvidia the most valuable company on earth.
Even Tesla purchases Nvidia GPUs to train its artificial intelligence (but that's another story).
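The divide-and-recombine pattern described above can be sketched in a few lines. This is a toy illustration only (not Tesla's code, and CPU workers stand in for GPU cores): a task is chunked, the chunks are processed simultaneously, and the partial results are combined.

```python
# Toy sketch of the parallel pattern described above: divide a task into
# parts, process the parts simultaneously, then combine the results.
# Real GPU training does this across thousands of cores; here a handful
# of CPU workers only illustrate the pattern.
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for per-chunk work (e.g. a forward pass on one data shard).
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Divide the task into roughly equal parts...
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...process each part in parallel, then combine the partial results.
    with Pool(workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    # Matches the serial result sum(x * x for x in range(1000)).
    print(parallel_sum_of_squares(list(range(1000))))
```

The same split/combine structure is what makes GPUs so much faster than CPUs for machine learning: the per-chunk work is identical, so thousands of parts can run at once.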
Why does Tesla need a supercomputer?
Tesla's pure vision path is the main reason why it needs a supercomputer. The neural network behind FSD is trained on a large amount of driving data to identify and classify objects around the vehicle, and then make driving decisions. This means that when FSD is activated, the neural network must continuously collect and process visual data at a speed that matches human depth and speed perception.
In other words, Tesla wants to create a digital version of the human visual cortex and brain functions.
To achieve this goal, Tesla needs to store and process all video data collected from its vehicles worldwide, and run millions of simulations to train its models.
Tesla seems to rely on Nvidia to power its current Dojo training computer, but it doesn't want to put all its eggs in one basket - especially because Nvidia chips are expensive. Tesla also wants to create something better, with increased bandwidth and reduced latency. That's why the AI division of this automaker has decided to propose its own custom hardware plan, which aims to train AI models more efficiently than traditional systems.
The core of this plan is Tesla's proprietary D1 chip, which the company says has been optimized for AI workloads.
More information about these chips
Tesla shares a similar view with Apple, believing that hardware and software should be designed to work together. That's why Tesla is working to move away from standard GPU hardware and design its own chips to power Dojo.
Tesla first showed off its D1 chip, a palm-sized block of silicon, at the 2021 AI Day. As of May this year, the D1 is in production, manufactured by Taiwan Semiconductor Manufacturing Company (TSMC) on a 7 nm process. According to Tesla, the D1 packs 50 billion transistors into a large 645-square-millimeter die, and promises to be powerful, efficient, and able to handle complex tasks quickly.
"We can do compute and data transfers simultaneously, and our custom ISA (instruction set architecture) is fully optimized for machine-learning workloads," Ganesh Venkataramanan, Tesla's former senior director of Autopilot hardware, said at the 2021 AI Day. "This is a pure machine-learning machine."
However, the D1 is still less powerful than Nvidia's A100 chip, which TSMC also manufactures on a 7 nm process. The A100 has 54 billion transistors and a die size of 826 square millimeters, slightly outperforming Tesla's D1.
To gain higher bandwidth and compute capacity, Tesla's AI team fuses 25 D1 chips together into a single block that functions as one unified computer system. Each block delivers 9 petaflops of compute and 36 TB per second of bandwidth, and includes all the hardware needed for power, cooling, and data transfer. You can think of a block as a self-contained computer made up of 25 smaller ones. Six such blocks make up a rack, and two racks make up a cabinet. Ten cabinets make up an ExaPOD. At its 2022 AI Day, Tesla said Dojo would be scaled out by deploying multiple ExaPODs. All of this together makes up the supercomputer.
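Multiplying out the figures above gives a sense of the scale of one ExaPOD. This is a quick back-of-the-envelope sketch using only the numbers stated in the text:

```python
# Back-of-the-envelope totals for the Dojo hierarchy described above:
# 25 D1 chips and 9 petaflops per block, 6 blocks per rack,
# 2 racks per cabinet, 10 cabinets per ExaPOD.
CHIPS_PER_BLOCK = 25
PFLOPS_PER_BLOCK = 9
BLOCKS_PER_RACK = 6
RACKS_PER_CABINET = 2
CABINETS_PER_EXAPOD = 10

blocks_per_exapod = BLOCKS_PER_RACK * RACKS_PER_CABINET * CABINETS_PER_EXAPOD
chips_per_exapod = blocks_per_exapod * CHIPS_PER_BLOCK
exapod_pflops = blocks_per_exapod * PFLOPS_PER_BLOCK

print(blocks_per_exapod)  # 120 blocks per ExaPOD
print(chips_per_exapod)   # 3,000 D1 chips per ExaPOD
print(exapod_pflops)      # 1,080 petaflops, i.e. roughly 1.1 exaflops
```

At roughly 1.1 exaflops per ExaPOD, it takes many ExaPODs to approach the 100-exaflop target discussed later, which is why Tesla talks about Dojo in terms of deploying multiple ExaPODs.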
Tesla is also developing the next-generation D2 chip, which aims to relieve information-flow bottlenecks. Rather than connecting individual chips, the D2 puts an entire Dojo block onto a single silicon wafer.
Tesla has not yet confirmed how many D1 chips it has ordered or expects to receive, nor has it provided a timetable for running the Dojo supercomputer on the D1 chip.
An X post in June stated, "Elon is building a huge GPU cooler in Texas," to which Musk replied that Tesla's goal is to achieve "half Tesla's AI hardware, half Nvidia/other" in the next roughly 18 months. According to Musk's comments in January, "other" may be AMD chips.
What does Dojo mean for Tesla?
Controlling its own chip production means Tesla may one day be able to add large amounts of computing power to its AI training programs at low cost, particularly as Tesla and TSMC scale up chip production.
This also means that Tesla may not have to rely on Nvidia chips in the future, as the prices of these chips continue to rise and become increasingly difficult to guarantee.
During Tesla's second-quarter earnings call, Musk said that demand for Nvidia hardware is "so high that it is often difficult to get a GPU." He said he was "quite concerned" about being able to obtain GPUs steadily when needed, and that this means Tesla needs to put more effort into Dojo to ensure it has the training capability it requires.
Having said that, Tesla is still purchasing Nvidia chips today to train its AI. In June, Musk posted on X:
"Approximately half of the approximately 10 billion dollars in AI-related spending that I mentioned at Tesla this year is internal, mainly AI inference computers designed by Tesla and sensors present in all of our cars, plus Dojo. Nvidia hardware accounts for about 2/3 of the cost for building the AI training supercluster. My current best guess for Tesla's purchase of Nvidia this year is 3 billion to 4 billion dollars."
Inference computing refers to the real-time AI computing executed by Tesla, which is separate from the training computing handled by Dojo.
Dojo is a risky bet, and Musk has hedged that bet several times by acknowledging that Tesla may not succeed.
In the long run, Tesla could theoretically create a new business model based on its AI department. Musk has suggested that the first version of Dojo will be specifically tailored for Tesla's computer vision labeling and training, which is very beneficial for FSD and training Optimus (Tesla's humanoid robot), but not useful for other things.
Musk has stated that subsequent versions of Dojo will lean towards general AI training. One related potential issue is that almost all existing AI software is written for GPUs. Using Dojo to train general AI models will require rewriting the software.
That is, unless Tesla rents out its computing power, much as AWS and Azure rent out cloud computing capacity. Musk also noted during the second-quarter earnings call that he sees "a path to competing with Nvidia through Dojo."
Morgan Stanley predicted in a September 2023 report that Dojo could increase Tesla's market cap by $500 billion through unlocking new revenue streams from robotaxis and software services.
In short, Dojo's chip is an insurance policy for this car manufacturer and could bring dividends.
How is Dojo progressing?
Reuters reported last year that Tesla began production of Dojo in July 2023, but Musk had hinted in a June 2023 post that Dojo had already been "online and running useful tasks for several months."
Around the same time, Tesla said it expected Dojo to be one of the five most powerful supercomputers by February 2024, a feat that has yet to be publicly announced, leaving us doubtful that it has happened.
The company also expects Dojo's total computing power to reach 100 exaflops by October 2024. (One exaflop is a quintillion, or 10^18, computer operations per second. To reach 100 exaflops, assuming one D1 can deliver 362 teraflops, Tesla would need more than 276,000 D1 chips, or roughly 320,500 Nvidia A100 GPUs.)
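The chip-count arithmetic in that parenthetical can be reproduced directly. One assumption here: the roughly 320,500 A100 figure implies about 312 teraflops per A100 (Nvidia's stated BF16 tensor throughput), which is not spelled out in the text.

```python
# Reproducing the arithmetic above: how many chips does 100 exaflops take?
EXAFLOP = 10**18   # 1 exaflop = 10^18 operations per second
TERAFLOP = 10**12

target = 100 * EXAFLOP
d1_flops = 362 * TERAFLOP    # per-D1 figure given in the text
a100_flops = 312 * TERAFLOP  # assumption: implied by the ~320,500 A100 figure

d1_needed = target / d1_flops
a100_needed = target / a100_flops

print(round(d1_needed))    # 276243, i.e. "over 276,000 D1 chips"
print(round(a100_needed))  # 320513, i.e. "approximately 320,500 A100 GPUs"
```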
In January 2024, Tesla pledged to invest $500 million to build a Dojo supercomputer at its Gigafactory in Buffalo, New York.
In May 2024, Musk pointed out that the rear of Tesla's Austin Gigafactory would be reserved for an "ultra-high-density water-cooled supercomputer cluster."
Immediately after Tesla's Q2 earnings call, Musk posted on X that the automaker's AI team is using Tesla's HW4 AI computer (renamed AI4), the hardware that ships in Tesla vehicles, in the training loop alongside Nvidia GPUs. He noted that the breakdown is roughly 90,000 Nvidia H100s plus about 40,000 AI4 computers.
He continued, "Dojo 1 will achieve online training equivalent to around 8,000 H100s by the end of this year. Not a lot, but not trivial either."
Analysis of Tesla's pre-market review chart on Monday, August 5th.
Tesla investor Sawyer Merritt said this could help the electric vehicle giant "reduce its costs by 15%-30%"; Elon Musk called it a major breakthrough.
Tesla CEO Elon Musk said on Friday that the company's use of dry electrode 4680 batteries in the stainless steel Cybertruck is a "major breakthrough" that can significantly reduce costs.

What happened: Musk responded to Tesla enthusiast and investor Sawyer Merritt on X, writing, "This is a major breakthrough," after Merritt called the use of dry-electrode 4680 batteries a "big deal."

In a post, Merritt wrote that if Tesla can scale up production of dry-electrode 4680 batteries, it could help Tesla reduce costs by 15% to 30% and may produce higher-density cells.

"If they can figure out how to scale it up and get good yields, that will change the game," Merritt wrote.

In response, Musk did not detail the benefits of the new technology, but seemed to agree with Merritt's view.

This is a major breakthrough

— Elon Musk (@elonmusk) August 2, 2024
Why it matters: Tesla is using 4680 batteries to power the Cybertruck. Currently, the cathode of the 4680 batteries produced for the Cybertruck is made using a more traditional "wet" process that involves the use of toxic solvents. Tesla stated that the second quarter production of the 4680 batteries has increased by 50% compared to the first quarter.

Tesla began testing Cybertruck prototypes using its internally produced "dry" cathode 4680 batteries in July. The company said during its second-quarter earnings call that, once in production, the internally produced dry-cathode 4680 cells will significantly lower costs, and that Cybertruck production is expected to be profitable by the end of the year.

Last month, Tesla's Vice President of Vehicle Engineering, Lars Moravy, stated in a conference call with analysts that mass production of Cybertrucks using dry cathode 4680 batteries is expected in the fourth quarter.
Disclaimer: Community is offered by Moomoo Technologies Inc. and is for educational purposes only. Read more