Tesla's "Cybercab" global debut sends stock falling
Views 1.4M Contents 535

Tesla Has A Lot To Prove On Robotaxi Day. Top Experts Have Doubts

The Tesla Robotaxi Day event on Oct 10 at a Warner Bros. Hollywood studio is a high-stakes moment for Elon Musk. He has hinged the company's future on the idea that Tesla isn't just an electric carmaker, but a force in AI and robotics.
But Tesla's technical approach to self-driving cars - including what we know of it so far and what's expected to happen in Los Angeles - raises major red flags, artificial intelligence and autonomous vehicle experts told InsideEVs.
Some warned that deploying Tesla Robotaxis at scale would be dangerous, since the technology remains unproven and Tesla has not made its safety data public. Others said Tesla is at least a decade away from legally launching a self-driving taxi service, and many agreed that its approach to autonomy is fundamentally flawed, barring some big shift in thinking.
The automaker is set to reveal a purpose-built autonomous vehicle, potentially called the "Cybercab," that could be a rival to Uber and Google's Waymo. Musk is also expected to lay out plans for a robotaxi service that will incorporate both Cybercabs and regular Tesla owners' cars, which he has long promised would gain autonomous capability someday.
Even so, critics and experts in the space - many of whom have been in it for decades - said this demonstration may be less about future products and more about proving to investors that Tesla is on the right track to "solving" full autonomy. Musk himself has claimed that Tesla could be worth trillions if it does this, but essentially worthless if it does not.
"There's just no corroborating evidence that would suggest that they're anywhere close to having actual self-driving cars," said Missy Cummings, the director of the Autonomy and Robotics Center at George Mason University and former safety adviser to the National Highway Traffic Safety Administration. "This is just another attempt for [Musk] to raise cash."
Tesla is taking a radically different approach to autonomous driving than others in the space.
To make its Full Self-Driving (FSD) software work, Tesla uses multiple cameras acting as the vehicle's "eyes." This visual data feeds into what the company calls neural networks - machine-learning models inspired by the human brain. These networks process the information, make sense of it and then help the car make active decisions based on what it "sees."
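As a rough illustration of that pipeline - a minimal sketch for intuition, not Tesla's actual, unpublished architecture - a convolutional network can map stacked camera frames directly to continuous driving commands:

```python
# Minimal sketch of a camera-to-control network (illustrative only; layer sizes,
# camera count and outputs are assumptions, not Tesla's design).
import torch
import torch.nn as nn

class VisionPolicy(nn.Module):
    def __init__(self, n_cameras: int = 8):
        super().__init__()
        # Frames from all cameras are stacked along the channel axis.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * n_cameras, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2)  # outputs: [steering, acceleration]

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))

policy = VisionPolicy()
controls = policy(torch.randn(1, 3 * 8, 128, 128))  # one stacked multi-camera frame
print(controls.shape)  # torch.Size([1, 2])
```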
Around mid-2023, Tesla started shifting to this neural network approach and away from a system built on more than 300,000 lines of explicitly programmed code that guided the vehicle in specific situations. Last June, the company explained in a thread on X how the new system was already operational in customer vehicles.
The backbone of these neural networks is, supposedly, a growing number of AI-powered "supercomputer clusters." They process billions of data points to train FSD to drive more like humans.
Tesla's rivals have taken a different approach. Google's autonomous ride-hailing service Waymo operates on pre-mapped roads and uses a full suite of sensors including cameras, radar and LIDAR, whereas Tesla only uses cameras and AI. Waymo EVs, white Jaguar I-Paces outfitted with that hardware, are legally operating in four U.S. cities: San Francisco, Phoenix, Los Angeles and Austin.
General Motors' Cruise self-driving division has taken a similar approach to Waymo's. All three companies are under federal safety investigations.
On the consumer side, a growing number of automakers are turning to LIDAR and expanding their advanced driver-assistance system (ADAS) options, although broadly speaking, all have been more cautious than Tesla in the space. But Tesla insists its outside-the-box approach will create a "generalized" solution to self-driving that will let cars operate virtually anywhere. Cruise and Waymo, on the other hand, focus on mastering discrete areas and then expanding from there.
Many experts have their doubts about Tesla's approach on both hardware and software.
"Wherever you have a neural net, you will always have the possibility of hallucination," Cummings said.
"It's just that they do it infrequently enough to give people false confidence," she added. Hallucinations are the same thing that happens when ChatGPT spits out a totally nonsensical answer.
Tesla's system could be prone to "statistical inference errors," she said, which basically means analyzing a particular set of data inaccurately, leading to wrong conclusions. In Tesla's case, that means making wrong decisions on the road.
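A toy example - an illustration built on assumed numbers, not Tesla code - shows why such errors can look confident: a softmax classifier always returns a decisive-looking probability distribution, even for an input unlike anything it was trained on.

```python
# Illustrative sketch: a stand-in "trained" linear classifier confidently labels
# an out-of-distribution input, with no signal that the input was garbage.
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))              # pretend these weights were learned
x_unfamiliar = rng.normal(size=10) * 50   # input far outside the training distribution

probs = softmax(W @ x_unfamiliar)
print(probs.round(3), "-> predicted class", probs.argmax())
# Output is near-certain for one class, even though the prediction is meaningless.
```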
The automaker is still a decade away from being a legitimate self-driving car company, according to Cummings. The key problem, she said, is that Tesla hasn't made its FSD safety data public. It periodically releases some Autopilot and FSD data showing the number of accidents per million miles driven with those systems engaged, but the reports lack detail and are not nearly enough to prove that the system is safe, she said.
Independent testing by AMCI found an average of one FSD disengagement every 13 miles. That's a big red flag, according to Cummings.
"It's just not a reality until we see a Tesla reporting actual testing with bonafide testing drivers and/or testing the vehicles with no drivers in them."
So-called "edge cases," or rare events, are another potential problem area, experts said.
"What matters in safety is not the average day. What matters is the bad day and the bad days are extremely rare," said Phil Koopman, a professor of electrical and computer engineering at Carnegie Mellon University who has worked extensively on autonomous vehicle safety.
According to the Federal Highway Administration, the fatality rate for human drivers is 1.33 deaths per 100 million miles driven in the U.S. "Saying 'I drove 10 miles without an intervention' means nothing," Koopman said, referring to Tesla owners who post videos of their experiences using FSD. That's statistically insignificant. After all, humans can log "99,999,999 miles without a fatality."
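Some back-of-the-envelope arithmetic, using only the figures quoted above, shows the gap in scale (a sketch for intuition, not a safety analysis):

```python
# Compare AMCI's one-disengagement-per-13-miles figure with the human fatality
# baseline of 1.33 deaths per 100 million miles (both figures quoted above).
miles_per_disengagement = 13
deaths_per_mile = 1.33 / 100_000_000

miles_per_human_fatality = 1 / deaths_per_mile
print(f"Miles per human fatality: {miles_per_human_fatality:,.0f}")   # ~75,187,970
print(f"That is ~{miles_per_human_fatality / miles_per_disengagement:,.0f} times "
      f"longer than the 13-mile disengagement interval.")
# A 10-mile intervention-free drive says nothing against a baseline measured in
# tens of millions of miles per fatal event.
```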
Tesla uses end-to-end machine learning in the latest version 12 of FSD. That means feeding the neural networks with raw data (lots of videos, in this case) which directly results in an action on the road (acceleration, braking, turning). Koopman said this approach works well for common driving scenarios but is "horrible at handling rare events."
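In code terms, end-to-end training of this kind can be sketched as a plain imitation-learning loop: the network is optimized to reproduce the recorded human controls for each clip, with no hand-written driving rules in between. The example below is a simplified, assumed illustration, not Tesla's training code.

```python
# Minimal imitation-learning sketch: frames in, recorded human controls as targets.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(100):                   # stand-in for training over billions of real clips
    frames = torch.randn(32, 3, 64, 64)   # camera frames (placeholder data)
    human_controls = torch.randn(32, 2)   # logged [steering, acceleration]
    loss = loss_fn(model(frames), human_controls)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# The loss is dominated by whatever is common in the data, which is why such models
# handle everyday driving well and rare situations poorly.
```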
The issue there is that extremely uncommon situations - like a house fire or an odd object on the road - may not be represented in even a large data set, said Dan McGehee, who directs the University of Iowa's Driving Safety Research Institute. Rather, those kinds of hyper-specific events need to be painstakingly taught to a self-driving system, he said.
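The scale of that problem is easy to quantify with hypothetical numbers: even an enormous fleet dataset contains only a handful of examples of a truly rare scenario.

```python
# Rough arithmetic with assumed (hypothetical) figures, just to show the imbalance.
rare_event_rate = 1 / 10_000_000        # say the scenario occurs once per 10 million miles
fleet_miles_collected = 1_000_000_000   # a hypothetical one-billion-mile training corpus

expected_examples = rare_event_rate * fleet_miles_collected
print(expected_examples)  # 100.0 examples - a vanishing share of the training data
```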
AI-based self-driving systems can also make it more difficult for engineers to trace back why a vehicle made a certain decision - good or bad - industry experts say.
Waymo relies on a few hundred expensive LIDAR-equipped cars, while Tesla has sidestepped those costs to deploy millions of camera-equipped vehicles.
Both strategies come with trade-offs, but Koopman likened skipping LIDAR to "tying one hand behind your back while trying to solve an impossible problem." LIDAR sensors, which use lasers to create a 3D understanding of the surrounding world, are far superior at depth perception and fare better in adverse weather.
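One way to see the depth-perception gap is the standard stereo-vision relationship, where depth error grows with the square of distance for a fixed pixel-matching error, while LIDAR's ranging error stays roughly constant. The numbers below are illustrative assumptions, not measurements of any particular vehicle.

```python
# Stereo depth: Z = f * B / d, so a fixed disparity error translates into a depth
# error that grows roughly with Z**2 / (f * B).
focal_px = 1000.0        # focal length in pixels (assumed)
baseline_m = 0.3         # camera baseline in meters (assumed)
disparity_err_px = 1.0   # typical pixel-matching error (assumed)

for depth_m in (10, 30, 60, 100):
    depth_err = depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)
    print(f"{depth_m:>4} m: camera depth error ~{depth_err:5.1f} m vs. LIDAR error of a few centimeters")
```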
Tesla's FSD user manual admits that cameras struggle in such scenarios. "Visibility is critical for FSD to operate. Low visibility, such as low light or poor weather conditions (rain, snow, direct sun, fog, etc.) can significantly degrade performance," the disclaimer reads.
For that exact reason, McGehee, of the University of Iowa, says it's critical to think about redundancy when designing driverless cars.
"Not only do you have to have a 360-degree view of the world, but you have to have an overlapping view of the world with a different modality," he said, adding that Tesla's decision to go with cameras only is "problematic."
Krzysztof Czarnecki, professor of electrical and computer engineering at the University of Waterloo and a member of SAE task forces for automated driving, said that a Tesla Robotaxi with its current hardware and software "would cause mayhem and accidents and [the cars] will disappear very quickly from the road." He estimated it would take several years just to solve the vision problems.
"This is like taking ChatGPT and putting it behind the wheels," Czarnecki said. "Not literally, of course, because it's fed with driving data, but the underlying technology is kind of that, and you can't build a safe system that way," he added.
Tesla could create a driverless service using a vision-only system, said Alex Roy, a former executive at the now-defunct self-driving startup Argo AI and a cofounder at New Industry VC. However, that would mean either deploying far and wide while compromising safety and performance, or deploying in a highly constrained environment.
"I'm absolutely convinced that a camera-first or camera-only system will be able to do this. The only question is when," Roy said, acknowledging that he's in the minority. Even so, he said he doesn't think Tesla's event will yield anything that can be commercialized in the near term.
While none of the experts opposed robotaxis, they emphasized the need for extensive real-world testing - including testing with no driver on board - along with increased data sharing with regulators to address issues transparently. "Self-driving cars can succeed in limited domains," Cummings noted, adding that she advocates for controlled pilot testing to make that happen.
Koopman, on the other hand, said he has very low expectations for the Robotaxi reveal. A prototype car that sparks discussion is perfectly fine, he said.
"But that would have no predictive power whatsoever as to when robotaxis will be on the road at scale."