It's neat that we now have video evidence of Autopilot disengaging itself when it detects an unavoidable crash, instead of slamming on the brakes (as one would expect) to minimize impact.
Do we? I'm no Musk supporter by any means, but Rober's testing seems to have substantial methodology flaws and raises some questions. In the article, for instance, you can see he activates Autopilot while the car is at 39mph. The set point is also 39mph. In the next shot, where Autopilot is no longer activated (right before the 'brick wall' crash), the car is doing 42mph. Standard Autopilot would not accelerate beyond the set point (which, by default, should be the exact speed you're currently driving). How would the speed go up an additional 3mph unless Rober was pressing the accelerator to override it?
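To make that reasoning concrete, here's a back-of-the-envelope check (just a sketch: the speeds are the ones visible in the video, and the tolerance is my own assumption for speedometer rounding):

    # Standard Autopilot should hold its set point, so an observed speed
    # meaningfully above it suggests the driver was overriding with the
    # accelerator. Speeds are read off the video; the tolerance is assumed.
    SET_POINT_MPH = 39   # speed at which Autopilot was engaged (default set point)
    OBSERVED_MPH = 42    # speed shown just before the 'brick wall'
    TOLERANCE_MPH = 1    # allowance for rounding / display lag

    if OBSERVED_MPH > SET_POINT_MPH + TOLERANCE_MPH:
        print("Above the set point -- accelerator override is the likely explanation.")
    else:
        print("Consistent with Autopilot holding its set point.")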
Please watch this unedited video of the run up and impact:
https://xcancel.com/MarkRober/status/1901449395327094898
I am happy to take more comments from those who know Teslas better than I do, but let's base our critiques on the available information (especially when it is indicated in the article).
This appears to be a different take, based on the speedometer value. Also, he turned the system on 3 seconds before the wall, while giving the other system much more time. More importantly than all of this, why didn't he use FSD? Why is he using legacy Autopilot?
He was originally testing the Automatic Emergency Braking feature, but it couldn't stop in time for a single obstacle, so he cut them some slack and used Autopilot instead.
Because he was testing automatic collision avoidance.
I've watched that a couple times and I'm pretty sure he accidentally cancelled autopilot by jerking the wheel with his hands. His hand movement lines up with the autopilot disengage. I've done that several times myself, it is deliberately sensitive to wheel torque as a manual disengagement mechanism.
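For anyone unfamiliar with that mechanism, here's a minimal sketch of a torque-threshold takeover check (the threshold and names are my own illustrative assumptions, not Tesla's actual implementation):

    # Steering-wheel torque above a threshold is treated as the driver taking
    # over, which cancels the assist. The threshold is an assumed value.
    DISENGAGE_TORQUE_NM = 2.5  # assumed override threshold, newton-metres

    def should_disengage(measured_torque_nm: float) -> bool:
        """Return True if the driver's wheel input should cancel the assist."""
        return abs(measured_torque_nm) >= DISENGAGE_TORQUE_NM

    # A brief jerk of the wheel easily clears a threshold tuned to catch
    # deliberate takeovers, while gentle lane-keeping corrections do not.
    print(should_disengage(3.1))  # True  -> assist cancels
    print(should_disengage(0.4))  # False -> stays engaged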
The footage he posted on X does not fully match the in-car footage on YouTube, so there were evidently multiple runs. It seems unlikely that the mirror wall could be repaired, so I am left to conclude that he kept driving the Tesla at the wall until he crashed through it (manually). That being said, I would not be super surprised to find that a carefully crafted mirror wall would confuse Autopilot; that seems to be a logical weakness of a vision-based system.
Thanks for that link. I didn't see the unedited video since it's hidden on Twitter.
Two things to point out:
* The main video clip where he engages Autopilot at 39mph is clearly a different run/different clip. Not inherently bad, just a bit weird, and it seems to imply he enables Autopilot substantially earlier than he actually does in the unedited clip. I don't know why he would activate it so late, and it's possible that affects the outcome slightly.
* The disengagement is clearly from him jerking the steering wheel. You can see him jerk the steering wheel left slightly and the autopilot noise happens precisely timed with that.
Neither of these is really meaningful to the end result, I suppose. By the time he jerks the wheel and causes the disengagement, the car definitely does not have enough time to stop, so it wouldn't have changed the outcome of crashing into the wall.
That said, I don't think it's a particularly fair comparison to use basic Autopilot instead of their FSD software. Something like the dense fog or heavy rain cases could have had different outcomes for FSD, since it slows down in adverse weather conditions. I'd be interested to see how more standard TACC/lane-keep solutions from other competitors fare in these tests. I suspect most would also fail them.
The driver disengaged it using his foot, or perhaps the steering wheel. Self-disengagement makes a very different sound.
It's super sus that they aren't using FSD.
The disengagement before the crash feels like a decision to avoid harsh reactions when they inevitably get false positives.
The autonomy stack might create even more of a PR nightmare with sudden full-on ABS braking for no visible reason.
So overall it is a way to mask the quality of your autonomy stack.
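A toy illustration of that masking effect (this is not Tesla's actual accounting, just a sketch of how an 'engaged at the moment of impact' rule shifts attribution):

    # If a crash only counts against the driver-assist system when it was
    # engaged at the instant of impact, a disengagement moments earlier moves
    # the incident out of that bucket. Purely illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Crash:
        # Seconds before impact at which the assist disengaged; None = never engaged.
        disengaged_seconds_before_impact: Optional[float]

    def attributed_to_assist(crash: Crash, grace_window_s: float = 0.0) -> bool:
        """Count the crash against the assist only if it was engaged at impact,
        or disengaged within grace_window_s seconds of it."""
        t = crash.disengaged_seconds_before_impact
        return t is not None and t <= grace_window_s

    crash = Crash(disengaged_seconds_before_impact=0.8)
    print(attributed_to_assist(crash))                     # False: "driver was in control"
    print(attributed_to_assist(crash, grace_window_s=5.0)) # True: counted with a 5-second window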
So much shadiness. Here's a few other ways:
- Tesla's statistics on AP/FSD-related crash incidents specifically do not include fatality incidents.
- Tesla's statistics on AP/FSD-related crash incidents specifically do not include any incidents without SRS airbag deployment. (This may not seem egregious, but current airbag algorithms are far more sophisticated than 'collision at x speed, deploy': they take into account the angle of attack, whether the vehicle was decelerating or accelerating at the moment of impact, etc. The system could decide that the best way to protect a passenger in a 25mph collision is to fire the seatbelt tensioners, not the airbags. According to Tesla, that would "not be a reported incident". The exclusion also covers accidents where the vehicle is so severely damaged that the airbag systems -fail- to deploy. Also not an incident. A rough sketch of this kind of decision logic follows after this list.)
- After a previous fatal collision, Tesla held a press conference to mislead the public when too much heat was coming toward their AP/FSD systems. A Tesla spokesperson implied that the vehicle had been actively warning the driver about inattentiveness and that this led to the collision. When the NHTSA concluded their investigation, they found that the vehicle -had- warned the driver about inattentiveness, -one time-, and crucially, that that single warning happened -eighteen minutes- prior to the collision.
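To show why 'airbag deployed' is a poor proxy for 'reportable crash', here's that sketch of a multi-factor restraint decision (thresholds and factors are illustrative assumptions, not any real airbag controller's logic):

    # A modern restraint controller weighs several factors, so a moderate
    # impact can fire just the seatbelt tensioners. Under an airbag-only
    # reporting rule, that crash would never be counted. Illustrative only.
    def choose_restraints(speed_mph: float, decelerating: bool, oblique_impact: bool) -> str:
        if speed_mph < 12:
            return "none"
        if speed_mph < 30 and (decelerating or oblique_impact):
            return "seatbelt tensioners only"   # not counted under an airbag-only rule
        return "tensioners + airbags"

    print(choose_restraints(25, decelerating=True, oblique_impact=False))   # tensioners only
    print(choose_restraints(45, decelerating=False, oblique_impact=False))  # tensioners + airbags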
There's more shadiness behind Tesla's autopilot/FSD history...
- Actually, blatantly faking videos of Tesla FSD.
- Elon claiming unsupervised NY to LA would be possible by end of 2016 (almost 10 years ago).
- Elon claiming all cars sold today (years ago) have required hardware to become robotaxis.
- Tesla salesmen of the past telling customers their cars were investments because they would become robotaxis that would make money.
Theranos lasted 15 years, so I'm surprised $TSLA is being called this early.
As someone who personally knows people in the industry, I'm pretty confident in saying that TSLA will never actually deliver any type of robust, safe autonomous driving system. The organization itself is just not capable of it anymore.
All that's left are people trying to game metrics and not get fired; that type of culture just can't produce it.
"called this early" - got a pretty good laugh !
The difference with Theranos is that Tesla managed to become one of the biggest companies on earth (by market cap), while its CEO became the richest person on earth.
The eventual fallout will be much, much worse.
To be fair, Tesla is also producing actual drivable cars.
(Maybe that's why it's surprising that it's declining so soon.)
It's true. That's also why people are so confused about those who call the company a Con or scam.
But in the current context, with cratering sales, that's actually a disadvantage: those 100k people and multiple factories still need to be paid and maintained. This part of the business will already be bleeding cash in Q1.
> As someone who personally knows people in the industry, I'm pretty confident in saying that TSLA will never actually deliver any type of robust, safe autonomous driving system. The organization itself is just not capable of it anymore.
Very much so. I think it's at least a decade out, and when it comes, it won't be from Tesla. Like you say, the org mindset isn't there. I'm reminded of this story from a component manufacturer about the mindset of Tesla:
"Hey, we sent you over the new firmware for the component, check it out." (The test suite for this component takes approximately 36 hours to execute.)
Three hours later:
"This is working so much better, thanks a lot!"
"???"
"Oh, we just flashed a car we have here and took it out for a drive."
"?!?"
Oof.
TL;DR: NHTSA finds, on average, Tesla autopilot disengages less than 1 second before a crash.