Elon has tricked himself into thinking the automated statistics machine is capable of human-level cognition. He thinks cars will only need eyeballs like humans have, and that things like directly measuring what's physically in front of you and comparing it to a 3D point cloud scan are useless.
Welp, he's wrong. He won't admit it. More people will have to die and/or Tesla will have to face bankruptcy before they fire him and start adding lidar (etc) back in.
Real sad, because by then they probably won't have the cash to match the insane upfront investment that Google has been plowing into this for 16 years now.
Back when they started, lidar cost a lot of money. They could not have equipped all cars with it.
The issue came when he promised every car would become a robotaxi. This means he either has to retrofit them all with lidar, or solve it with the current sensor set. It might be ego as well, but adding lidar will also expose them to class action suits.
The promise that contributed to the soaring valuation now looks like a curse that stops him from changing anything. It feels a bit poetic.
> Back when they started, lidar cost a lot of money. They could not have equipped all cars with it.
But radar and ultrasound did not cost a lot and he got rid of those too, suggesting it was more than cost that made him go vision only.
Heck, they even use vision for rain sensing instead of the cheap and more effective sensor everyone else uses (which is just some infrared LEDs and photodiodes that measure the change in total internal reflection at the outer surface of the windshield, since the critical angle changes when the windshield gets wet).
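For the curious, here's a minimal sketch of that total-internal-reflection trick; the refractive indices and beam angle are assumed typical values, not numbers from any real part:

    import math

    # Total-internal-reflection rain sensing, roughly: an IR LED shines
    # through the glass at the outer surface at ~45 degrees. Against air
    # that's beyond the critical angle, so the light reflects back to a
    # photodiode; a water film raises the critical angle, light escapes,
    # and the reflected signal drops. Indices are assumed typical values.

    N_GLASS = 1.52
    N_AIR = 1.00
    N_WATER = 1.33

    def critical_angle_deg(n_inside, n_outside):
        # Incidence angle (from the normal) above which light totally
        # reflects at the inside/outside boundary.
        return math.degrees(math.asin(n_outside / n_inside))

    beam_deg = 45.0  # assumed LED geometry
    print(f"dry: critical angle {critical_angle_deg(N_GLASS, N_AIR):.1f} deg "
          f"-> {beam_deg} deg beam reflects")
    print(f"wet: critical angle {critical_angle_deg(N_GLASS, N_WATER):.1f} deg "
          f"-> {beam_deg} deg beam escapes")

With those assumed numbers, the dry critical angle is about 41° and the wet one about 61°, so a beam aimed at ~45° flips between reflecting and escaping as the glass gets wet.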
> But radar and ultrasound did not cost a lot and he got rid of those too, suggesting it was more than cost that made him go vision only.
They did get rid of the radar at a moment when there was a shortage of parts. They had a choice: ship now without the part, or wait and ship fewer cars.
Maybe that was always the plan, and the shortage only accelerated the decision.
> It might be ego as well
Might?
I don't want to defend Tesla, but ... The problem with LIDAR is a human problem. The real issue is that LIDAR has fundamentally different limitations than human senses have, and this makes any decision based on them extremely unpredictable ... and humans react based on predictions.
A LIDAR can get near-exact distances between objects with error margins of something like 0.2%, even 100m away. It takes an absolute expert human to accurately judge distance between themselves and an object even 5 meters away. You can see this in the YouTube videos of the "Tesla beep". It used to be the case that if the Tesla autopilot judged a collision between 2 objects inevitable, it had a characteristic beep.
The result was that this beep would go off ... the humans in the car know it means a crash is imminent, but can't tell what's going on or where the crash is going to happen. Then for 2 seconds "nothing" happens, and then cars crash, usually 20-30 meters in front of the Tesla. Usually the car then stops safely. Humans report that this is somewhere between creepy and horror-like.
But worse yet is when the reverse happens. Distance judgement is the strength of LIDARs. But they have weaknesses that humans don't have: angular resolution, especially in 3D. Unlike human eyes, a LIDAR sees nothing in between its pixels, and because the 3D world is so big, even 2 meters away the distance between pixels is already in the multiple-cm range. Think of a lidar as a ball with infinitely thin laser beams coming out of it. The pixels give you the distance until each laser hits something. Because of how the beams spread, that means any object that is, in one plane, smaller than 5 centimeters is totally invisible to lidar at 2 meters distance. At 10 meters it's already up to over 25 cm. You know what object is smaller than 25 cm in one plane? A human standing up, or walking. Never mind a child. If you look at the sensor data you see them appear and disappear, exactly the way you'd expect sensor noise to act.
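A rough back-of-the-envelope for that gap-between-beams point; the 1.4° spacing is an assumption picked to roughly reproduce the 5 cm at 2 meters figure, and real units vary a lot between horizontal and vertical resolution:

    import math

    # Linear gap between adjacent beam hits as a function of range,
    # for an assumed angular spacing between beams.
    spacing_rad = math.radians(1.4)  # assumed, not from any datasheet

    for range_m in (2, 5, 10, 20):
        gap_cm = 100 * 2 * range_m * math.tan(spacing_rad / 2)
        print(f"{range_m:>2} m: ~{gap_cm:.0f} cm between adjacent beam hits")

    # Anything thinner than that gap (in the scan plane) can fall between
    # beams on a given sweep and only show up intermittently.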
You can disguise this limitation by purposefully putting your lidar at an angle, but that angle can't be very big.
The net effect of this limitation is that a LIDAR doesn't miss a small dog at 20 meters distance, but fails to see a child (or anything of roughly a pole shape, like a traffic sign) at 3 to 5 meters distance. The same for things composed of beams without a big reflective surface somewhere ... like a bike. A bike at 5 meters is totally invisible to a LIDAR. Oh, and perhaps even worse, a LIDAR just doesn't see cliffs. It doesn't see staircases going down, or that the surface you're on ends somewhere in front of you. It's strange. A LIDAR that can perfectly track every bird, even at a kilometer distance, cannot see a child at 5 meters. Or, when it comes to walking robots, LIDAR robots have a very peculiar behavior: they walk into ... an open door, rather than through it, 10% of the time. Makes perfect sense if you look at the LIDAR data they see, but very weird when you see it happen.
Worse yet is how humans respond to this. We all know this, but: how does a human react when they're in a queue and the person in front of them (or car in front of their car) stops ... and they cannot tell why it stops? We all know what follows is an immediate and very aggressive reaction. Well, you cannot predict what a lidar sees, so robots with lidars constantly get into that situation. Or, if it's a lidar robot attempting to go through a door, you predict it'll avoid running into anything. Then the robot hits the wood ... and you hit the robot ... and the person behind you hits you.
Humans and lidars don't work well together.
Wasn't the angular resolution solved by having spinning lidars?
> It takes an absolute expert human to accurately judge distance between themselves and an object even 5 meters away.
Huh? The most basic skill of any driver is the ability to see if you're on a collision course with any other vehicle. I can accurately judge this at distances of at least 50 meters, and I'm likely vastly underestimating the distance. It is very apparent when this is the case. I can't tell if the distance between us is 45 vs 51 meters, but that is information with 0 relevance to anything.
> The result was that this beep would go off ... the humans in the car know it means a crash is imminent, but can't tell what's going on or where the crash is going to happen. Then for 2 seconds "nothing" happens, and then cars crash, usually 20-30 meters in front of the Tesla. Usually the car then stops safely. Humans report that this is somewhere between creepy and horror-like.
This is a non-issue and certainly not horror-like. All you've got to do is train yourself to slow down / brake when you hear the beep. And you're trying to paint this extremely useful safety feature as something bad?
> Worse yet is how humans respond to this. We all know this, but: how does a human react when they're in a queue and the person in front of them (or car in front of their car) stops ... and they cannot tell why it stops? We all know what follows is an immediate and very aggressive reaction.
What are you trying to say here? If the car in front of me brakes I brake too. I do not need to know the reason it braked, I simply brake too, because I have to. It works out fine every time because I have to drive in such a way to be able to stop in time in case the car in front of me applies 100% braking at any time. Basic driving.
Generally, what you're describing as predicting is more accurately called assuming. Assuming that things will go how one wants them to go. I call that sort of driving optimistic: optimistically assuming that the car in front of me will continue going forward and that there is nothing behind that huge truck that's blocking my view of the upcoming intersection, so I can freely gas it through.
That mindset is of course wrong; we must drive pessimistically, assuming that any car may apply max braking at any time and that if any part of our line of sight is obstructed, the worst-case scenario is happening behind it - there is a high-speed object coming towards us on a collision course that will reveal itself from behind the obstruction at the last second. Therefore, we must slow down when coming around a line-of-sight obstruction.
> Huh? The most basic skill of any driver is the ability to see if you're on a collision course with any other vehicle. I can accurately judge this at distances of at least 50 meters, and I'm likely vastly underestimating the distance. It is very apparent when this is the case. I can't tell if the distance between us is 45 vs 51 meters, but that is information with 0 relevance to anything.
That's probably because for things moving in straight lines at constant velocity you don't need to be able to measure distance at all accurately to figure out if they are on a collision course. You just need to be able to tell if the distance is decreasing.
First, you just have to note if their angular position is changing. If it is then they are not on a collision course.
If the angular position is not changing, then you have to check if the distance is decreasing. If it is they are on a collision course. If it is not then they aren't.
If you take advantage of the fact that cars generally have distinctly different front ends and back ends and that most of the time cars are traveling forward you don't even have to estimate distance. If the angular position is not changing just note if the direction the car is pointing has its front closer to you than its back or not. If its front is closer than its back then it is on a collision course. Otherwise not.
You will need to make some adjustments due to cars having volume. A near miss for point cars could still be a collision for cars with volume, but this should be fairly easy to deal with.
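Here's a toy sketch of that constant-bearing, decreasing-range check; the function and numbers are made up for illustration, assuming straight-line, constant-velocity motion in 2D as above:

    import math

    # Constant bearing + shrinking range => collision course.
    # 2D points/velocities as (x, y); names are made up for illustration.

    def on_collision_course(p_self, v_self, p_other, v_other,
                            dt=1.0, bearing_tol_deg=0.5):
        def bearing(frm, to):
            return math.degrees(math.atan2(to[1] - frm[1], to[0] - frm[0]))

        def step(p, v, t):
            return (p[0] + v[0] * t, p[1] + v[1] * t)

        b0 = bearing(p_self, p_other)
        d0 = math.dist(p_self, p_other)
        p_self2, p_other2 = step(p_self, v_self, dt), step(p_other, v_other, dt)
        b1 = bearing(p_self2, p_other2)
        d1 = math.dist(p_self2, p_other2)

        bearing_change = abs((b1 - b0 + 180) % 360 - 180)
        return bearing_change < bearing_tol_deg and d1 < d0

    # We drive east at 10 m/s; the other car is 100 m east and 100 m south
    # of us, driving north at 10 m/s: constant bearing, shrinking range.
    print(on_collision_course((0, 0), (10, 0), (100, -100), (0, 10)))  # True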
> Huh? The most basic skill of any driver is the ability to see if you're on a collision course with any other vehicle. I can accurately judge this at distances of at least 50 meters
Can you tell me the distance between 2 objects, each 50 meters away from you, down to 1 cm? That's the superhuman part. Even judging the distance between you and an object 10 meters away down to a few millimeters is impossible for a human.
One would've thought that unproven and potentially dangerous technology like this--self-driving cars--would've required many years of testing before being allowed on public roads.
And yet here we are where the testing grounds are our public roadways and we, the public, are the guinea pigs.
Nothing new under the sun.
https://thevictoriancyclist.wordpress.com/2015/06/21/cycling...
That's an interesting piece, though I don't think the reporting is of the same ilk as the anti-Tesla reporting.
To me (keen cyclist and non-driver), it seems like the newspapers were pushing back against the freedom that cycles were giving to women. One of my favourite pro-cycling quotes is from the suffragette Susan B. Anthony (1896):
> “Let me tell you what I think of bicycling. I think it has done more to emancipate women than anything else in the world. It gives women a feeling of freedom and self-reliance. I stand and rejoice every time I see a woman ride by on a wheel…the picture of free, untrammeled womanhood.”
It would be on-brand for the newspapers to demonise cycling if it was allowing women to escape their restrictions.
Nowadays, there doesn't seem to be much negative reporting about other car-shaped EVs, just about Teslas, and that pre-dates the anti-Musk viewpoints. Also, the reporting isn't just about autonomous crashes, so it would seem to me that Teslas do have an issue with quality. (Here in the UK, I only know one person with a Tesla and he's had several minor issues with it.)
However, I do see parallels with the early anti-cycling reporting and current anti e-scooter/e-bike/e-motorbike reporting here in the UK, though I suspect that some of that is pushed by the motor lobby, although we do have a lot of illegal e-motorbikes being ridden around our cities.
[dead]
I cut Elon a tiny bit of slack because I remember ten years ago when a lot of us stupidly believed that deep learning just needed to be scaled up and self-driving was literally only 5 years away. Elon's problem was that he bet the farm on that assumption and has buried himself so deep in promises that he has seemingly no choice but to double down at every opportunity.
I never believed that; I said the opposite - these cars will never drive themselves. Elon has caused an unknown but not small number of deaths through his misleading marketing. I cut him no slack.
All of his competitors chose to embrace sensor fusion. Elon applied his "first principles" heuristic and went ahead anyway. In court filings, even the head of his self-driving initiative disagreed with his timelines.
Someone in his position cannot afford fallacious thinking like that. Or so one would think.
Ten years ago I used to tell the fanboys, "Automated driving is like making children. Trying is much more fun than succeeding." But building a golem _was_ exciting, to be honest.
A lot of Elon’s bets were protests against Google’s hegemony, which seems to have worked out - robotaxi has started and OpenAI is crushing Google search.
> .. the issue of door handles. On Teslas, they retract into the doors while the cars are being driven. The system depends on battery power.
I never will understand this horrible decision. It isn't good design if it kills people. I wonder why this isn't regulated. They could at least implement a "push to pop up" functionality that works without battery power or have a narrow slot under the handle.
Quoting Elon:
“The whole Tesla fleet operates as a network. When one car learns something, they all learn it. That is beyond what other car companies are doing.” Every Tesla driver, he explained, becomes a kind of “expert trainer for how the autopilot should work”.
Good grief.
On one hand, offline reinforcement learning using the sensor recordings and human driver inputs sounds cool, but on the other, the average Tesla driver drives like a jerkass, so maybe not the best example to learn from.
Why's it flagged?
Pretty much every post that even hints at Tesla, Musk, or SpaceX gets flagged on here.
Because a bunch of people have, for inexplicable reasons, tied meaningful parts of their self-identity to Elon and his grievance-filled, simplistic worldview. They get very, very upset when flaws in it are pointed out, and they respond by trying to make sure nobody can publicly criticise it.
I would like to be generous and declare that it is due to the polarising nature of Elon Musk that the discussions end up degenerating into poor quality comments.
However, I don't personally believe that, as even topics which are more to do with the products than the man seem to get flagged extremely quickly.
You can only worship; you don't talk ill of billionaires. No sir, that's not tolerated here.
> “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,”
This is probably core to their legal strategy. No matter how much data the cars collect, they can always safely destroy most of it, because this allows them to pretend the autonomous driving systems weren’t involved in the crash.
At this point it’s beyond me why people still trust the brand and the system. Musk really only disrupted the “fake it” part of “fake it till you make it”.
>> “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,”
> This is probably core to their legal strategy. No matter how much data the cars collect, they can always safely destroy most of it, because this allows them to pretend the autonomous driving systems weren’t involved in the crash.
Like, what judge would get fooled by that? It's dumb software engineer thinking.
Well it depends, does the judge reside in the Northern District of Texas?
I'll worry about that possible subterfuge if it actually happens a single time ever.
It's something to keep in mind but it's not an issue itself.
Then make sure you don’t read to the end of the article, where this behavior is documented. Maybe it is just a coincidence that Teslas always record data except when there’s a suspicion they caused the crash, and then the data was lost, didn’t upload, was irrelevant, or self-driving wasn’t involved.
> The YouTuber Mark Rober, a former engineer at Nasa, replicated this behaviour in an experiment on 15 March 2025. He simulated a range of hazardous situations, in which the Model Y performed significantly worse than a competing vehicle. The Tesla repeatedly ran over a crash-test dummy without braking. The video went viral, amassing more than 14m views within a few days.
> The real surprise came after the experiment. Fred Lambert, who writes for the blog Electrek, pointed out the same autopilot disengagement that the NHTSA had documented. “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,” Lambert noted.
In my previous comment I was wondering why anyone would still trust Tesla’s claims and not realistically assume the worst. It’s because plenty of people will only worry about it when it happens to them. It’s not an issue in itself until after you’re burned to a crisp in your car.
No, turning off autopilot during a crash isn't subterfuge. The subterfuge would be using that to lie about autopilot's involvement. I'm pretty sure that has never happened, and their past data has counted crashes where autopilot was in use anywhere in the vicinity of the crash, going back much more than one second.
The article cites an example of a Tesla engineer dying in a crash where witnesses (including a survivor) say he had FSD turned on. Elon claimed the witnesses were wrong.
You mean this one? "The Tesla CEO claimed von Ohain had never downloaded the latest version of the software – so it couldn’t have caused the crash."
That quote isn't playing games about whether it was engaged or not. If that's a lie, it's an equally easy lie to make whether the system disengages or stays engaged.
I'm taking issue with a very specific scenario, not claiming Tesla is honest in general.
Turning off the system just before a crash when it’s unavoidable allows them to say “the system wasn’t active when the crash occurred” and implicitly label a lot of data “irrelevant”. Which they do a lot, according to the article, without providing any of that data. That’s beyond subterfuge. They don’t just kill people; they destroy evidence of their guilt and shift the blame to the victim. How much stock does one need to own to pretend they don’t understand this?
Tesla bragged about the cars collecting a ton of data, and showed it off when it suited the company and was good for the image. But every time something was controversial, like an unexplainable accident potentially caused by the car itself, the data was somehow not transmitted, or lost, or irrelevant.
I’m not sure why you have such a hard time understanding the issue, or why you insist on what you’re “pretty sure” about when all evidence (they cite the NHTSA and experiments conducted privately by a former NASA engineer, as well as the string of coincidental data unavailability for controversial accidents) points to the contrary. The article provides evidence and discussion on all these points. Nonetheless you ignore all that and stick to your “I’m pretty sure” with fanboy abandon. It sets a really low bar for future conversations.
> Turning off the system just before a crash when it’s unavoidable allows them to say “the system wasn’t active when the crash occurred”
In theory. Maybe.
Have they ever done that?
You're citing entirely different bad behavior. That's not evidence for my question. The article has claims of stonewalling and claiming no data at all and one case where they said the software wasn't even installed, but those are not the scenario I asked about.
Calling me a Tesla fanboy for wanting evidence for the correct claim instead of a completely different claim is pretty ridiculous. I'm not being pro-Tesla here.
And the reason I said "pretty sure" is that people bring up that scenario over and over and over, but nobody has ever shown an example of it being real, despite having tons of examples of other tesla problems.
I've seen so many Teslas do so many stupid things on motorways that I do everything I can not to be behind, in front of, or beside one. Can't imagine why anyone would get inside one.
Pro tip: Keep the dangerous vehicles in front of you. That way you have control over the situation.
It’s ridiculous that Tesla can beta-test their shitty software in public and I have to be subjected to it.
I grew up in an era of lawn darts, leaded gasoline, and Oxycontin. The end user is a test subject of all products.
You should add thalidomide to your list of "end users being test subjects gone horribly wrong", especially with the FDA being mucked around with these days: https://en.wikipedia.org/wiki/Thalidomide_scandal
Lawn darts were fun though.
I think we're supposed to learn from incidents like that, not keep repeating the same old mistakes
This is true for most software nowadays
Sure, but I'm not directly affected by someone's buggy phone software. If a self driving Tesla crashes into me, that does affect me.
I find it a bit disappointing that you even need to restate this. People here should know better.
100% sure that buggy phones killed more people than Teslas.
My self-driving Tesla does better than most TikTok-brained drivers. I know because I'm watching, just like the car. Two is better than one and I enjoy it thoroughly.
Let me guess, you always write perfect code? Maybe just HTML, but it’s perfect, right?
Who cares if your social media toy has bugs in production? These are several-ton metal things going fast amongst humans, with high-energy batteries that like to explode. This can't have bugs in production.
https://en.m.wikipedia.org/wiki/Social_media%27s_role_in_the...
Pretty sure if firefighters got there in time they could break the glass, unless they meant the battery fire was so fierce they couldn’t approach the vehicle.
Window glass in most modern vehicles is laminated rather than a simple tempered pane - makes them less likely to shatter in a rollover, and thereby eject occupants, but harder to break through in an emergency.
TBH I see this more as a “firefighters aren’t being given the right tools” issue, as this is far from unique to Tesla, and the tools have existed since laminated side glass became a requirement - but don’t seem to yet be part of standard issue or training.
https://www.firehouse.com/rescue/vehicle-extrication/product...
Seems Tesla is a status symbol. I’ve taken an Uber a few times in different model Teslas. I’m sorry to say, they were all a piece of crap. Everything rattled like crazy. Super loud while driving. The door handles are not intuitive. Not sure why anyone would buy one other than as a status symbol.
[flagged]
[flagged]
"hard left"? In what universe are you? A hard left newspaper would call for making all companies worker-owned, and I don't see anything like that from the Guardian.
[flagged]
I’m sure the child who gets obliterated by a Tesla cares about the distinction.
Do they care whether it's a human or software, or what model car it is?
The thing that should not be taken seriously is Tesla cars. FSD and Autopilot are marketing terms for the same underlying piece-of-crap technology.
If "software" is the technology
Do you happen to own TSLA?
I happen to know more about the software than any journalist, it appears.
[flagged]
[flagged]
[flagged]
It’s a book excerpt
https://archive.is/jqbM2