The ongoing Elon Twitter saga: "insert demographic" melts down

There's no form of transportation in human history that has been infallible; Concorde, the Space Shuttle, the F-35... yet apparently self-driving cars need to be infallible, because reasons *

Bingo! That is because they were all controlled by humans on the whole, or, if automated, they had humans there to take control if something should happen. You now want to remove the human and replace it with "it's statistically better than your average human" and call that good enough.

* the actual reason being that he hates Elon Musk and can't stand the notion of him bringing self-driving vehicles to market

You do realise that companies other than Tesla make cars with what is called "FSD", right? Other manufacturers even do it better, with better safety features. I include them in this as well. It's not all about daddy Elon Roar
 
No, I've not assumed that at all. How on earth have you gone from me pointing out that they don't need to be infallible (and indeed aren't) to concluding that I'm assuming they'll not make a mistake? That's a complete contradiction.

For the simple reason that when a driverless vehicle kills or seriously injures someone through its mistake (and it will), the public aren't going to be happy with "but but but it's safer than humans".

*Why should it?*

That just doesn't make any sense; it's an impossible criterion anyway. These vehicles are already much safer than humans and that will only improve, but that doesn't mean there will be zero accidents, just that we can drastically reduce them.

You are far too trusting of these companies and their PR. If they are so amazing why do they fight to hide their data from the public?

California regulators do not require Waymo to disclose every incident involving erratic behavior in its fleet. In the first five months of 2023, San Francisco officials said they had logged more than 240 incidents in which a Cruise or Waymo vehicle might have created a safety hazard.[126]

In January 2022, Waymo sued the California Department of Motor Vehicles (DMV) to prevent data on driverless crashes from being released to the public. Waymo maintained that such information constituted a trade secret.[150] According to The Los Angeles Times, the "topics Waymo wants to keep hidden include how it plans to handle driverless car emergencies, what it would do if a robot taxi started driving itself where it wasn't supposed to go, and what constraints there are on the car's ability to traverse San Francisco's tunnels, tight curves and steep hills."[151]

In February 2022, Waymo was successful in preventing the release of robotaxi safety records. A Waymo spokesperson affirmed that the company would be transparent about its safety record.[152]

So they will be transparent, except they won't. And there are no more than 100 cars on the road in SF at any one time, which is a tiny sample, and they don't tell the public about incidents they might be involved in. If they are so damned good, then be fully open.

This is just magical thinking; they can't see around corners and they're not omniscient. The idea that they even could be infallible is flawed; in some situations, an "accident" may be unavoidable and they'll need to take action to minimise the damage/injuries caused. What action should have been taken in a given situation isn't necessarily so clear-cut and involves various tradeoffs.

The fact is, it's already pretty clear that taking humans out of the equation will reduce accidents and improve road safety immensely. These are coming, they're already present in SF, and your impossible "infallible" standard will never be met and doesn't need to be.

You seem to be confusing omniscient with infallible. The former means all-knowing. The latter means that, with all the information available, it makes the correct decision every single time. I don't think that is too much to ask from a machine trusted to drive vehicles at 70mph in all conditions.

edit: oh, and are you going to show me this automation you mentioned that fits the criteria I mentioned?
 
No, the second was because the plane gave control to the pilots. It told them it had no measurement for airspeed.

The argument is still covered by "the automatic car still has a driver for backup", the same way the plane has a pilot for backup, though. We have already reached that point with self-driving cars, with Mercedes being granted level 3 permissions. You have to be ready to take the wheel, but you don't actually have to have your eyes on the road.

In level 2 you have legal responsibility; in level 3 Mercedes has legal responsibility, the same as pilots and plane manufacturers with autopilots.

They've been driving for over a year now and not crashed yet. I think that, statistically, for miles driven they probably beat the average driver already.

So the plane had no measurement for airspeed, so it couldn't fly itself anyway, could it? Remove the human from the equation and what happens there? The plane crashes.

Look, I'm not saying driver aids are a bad thing; they absolutely are a huge benefit to safety. I just don't think, as some on this thread seem to, that we should just be handing the roads over to cars controlled by software.
 
How exactly are they doing that given that these cars are safer than human drivers?
These vehicles are already much safer than humans and that will only improve,
Does the data actually support this?

  • Tesla drivers have the highest accident rate. From Nov. 14, 2022, through Nov. 14, 2023, Tesla drivers had 23.54 accidents per 1,000 drivers. Ram (22.76) and Subaru (20.90) were the only other brands with more than 20.00 accidents per 1,000 drivers.

And I'm not saying it does, I'm just asking for evidence, because maybe that data above hasn't accounted for the overall number of cars on the road.

But the video I posted earlier in the thread mentioned Elon stating that Tesla FSD has done 150 million miles.

Tesla has 11.3 deaths per 100 million miles driven.

The average for human beings is 1.35 deaths per 100 million miles driven.
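
As a rough back-of-the-envelope check of the figures quoted above (taking the posted numbers at face value; whether the two rates are even comparable is exactly what's in dispute, since the human baseline covers all roads and conditions while FSD miles skew towards easier driving), the comparison works out roughly like this:

```python
# Back-of-the-envelope comparison using only the figures quoted in this post.
# The numbers themselves are as posted, not independently verified.
fsd_miles = 150e6                  # miles Elon is quoted as claiming for FSD
tesla_deaths_per_100m = 11.3       # quoted Tesla rate (deaths per 100 million miles)
human_deaths_per_100m = 1.35       # quoted average human rate

implied_deaths = tesla_deaths_per_100m * (fsd_miles / 100e6)
ratio = tesla_deaths_per_100m / human_deaths_per_100m

print(f"Implied deaths over 150M FSD miles: {implied_deaths:.0f}")  # ~17
print(f"Quoted Tesla rate vs human average: {ratio:.1f}x")          # ~8.4x
```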
 
Does the data actually support this?



And I'm not saying it does, I'm just asking for evidence, because maybe that data above hasn't accounted for the overall number of cars on the road.

But the video I posted earlier in the thread mentioned Elon stating that Tesla FSD has done 150 million miles.

Tesla has 11.3 deaths per 100 million miles driven.

The average for human beings is 1.35 deaths per 100 million miles driven.
And to add, don't Tesla disable Autopilot one second before a crash in an attempt to be able to say Autopilot wasn't active at the time of the accident?
 
I find the whole thing interesting. If we move to fully automatic driving, whereby the driver does not even have to be at the wheel, will any accidents and injuries be the sole fault of the manufacturer of the car that caused the incident? I cannot see how liability could go to anyone else in that situation.

The alternative is to never have fully autonomous driving where you can sit and relax, because you always have to be watching the road/holding the wheel anyway, as you will be liable for any accidents. If that is the case, I fail to see much advantage in it all.
 
The problem with FSD systems is that they struggle with pedestrians. Motorway driving is a solved problem but it’s incredibly difficult for an AI to read human behaviour as well as humans can. We can predict when some nutter is going to run across the road or open a car door in a way that’s terrifically hard to match with software.
 
The problem with FSD systems is that they struggle with pedestrians. Motorway driving is a solved problem but it’s incredibly difficult for an AI to read human behaviour as well as humans can. We can predict when some nutter is going to run across the road or open a car door in a way that’s terrifically hard to match with software.

Indeed. There are also the interesting moral decisions that they may occasionally have to make. It was raised on Top Gear by Clarkson ages ago. For example, what if swerving to avoid a deer could potentially mean the car hits a child/human etc.? I mean, occurrences where a moral choice has to be made are fortunately rare, but not unheard of. How would a computer know what to do in terms of choosing a best-case scenario (i.e. just hit the deer and damage the car instead of swerving and hitting a human child)?

There will always be exceptional circumstances that these vehicles will have to deal with, and I think those will cause a legal minefield.
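
To make that question concrete, here is a minimal, purely hypothetical sketch of how such a choice tends to be encoded: the car doesn't "know" the right answer, someone has to pick cost weights for each outcome, and the argument is really about who picks those weights and what they are. Every name and number below is invented for illustration, not taken from any real manufacturer's system.

```python
# Hypothetical sketch: rank candidate manoeuvres by weighted expected harm.
# The options, probabilities and weights are all invented for illustration.
candidate_manoeuvres = {
    "brake_straight_and_hit_deer": {"human_injury_risk": 0.01, "animal_harm": 0.9, "vehicle_damage": 0.6},
    "swerve_towards_pavement":     {"human_injury_risk": 0.40, "animal_harm": 0.0, "vehicle_damage": 0.3},
}

# The moral judgement lives entirely in these weights; choosing them is the hard part.
weights = {"human_injury_risk": 1000.0, "animal_harm": 1.0, "vehicle_damage": 0.1}

def expected_cost(outcome):
    return sum(weights[key] * value for key, value in outcome.items())

best = min(candidate_manoeuvres, key=lambda name: expected_cost(candidate_manoeuvres[name]))
print(best)  # -> "brake_straight_and_hit_deer": far cheaper than risking the child
```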
 
The problem with FSD systems is that they struggle with pedestrians. Motorway driving is a solved problem but it’s incredibly difficult for an AI to read human behaviour as well as humans can. We can predict when some nutter is going to run across the road or open a car door in a way that’s terrifically hard to match with software.

Euro NCAP do test for this though.

 
Indeed. There are also the interesting moral decisions that they may occasionally have to make. It was raised on Top Gear by Clarkson ages ago. For example, what if swerving to avoid a deer could potentially mean the car hits a child/human etc.? I mean, occurrences where a moral choice has to be made are fortunately rare, but not unheard of. How would a computer know what to do in terms of choosing a best-case scenario (i.e. just hit the deer and damage the car instead of swerving and hitting a human child)?

There will always be exceptional circumstances that these vehicles will have to deal with, and I think those will cause a legal minefield.

This is one of the complexities.
What happens if the crash is caused because the "driver" had not maintained the vehicle properly: slick tyres, brake pads down to the metal, etc.?
So the manufacturer would rightly say the car was not maintained correctly and it was not their fault.
 
From an aviation point of view, autopilot in planes and autopilot in cars are entirely different things - you can't even begin to compare the two of them.
 
People seem to be arguing that we can't have self-driving vehicles because there are wildly exceptional circumstances where it might go wrong, even though a human can have a heart attack on the motorway and cause an 18-car pile-up, or can be drunk, on drugs, reckless, homicidal, on their phones, they fall asleep, etc. Computers will be far safer and many lives will be saved over the next 10-50 years worldwide; this is just a fact. If you don't understand that, then the issue is you vastly overestimate how good humans are at driving; humans make mistakes while driving all of the time.
 
People seem to be arguing that we can't have self-driving vehicles because there are wildly exceptional circumstances where it might go wrong, even though a human can have a heart attack on the motorway and cause an 18-car pile-up, or can be drunk, on drugs, reckless, homicidal, on their phones, they fall asleep, etc. Computers will be far safer and many lives will be saved over the next 10-50 years worldwide; this is just a fact. If you don't understand that, then the issue is you vastly overestimate how good humans are at driving; humans make mistakes while driving all of the time.
"Wildly exceptional"

So far most of the autonomous car makers have had repeated incidents that show they can't reliably cope with even normal stuff that human drivers deal with hundreds of times an hour.

Teslas driving towards concrete pillars, Teslas deciding that the best thing to do when they can't work out if there is something in the road is to drive into it, Teslas driving under truck trailers, Teslas driving into the back of aircraft, Waymos stopping on top of the person they've just run over because they stopped when they detected something and then started going forward again without it being clear... (all real-world incidents that made the news).

At the moment the average self-driving car is worse in many normal situations than a drunk 12-year-old who has just stolen his first Ford Fiesta, let alone meeting the competency that is required before you're legally allowed to drive on your own as a human in most of the world.
 
I find the whole thing interesting. If we move to fully automatic driving, whereby the driver does not even have to be at the wheel, will any accidents and injuries be the sole fault of the manufacturer of the car that caused the incident? I cannot see how liability could go to anyone else in that situation.

The alternative is to never have fully autonomous driving where you can sit and relax, because you always have to be watching the road/holding the wheel anyway, as you will be liable for any accidents. If that is the case, I fail to see much advantage in it all.

This is the bit dowie just doesn't seem to get. The driver in a car is legally responsible for what that car does while they are in control. You don't get to blame Ford when your Mustang mounts the pavement because you've made a mistake. With driverless vehicles the liability will have to be with the manufacturer.
 
Indeed. There are also the interesting moral decisions that they may occasionally have to make. It was raised on Top Gear by Clarkson ages ago. For example, what if swerving to avoid a deer could potentially mean the car hits a child/human etc.? I mean, occurrences where a moral choice has to be made are fortunately rare, but not unheard of. How would a computer know what to do in terms of choosing a best-case scenario (i.e. just hit the deer and damage the car instead of swerving and hitting a human child)?

There will always be exceptional circumstances that these vehicles will have to deal with, and I think those will cause a legal minefield.

Not even a deer, what about a cat vs a child? No human is going to choose to hit the child; software doesn't care about that child. You are right, it's going to be a legal minefield, and companies like Tesla, Alphabet etc. will be desperate to be given legal immunity, and I wouldn't put it past these politicians to give it to them.
 
"Wildly exceptional"

So far most of the autonomous car makers have had repeated incidents that show they can't reliably cope with even normal stuff that human drivers deal with hundreds of times an hour.

Teslas driving towards concrete pillars, Teslas deciding that the best thing to do when they can't work out if there is something in the road is to drive into it, Teslas driving under truck trailers, Teslas driving into the back of aircraft, Waymos stopping on top of the person they've just run over because they stopped when they detected something and then started going forward again without it being clear... (all real-world incidents that made the news).

At the moment the average self-driving car is worse in many normal situations than a drunk 12-year-old who has just stolen his first Ford Fiesta, let alone meeting the competency that is required before you're legally allowed to drive on your own as a human in most of the world.

I specifically said in the next 10-50 years though; I didn't say they were currently perfect. I think we can probably do a better job of designing roads for them. At the minute roads are designed for humans, but we can improve and implement signs and systems in a way that will help autonomous vehicles - think traffic lights transmitting their signal over Bluetooth to cars around them rather than a green/amber/red light, think cars sharing live traffic data with each other. Plus every incident that involves an autonomous vehicle going wrong just means we have data on how to ensure it doesn't happen again. When a human is texting on the motorway and drives into another vehicle, another dummy will do that on the same stretch of road in 6 months' time.
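
Just to illustrate the "lights broadcast their state" idea rather than describe any real system (real deployments use standards like SPaT messages over DSRC or C-V2X rather than Bluetooth, and the message format and logic below are a made-up toy), it could look something like this:

```python
# Toy sketch of a connected traffic light broadcasting its state to nearby cars.
# The message format and vehicle-side logic are invented for illustration only.
import json
import time

def traffic_light_broadcast(intersection_id, state, seconds_to_change):
    """Build the message a connected signal might broadcast to approaching vehicles."""
    return json.dumps({
        "intersection": intersection_id,
        "state": state,                      # "red" / "amber" / "green"
        "seconds_to_change": seconds_to_change,
        "timestamp": time.time(),
    })

def plan_approach(message, distance_m, speed_mps):
    """Toy vehicle-side decision: start coasting early if the light will still be red on arrival."""
    info = json.loads(message)
    time_to_light = distance_m / max(speed_mps, 0.1)
    if info["state"] == "red" and time_to_light < info["seconds_to_change"]:
        return "coast"
    return "maintain"

msg = traffic_light_broadcast("junction-42", "red", seconds_to_change=12)
print(plan_approach(msg, distance_m=150, speed_mps=13.4))  # -> "coast"
```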
 
This is the bit dowie just doesn't seem to get. The driver in a car is legally responsible for what that car does while they are in control. You don't get to blame Ford when your Mustang mounts the pavement because you've made a mistake. With driverless vehicles the liability will have to be with the manufacturer.

Which is never going to happen. Liability will be with the 'driver' of the driverless car, and said 'driver' will have to watch the driverless car like a hawk to make sure it isn't about to get them into trouble.

Sounds very relaxing ;)
 
I suspect large stretches of pavement alongside roads would require barriers. Not to protect the humans, but rather to stop roads being clogged as automated cars react to protect them. Eventually human nature will mean people step out expecting cars to stop, which they will. At present, fear of driver reaction keeps pedestrians honest in this regard. The big accidents will happen when manually driven cars are in the minority, as people's behaviour will become trained to the reactions of automation and they will misjudge human behaviour and reactions.
 
Which is never going to happen. Liability will be with the 'driver' of the driverless car, and said 'driver' will have to watch the driverless car like a hawk to make sure it isn't about to get them into trouble.

Sounds very relaxing ;)

In these future driverless cars there are likely no controls to take control of. You just sit there and tell it where you want to go.
 