The ongoing Elon Twitter saga: "insert demographic" melts down

Permabanned
Joined
13 Sep 2023
Posts
175
Location
London
[attached image]


Excellent community noting.
 
Soldato
Joined
3 Oct 2007
Posts
12,104
Location
London, UK
This is flawed and you're not up to date on this at all; robotaxis already exist in San Francisco, and you've got the public safety aspect completely backwards: moving to self-driving removes human errors. It doesn't mean they are going to be completely infallible though, just that they'll make roads much safer than they are currently:


This same company

And that vehicle looks nothing like current FSD vehicles and it clearly still makes mistakes and is easily confused by things it's not expecting to come across. And California is one of the states I was talking about where they allow corporations to take such risks in public places.

Secondly, the other point re: putting people in jail seems like a total red herring, this isn't some new problem you need to solve.

If you live on a hill and your handbrake fails, sending your car into someone's house down the hill and causing significant damage then you're liable as it's your car. That no one is going to jail over it vs if you'd perhaps been in the car and driving recklessly or drunk is irrelevant.

You are talking about a mechanical failure while no one is in the vehicle. Yes if you are driving along and your brakes fail, you have kept the vehicle well serviced and the failure is deemed not to be your fault, you aren't going to jail if your car ploughs into a load of people killing them. Your insurance company are looking at a large claim but that is it. If however you make a mistake and plough into a load of people killing them, you are facing jail. Chances are these vehicles will be owned by companies anyway rather than individuals if being used as robotaxis. If that Waymo car makes a mistake and kills someone, you don't think those responsible at the company should face the same criminal sanctions you or I would had we been driving and made a mistake?
 
Soldato
Joined
3 Oct 2007
Posts
12,104
Location
London, UK
Tbf those are the conditions where it's usually the plane flying, because the software can see things the human can't as it isn't limited to vision.


The human pilot will kill you much more often than the software.

And we accept that risk because humans make mistakes. Planes, however, have multiple redundancies now that they are fly-by-wire. I don't think the general public would be quick to accept planes with no pilots though.

Just look at the Boeing 737 Max crashes. IMO there should be people at Boeing and the FAA in jail now for allowing those crashes to happen.
 
Last edited:
Caporegime
Joined
30 Jun 2007
Posts
68,784
Location
Wales
And we accept that risk because humans make mistakes. Planes, however, have multiple redundancies now that they are fly-by-wire. I don't think the general public would be quick to accept planes with no pilots though.

Just look at the Boeing 737 Max crashes. IMO there should be people at Boeing and the FAA in jail now for allowing those crashes to happen.

Look at Germanwings Flight 9525 or Air France 447.

The MAX disasters were due to the cost of pilot training and the corrupt attempts to avoid it.
 
Soldato
Joined
3 Oct 2007
Posts
12,104
Location
London, UK
Look at Germanwings Flight 9525 or Air France 447.

The MAX disasters were due to the cost of pilot training and the corrupt attempts to avoid it.

So the first crash happened because one of the pilots was clearly mentally unwell. The second because the aircraft was giving the pilots inconsistent airspeed readings. Remove the pilots in that case and it's likely the autopilot might also stall the aircraft.

Well, that and a failed sensor that made the computers believe the aircraft's nose was too high, plus bad software. With no pilots those planes still crash, in fact crash faster as there would be no pilots trying to fight it. Yes, poor training and attempts to save money played a huge part, but that is what companies do. And that is what companies making driverless cars will do. Elon has already removed/turned off sensors from Tesla cars that made them safer.
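To make the "multiple redundancies" point concrete, here's a rough sketch (Python, with a made-up tolerance and toy numbers, nothing like real avionics code) of a 2-out-of-3 cross-check: if two airspeed sources agree, the outlier is ignored; if no pair agrees, the value is declared unreliable and control is handed back to the humans rather than guessed at, which is roughly what the AF447 computers did when the pitot tubes iced over.

# Toy 2-out-of-3 sensor voter. Tolerance and readings are made up for
# illustration; real fly-by-wire monitoring is far more involved.
def vote_airspeed(a: float, b: float, c: float, tol: float = 5.0):
    """Return (value, ok). ok is False when no two sensors agree."""
    readings = [a, b, c]
    for i in range(3):
        for j in range(i + 1, 3):
            if abs(readings[i] - readings[j]) <= tol:
                # Two sources agree: use their average, ignore the outlier.
                return (readings[i] + readings[j]) / 2, True
    # No pair agrees: flag the data as unreliable instead of guessing.
    return None, False

print(vote_airspeed(250.0, 251.0, 180.0))  # -> (250.5, True): outlier rejected
print(vote_airspeed(250.0, 120.0, 180.0))  # -> (None, False): hand back control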
 
Last edited:
Caporegime
Joined
29 Jan 2008
Posts
58,920
And that vehicle looks nothing like current FSD vehicles and it clearly still makes mistakes and is easily confused by things it's not expecting to come across. And California is one of the states I was talking about where they allow corporations to take such risks in public places.

No, you said "other than in maybe some states that put corporations above public safety." which is totally silly as they're safer than human drivers already. Pointing out that in some instance the car did something silly isn't an argument against that as no one claimed they were infallible. It only reinforces that your earlier claim re: needing them to be infallible is flawed.

You are talking about a mechanical failure while no one is in the vehicle. Yes if you are driving along and your brakes fail, you have kept the vehicle well serviced and the failure is deemed not to be your fault, you aren't going to jail if your car ploughs into a load of people killing them. Your insurance company are looking at a large claim but that is it. If however you make a mistake and plough into a load of people killing them, you are facing jail. Chances are these vehicles will be owned by companies anyway rather than individuals if being used as robotaxis. If that Waymo car makes a mistake and kills someone, you don't think those responsible at the company should face the same criminal sanctions you or I would had we been driving and made a mistake?

No, I was making a broader point re: people suffering property damage, injuries or death and no one necessarily being prosecuted. In fact it's not necessarily the case that a human driver would either, sometimes accidents happen and people are injured or die but no one is criminally responsible. Also just because Waymo currently operates robo taxis doesn't imply that individuals won't own self-driving cars, that's absurd as a line of reasoning.

The notion that an accident can occur involving a death or injury without criminal charges then being filed isn't a barrier to adopting technology, it's already the case that that can occur both with and without human involvement.

You can go jump in front of a DLR train and there is no driver to charge, but if you did it in front of a real train a driver probably wouldn't be charged either. Criminal charges re: accidents require some sort of recklessness or negligence, if say a company had covered up a known flaw then that might be criminal.

On the general point of what happens if a machine kills someone and no human was responsible, that's happened for centuries.
 
Soldato
Joined
3 Oct 2007
Posts
12,104
Location
London, UK
No, you said "other than in maybe some states that put corporations above public safety." which is totally silly as they're safer than human drivers already. Pointing out that in some instance the car did something silly isn't an argument against that as no one claimed they were infallible. It only reinforces that your earlier claim re: needing them to be infallible is flawed.

They are testing new/beta technology that is far from infallible on public highways where the other road users haven't signed up to be part of the test. We all signed up to share the roads with other humans as long as they meet certain criteria. So I would say states allowing companies to do this are putting corporations before public safety. These are also small-scale tests and they still make mistakes. In my opinion they should be infallible before being allowed at scale on public highways. They shouldn't be causing any issues on the roads that could lead to an accident.

No, I was making a broader point re: people suffering property damage, injuries or death and no one necessarily being prosecuted. In fact it's not necessarily the case that a human driver would either, sometimes accidents happen and people are injured or die but no one is criminally responsible. Also just because Waymo currently operates robo taxis doesn't imply that individuals won't own self-driving cars, that's absurd as a line of reasoning.

The notion that an accident can occur involving a death or injury without criminal charges then being filed isn't a barrier to adopting technology, it's already the case that that can occur both with and without human involvement.

Unless there is a factor such as adverse weather (even then you are expected to drive accordingly) or a mechanical failure, most accidents happen because someone messed up. And if the police believe you are at fault you will likely be facing some form of sanction especially if serious injury or death are involved.

You can go jump in front of a DLR train and there is no driver to charge, but if you did it in front of a real train a driver probably wouldn't be charged either. Criminal charges re: accidents require some sort of recklessness or negligence, if say a company had covered up a known flaw then that might be criminal.

Sorry but your DLR example is laughable. If someone throws themselves in front of any vehicle it isn't going to be blamed on the vehicle driver, unless they of course fail to act when there was ample chance to do so. If someone walks into the road and you just keep on driving and run them over you are going to be facing jail and rightly so.
On the general point of what happens if a machine kills someone and no human was responsible, that's happened for centuries.

Happened for centuries? Laws have changed in the last couple of decades. What companies might once have got away with, through their actions or lack of them, now comes with serious penalties including corporate manslaughter. So a company testing beta systems on public highways should face the same sanctions as a member of the public. If their car kills or injures someone because it's not good enough to do its job day in, day out, then those responsible for putting it on our roads should be treated as if they were driving it. If they faced such jeopardy then maybe they'd take fewer risks where others sometimes pay the price.
 
Caporegime
Joined
29 Jan 2008
Posts
58,920
They are testing new/beta technology that is far from infallible on public highways where the other road users haven't signed up to be part of the test. We all signed up to share the roads with other humans as long as they meet certain criteria. So I would say states allowing companies to do this are putting corporations before public safety.

How exactly are they doing that given that these cars are safer than human drivers?

These are also small-scale tests and they still make mistakes. In my opinion they should be infallible before being allowed at scale on public highways. They shouldn't be causing any issues on the roads that could lead to an accident.

Why though? Humans already cause accidents on the road, why do these need to be infallible rather than just merely massively safer than humans?
Sorry but your DLR example is laughable. If someone throws themselves in front of any vehicle it isn't going to be blamed on the vehicle driver, unless they of course fail to act when there was ample chance to do so. If someone walks into the road and you just keep on driving and run them over you are going to be facing jail and rightly so.

OK replace walks in front of the DLR then, the same point applies. The laughable thing here is your fixation on wanting to lock people up over accidents, if there's some recklessness or negligence then sure but automation isn't something new, it's been around for centuries! Some idea you've got stuck in your head re: whether or not you can prosecute in some hypothetical situation isn't a barrier for the adoption of this tech.
 
Last edited:
Soldato
Joined
3 Oct 2007
Posts
12,104
Location
London, UK
How exactly are they doing that given that these cars are safer than human drivers?

You are assuming they will always be safe, nothing will happen, it won't make a mistake. They haven't even scaled it up to any level that gives a good sample over time yet. And no, just being safer than humans should not be the standard. Removing humans from the equation should require infallibility.

Why though? Humans already cause accidents on the road, why do these need to be infallible rather than just merely massively safer than humans?

Because they aren't human. Computers don't get distracted or believe they are better than they actually are. They shouldn't be making mistakes when it comes to human safety.

OK replace walks in front of the DLR then, the same point applies. The laughable thing here is your fixation on wanting to lock people up over accidents, if there's some recklessness or negligence then sure but automation isn't something new, it's been around for centuries! Some idea you've got stuck in your head re: whether or not you can prosecute in some hypothetical situation isn't a barrier for the adoption of this tech.

There is a reason why the police changed from "accident" to "incident". Accident implies there is no fault, whereas so many "accidents" have fault, be it just being distracted, driving too fast and taking risks you shouldn't, etc. If you look at most road accidents you can apportion blame to nearly every one of them unless it was mechanical failure, and most aren't. And if death or serious injury is involved then yes there will be consequences, just as there should be. I'm not sure what makes you think you can kill someone and there not be consequences just because you didn't mean to.

Please give us some examples of automation that compare with putting a computer in charge of moving vehicles on public roads with other road users and all the thousands of different variables that can happen and with zero human supervision.
 
Last edited:
Caporegime
Joined
29 Jan 2008
Posts
58,920
You are assuming they will always be safe, nothing will happen, it won't make a mistake.

No, I've not assumed that at all. How on earth have you gone from me pointing out that they don't need to be infallible (and indeed aren't) to concluding that I'm assuming they'll not make a mistake? That's a complete contradiction.

Removing humans from the equation should require infallibility.

*Why should it?*

That just doesn't make any sense, and it's an impossible criterion anyway. These vehicles are already much safer than humans and that will only improve, but that doesn't mean there will be zero accidents, just that we can drastically reduce them.

Because they aren't human. Computers don't get distracted or believe they are better than they actually are. They shouldn't be making mistakes when it comes to human safety.

This is just magical thinking; they can't see around corners and they're not omniscient. The idea that they even could be infallible is flawed; in some situations, an "accident" may be unavoidable and they'll need to take action to minimise the damage/injuries caused. What action should have been taken in some given situation isn't necessarily so clear cut and involves various tradeoffs.

Fact is, it's already pretty clear that taking humans out of the equation will reduce accidents and improve road safety immensely. These are coming, they're already present in SF, and your impossible "infallible" standard will never be met and doesn't need to be.
 
Soldato
Joined
10 May 2012
Posts
10,062
Location
Leeds
There's no form of transportation in human history that has been infallible; Concorde, the Space Shuttle, the F-35... yet apparently self-driving cars need to be infallible, because reasons *

* the actual reason being that he hates Elon Musk and can't stand the notion of him bringing self driving vehicles to market
 
Caporegime
Joined
30 Jun 2007
Posts
68,784
Location
Wales
So the first crash happened because one of the pilots was clearly mentally unwell. The second because the aircraft was giving the pilots inconsistent airspeed readings. Remove the pilots in that case and it's likely the autopilot might also stall the aircraft.

Well, that and a failed sensor that made the computers believe the aircraft's nose was too high, plus bad software. With no pilots those planes still crash, in fact crash faster as there would be no pilots trying to fight it. Yes, poor training and attempts to save money played a huge part, but that is what companies do. And that is what companies making driverless cars will do. Elon has already removed/turned off sensors from Tesla cars that made them safer.
No, the second was because the plane gave control to the pilots. It told them it had no reliable measurement of airspeed.

The argument is still covered by "the automatic car still has a driver as backup", the same as the plane has a pilot as backup though. We have already reached that point with self-driving cars, with Mercedes being granted level 3 permissions. You have to be ready to take the wheel but you don't actually have to have your eyes on the road.

In level 2 you have legal responsibility; in level 3, Mercedes has legal responsibility, the same as pilots and plane manufacturers with autopilots.
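For reference, a rough summary of the automation levels being talked about here; the "responsible" column follows the claim above about level 2 vs level 3 (and Mercedes accepting liability for its level 3 system), not settled law in every jurisdiction, so treat it as a sketch.

# Simplified gloss of the SAE driving-automation levels. Who actually
# carries legal responsibility varies by jurisdiction and approval;
# the mapping below just mirrors the level 2 vs level 3 claim above.
SAE_LEVELS = {
    0: ("No automation",          "driver"),
    1: ("Driver assistance",      "driver"),
    2: ("Partial automation",     "driver"),        # hands on, eyes on
    3: ("Conditional automation", "manufacturer"),  # driver must be ready to take over
    4: ("High automation",        "manufacturer"),
    5: ("Full automation",        "manufacturer"),
}

for level, (name, responsible) in SAE_LEVELS.items():
    print(f"Level {level}: {name:<23} -> {responsible}")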

They've been driving for over a year now and not crashed yet; I think statistically, for miles driven, they probably beat the average driver already.
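"Statistically for miles driven" is just a rate comparison, so here's the back-of-envelope arithmetic with placeholder numbers (not real figures for either the robotaxis or human drivers); the claim stands or falls on incidents per mile, not raw counts.

# Back-of-envelope rate comparison. The incident and mileage figures
# below are placeholders; substitute published numbers to do this properly.
def per_million_miles(incidents: int, miles: float) -> float:
    return incidents / (miles / 1_000_000)

human_rate  = per_million_miles(incidents=4, miles=1_000_000)  # hypothetical
system_rate = per_million_miles(incidents=1, miles=2_000_000)  # hypothetical

print(f"human driver: {human_rate:.2f} incidents per million miles")
print(f"automated:    {system_rate:.2f} incidents per million miles")
print(f"ratio:        {system_rate / human_rate:.2f}x")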
 
Last edited: