Excellent community noting
Aye. Man was much better off walking. So what stops someone hacking into these things and killing dozens of pedestrians, or companies using them to scare politicians into supporting them for less money?
This is flawed and you're not up to date on this at all; robotaxis already exist in San Francisco, and you've got the public safety aspect completely backwards: moving to self-driving removes human errors. That doesn't mean they're going to be completely infallible, just that they'll make roads much safer than they are currently.
Secondly, the other point re: putting people in jail seems like a total red herring, this isn't some new problem you need to solve.
If you live on a hill and your handbrake fails, sending your car into someone's house down the hill and causing significant damage, then you're liable, as it's your car. That no one is going to jail over it, versus if you'd been in the car and driving recklessly or drunk, is irrelevant.
Tbf those are the conditions where it's usually the plane flying, because the software can see things the human can't, as it isn't limited to vision.
The human pilot will kill you much more often than the software.
Should put a disclaimer on your post. There are users on this forum that worship the man
And we accept that risk because humans make mistakes. Planes, however, have multiple redundancies now that they are fly-by-wire. I don't think the general public would be quick to accept planes with no pilots, though.
Just look at the Boeing 737 Max crashes. IMO there should be people at Boeing and the FAA in jail now for allowing those crashes to happen.
Look at Germanwings Flight 9525 or Air France Flight 447.
The MAX disasters were due to the cost of pilot training and the corrupt attempts to avoid it.
And that vehicle looks nothing like current FSD vehicles, and it clearly still makes mistakes and is easily confused by things it's not expecting to come across. And California is one of the states I was talking about, where they allow corporations to take such risks in public places.
You are talking about a mechanical failure while no one is in the vehicle. Yes, if you are driving along and your brakes fail, and you have kept the vehicle well serviced and the failure is deemed not to be your fault, you aren't going to jail if your car ploughs into a load of people, killing them. Your insurance company is looking at a large claim, but that is it. If, however, you make a mistake and plough into a load of people, killing them, you are facing jail.

Chances are these vehicles will be owned by companies anyway, rather than individuals, if they're being used as robotaxis. If that Waymo car makes a mistake and kills someone, don't you think those responsible at the company should face the same criminal sanctions you or I would have had we been driving and made a mistake?
No, you said "other than in maybe some states that put corporations above public safety," which is totally silly as they're safer than human drivers already. Pointing out that in some instances the car did something silly isn't an argument against that, as no one claimed they were infallible. It only reinforces that your earlier claim re: needing them to be infallible is flawed.
No, I was making a broader point re: people suffering property damage, injuries or death and no one necessarily being prosecuted. In fact it's not necessarily the case that a human driver would be either; sometimes accidents happen and people are injured or die, but no one is criminally responsible. Also, just because Waymo currently operates robotaxis doesn't imply that individuals won't own self-driving cars; that's absurd as a line of reasoning.
The notion that an accident can occur involving a death or injury without criminal charges then being filed isn't a barrier to adopting technology; it's already the case that this can occur both with and without human involvement.
You can go jump in front of a DLR train and there is no driver to charge, but if you did it in front of a regular train the driver probably wouldn't be charged either. Criminal charges re: accidents require some sort of recklessness or negligence; if, say, a company had covered up a known flaw, then that might be criminal.
On the general point of what happens if a machine kills someone and no human was responsible, that's happened for centuries.
They are testing new/beta technology that is far from infallible on public highways, where the other road users haven't signed up to be part of the test. We all signed up to share the roads with other humans, as long as they meet certain criteria. So I would say states allowing companies to do this are putting corporations before public safety.
These are also small-scale tests, and they still make mistakes. In my opinion they should be infallible before being allowed at scale on public highways. They shouldn't be causing any issues on the roads that could lead to an accident.
Sorry, but your DLR example is laughable. If someone throws themselves in front of any vehicle, it isn't going to be blamed on the vehicle's driver, unless of course they fail to act when there was ample chance to do so. If someone walks into the road and you just keep on driving and run them over, you are going to be facing jail, and rightly so.
How exactly are they doing that given that these cars are safer than human drivers?
Why though? Humans already cause accidents on the road, why do these need to be infallible rather than just merely massively safer than humans?
OK, replace it with walking in front of the DLR then; the same point applies. The laughable thing here is your fixation on wanting to lock people up over accidents. If there's some recklessness or negligence then sure, but automation isn't something new, it's been around for centuries! Some idea you've got stuck in your head re: whether or not you can prosecute in some hypothetical situation isn't a barrier to the adoption of this tech.
You are assuming they will always be safe, that nothing will happen, that they won't make a mistake.
Removing humans from the equation should require infallibility.
Because they aren't human. Computers don't get distracted or believe they are better than they actually are. They shouldn't be making mistakes when it comes to human safety.
Lol don't worry, we never struggle to understand what you mean, old sport*

*the actual reason being that he hates Elon Musk and can't stand the notion of him bringing self-driving vehicles to market
No, the second was because the plane gave control to the pilots; it told them it had no measurement for airspeed. So the first crash happened because one of the pilots was clearly mentally unwell, and the second because the aircraft was giving the pilots inconsistent airspeed readings. Remove the pilots in that case and it's likely the autopilot might also have stalled the aircraft.
Well, that and a failed sensor that made the computers believe the aircraft's nose was pitched too far up, plus bad software. With no pilots those planes still crash; in fact they crash faster, as there would be no pilots trying to fight it. Yes, poor training and attempts to save money played a huge part, but that is what companies do. And that is what companies making driverless cars will do. Elon has already removed/turned off sensors on Tesla cars that made them safer.