You can't apply a principle that autopilot merely being better than 'humans' is acceptable. Humans are far from perfect drivers, so is the benchmark surely just to be better than humans rather than zero collisions? By that logic, some collisions are therefore acceptable.
I also don’t think it’s unreasonable to expect autonomous systems to make different mistakes from those humans would ordinarily make.
It’s not actually an autonomous car, so it’s sort of a moot point anyway. The human is meant to be in control, so that’s who is ultimately at fault, but the car’s system was clearly the second contributing factor, followed by the road layout.
There is definitely a long way to go on these automated features, no matter what said CEOs tell their investors.
Driving standards vary wildly, so are we simply saying that if it crashes less often than the average driver, then that's OK?
With life-and-death stuff like this you can't take a black-and-white view of "X number of people die on the roads now; it would be X minus 10% if everyone used autopilot". That isn't good enough. Nor is accepting that the systems make different mistakes; they need to make no mistakes.
I accept that crashes will happen (software can't work beyond the laws of physics), but we can't have a situation where a computer hiccup ends with someone losing their life.
"I'm terribly sorry that your son/daughter died because of a duff bit of code. Hopefully it will be of some comfort that generally the system is better than an average driver". Imagine the litigation if nothing else, although I'd prefer to focus on the human nature of whether I could forgive a mistake by an individual or that of a car company working to a budget trying to shift the next shiny shiny with tech.