Humans are expected to make mistakes: we get distracted, we think we are better than we are, we are human. It's why we have to have insurance and pass tests. Autonomous vehicles don't suffer from those human traits/weaknesses, but to allow a vehicle on the road with no human in it, it has to be infallible. The public won't accept an autonomous vehicle making an error and killing someone. If a human does it, the human pays a price: they lose their licence and likely their liberty as well. Are you going to put that vehicle in jail? Or the CEO of the company that made it? As for the "all conditions" point, we drive in all conditions, and in some conditions these vehicles struggle massively.
Would you rather have a pilot who has trained and passed the required tests flying your plane, or software that isn't infallible and struggles to know what is going on around it when it's dark, raining, foggy or snowing? And a plane doesn't even have other planes coming at it just half a metre away at 60mph, or planes pulling out of junctions, or kids running into the road. We won't see robotaxi-style vehicles for years and years, other than maybe in some states that put corporations above public safety. And the first time it goes badly wrong and someone gets killed or badly hurt, the public backlash will be massive.
I think for all intents and purposes self-driving vehicles will be infallible; the accidents will happen due to a human error somewhere in the chain. A self-driving car can understand that the conditions are affecting its ability to sense the environment, at which point it can just stop moving. There will always be accidents and deaths when you move people around, and self-driving will substantially reduce those deaths in the future. Accidents will also be a point for the engineers to learn from, though I expect AI will just make self-driving so good that this won't actually be an issue.