I think the software should be infallible or as damned close to it as is possible. I don't think that is too much to ask. These companies want us to place life and death trust in their software, I think they should have to produce that standard of product.
Close to it is fine; infallibility, on the other hand, certainly is too much to ask, for several reasons:
Software bugs: speak to anyone who has worked on a large software project that gets regularly updated. Granted, this is a safety-critical domain, so while 'no bugs' isn't realistic we'd hope any bugs that do slip through wouldn't cause serious issues... still, there is *some* possibility that one will.
But even if we assume perfectly stable software with no bugs, we still have issues:
Ethical dilemmas/subjectivity: A few years back there were articles about trolley problem-type dilemmas. That somewhat misses how these vehicles are actually developed, but even though the AI isn't being built as a utilitarian or a deontologist or whatever, the notion that there's always a logically "correct" course of action is flawed. In plenty of situations people can argue over the ethics of a decision; there isn't necessarily a "right" answer.
Explainability: Following on from the previous point, deep neural nets are to some extent black boxes. You don't necessarily know exactly why a given course of action was taken, let alone whether a third party would agree it was the correct one in the first place.
Uncertainty: There are probabilities at play under the hood; this isn't a load of handwritten if statements. The "correct" decision is the one that minimises the loss for the model. It's never going to be 100% infallible, that's just not how it works. It's taking what it thinks is the best decision given the uncertainty, so there's always going to be the potential for more training data, a bigger or improved model and a further reduction in loss.
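To make that concrete, here's a deliberately toy sketch (nothing like a real driving stack; the actions and numbers are made up) of what "best decision under uncertainty" looks like: the model assigns probabilities to candidate actions and picks the highest, and that probability is never 1.0.

```python
import numpy as np

# Toy illustration only (assumed/made-up values, not any real self-driving stack):
# a model scores candidate actions and picks the one it thinks is best.
# The scores are probabilities, not certainties.
def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

actions = ["brake hard", "brake gently", "swerve left", "maintain speed"]
logits = np.array([2.1, 1.3, -0.5, -2.0])  # made-up model outputs

probs = softmax(logits)
choice = actions[int(np.argmax(probs))]

for a, p in zip(actions, probs):
    print(f"{a}: {p:.2f}")
print("chosen:", choice)  # highest probability is ~0.65, well short of 1.0
```

More data or a better model can push that top probability up, but it's always an estimate, never a guarantee.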
Hardware limitations:
You want to ignore hardware and sensors/cameras - additional or better sensors may make for a better decision, but you're putting that aside and just talking about the software. Not so fast; even if we ignore sensors etc., the problem is the software isn't some magical thing that exists in isolation. It's also limited by its access to computing power!
Can you run the very latest AAA games on a 10-year-old PC at 4K and 120 FPS?
There are two obvious problems here: the speed at which a given model can run inference, and the limit on the size of model you can actually fit on the hardware.
A car developed 10 years later might well not only apply the brakes a fraction of a second quicker, but also run a much, much bigger model with many more parameters that makes better decisions... and 10 years after that an even bigger and better model will be available, and so on. How can any of those models ever be infallible when there is always room for improvement in such a huge problem space?
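Rough numbers for the inference-speed half of that (purely illustrative; the latencies are assumptions, not figures from any real vehicle):

```python
# Back-of-envelope sketch (assumed numbers): how far a car travels while
# the model is still thinking, at two different inference latencies.
speed_kmh = 110
speed_ms = speed_kmh / 3.6          # ~30.6 metres per second

for latency_ms in (100, 20):        # hypothetical older vs newer hardware
    distance = speed_ms * latency_ms / 1000
    print(f"{latency_ms} ms of inference latency = {distance:.1f} m travelled")

# 100 ms -> ~3.1 m, 20 ms -> ~0.6 m: faster hardware alone buys a couple of
# metres of stopping distance, before you even consider a bigger/better model.
```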
Infallibility isn't possible; there are inherent limitations to software: we don't have infinite training data, hardware with infinite storage, or inference done faster than the speed of light.
All we can do is develop cars that are significantly safer than human drivers, get a big reduction in road deaths, and keep improving.