No, you just have very little understanding of the area you're talking about. You've already backtracked from the cars being infallible to just the software, which makes no sense, as the software is constrained by the hardware; they're not driving around carrying a datacentre-sized supercomputer, and even a huge model running on a supercomputer is still limited. You clearly have no clue about ML/AI, because if you did you wouldn't have set such an absurd criterion. The problem space here is massive, and the notion of an infallible model is impossible in the first place, even if we assume perfect sensors, cameras, etc.
The car hardware can't be infallible with our current tech because hardware breaks; that was always a given. You seem happy to roll out driverless cars just because in some small-scale tests they are statistically safer than humans, even though neither the companies nor the governments are sharing with us the times they **** up. Sorry, but I want a lot, lot more than that. And expecting the software not to be badly written and not to make mistakes is what we should all demand. I'm not as keen as some of you to just hand it all over to software. We've just seen with the Post Office how software ruined people's lives, even costing one person their life, and the chances of those responsible ending up in jail where they belong look like none to **** all. You all seem way too naive in trusting these companies to have our best interests at heart, or the politicians that enable them.