Autonomous Vehicles

Wow, the logic in this one is non-existent. You really should read up on critical thinking and try to apply it.
Have you got any clue about averages? Not everyone crashes, but autonomous cars still crash less than humans.
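
To put some rough numbers on the averages point, here is a minimal sketch in Python. Every figure in it (crash probabilities, population size) is a made-up, purely illustrative assumption, not real crash data; the only point is that most individual drivers never crash in a given year, and yet the averages can still differ substantially between groups.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical, illustrative figures only: chance of a crash per driver per year.
HUMAN_CRASH_PROB = 0.05   # assumed for human drivers
AV_CRASH_PROB = 0.012     # assumed for autonomous cars
DRIVERS = 100_000         # simulated population size


def simulate(crash_prob: float, n: int) -> list[int]:
    """Simplified model: each driver crashes at most once a year, with probability crash_prob."""
    return [1 if random.random() < crash_prob else 0 for _ in range(n)]


humans = simulate(HUMAN_CRASH_PROB, DRIVERS)
avs = simulate(AV_CRASH_PROB, DRIVERS)

print(f"Humans with no crash this year: {humans.count(0) / DRIVERS:.1%}")  # ~95%
print(f"AVs with no crash this year:    {avs.count(0) / DRIVERS:.1%}")     # ~99%
print(f"Average crashes per human:      {sum(humans) / DRIVERS:.3f}")      # ~0.050
print(f"Average crashes per AV:         {sum(avs) / DRIVERS:.3f}")         # ~0.012
```

Most drivers in both simulated groups never crash in a given year, yet the averages still differ, which is why fleet-wide statistics rather than individual anecdotes are the relevant comparison.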
 

You seem to have missed the points somewhat.
 
Nope. Maybe we should ban people. That seems to be the logical conclusion a lot of AI systems come to.
Yes, the logical conclusion is that AI should replace human drivers as extensively as possible, if public safety is a significant aim.

The barriers against this are financial more than technical.

The technology is good enough to improve safety significantly. It isn't cheap enough to allow universal adoption yet, though.
 
The other main barrier is legislation, one which the UK government want to address by 2020 at the latest.
 

Yes, but one serious accident could change opinion on that. Some of the stuff getting moved around the UK is really dangerous to life. We need a level of responsibility and control that I don't think a machine can ever offer.
 
Why should one serious accident for AI condemn it, when humans cause many serious accidents every year?

AI can (factually, not just anecdotally with pointless examples) provide a measurably better degree of control than a human can, more of the time.

Why do you not view this as a potential improvement? What is it that you think prevents AI from offering improved performance versus a human, considering that so far they have demonstrated that they are significantly safer?
 
That's another way of saying you can't be heard because these statistics prove it.

It's a way of saying: contribute something that actually has a bit of substance, not just 'my opinion is this and I don't care about anything that indicates it to be wrong' :p

All you've done is repeatedly say 'I don't think machines can do this, I don't think AI can do that', but you've not explained why. Everything you criticise AI for applies equally or more so to humans.
 
I do feel we should have a strike rule for people that have too many accidents or insurance claims. Maybe those people should be forced into small self-driving cars, but that is another discussion.

So what you’re saying is, human drivers that cause too many accidents should be put in vehicles under AI control in order to reduce accidents...yet you’ve been arguing the other way about AI this whole time :confused:

Well and truly mind fluffed.
 
It's a way of saying: contribute something that actually has a bit of substance, not just 'my opinion is this and I don't care about anything that indicates it to be wrong' :p

All you've done is repeatedly say 'I don't think machines can do this, I don't think AI can do that', but you've not explained why. Everything you criticise AI for applies equally or more so to humans.

Well, all you've done is say the average AI is better than the average driver most of the time, based on statistical data. Most of which seems to have been compiled by companies looking to sell AI systems.

You seem convinced AI can overcome all as long as we keep people out of the loop. I'd question that.
 
So what you’re saying is, human drivers that cause too many accidents should be put in vehicles under AI control in order to reduce accidents...yet you’ve been arguing the other way about AI this whole time :confused:

Well and truly mind fluffed.

I'm saying some people probably shouldn't be on the roads :confused:
 
Well, all you've done is say the average AI is better than the average driver most of the time, based on statistical data. Most of which seems to have been compiled by companies looking to sell AI systems.

You seem convinced AI can overcome all as long as we keep people out of the loop. I'd question that.

'All I've done' is tell you what the evidence shows. In my book, evidence trumps uninformed thoughts.

I seem convinced it can overcome all? Perhaps try reading through again - I've specifically said that it isn't perfect. What it is, is better than humans more of the time. This is an improvement. Why do you refuse to accept that being better more often is an improvement?

You are arguing that until it's perfect we shouldn't accept that it might be better than the situation we have now.
 
Yes, I agree a computer can do all those things, but it can also malfunction and make the wrong decision for the right reasons.

But it all boils down to two questions:

1. Is the AI-controlled vehicle safer per km than an average human driver?

2. If it is, and if the accident rate is significantly lower, do we put up with the small chance (per km) that the machine may malfunction and “make the wrong decision”?

If the answer to 1 is yes, then there’s little reason not to answer yes to 2.
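
To make those two questions concrete, here is a minimal sketch, again with made-up per-km accident rates and fleet mileage (illustrative assumptions only, not real figures):

```python
# Hypothetical figures, for illustration only.
human_rate_per_km = 4.0e-6   # assumed accidents per km, average human driver
av_rate_per_km = 1.0e-6      # assumed accidents per km, AI-controlled vehicle
fleet_km_per_year = 1.0e9    # assumed total km driven by the fleet per year

# Question 1: is the AI-controlled vehicle safer per km than the average human?
safer_per_km = av_rate_per_km < human_rate_per_km

if safer_per_km:
    # Question 2 is then a trade-off: accept the residual machine accidents
    # (including occasional malfunctions) in exchange for the overall reduction.
    human_expected = human_rate_per_km * fleet_km_per_year
    av_expected = av_rate_per_km * fleet_km_per_year
    print(f"Expected accidents per year (human): {human_expected:.0f}")              # 4000
    print(f"Expected accidents per year (AI):    {av_expected:.0f}")                 # 1000
    print(f"Expected accidents avoided per year: {human_expected - av_expected:.0f}")  # 3000
```

Obviously the real per-km rates are what question 1 hinges on; the sketch only shows how the trade-off in question 2 would be weighed once those rates are known.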

If a particular company has an accident rate higher than others and/or higher than human drivers, then there can be (and already are) mechanisms in place* to fine them and/or revoke their licence to make automated vehicles.

Sure, if a loved one is killed by an automated vehicle then that’s a tragedy, just as it would be if they were killed by a driver. Unfortunately in the grand scheme of things that death is just a statistic and the aim of any regulatory authority is to reduce deaths in general any way they can. One of the best ways in future is likely to be removing the human element from driving.

Perhaps you should go watch I, Robot as (ignoring the evil AI bent on destroying the world) it covers a fair amount of this sort of stuff. :p

*See California as an example: self-driving car companies have to log every incident and can have their licences removed if there are too many.
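
As a rough illustration of that kind of mechanism, here is a tiny sketch; the threshold and the numbers are hypothetical assumptions, not California's actual rules.

```python
# Hypothetical regulator check: the limit and the example figures are made up.
INCIDENTS_PER_MILLION_KM_LIMIT = 2.0  # assumed licence-revocation threshold


def licence_ok(incidents: int, km_driven: float) -> bool:
    """Keep the licence only if the logged incident rate stays under the assumed limit."""
    rate_per_million_km = incidents / (km_driven / 1_000_000)
    return rate_per_million_km <= INCIDENTS_PER_MILLION_KM_LIMIT


print(licence_ok(incidents=3, km_driven=5_000_000))   # True: 0.6 incidents per million km
print(licence_ok(incidents=30, km_driven=5_000_000))  # False: 6.0 incidents per million km
```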
 