Driverless cars

I'd say "safer than a human" implies being able to deal with any situation a human could have handled.

Well, that's just wrong and isn't how anyone measures it at all. Machines and humans will have different strengths and weaknesses. What actually matters is the incident rate.


No one is waiting; however, much is learnt from each crash. Again, this is normal. Look at the aviation industry and how safety has dramatically increased by investigating incidents and implementing change, something that "driverless" cars also do. Again, this is unlike human drivers, where cars generally don't have footage or sensors and can't really be improved (except by external things like barriers, intersection design, etc.).
 
Well, that's just wrong and isn't how anyone measures it at all. Machines and humans will have different strengths and weaknesses. What actually matters is the incident rate.


No one is waiting; however, much is learnt from each crash. Again, this is normal. Look at the aviation industry and how safety has dramatically increased by investigating incidents and implementing change, something that "driverless" cars also do. Again, this is unlike human drivers, where cars generally don't have footage or sensors and can't really be improved (except by external things like barriers, intersection design, etc.).

The issue here, though, is that by saying how safe it is and quoting statistics, you're promoting faith in a system that is flawed.

Calling the Model X's system 3x safer than a human is like calling a ship unsinkable, and as a result people's mindset will be "oh, I'm fine, this system is so safe I can ignore everything and let the machine do it", which is fine until the midden hits the windmill.

It's like (to quote the aviation industry) the increased focus on keeping pilots' hand in at actually flying planes by the seat of their pants after the Air France disaster, caused by a mechanical failure followed by a pilot forgetting how to fly.

Like I say, I'd hope we could try and push past people needing to die to get change implemented.
 
Just no.
Every system is flawed, including humans.
For a start, I didn't call it 3 times safer.
Nowhere have I said that it's infallible; quite the opposite.
And I actually agree with the Germans that calling it Autopilot will cause issues with the masses, which until the Model 3 is released isn't too much of an issue for them, as their current buyers generally know the score, unlike when it becomes a mass-market product.

There's a reason you are currently meant to keep your hands on the wheel and keep looking out the window. It is currently called a beta, and at intervals it beeps and you have to apply force to the steering wheel, so it knows you have your hands close and are paying some attention.
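As a rough illustration, the hands-on-wheel check described above can be sketched as a simple timer. The interval and torque threshold below are invented values for the sketch, not Tesla's actual parameters:

```python
# Illustrative sketch (not Tesla's real logic): if no steering torque is felt
# within a set interval, warn the driver; if they still don't respond, disengage.

NAG_INTERVAL_S = 30.0    # hypothetical seconds allowed between torque inputs
TORQUE_THRESHOLD = 0.5   # hypothetical minimum torque counted as "hands on"

def check_driver(seconds_since_torque: float, applied_torque: float) -> str:
    """Return the system's response given time since last input and current torque."""
    if applied_torque >= TORQUE_THRESHOLD:
        return "ok"            # driver confirmed; the timer would reset
    if seconds_since_torque < NAG_INTERVAL_S:
        return "ok"            # still within the grace interval
    if seconds_since_torque < 2 * NAG_INTERVAL_S:
        return "beep"          # audible/visual warning
    return "disengage"         # hand control back to the driver
```

The point of the escalation is exactly the one made above: the system is a driver aid, so it has to keep verifying the driver is still there.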
 
And I've never said humans are infallible. I just said I wouldn't call a system safer than humans until I could say with confidence that it could handle any motoring situation a human driver could. Obviously there are scenarios where humans can and will fail, but stopping when there's a static object on a motorway in front of them in plain daylight isn't one of them.
 
Actually, that happens all the time.

And your stance isn't based on anything. Basing it on numbers is the only way to gauge the difference in safety.
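For what it's worth, the incident-rate comparison being argued for here is simple arithmetic: incidents divided by distance driven. A minimal sketch with invented figures (not real fleet data):

```python
# Sketch of an incident-rate comparison. All numbers below are made up
# purely for illustration; they are not real human or fleet statistics.

def incidents_per_million_miles(incidents: int, miles: float) -> float:
    """Normalise an incident count by distance driven."""
    return incidents / (miles / 1_000_000)

# Hypothetical example: a human population vs an automated fleet.
human_rate = incidents_per_million_miles(incidents=190, miles=100_000_000)  # 1.9
fleet_rate = incidents_per_million_miles(incidents=13, miles=10_000_000)    # 1.3
```

The normalisation is the whole argument: raw crash counts mean nothing until they are divided by exposure.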
 
Actually, that happens all the time.

And your stance isn't based on anything. Basing it on numbers is the only way to gauge the difference in safety.

My stance is based on my opinion as to how the subject of driverless cars should be handled, on watching that video and thinking that the accident should never have happened, and on the fact that, as an engineer, the first thing I'd build into an adaptive cruise control system is the ability to stop if there's something in front of it.
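The behaviour being asked for, braking for anything detected within stopping distance, can be sketched in a few lines. The deceleration, delay, and margin values below are assumptions for illustration, not from any real system:

```python
# Toy sketch of "stop if there's something in front of it".
# REACTION_TIME_S and MAX_DECEL are assumed values, not real specs.

REACTION_TIME_S = 0.5   # assumed sensing/actuation delay, seconds
MAX_DECEL = 7.0         # assumed hard-braking deceleration, m/s^2

def stopping_distance(speed_mps: float) -> float:
    """Distance covered during the delay plus braking from speed to a halt."""
    return speed_mps * REACTION_TIME_S + speed_mps ** 2 / (2 * MAX_DECEL)

def should_brake(speed_mps: float, obstacle_range_m: float,
                 margin_m: float = 10.0) -> bool:
    """Brake once the obstacle is inside stopping distance plus a safety margin."""
    return obstacle_range_m <= stopping_distance(speed_mps) + margin_m
```

At 55 mph (about 24.6 m/s) the stopping distance under these assumptions is roughly 55 m, so an obstacle first detected at, say, 50 m is already an emergency.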

You heard the fallible failure of a human driver shouting "stop, stop, stop" long before anything actually happened.

Do you have some of these numbers available? Considering many people live their entire lives without an accident on their part, covering hundreds of thousands of miles, it'd be interesting to see where these claims of yours come from.
 
I'm not denying that the system will improve; I'm just giving my point of view as to what "safer than a human" should mean when applied to an autonomous car.

I'm not seeing any meaningful statistics there, just Elon Musk saying his system is twice as safe as a human, a claim I contest after seeing one of his systems fail to do what a human would do in that situation (a human driver who was actually driving, that is, not Facebooking while vaguely operating a car).

The only number of note is Google claiming to have intervened 341 times in a year. I realise it's early days, but that's a bit more revealing as to the confidence that should be instilled in these early systems.

Humans are fallible, and ironically the greatest challenge to a driverless car is other drivers. Once we reach fully driverless cars there won't be an issue, but at the minute the man/machine balance is still quite weighted in favour of man.
 
So every rear-ender that happens is down to Facebook, phones, etc.? This is not how humans work.

Not only that, but they will have already updated the software.

Your points make no sense.
 
Good grief, how on earth is this crap legal?

It's not.

What is legal is driver aids to help the driver, while they are still aware and concentrating on their surroundings.

That video is a prime example of why it is a driver aid, not automation, and why even the Google cars have to have a person in them ready to stop or take over at a moment's notice.

This is unfortunately the biggest issue with this technology at the moment. People not understanding it, or the laws.

Note - Tesla call this tech a "beta" version. That should tell you all you need to know about it.
 
So every rear-ender that happens is down to Facebook, phones, etc.? This is not how humans work.

Not only that, but they will have already updated the software.

Your points make no sense.

They fixed the bug, yes, but that particular bug is not one that should have been there in the first place.

Are you saying that no human alive could have prevented that accident? (Taken out of context intentionally, btw.)

My points may not make sense to you, but your points seem to be nonexistent now, save to question mine.

The long and the short of it is that while a system like this may have issues coping with the unexpected, such as traffic moving in strange ways, something so simple as encountering a static object on a motorway and not having the range to stop is an issue that should not exist, especially doing 55; God help it meeting a tailback on the autobahn.
 
Of course not. I'm saying lots of drivers do make that mistake, and as such your stance makes no sense.
How does it not make sense? Safety is judged on stats, which is how the safety groups measure it.
Combined with that, we are finally getting more aviation-like improvements, thanks to sensors.

The thing is, you have no idea what caused this; although it looks simple, that probably isn't the case. It's not like all the other Teslas haven't come across similar situations, but they haven't crashed. It's probably more like the one that caused the death, where it misread the situation due to other factors. If this was common and happened at every stationary object, you would have a point.
 
Of course not. I'm saying lots of drivers do make that mistake, and as such your stance makes no sense.
How does it not make sense? Safety is judged on stats, which is how the safety groups measure it.
Combined with that, we are finally getting more aviation-like improvements, thanks to sensors.

The thing is, you have no idea what caused this; although it looks simple, that probably isn't the case. It's not like the other Teslas haven't come across issues, but they haven't crashed. It's probably more like the one that caused the death, misreading the situation due to other factors. If this was common you would have a point.

You keep mentioning safety based on stats. What stats? The CEO of a company saying he's looked at stats and thinks his product is awesome, buy one now from your local dealer?

So far the only stat you've shown is Google revealing that 341 times their machine failed to do its job and needed a human to take over. Now, that's 341 test cases for them to work on and improve, but it's going to take more than that to get this anywhere near palatable for the general public.

You are right, I have no idea what caused that issue. I just saw a machine fail at a task a human would consider simple, looked at a claim of "2x safer than a human", and thought to myself, "that doesn't look safe to me".

Now, unless you're going to provide something more than claims based on ethereal statistics, I'm going to catch up on The Apprentice.
 
I think Glaucus is assuming that because computers don't make mistakes computationally, they are somehow safer?


That's clearly nothing to do with reality when you actually consider the task here: driving.

As has been evidenced in various videos, and as Glaucus seems to deny, the computers failed even though their calculations were perfect.
 
The thing is, you have no idea what caused this; although it looks simple, that probably isn't the case. It's not like all the other Teslas haven't come across similar situations, but they haven't crashed. It's probably more like the one that caused the death, where it misread the situation due to other factors. If this was common and happened at every stationary object, you would have a point.

Erm, it's not hard to realise what happened.

The Tesla didn't have any programming telling it that there was a stationary car in front of the car that switched lanes, while anyone with a pair of eyes could easily have realised the lane was at a standstill from a mile off!

In fact, knowing what is going on 3, 4, 5 cars ahead is a fundamental of any good driver! But this technology can't even see past 100 metres, or the car in front lol :D
 
I think Glaucus is assuming that because computers don't make mistakes computationally, they are somehow safer?


That's clearly nothing to do with reality when you actually consider the task here: driving.

As has been evidenced in various videos, and as Glaucus seems to deny, the computers failed even though their calculations were perfect.

Maybe. The system will execute its programming perfectly and repeatably, with no room for distraction, which is his point: a computer can be safer than a human, at least as long as the system works. And being on a PC forum, I'm sure we can agree God help us if Microsoft ever make a driverless car...

But at this early stage it's a mistake to assume a computer program is fit to perform the enormously complicated task of driving better than a human. Like I have said, once all cars are driverless (and therefore predictable, completely law-abiding, and able to transmit their actions ahead to others), then it will work.

Think about something as simple as a traffic light: getting a computer to recognise a traffic light that is red, as opposed to the many other things that can be mistaken for one purely by being red. There's a road near me with a train line, and at night the railway signal is so easily confused for a traffic light that you'd be fooled if you didn't know it was for the train.
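The ambiguity described can be demonstrated with a naive colour-threshold classifier: judged purely on colour, a traffic light, a railway signal, and a red shop sign are indistinguishable. The RGB values below are invented examples:

```python
# Naive colour-threshold "red light" detector, to illustrate the ambiguity:
# anything strongly red passes, whether or not it is actually a traffic light.
# The RGB triples are made-up examples, not measured values.

def looks_like_red_signal(r: int, g: int, b: int) -> bool:
    """Crude test: strongly red channel, weak green and blue channels."""
    return r > 180 and g < 80 and b < 80

traffic_light = (230, 40, 30)    # passes
railway_signal = (225, 50, 35)   # also passes -- colour alone can't tell them apart
shop_sign = (200, 60, 60)        # a red shop sign passes too
```

Real systems obviously use far more than colour (shape, position, context, maps), but the sketch shows why "is it red?" on its own is not a classifier at all.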

What's needed is practical road testing, and lots of it. It's too early days, methinks, for the bold claims being made, and people should be naturally sceptical and wary until bugs like this are ironed out to the point that a driverless car is safer than an ideal human driver (or perhaps better than one, with faster reaction times).
 
Think about something as simple as a traffic light: getting a computer to recognise a traffic light that is red, as opposed to the many other things that can be mistaken for one purely by being red. There's a road near me with a train line, and at night the railway signal is so easily confused for a traffic light that you'd be fooled if you didn't know it was for the train.

Exactly.

The roads need to be designed to be machine-readable FIRST. This whole palaver of programming machines to read things that were never meant to be machine-readable is a disastrous idea. Evidently.
 
Exactly.

The roads need to be designed to be machine-readable FIRST. This whole palaver of programming machines to read things that were never meant to be machine-readable is a disastrous idea. Evidently.

The downside to that, though, is cost. It will probably happen in the very long term, but things like country roads won't receive the same level of attention.

Cheaper to make the car adaptable.
 
In fact, knowing what is going on 3, 4, 5 cars ahead is a fundamental of any good driver! But this technology can't even see past 100 metres, or the car in front lol :D

https://electrek.co/2016/10/20/tesla-new-autopilot-hardware-suite-camera-nvidia-tesla-vision/

One thing that carried over from the original Autopilot 2.0 suite is the triple front-facing cameras:

Main Forward Camera: Max distance 150m with 50° field of view
Narrow Forward Camera: Max distance 250m with 35° field of view
Wide Forward Camera: Max distance 60m with 150° field of view

So let's check some facts first, shall we? Plus, looking at the video, the car would have braked before it hit the car in front; he overreacted, to me. I also didn't see what he had the following distance set to in that video.

The paranoia about self-driving cars cracks me up, and let's not get started on the idiots using them incorrectly off-highway on canyon roads.
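As a back-of-envelope check on those quoted camera ranges, the forward warning time is simply range divided by speed; at 55 mph the 150 m main camera gives roughly six seconds of warning. The speed here is an example value:

```python
# Warning time a forward camera provides = detection range / closing speed.
# Camera ranges are the figures quoted above; the speed is an example.

def warning_time_s(range_m: float, speed_mps: float) -> float:
    """Seconds of warning a sensor of the given range gives at the given speed."""
    return range_m / speed_mps

mph_55 = 55 * 0.44704                       # 55 mph is about 24.6 m/s
main_cam = warning_time_s(150, mph_55)      # main camera: ~6.1 s of warning
narrow_cam = warning_time_s(250, mph_55)    # narrow camera: ~10.2 s
```

This only measures how far the sensor can see, of course; as pointed out below, seeing an object and correctly recognising it are very different problems.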
 
https://electrek.co/2016/10/20/tesla-new-autopilot-hardware-suite-camera-nvidia-tesla-vision/

One thing that carried over from the original Autopilot 2.0 suite is the triple front-facing cameras:

Main Forward Camera: Max distance 150m with 50° field of view
Narrow Forward Camera: Max distance 250m with 35° field of view
Wide Forward Camera: Max distance 60m with 150° field of view

So let's check some facts first, shall we? Plus, looking at the video, the car would have braked before it hit the car in front; he overreacted, to me. I also didn't see what he had the following distance set to in that video.

The paranoia about self-driving cars cracks me up, and let's not get started on the idiots using them incorrectly off-highway on canyon roads.

Those figures are simply how far the camera can see, absolutely nothing more! The actual problem isn't merely how far the camera can see, lol. The issue is that the camera can only recognise what its image processor is programmed to recognise.

Funny you mention paranoia. It's actually a very useful human trait, and can be pretty handy while driving. The number of times I've been paranoid about the car next to me veering into my lane on a two-lane bend, so I stagger myself, and then he actually ends up cutting in, and I'm like phew, good thing I staggered myself. There are many other scenarios where paranoia is useful; a computer simply cannot anticipate like a human brain can. Even straight-up fear is useful while driving!


I'm not paranoid about self-driving cars. My concern is that the programming is nowhere near complete, yet they've already unleashed these cars into the hands of a public who will use them because they think it lets them concentrate less on driving. And then they do just that, and then they crash.
 