Self-driving Uber identified woman as a false positive (a category which includes plastic bags)

I thought this was going to be an SJW thread from the title. :p What the article doesn't seem to mention (could be a case of nobody knowing yet) is how long this person was on the road before the car hit her. Did she step out and get hit straight away, or was she on the crossing and visible for a number of seconds?
 
I thought this was going to be an SJW thread from the title. :p What the article doesn't seem to mention (could be a case of nobody knowing yet) is how long this person was on the road before the car hit her. Did she step out and get hit straight away, or was she on the crossing and visible for a number of seconds?

A colleague at work claims to have seen the video and says she walked straight out into the road with no chance for even the best driver to stop safely.
 
Harsh, but true..? Wasn't it a dark/unlit road, and didn't she decide to cross outside of a crossing, with no lights etc. on her? :confused: It's a great shame, but hardly proof that driverless cars aren't safe.

The point made in the story is that the car's program did not react because it had been instructed to ignore more information from the sensors in an attempt to produce fewer false positives.
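To put that trade-off in concrete terms, here's a toy Python sketch (entirely my own invention - the labels, confidences and thresholds are made up, and it has nothing to do with Uber's actual code). The higher you set the confidence threshold for treating a detection as real, the fewer plastic-bag false alarms you get, but the more likely the system is to drive through something genuine.

```python
# Toy illustration only -- invented detections and thresholds, not Uber's code.
def should_brake(detections, threshold):
    """Brake if any detection's confidence clears the threshold."""
    return any(d["confidence"] >= threshold for d in detections)

# A pedestrian seen poorly at night might only score, say, 0.55:
detections = [{"label": "unknown_object", "confidence": 0.55}]

print(should_brake(detections, threshold=0.40))  # True: stops, but also brakes for more bags
print(should_brake(detections, threshold=0.70))  # False: fewer false alarms, drives on
```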
 
lolstockhausen.

How long has it been since we had these editorialised RSS feeds from you? When did they start up again?
 
Because the point of a robotic car is to remove the driver. Doing this requires proving that it is safe to remove the driver.

A robotic car failing to acknowledge someone in the road and hitting them at 40mph without touching the brakes does not support this.

Since the robot has no capacity to "think", the decision not to do anything was part of its coding, and this means it will do the same thing every single time in the same circumstances.

It's news because it was not an individual fault, the way a human doing something stupid would be; a human knows what they are doing is wrong.
The stats already support that self-driving cars are far safer than human drivers.
 
The stats already support that self-driving cars are far safer than human drivers.

What did you need to quote me to say that for? I was talking about why it was newsworthy, precisely because that claim is being questioned.

But while you're here:

The stats already support that self-driving cars [in the circumstances they are being tested in] are far safer than human drivers [in a much wider set of circumstances].

We don't need to disagree, but let's be realistic about how the numbers are achieved when using them as a point.
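A quick toy calculation of what I mean (every number here is invented, purely for illustration): comparing raw per-mile rates only works if the mix of driving conditions is comparable.

```python
# All numbers invented, purely to illustrate the apples-to-oranges problem.
# Rates are deaths per 100 million miles.
def rate(deaths, miles):
    return deaths / miles * 1e8

av = rate(1, 2e8)                # AV test fleet: daytime, mapped routes, good weather
human_all = rate(37000, 3.2e12)  # humans: all roads, all weather, all hours
human_easy = rate(2000, 1e12)    # humans restricted to similarly easy conditions

print(f"AV (easy miles only):    {av:.2f}")
print(f"Human (all miles):       {human_all:.2f}")
print(f"Human (easy miles only): {human_easy:.2f}")
# The honest comparison is the first line vs. the third, not the first vs. the second.
```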
 
A colleague at work claims to have seen the video and says she walked straight out into the road with no chance for even the best driver to stop safely.

The video of the accident is freely available online; it looks very dark, and the woman appears to be crossing the road 'from nowhere'.

The weird thing is that there is dashcam footage of the same location from other cars, and it actually looks very well lit at night, so the footage from the car is a bit misleading.
 
What did you need to quote me to say that for? I was talking about why it was newsworthy, precisely because that claim is being questioned.

But while you're here:
Even just plain highway/motorway driving, statistically, is safer with autonomous cars.


We don't need to disagree, but let's be realistic about how the numbers are achieved when using them as a point.
Quoting != disagreeing.
 
Well, that isn't necessarily false, but 'learning' is a highly abused term when it comes to computers.

Well, not really in this case, as they literally are using machine learning here.

In this example of keeping a steady 40mph through a pedestrian, I would be amazed if the car's programs had any say in the variables that allowed it to happen.

How do you mean? Of course it is relying on weights that have been updated through training... Sure, there will be some human input here, but this isn't something that is programmed directly, from the start, for every possible action it can perform; rather, it is trained and learns (updates the weights) through that training. The ability to recognise objects is something it* has trained to do; classifying that object as something it can drive through, rather than a person it has to stop for, happened as a result of learning.

*or rather multiple versions of it
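To illustrate what "learned weights" means here, a minimal sketch (my own toy example - the feature names and numbers are invented, nothing from the real system): the person-or-not decision isn't hand-coded anywhere; it falls out of weights that training has adjusted.

```python
import math

# Invented weights, standing in for what a training process has settled on.
weights = {"height_m": 1.8, "moves_on_its_own": 2.5, "reflectivity": -0.4}
bias = -3.0

def p_person(features):
    """Probability the detected object is a person (simple logistic model)."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# A pedestrian-shaped, self-moving object scores high...
print(p_person({"height_m": 1.7, "moves_on_its_own": 1.0, "reflectivity": 0.2}))  # ~0.92
# ...while a small, static, shiny (bag-like) object scores low.
print(p_person({"height_m": 0.3, "moves_on_its_own": 0.0, "reflectivity": 0.9}))  # ~0.06
```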
 
Well, not really in this case, as they literally are using machine learning here.

How do you mean? Of course it is relying on weights that have been updated through training... Sure, there will be some human input here, but this isn't something that is programmed directly, from the start, for every possible action it can perform; rather, it is trained and learns (updates the weights) through that training. The ability to recognise objects is something it* has trained to do; classifying that object as something it can drive through, rather than a person it has to stop for, happened as a result of learning.

*or rather multiple versions of it

I mean, and meant, that I have huge skepticism that the car can adjust the parameters of what it is allowed to drive through.

Building a database of what sensor feedback = an object is important, but are you saying the car's program adjusts this information without intervention?

I have the idea that blame was being given to "tuning" of the detection system to class more sensor information as false positives. That doesn't sound like a car-level decision.
 
I mean, and meant, that I have huge skepticism that the car can adjust the parameters of what it is allowed to drive through.

The car (or rather their self-driving system in general) is likely adjusting weights in relation to how it classifies objects (at least when training).

Building a database of what sensor feedback = an object is important, but are you saying the car's program adjusts this information without intervention?

I have the idea that blame was being given to "tuning" of the detection system to class more sensor information as false positives. That doesn't sound like a car-level decision.

Sure, there is going to be some human involvement in terms of what sort of errors they're happy with, etc.

The whole thing is going to be probabilistic, though: it is going to classify an object, and at some point it needs to make a decision re: whether to carry on driving or whether to stop. It is never going to be 100% sure that a particular object is not a person / is something that can be driven through.
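As a rough sketch of that last point (again my own illustration, not Uber's logic, and the 0.3 cutoff is invented): the classifier only ever hands over a probability, so somewhere a cutoff has to turn it into a binary carry-on/stop decision, and where that cutoff sits is exactly the false-positive trade-off being discussed.

```python
# Toy cutoff illustration -- the 0.3 value is invented for the example.
def decide(p_person, cutoff=0.3):
    """Stop if the estimated chance the object is a person exceeds the cutoff."""
    return "stop" if p_person > cutoff else "carry on"

for p in (0.05, 0.29, 0.31, 0.95):
    print(f"P(person) = {p:.2f} -> {decide(p)}")
```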
 
The text accompanying that video includes the following statement:
The autonomous Uber failed to slow down before it hit a 49-year-old woman walking her bike across the street. It has raised fresh questions about why the vehicle did not stop when a human entered its path.
I wonder whether similar dashcam footage would stop a human driver being charged with causing death by dangerous driving?

"If you can't see far enough ahead to stop safely, you are driving too fast for the conditions" is how I was taught to drive - but then, I don't drive for Uber.
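For scale, the back-of-envelope arithmetic behind that rule (standard physics; the reaction time and friction values are my own assumptions, not from the article):

```python
# Rough stopping-distance arithmetic at the speed from the incident.
v = 40 * 0.44704          # 40 mph in m/s (~17.9 m/s)
reaction_s = 1.5          # assumed driver reaction time
mu, g = 0.7, 9.81         # assumed tyre-road friction, dry asphalt

reaction_dist = v * reaction_s
braking_dist = v**2 / (2 * mu * g)
print(f"total stopping distance at 40 mph: {reaction_dist + braking_dist:.0f} m")
# ~50 m: if your headlights/vision cover less than that, you're going too fast.
```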

PS - God only knows what the woman was thinking when crossing where she was, on a dark road.
 

Looking at this video, would many people see her in time? It could have swerved, I suppose?

At 40? I've had similar at 60, when an object (a car side-on in the road) appeared in my dipped headlights round a bend - it didn't help with visibility that there were oncoming headlights too - and I swerved, caught the front of the car all down the side of mine, then had to swerve back to avoid the oncoming vehicle. So I managed it, but it was sheer luck it wasn't a huge pile-up with fatalities.

So in the above case, I don't think that was too difficult to avoid, tbh. But I will still maintain that self-drive cars don't have to be 100% safe, just safer than human drivers, and for general driving around I would say they easily will be.
 