Autonomous Vehicles

Impossible.

How would it possibly be able to read the mind of the person on the pavement/side of the road? Someone could be running along the pavement and literally turn into the path of the car. No amount of predictive AI will stop pedestrians being hurt short of being able to read minds.

No doubt it can still improve a lot, but there will never be zero fatalities with tons of metal flying around at 70mph.

While not 100% bulletproof, body language can telegraph a person's next movement (head movement, shoulder position, etc.).
 
Absolutely, but even if the car can react instantly to the body language change, the laws of physics still mean the car will have a certain stopping distance depending on its speed.
 
Absolutely, but even if the car can react instantly to the body language change, the laws of physics still mean the car will have a certain stopping distance depending on its speed.

You may be missing the point. There have been countless demonstrations by Waymo over the years showing how its Predictive AI software can anticipate the actions of pedestrians, bikers and others well in advance of a potentially hazardous situation, so the AV is able to slow down in time to react safely. One of the more famous examples was of a swerving biker and its ability to correctly read the path the biker would take long before it was anywhere near the biker. Another example I recall was its ability to "see" a deer in the woods moving towards the road well before a human passenger could see the deer. Waymo's vehicle can already see three football fields ahead of the car and in all directions. It is more than likely that it could have seen the pedestrian in the shadows from a distance.

All this is possible due to a combination of their LIDAR, radar, other sensors and the Predictive AI software and its algorithms. Compare this to Uber's Predictive AI, which was clearly insufficient to avoid the fatal crash.
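As a rough illustration of what "anticipating actions well in advance" means mechanically, here is a toy constant-velocity predictor: extrapolate a tracked pedestrian's position and flag whether the predicted path enters the vehicle's lane corridor. Waymo's actual models are learned and proprietary; every name and number below is invented for illustration only.

```python
# Toy constant-velocity trajectory prediction: extrapolate a tracked
# pedestrian's position and flag whether the predicted path enters the
# vehicle's lane corridor within the planning horizon.
# Purely illustrative -- real predictive AI uses learned models.

def predict_positions(pos, vel, horizon_s=3.0, dt=0.25):
    """Extrapolate an (x, y) position under constant velocity (meters, m/s)."""
    steps = int(horizon_s / dt)
    return [(pos[0] + vel[0] * dt * i, pos[1] + vel[1] * dt * i)
            for i in range(1, steps + 1)]

def crosses_lane(track, lane_half_width=1.75, max_x=60.0):
    """True if any predicted point lands inside the lane corridor ahead."""
    return any(0.0 <= x <= max_x and abs(y) <= lane_half_width
               for x, y in predict_positions(track["pos"], track["vel"]))

# A pedestrian 25 m ahead and 5 m left of the lane, walking toward it
# at 1.5 m/s -- flagged long before they reach the roadway:
pedestrian = {"pos": (25.0, 5.0), "vel": (0.0, -1.5)}
print(crosses_lane(pedestrian))  # True
```

The point of flagging the hazard seconds early is that it buys braking distance before the pedestrian is ever in the lane.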

My point is that one can speculate that Waymo's vehicle has already seen the exact scenario that occurred in the Uber fatal crash and would have reacted correctly by anticipating the action of the pedestrian well before it approached the fatal scene. As the previous poster indicated, shoulder and head movements of the pedestrian may well be part of the Predictive AI, honed by Waymo from more than 5 million miles of driving on public roads in autonomous mode and billions of miles of simulated autonomous driving in Waymo's labs during Waymo's decade of testing.

If I am correct that Waymo's AI can be perfected due to all these attributes and seeing more "corner cases" than any human could possibly see in a lifetime of driving, I see no reason why their self-driving cars cannot ultimately be a thousand times or more safer than a human driver. In such a world, road traffic fatalities can be totally eliminated.
 
I disagree.

For example: a person is walking dead ahead on the pavement, and the car is approaching at 40 mph. The person makes zero sign that they will cross, then suddenly turns into the path of the car. I am not saying the car would not be able to react and apply the brakes immediately, but that doesn't mean the car will stop in time. The alternative is for self-driving cars to be ridiculously cautious and stop at absolutely any sign or possibility that a person might step into the road, rendering them an awful mode of transport.

You keep talking about how far it might be able to see ahead (three football pitches, etc.). That is pointless and irrelevant when the person walking two feet beside you suddenly turns into the path of the car.
 
They could just reduce their speed in proximity to people so that they can always stop in time, rather than trying to go from 70 to 0.
We're pretty much heading in that direction as it is, with 20 mph limits popping up everywhere.
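The suggestion above can be put into numbers with basic kinematics: if the car must always be able to stop within the distance to the nearest pedestrian, the maximum safe speed follows from solving d = v·t_react + v²/(2μg) for v. The friction coefficient (0.7) and system reaction time (0.5 s) below are assumed round figures, not specifications of any real vehicle.

```python
import math

# Maximum speed from which a car can stop within d_m meters, assuming
# braking deceleration mu*g plus a fixed system reaction time. The values
# mu = 0.7 and t_react = 0.5 s are illustrative assumptions, not specs.

def max_safe_speed_mps(d_m, mu=0.7, g=9.81, t_react=0.5):
    # Solve d = v*t_react + v**2 / (2*mu*g) for v (positive root).
    a = 1.0 / (2.0 * mu * g)
    return (-t_react + math.sqrt(t_react**2 + 4.0 * a * d_m)) / (2.0 * a)

for d in (5, 10, 20, 40):
    mph = max_safe_speed_mps(d) * 2.237  # m/s -> mph
    print(f"pedestrian {d:2d} m away -> max safe speed ~{mph:.0f} mph")
```

On these assumptions, a pedestrian about 10 m away caps the safe speed at roughly 20 mph, which lines up with the urban 20 mph limits mentioned above.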
 
Exactly.

If you look at the recent YouTube videos showing Waymo's Early Rider ride-sharing SUVs in action in Chandler, Arizona, that is exactly what the AV does when it senses a danger from a nearby pedestrian or bike rider. I remember Google showing earlier videos from the California test site, a purpose-built facility in Central California, where they are constantly pushing bikes and trolleys and even people in front of the car in close proximity to it. The algorithm in effect forces the car to travel cautiously around people, bikers, etc., whether they are clearly in view or lurking in shadows, which is easily detected by the LIDAR, radar, sensors, etc.

I am not suggesting that their Predictive AI is yet perfect in all situations, but that is the direction they are moving towards.
 
.............

Certainly the local police chief says it is unlikely that Uber will be found at fault in any way, as the collision happened in such a way that no one would have been able to react and stop in time.

His statement may be irrelevant in the hands of a smart lawyer representing the plaintiff (the family of the pedestrian killed in Arizona).

Uber recently went to trial in the Uber v. Waymo trade-secret-theft case (which was settled out of court with a payment by Uber), and during the early days of the trial, and indeed during discovery before the trial began, it was made very clear to the public that Uber was in a race to get its AVs on the road, perhaps at the expense of safety.

Anthony Levandowski and Travis Kalanick of Uber exchanged many messages that were made public in court hearings. For example, as The Verge points out: "Waymo CEO John Krafcik said that Levandowski had vehemently held that redundant systems for steering and braking were unnecessary. “I think it’s fair to say we had different points of view on safety,” said Krafcik in court."

Also, "His messages to Travis Kalanick were more casual. “We need to think through the strategy, to take all the shortcuts we can find,” he said in one text message. And in another, “I just see this as a race and we need to win, second place is first looser [sic].”

Kalanick was similarly breezy. “Burn the village,” he texted Levandowski at one point."

https://www.theverge.com/2018/3/20/...dent-arizona-safety-anthony-levandowski-waymo
 
For someone who spams the thread so much, you seem to have a very loose grasp of the technology and its capabilities.

Yes, AI can learn from drivers; having bad drivers, or more often fine drivers who have bad occasions, does not make the AI bad. That's not how it works.
It will never be perfected; it just needs to be demonstrably better than human drivers, which many already are, and that will continue to improve.
The same applies to driver aids or partial autonomous modes: do they cause fewer accidents than humans on their own?
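"Demonstrably better than human drivers" is ultimately a statistical claim: compare incidents per mile driven, with an honest uncertainty bound, since AV fleets have far fewer miles than the human baseline. Every figure in the sketch below is a hypothetical placeholder, not real fleet or national data.

```python
import math

# Crude "better than a human driver" comparison: incidents per million
# miles, with a simple normal-approximation upper bound on the fleet's
# rate. Every number below is a hypothetical placeholder, not real data.

def rate_per_million(incidents, miles):
    return incidents / (miles / 1_000_000)

def upper_bound_per_million(incidents, miles, z=1.96):
    # Approximate 95% upper bound on the true incident rate.
    return rate_per_million(incidents + z * math.sqrt(max(incidents, 1)), miles)

human = rate_per_million(3_000, 3_000_000_000)   # hypothetical human baseline
fleet = rate_per_million(2, 5_000_000)           # hypothetical AV fleet
print(f"human: {human:.2f} per million miles")
print(f"fleet: {fleet:.2f} (95% upper bound ~{upper_bound_per_million(2, 5_000_000):.2f})")
```

Note that although the fleet's point estimate is lower, with only a few million miles its upper bound nearly overlaps the human rate; settling the "demonstrably better" question takes a great deal of fleet mileage.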
 
Point one: the Tempe local police chief is a she (Sylvia Moir), so you obviously didn't read the article I linked to when you quoted my post.

Point two: the police chief has seen onboard video and other evidence of the incident that has not yet been released to the public, and is basing her statement on having seen that evidence. If that shows that the vehicle never saw the pedestrian until she stepped out in front of the vehicle, way too late for anyone or anything to react, then no lawyer anywhere will be able to defend the pedestrian's actions.

Point three: unless the AV is going to come to a complete stop every single time it senses a pedestrian standing at the road edge, or walking close to the road edge, assuming said pedestrian is going to instantaneously step out in front of it, then incidents like Tempe WILL happen occasionally, and no amount of programming, learning and sensors will stop that.

Studies have shown that it takes the average driver from one-half to three-quarters of a second to perceive a need to hit the brakes, and another three-quarters of a second to move their foot from the accelerator to the brake pedal. Everybody's reaction times are different, granted, but that's up to a full one-and-a-half seconds between when you first start to realise you're in trouble and when you even start to slow down.

So, OK, sensors on an AV may (one day) be able to cut that time down by a reasonable percentage; say, halve it to 3/4 of a second to sense the threat and activate the brakes. But we also know that at 40 mph it WILL take up to 80 feet to actually stop the vehicle once the brakes are activated (also remembering that AVs are considerably heavier than a similar-sized non-AV, so will take longer to stop). So even if the very clever AV senses the pedestrian earlier than a human driver and activates the brakes earlier, if said pedestrian is any closer than 80 feet from the vehicle when they unexpectedly step out into its path, they are going to be hit. That is simple physics and cannot be altered.
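The figures in the two paragraphs above (up to ~1.5 s of human reaction time, roughly 80 ft of braking from 40 mph) can be sanity-checked with the standard kinematic estimate d = v²/(2μg) plus the distance covered during the reaction time; the friction coefficient here is an assumed dry-road value, not a measured one.

```python
# Sanity check of the figures above using d = v*t_react + v**2/(2*mu*g).
# mu = 0.7 is an assumed dry-road friction coefficient, not a measured value.

FT_PER_MPH_S = 1.4667   # 1 mph = 1.4667 ft/s
G_FT = 32.2             # gravitational acceleration, ft/s^2

def stopping_distance_ft(speed_mph, reaction_s, mu=0.7):
    v = speed_mph * FT_PER_MPH_S
    reaction_ft = v * reaction_s            # distance covered before braking
    braking_ft = v ** 2 / (2 * mu * G_FT)   # distance covered under braking
    return reaction_ft + braking_ft

# Human (~1.5 s to react) vs a hypothetical faster AV (~0.75 s), at 40 mph:
print(f"human: {stopping_distance_ft(40, 1.5):.0f} ft")
print(f"AV:    {stopping_distance_ft(40, 0.75):.0f} ft")
```

Even with the reaction time halved, the total at 40 mph comes out around 120 ft, which supports the point that a pedestrian stepping out inside the braking distance cannot be avoided.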

Also, if said AVs were always to slow down to a crawl or come to a dead stop in a 30 mph or 40 mph limit (such as Tempe was) as soon as they are near pedestrians, "just to be on the safe side in case they step out", then traffic in general will be even slower in places than it is currently, and some of the perceived benefits of AVs are instantaneously null and void, in that they should (in theory) allow traffic to move faster in cities and towns than it currently does in most urban areas. Plus, human drivers following will never expect a vehicle to suddenly come to a complete stop in front of them in a situation where it is perfectly reasonable to stay at a steady 40 mph, causing a load of collisions.
 
It will happen again. With the AI it's a compromise between when to stop and when not to; otherwise the car just wouldn't move. There is no ability to make judgement calls or think for itself. If you find yourself in that specific situation, it WILL run you over every time.
 
Here is the video released by the Tempe Police Department showing the moments before the Uber self-driving vehicle makes contact with the pedestrian. It shows both the external view and the internal view of the safety driver.

You will note that the pedestrian has approached the vehicle from the left side of the road. I did not see a parked vehicle that might have shielded the view of the AV.

The police and NTSB investigations continue, and Uber's fleet of self-driving cars remains grounded.

https://www.bloomberg.com/news/arti...-video-of-fatal-uber-autonomous-car-collision
 
I can easily see how a driver would miss that; the vehicle sensors absolutely should have seen the hazard, though.
Ars Technica pointed out that the human eye has better low-light capabilities than some cameras, so potentially a human could have seen it, but we will never know. I agree that there is no reason why the car would miss it, though.

I do have to say, who crosses the street without looking both ways? You can easily spot a car's headlights in the dark.
 
The internal video appears to show that at the last moment, the test driver looks down and away, perhaps at a mobile device?

I too am still puzzled as to why the LIDAR, cameras and sensors did not appear to pick up the pedestrian. The vehicle did not attempt to slow down at all. The pedestrian was well into the traffic lanes when the accident occurred. Had the AV spotted the pedestrian (which it clearly did not), was there time for it to slow down or take evasive action?

Once the evidence is reviewed, the Maricopa County Attorney in Arizona will determine if any charges will be filed in the case.
 
While we focus on Level 4 AV's following the Uber crash, it is worth keeping in mind that there are still issues with a Level 2 autopilot as this news story from China involving Mercedes shows:

"In China's He Nan province, a Mercedes C200L lost control and drove 100 kilometers on automated cruise control this week. Luckily no one was hurt, but it could have been disastrous.

The incident took place on the highway between He Nan and Shan Xi provinces (article in Chinese), where the driver was driving at 120 km per hour and realized he could not slow down or stop the car. Panicking, he called Mercedes, who tried to regain control of the car remotely, but to no avail.

The driver was able to get in touch with highway patrol, which cleared a toll plaza of cars so the uncontrollable car could pass without hurting anyone. The technology to blame was a Level 2 autopilot system, whose introduction was fiercely debated in China."
 
Yeah, that cruise-control thing has happened a number of times. Also throttles getting stuck open when the car battery died at just the wrong moment.

The problem right now with AVs is you'd be constantly on edge as the driver, poised to take over at any moment if it looks like it's going to crash. That's more stressful than just driving the car yourself.
 
From a couple of similar vehicles we have been working on, the position the driver appears to be looking down towards is possibly a screen that shows what the sensors are seeing.

Our "drivers" also spend most of their time looking at this screen to check what the car is "seeing" and reacting to.

From the look of shock on his face I would assume the car "saw" nothing, nothing showed on the screen, and he just happened to look up too late.

You can see right at the end of the interior shot, he moves to grab the steering wheel to swerve but it is too late.

The woman pedestrian/cyclist, however, is completely and totally at fault for crossing the road where there is no crossing; she did not yield when she saw/heard the vehicle approaching but just carried on into its path.

As has been posted earlier

The local city traffic code, Sec. 19-151 (Ord. No. 86.45, 7-10-86), states:

(a) No pedestrian shall cross the roadway within the central business district other than within a marked or unmarked crosswalk.
(b) Every pedestrian crossing a roadway outside of the central business district at any point other than within a marked or unmarked crosswalk shall yield the right-of-way to all vehicles upon the roadway.
(c) No pedestrian shall cross a roadway where signs or traffic control signals prohibit such crossing.
 
Your insights from your own testing are very valuable for all of us.

I still find it puzzling how Uber's sensors, LIDAR, radar and cameras apparently failed to detect the pedestrian. Clearly it would be impossible for Uber's predictive AI to kick in if it never had sight of the pedestrian in the first place. I see this as a major defect in their hardware/software solution.

I have found no satisfactory explanation given yet but await further data releases.
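One way a multi-sensor stack can miss a pedestrian is in the fusion and classification stage rather than in raw sensing: each sensor may register something ambiguous, yet the combined confidence never crosses the threshold that triggers braking. The sketch below is a guess at that failure mode with invented sensor names, scores and thresholds; it is not a description of Uber's actual software.

```python
# Toy sensor-fusion gate: brake only when the fused detection confidence
# clears a threshold. Illustrates how individual sensors can each register
# something ambiguous while the combined score never triggers a response.
# All sensor names, scores and thresholds here are invented.

def fused_confidence(readings):
    """Average of per-sensor detection confidences in [0, 1]."""
    return sum(readings.values()) / len(readings)

def should_brake(readings, threshold=0.8):
    return fused_confidence(readings) >= threshold

# Each sensor sees something, but none classifies it confidently:
readings = {"lidar": 0.6, "radar": 0.5, "camera": 0.3}
print(should_brake(readings))  # False -- the hazard never trips the gate
```

If something like this happened, the raw sensor data could still show returns from the pedestrian even though the vehicle never reacted, which is the kind of detail the NTSB data releases should clarify.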
 
Have they not heard of taking it out of gear and turning the ignition off? Then slowly drift to a stop on the brakes. Yes, you will have no assistance on the brakes, but they will still work; you just press the pedal a bit harder.

Although the car in question will automatically slow down to about 20 mph and turn off the auto cruise when you open the car door and take off your seatbelt, so there should have been zero issue and danger to anyone. Perhaps some of these people should actually read the handbooks that come with their cars.
 