Autonomous Vehicles

Soldato
Joined
5 Apr 2009
Posts
24,864
I've seen it reported as both walking and cycling - at this point it's probably fairly safe to assume half of what is being reported is complete nonsense. Tomorrow we'll probably find out it wasn't an Uber :p
 
Permabanned
Joined
17 Aug 2016
Posts
1,517
By the way, when fault is determined in a car crash in Arizona, they apply the doctrine of pure comparative fault: liability is apportioned among the parties involved according to each party's percentage of responsibility, and a claimant's recovery is reduced by their own share. For anyone wanting to know more about this, I found a link:

https://www.breyerlaw.com/car-accidents/comparative-fault.html
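
For a rough sense of how pure comparative fault works in practice, here is a tiny illustrative calculation. The numbers are entirely hypothetical, not from this case:

```python
# Pure comparative fault: a claimant's recovery is reduced by their own
# percentage of fault. Hypothetical numbers for illustration only.

total_damages = 500_000        # assessed damages in dollars (made up)
claimant_fault = 0.60          # claimant found 60% at fault (made up)

recovery = total_damages * (1 - claimant_fault)
print(f"claimant recovers ${recovery:,.0f}")   # $200,000
```

Under pure (as opposed to modified) comparative fault, the claimant can recover even when they are more than 50% at fault, just proportionally less.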

However, assuming the Bloomberg story is correct that the victim was a pedestrian, and assuming a further report that she was crossing outside a crosswalk is also accurate, a comparative-fault analysis will also take the local traffic code into account.

Here are the relevant Tempe, Arizona city code provisions:

Sec. 19-151. - Crossing a roadway.
(a) No pedestrian shall cross the roadway within the central business district other than within a marked or unmarked crosswalk.
(b) Every pedestrian crossing a roadway outside of the central business district at any point other than within a marked or unmarked crosswalk shall yield the right-of-way to all vehicles upon the roadway.
(c) No pedestrian shall cross a roadway where signs or traffic control signals prohibit such crossing.
(Ord. No. 86.45, 7-10-86)

Both the National Transportation Safety Board and the National Highway Traffic Safety Administration said they were dispatching teams to Tempe to investigate the accident. NHTSA said it was in contact with Uber, state and local authorities as well as Volvo, the car maker Uber relies on for its self-driving vehicles.
 
Permabanned
Joined
17 Aug 2016
Posts
1,517
Still awaiting more details on this fatality, but I do not understand how the self-driving Uber vehicle did not pick up the presence of the woman using its LIDAR and radar, both of which work perfectly well at night, in time to stop or at least slow down considerably.

TechCrunch writes: Here's how Uber's self-driving cars are supposed to detect pedestrians... even if, to a human driver, they are blocked by parked cars.

https://techcrunch.com/2018/03/19/h...ving-cars-are-supposed-to-detect-pedestrians/
 
Permabanned
Joined
17 Aug 2016
Posts
1,517
I wonder what's going to happen to the human safety driver.

A few speculative observations:
1. I expect the Tempe, Arizona police department is in possession of the Uber vehicle, so the company will not yet be able to access the data captured by its many sensors during the crash.
2. That data, when examined, should help determine whether the operator or the system was at fault. At that point Uber will be able to move forward: if it was a system failure, they will need to closely examine the software and suspend the fleet for however long it takes to correct it, i.e., an indefinite period. If it proves to be operator error, Uber will be able to redeploy the fleet almost immediately, and I suggest the operator would be charged and face the consequences.
 
Associate
Joined
10 Apr 2008
Posts
1,010
I guess either way Uber won't want that particular operator "behind the wheel" of one of their cars again, if only to reduce press coverage. They'll probably get an NDA and/or a desk job at Uber and PTSD and never be heard from again.
Can you imagine the press hounding you after being involved in an accident like that?
 
Permabanned
Joined
17 Aug 2016
Posts
1,517
I guess either way Uber won't want that particular operator "behind the wheel" of one of their cars again, if only to reduce press coverage. They'll probably get an NDA and/or a desk job at Uber and PTSD and never be heard from again.
Can you imagine the press hounding you after being involved in an accident like that?

Attempting to look beyond the operator question at this difficult time (especially for the family of the deceased), I am reminded that perfecting this technology (AVs on public roads) is one of the most difficult challenges there is, and that the only way to do it is to drive more miles in autonomous mode. This raises ethical questions too. While I am confident rapid advances in AV technology are possible, it is probably the last 1% of the challenge (the so-called "corner cases") where it will be hardest for the robot to exceed the human operator.

Can it be done, especially in a hostile atmosphere where there will be loud calls to suspend AV testing? It will probably become more difficult, and that may slow adoption.

It is worth remembering that human driving results in more than 100 fatalities per day on average in the US alone. That is approximately one death for every 86 million human-driven miles (roughly 37,000 fatalities last year over 3.2 trillion miles driven in the US). AVs have driven perhaps 15 to 20 million miles in total to date, with Waymo having recently announced passing 5 million. This suggests that many more autonomous miles are needed before the two safety records can be compared meaningfully.
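
A quick sanity check on those numbers (the inputs are the figures quoted above; the AV mileage is my rough estimate):

```python
# Sanity-check the quoted US road-fatality figures.
us_fatalities = 37_000          # approx. US road deaths last year
us_miles = 3.2e12               # approx. miles driven in the US last year

print(f"deaths per day: {us_fatalities / 365:.0f}")                 # ~101
print(f"miles per death: {us_miles / us_fatalities / 1e6:.0f}M")    # ~86M

# Total AV miles to date are a rough guess (~15-20M); even the high end is
# well below the ~86M human-driven miles per fatality, so a single AV death
# tells us almost nothing statistically yet.
av_miles = 20e6
print(f"AV miles as a fraction of one expected-fatality interval: {av_miles / 86e6:.2f}")
```

Which is the point: one fatality in roughly 20 million AV miles is far too small a sample to compare against the human rate.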

It will be more difficult while ethical and moral questions remain front and centre.
 
Caporegime
Joined
28 Feb 2004
Posts
74,822
A few speculative observations:
1. I expect the Tempe, Arizona police department is in possession of the Uber vehicle, so the company will not yet be able to access the data captured by its many sensors during the crash.
2. That data, when examined, should help determine whether the operator or the system was at fault. At that point Uber will be able to move forward: if it was a system failure, they will need to closely examine the software and suspend the fleet for however long it takes to correct it, i.e., an indefinite period. If it proves to be operator error, Uber will be able to redeploy the fleet almost immediately, and I suggest the operator would be charged and face the consequences.


Or of course the third option.

The pedestrian deliberately walked out in front of the vehicle, giving neither it nor the on-board operator time to react and stop the vehicle before the impact.

Suicide will be a thought in the minds of investigating officers.


Certainly the local police chief says it is unlikely that Uber will be found at fault in any way, as the collision happened in such a way that no one would have been able to react and stop in time.

http://uk.businessinsider.com/tempe...fatal-self-driving-car-crash-2018-3?r=US&IR=T
 
Permabanned
Joined
17 Aug 2016
Posts
1,517
Or of course the third option.

The pedestrian deliberately walked out in front of the vehicle, giving neither it nor the on-board operator time to react and stop the vehicle before the impact.

Suicide will be a thought in the minds of investigating officers.

Certainly the local police chief says it is unlikely that Uber will be found at fault in any way, as the collision happened in such a way that no one would have been able to react and stop in time.

http://uk.businessinsider.com/tempe...fatal-self-driving-car-crash-2018-3?r=US&IR=T

You are certainly right in the legal sense about this third possibility. The comments from the Tempe officer, that the pedestrian unexpectedly walked into the traffic lane, was not on a pedestrian crossing, and emerged from an area of shadows, seem to indicate that Uber is not legally at fault.

However, there is both the court of law and the court of public opinion. Even on this thread, some contributors have voiced concerns about the speed at which AVs are being introduced onto public roads. In the court of public opinion there are those who believe autonomous driving (and Uber's in particular) is far from ready for widespread market introduction, and this accident will likely bring further regulatory scrutiny of the AV market.

My sense (speculative, of course) is that Uber's vehicle sensors, the roof-mounted LIDAR and the radar, were able to detect the pedestrian before the accident. The problem, then, appears to be that neither the machine nor the human operator was able to conclude that a pedestrian was about to walk into their path.

It would be unreasonable to expect a human driver to have detected the pedestrian in this case, but that is exactly what the machine is supposed to be able to do. Specifically, the machine, with its built-in intelligence (AI), is supposed to predict a potential hazard. This is not an easy AI problem to solve. The machine has to predict the intentions of a detected object with great accuracy, and classifying such movement is genuinely difficult: was the movement in the shadows a pedestrian intending to cross, or leaves blowing, or an animal? I think you can appreciate the difficulty.
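
To make the shape of the problem concrete, here is a toy sketch of the prediction step. This is emphatically not Uber's pipeline; every function, name and number below is invented for illustration:

```python
# Toy hazard prediction: extrapolate a tracked object under a constant-velocity
# assumption, ask whether it will be in our lane when we arrive, and whether
# braking now would stop us short of it. Illustrative only; real perception
# stacks track full trajectories with uncertainty, not two scalars.

def will_occupy_lane(dist_ahead, lateral_offset, lateral_speed, car_speed):
    """True if the object reaches our lane no later than we reach its position."""
    if lateral_speed >= 0:              # moving away from, or parallel to, our lane
        return lateral_offset <= 0      # only a hazard if already inside the lane
    t_enter = lateral_offset / -lateral_speed   # seconds until it crosses the lane edge
    t_arrive = dist_ahead / car_speed           # seconds until we get there
    return t_enter <= t_arrive

def can_stop_short(dist_ahead, car_speed, reaction_s=0.5, brake_decel=6.0):
    """True if braking now stops the car before the object's position."""
    stopping_dist = car_speed * reaction_s + car_speed ** 2 / (2 * brake_decel)
    return stopping_dist < dist_ahead

# A pedestrian 40 m ahead and 3 m outside our lane, walking toward it at 1.4 m/s,
# while we travel at 17.9 m/s (about 40 mph):
if will_occupy_lane(40.0, 3.0, -1.4, 17.9):
    print("brake" if can_stop_short(40.0, 17.9) else "collision likely")
```

The hard part, of course, is everything this sketch assumes away: deciding that the blob in the shadows is a pedestrian at all, and that its velocity estimate is trustworthy.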

Predictive AI depends on huge amounts of data, and the quality of the forecast depends on new inputs resembling the data the system was trained on. Most likely the Uber algorithm had never seen inputs with these parameters before and was unable to identify the hazard correctly and in time to avoid the fatality.
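
One crude way to pose the "has the system seen inputs like this before?" question is a novelty check against training statistics. A minimal sketch with synthetic data (real stacks use far richer uncertainty estimates):

```python
import numpy as np

# Flag inputs that sit far outside the training distribution, using a
# max per-feature z-score. Synthetic stand-in data; illustrative only.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(10_000, 8))   # pretend these are perception features

mu, sigma = train_features.mean(axis=0), train_features.std(axis=0)

def looks_novel(x, threshold=4.0):
    return np.abs((x - mu) / sigma).max() > threshold

print(looks_novel(rng.normal(size=8)))   # typical input  -> False
print(looks_novel(np.full(8, 8.0)))      # far-out input  -> True
```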

The robot must be able to predict this hazard, and other "corner cases", successfully in order for AVs to be commercially viable. However, Uber may not be on a level playing field: it has driven far fewer autonomous miles than Waymo, which appears to be leading the pack. It is more likely that a Waymo vehicle would have detected the danger, i.e., its predictive AI would have had the benefit of the experience gained over many more autonomous miles and might have handled the situation more successfully.

So the catch-22 remains. The temptation is to slow AV road testing while the details are examined, but doing so slows the accumulation of autonomous miles, and it is those miles that expose the AV to more and more corner cases and improve its predictive abilities.
 
Associate
Joined
10 Apr 2008
Posts
1,010
You probably don't have to let the AI drive to train it. If you let a well-behaved, well-trained human drive, and kit the car out with sensors, it can learn from how they drive, and you can compare how it thinks it would have handled a situation with how the human actually did.
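
That is roughly what the industry calls running a model in "shadow mode". A minimal sketch of the idea, with invented names and toy data:

```python
from dataclasses import dataclass

# Shadow mode: the human drives, the model proposes an action for each logged
# frame, and we score the disagreement. Toy data and a placeholder policy.

@dataclass
class Frame:
    features: float      # stand-in for the real sensor feature vector
    human_steer: float   # steering the human driver actually applied

def model_steer(features: float) -> float:
    """Placeholder policy; imagine a trained network here."""
    return 0.5 * features

log = [Frame(0.2, 0.1), Frame(-0.6, -0.35), Frame(1.0, 0.4)]

mse = sum((model_steer(f.features) - f.human_steer) ** 2 for f in log) / len(log)
print(f"model vs human disagreement (MSE): {mse:.4f}")
```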
 
Permabanned
Joined
17 Aug 2016
Posts
1,517
You probably don't have to let the AI drive to train it. If you let a well-behaved, well-trained human drive, and kit the car out with sensors, it can learn from how they drive, and you can compare how it thinks it would have handled a situation with how the human actually did.

The problem is that with 1.3 million road deaths globally per year, roughly 95% of them caused by human error, and with most humans telling you they are well trained and well behaved behind the wheel, the facts tell a different story.

We have already seen significant use of driver-assistance sensors in cars, yet last year the US still saw a rise in road deaths.

The problem is the human driver. As I have mentioned before, over time the human begins to trust the autonomous features too much (Waymo has tested this extensively), and that slows reaction times in an emergency.

No, the only true solution is a fully robotic vehicle, tested fully in autonomous mode so that it is exposed to every conceivable corner case. We want the robotic car to be significantly safer than the human driver, not just slightly better, and it can only improve its predictive intelligence through millions more miles of fully autonomous testing.
 
Associate
Joined
10 Apr 2008
Posts
1,010
You don't think it would help for the robot car to learn how a human drives throughout a whole journey, to better predict them, rather than from small snapshots as it passes them? It doesn't have to take what it sees as an example of how it should drive. It can use stats to decide what was good and bad about what it sees in the recording.
 
Caporegime
Joined
28 Feb 2004
Posts
74,822
You are certainly right in the legal sense about this third possibility. The comments from the Tempe officer, that the pedestrian unexpectedly walked into the traffic lane, was not on a pedestrian crossing, and emerged from an area of shadows, seem to indicate that Uber is not legally at fault.

......

So the catch-22 remains. The temptation is to slow AV road testing while the details are examined, but doing so slows the accumulation of autonomous miles, and it is those miles that expose the AV to more and more corner cases and improve its predictive abilities.


A simple way to look at it: if a human driver WOULD NOT have had time to slow, stop and avoid the incident, then there is zero fault with the AV, and we should all just carry on as we were before and let the AVs continue their testing. Accidents happen.

Look at it this way: we do not ban every normal car from the road every time someone deliberately walks into the path of one, do we?

We do not expect every human to predict every scenario when driving, so there will occasionally be accidents and incidents involving fatalities; likewise we need to accept that AVs will have errors and issues, and that there will occasionally be fatalities.

The worst thing we can do is panic, stop development and talk about limiting AVs every time there is an incident.

Don't forget, you guys will never ban all handguns and rifles, etc., so people will carry on massacring kids in schools on occasion. It's the old chicken and egg scenario: you cannot have one without the other, they're intertwined and can never be separated.
 
Caporegime
Joined
20 May 2007
Posts
39,703
Location
Surrey
Or of course the third option.

The pedestrian deliberately walked out in front of the vehicle, giving neither it nor the on-board operator time to react and stop the vehicle before the impact.

Suicide will be a thought in the minds of investigating officers.


Certainly the local police chief says it is unlikely that Uber will be found at fault in any way, as the collision happened in such a way that no one would have been able to react and stop in time.

http://uk.businessinsider.com/tempe...fatal-self-driving-car-crash-2018-3?r=US&IR=T

Exactly. Self-driving cars will not eliminate fatalities/accidents. It is beyond the laws of physics to eliminate any chance of pedestrians being hit. You can't stop a ton of metal going 40 mph dead in an instant, and because of that, accidents will still happen.

Now, obviously, if it is found that the car didn't react in time when it was physically possible to stop, then that is an issue that needs to be fixed. If the person stepped out right in front of it while it was at speed, then there is nothing that can be fixed.
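
For a feel of the physics: taking 40 mph with a typical human perception-reaction time of about 1.5 s and hard braking at roughly 0.8 g (dry-road figures, all approximate):

```python
# Rough stopping-distance arithmetic at 40 mph. All figures approximate:
# ~1.5 s perception-reaction time, hard braking at ~0.8 g on dry tarmac.

speed_ms = 40 * 0.44704                       # 40 mph ~= 17.9 m/s
reaction_s = 1.5
decel = 0.8 * 9.81                            # ~7.8 m/s^2

thinking = speed_ms * reaction_s              # ~27 m covered before braking starts
braking = speed_ms ** 2 / (2 * decel)         # ~20 m under hard braking
print(f"total stopping distance: {thinking + braking:.0f} m")   # ~47 m
```

An automated system can shorten the reaction term, but it cannot touch the braking term, which is pure physics.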
 
Permabanned
Joined
17 Aug 2016
Posts
1,517
......

The worst thing we can do is panic, stop development and talk about limiting AVs every time there is an incident......

You might have heard that Toyota today suspended its self-driving car programme. Recently we read that Uber had been proposing to sell its technology to Toyota; I guess that is on hold now.

Waymo has made no comment about the fatal accident involving Uber. It continues its programme in Chandler, Arizona.
 
Permabanned
Joined
17 Aug 2016
Posts
1,517
Exactly. Self-driving cars will not eliminate fatalities/accidents. It is beyond the laws of physics to eliminate any chance of pedestrians being hit. You can't stop a ton of metal going 40 mph dead in an instant, and because of that, accidents will still happen.

Now, obviously, if it is found that the car didn't react in time when it was physically possible to stop, then that is an issue that needs to be fixed. If the person stepped out right in front of it while it was at speed, then there is nothing that can be fixed.

I have to politely disagree with your analysis. I take the view that major improvements in predictive AI will make it possible to anticipate actions like that of the pedestrian killed in the Tempe, Arizona crash. Remember, AVs are already capable of seeing up to three football fields ahead. That is what I was trying to discuss in my previous post.
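
To put that sensing range in perspective (taking "three football fields" as roughly 275 m, an approximation on my part):

```python
# What a ~275 m sensing range buys at 40 mph.
sensor_range_m = 3 * 91.44              # three 100-yard football fields, ~274 m
speed_ms = 40 * 0.44704                 # ~17.9 m/s

print(f"warning time: {sensor_range_m / speed_ms:.1f} s")   # ~15.3 s from detection to arrival
```

Fifteen-odd seconds of warning is ample time to stop; the failure mode is not range but recognising, within that window, that a particular detected object is about to enter the lane.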

If I am correct, then we are talking about the robot driver being not merely as good as, or even ten times better than, the human, but a better driver by a factor of a thousand or more. I see the possibility of someday totally eliminating the chance of an autonomous vehicle crashing into another vehicle or a pedestrian, and I see a day when there will be zero fatalities on the road.

When could this happen? Many years from now, which is why I see it as so important for the testing to continue. But I am also a realist, and I note that ethical, moral, legal and other constraints will now be aimed at the AV industry, causing pauses and delays such as we saw with Toyota today.

And there is an additional pressure point on continued testing that has not been discussed here for some time: the lobbying efforts of the insurance companies. Insurers currently charge handsomely for the risks of humans driving vehicles. Imagine a world with near-zero risk of a traffic accident: what would that do to the profits of the property and casualty insurers, or to the legal profession built around road traffic accidents? These groups will move heaven and earth to throw up roadblocks to slow the testing and deployment of AVs with near-perfect predictive AI, because their entire business model is built on road traffic accidents. Expect this lobbying pressure to be intense.
 
Caporegime
Joined
20 May 2007
Posts
39,703
Location
Surrey
I have to politely disagree with your analysis. I take the view that major improvements in predictive AI will make it possible to anticipate actions like that of the pedestrian killed in the Tempe, Arizona crash. Remember, AVs are already capable of seeing up to three football fields ahead. That is what I was trying to discuss in my previous post.

If I am correct, then we are talking about the robot driver being not merely as good as, or even ten times better than, the human, but a better driver by a factor of a thousand or more. I see the possibility of someday totally eliminating the chance of an autonomous vehicle crashing into another vehicle or a pedestrian, and I see a day when there will be zero fatalities on the road.

Impossible.

How would it possibly be able to read the mind of the person on the pavement/side of the road? Someone could be running along the pavement and literally turn into the path of the car. No amount of predictive AI will stop pedestrians being hurt short of being able to read minds.

No doubt it can still improve a lot, but there will never be zero fatalities with tons of metal flying around at 70mph.
 