I've seen it reported as both walking and cycling - at this point it's probably fairly safe to assume half of what is being reported is complete nonsense. Tomorrow we'll probably find out it wasn't an Uber!
I wonder what's going to happen to the human safety driver.
I guess either way Uber won't want that particular operator "behind the wheel" of one of their cars again, if only to reduce press coverage. They'll probably get an NDA and/or a desk job at Uber, plus PTSD, and never be heard from again.
Can you imagine the press hounding you after being involved in an accident like that?
A few speculative observations:
1. I expect the Tempe, Arizona police department is in possession of the Uber vehicle so the company will not yet be able to gain access to the data captured by its many sensors during the crash.
2. That data, when examined, should help to determine whether the operator or the system was at fault. At that point Uber will be able to move forward, because if it was a system failure they will need to closely examine the software and suspend the fleet for however long it takes to correct it, i.e., an indefinite period. If it proves to be operator error, then Uber will be able to redeploy the fleet almost immediately, and I would expect the operator to be charged and face the consequences.
Or of course there's the third option: the pedestrian deliberately walked out in front of the vehicle, giving neither it nor the on-board operator time to react and slow or stop the vehicle before the impact.
Suicide will be a thought in the minds of investigating officers.
Certainly the local police chief says it is unlikely that Uber will be found at fault in any way, as the collision happened in such a way that no one would have been able to react and stop in time.
http://uk.businessinsider.com/tempe...fatal-self-driving-car-crash-2018-3?r=US&IR=T
You probably don't have to let the AI drive to train it. If you let a well behaved well trained human drive, and kit the car up with sensors, then it can learn from how they drive, and you can compare how it thinks it would have handled situations compared to how the human did.
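As a minimal sketch of that idea (every name here is invented for illustration; this is not anyone's actual stack): log what the sensors see alongside what the human actually does, then score the model offline by how much its proposed action disagrees with the human's.

```python
# Hypothetical sketch of learning from a human driver (behavioural cloning).
# All names (SensorFrame, model_predict, etc.) are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    speed: float                     # vehicle speed, m/s
    steering: float                  # the human's actual steering angle (label)
    lidar: list = field(default_factory=list)  # raw ranges, placeholder

def model_predict(frame: SensorFrame) -> float:
    """Stand-in for the learned policy: returns a proposed steering angle."""
    return 0.0  # a real system would run a trained network here

def mean_disagreement(frames):
    """Average gap between what the model would do and what the human did."""
    return sum(abs(model_predict(f) - f.steering) for f in frames) / len(frames)

# Record frames while the human drives, then score the model offline:
log = [SensorFrame(speed=15.0, steering=0.02),
       SensorFrame(speed=15.0, steering=-0.10)]
print(f"model vs human disagreement: {mean_disagreement(log):.3f} rad")
```

The appeal of this approach is that every mile a human drives generates training and evaluation data, without the AI ever having control of the vehicle.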
You are certainly right in the legal sense about this third possibility, and the comments from the Tempe officer (that the pedestrian unexpectedly walked into the traffic lane, was not on a pedestrian crossing, and stepped out from an area of shadow) seem to indicate that Uber is not legally at fault.
However, there is both the court of law and the court of public opinion. Even on this thread, some contributors have raised concerns about the speed with which AVs are being introduced onto public roads. In the court of public opinion there are those who believe the state of autonomous driving (for Uber in particular) is far from ready for widespread market introduction, and this accident will likely bring further regulatory scrutiny to the AV market.
My sense (speculative of course) is that Uber's vehicle was able to detect the pedestrian before the accident, thanks to the roof-mounted LIDAR and the radar and other sensors on the vehicle. The problem, then, appears to be that neither the machine nor the human operator was able to conclude that a pedestrian was about to walk into their path.
It would be unreasonable to expect a human driver to have detected the pedestrian in this case, but that is exactly what the machine is supposed to be able to do. Specifically, the machine, with its built-in intelligence (AI), is supposed to be able to predict a potential hazard. This is not an easy AI problem to solve: the machine has to detect the movement and then predict the intentions of the detected pedestrian with great accuracy. For example, was the movement in the shadows a pedestrian intending to cross, leaves blowing, an animal moving, etc.? I think you can appreciate the difficulty.
Predictive AI is all about huge amounts of data, and the quality of the forecast depends on new inputs resembling the data the model has already seen. Most likely the Uber algorithm had never encountered a situation with these parameters before and so was unable to identify the hazard correctly and in time to avoid the fatality.
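To make that "unseen data" problem concrete, here is a deliberately toy sketch with invented features and numbers (real perception stacks are vastly richer): a nearest-neighbour classifier that labels a detected object and also reports how far the observation sits from anything it has trained on. An observation far from every training example still gets a label, just not one you should trust.

```python
# Toy illustration of the "unseen data" problem: a nearest-neighbour
# classifier that also reports how far a new observation is from anything
# in its training set. Features and numbers are invented for illustration.
import math

# (feature vector, label) pairs the system has "seen": [height_m, speed_m_s]
training = [([1.7, 1.4], "pedestrian"),
            ([0.4, 0.3], "animal"),
            ([0.2, 2.0], "blowing debris")]

def classify(x):
    distance, label = min((math.dist(x, feats), lbl) for feats, lbl in training)
    return label, distance

# A familiar observation lands close to a training example:
print(classify([1.6, 1.3]))  # ('pedestrian', ~0.14)

# An unfamiliar one (say, a person walking a bicycle across an unlit road)
# is far from everything seen before, and the label it picks up is wrong:
print(classify([1.8, 4.5]))  # ('blowing debris', ~2.97)
```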
The robot must be able to predict this hazard, and other "corner cases", successfully in order for AVs to be commercially successful. However, Uber may not be on a level playing field, as it has driven far fewer miles in autonomous mode than Waymo, which appears to be leading the pack. There is a greater likelihood that a Waymo vehicle would have been able to detect the danger (i.e., its predictive AI would have had the experience, gained from its greater autonomous mileage, to detect the hazard and handle the situation more successfully).
So the catch-22 remains. The temptation is to slow down AV road testing while the details are considered, but doing so also slows the accumulation of miles driven in autonomous mode, which is exactly what enables the AV to learn more and more corner cases and improve its predictive abilities.
...The worst thing we can do is panic and stop development and talk about limiting AVs every time there is an incident...
Exactly. Self-driving cars will not eliminate accidents and fatalities. It is beyond the laws of physics to eliminate any chance of pedestrians being hit: you can't stop a ton of metal travelling at 40 mph dead in an instant, and because of that, accidents will still happen.
Now obviously, if it is found that the car didn't react in time even though it was physically possible to stop, then that is an issue that needs to be fixed. If the person just stepped out right in front of it whilst it was at speed, then there is nothing that can be fixed.
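For a sense of scale, here is a back-of-envelope stopping-distance calculation. All the numbers are generic textbook assumptions, not figures from the crash investigation:

```python
# Back-of-envelope stopping distance at 40 mph on dry asphalt.
# Friction coefficient and reaction time are generic textbook assumptions.
G = 9.81           # gravitational acceleration, m/s^2
MU = 0.8           # tyre-road friction, dry asphalt (assumed)
REACTION_S = 1.5   # perception-reaction delay before braking (assumed)

v = 40 * 0.44704   # 40 mph in m/s (~17.9 m/s)

reaction_d = v * REACTION_S        # ground covered before the brakes bite
braking_d = v ** 2 / (2 * MU * G)  # from v^2 = 2 * a * d, with a = MU * G
print(f"reaction {reaction_d:.1f} m + braking {braking_d:.1f} m "
      f"= {reaction_d + braking_d:.1f} m total")
# ~26.8 m + ~20.4 m ≈ 47 m: anything entering the lane closer than
# that cannot be avoided by braking alone, human or robot.
```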
I have to politely disagree with your analysis. I take the view that major improvements in predictive AI will be possible, enough to anticipate the action of the pedestrian killed in the Tempe, Arizona crash, for example. Remember, AVs are already capable of seeing up to three football fields ahead. That is what I was trying to discuss in my previous post.
If I am correct, then we are talking about the robot driver being not merely as good as, or even ten times better than, the human, but a better driver by a factor of a thousand or more. I see the possibility of someday totally eliminating the chance of an autonomous vehicle crashing into another vehicle or a pedestrian, and I see a day when there will be zero fatalities on the road.
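Rough arithmetic on that sensing-range claim (275 m is my own assumption for "three football fields"):

```python
# Time budget between first detection and impact at 40 mph, assuming a
# sensing range of ~275 m ("three football fields", my assumption).
DETECTION_RANGE_M = 275
speed_ms = 40 * 0.44704            # 40 mph in m/s

print(f"{DETECTION_RANGE_M / speed_ms:.1f} s of warning")  # ~15.4 s
# The bottleneck is not seeing the object at range but classifying it and
# predicting, well inside that window, that it is about to enter the lane.
```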