Autonomous Vehicles

My impression is that the initial comments by the Tempe police about Uber not being at fault were ill-advised and have since been walked back. Thinking some more about Uber's potential culpability in this case, there is a history of questionable practices at Uber that this recent article in The Verge captures well:

https://www.theverge.com/2018/3/20/...dent-arizona-safety-anthony-levandowski-waymo

The article's title says it all: Uber's former head of self driving cars put safety second

And this second article in Recode from nearly a year ago points to turmoil among Uber's self driving car staff: many executive departures, low morale, and concern about the specific trade secrets alleged to have been stolen from Waymo, a case Uber chose to settle recently. That case revolved around the theft of LIDAR secrets from Waymo. LIDAR, as the article points out, is "used to spot and measure distances to other objects so cars can avoid collisions", which is exactly what failed in this fatal crash.

https://www.recode.net/2017/3/24/14737438/uber-self-driving-turmoil-otto-travis-kalanick-civil-war

Am I surprised that the first fatal self driving car crash should involve an Uber vehicle? No.
 
Having just watched the video, IMO it shows a clear issue with Uber's systems. While it would have been hard for a human driver to see that pedestrian*, I'm extremely surprised it wasn't obvious to the car's systems. It was dark, sure, but otherwise it looked like a clear night. Most of the sensors don't work on the visible light spectrum as far as I know, so...

As much as the person shouldn't have been there, there's certainly an issue if the car didn't spot them.

*Although I did spot the pedestrian the first time I watched the video, not knowing when or what I was looking for but knowing something was about to happen, so the collision may have been averted by a proper human driver not distracted by whatever she was looking down at.
 
According to a Bloomberg article tonight ("Human driver could have averted Uber crash"), experts say Uber's self driving systems should have detected the pedestrian.

Specifically, these experts believe that, given the conditions, a typical human driver could have braked to a stop roughly 8 feet before the point of impact.

Other experts questioned Uber's technology. An autonomous driving analyst at Gartner said: "There's only two possibilities: the sensors failed to detect her, or the decision-making software decided that this was not something to stop for." The LIDAR, radar and camera sensor system is designed to provide a 360 degree virtual view of the environment surrounding the car.

Moreover, this analyst said it is "mystifying" why the vehicle didn't react, given that LIDAR systems like the one used on Uber's SUV have a detection range of at least 100 meters and work better at night than during the daytime.

"The comments contrast with those made by the Tempe police chief, who told multiple media outlets that the pedestrian moved suddenly in front of the car and the crash didn’t seem preventable after reviewing footage of the collision."

https://www.bloomberg.com/news/arti...failure-of-uber-s-tech-in-fatal-arizona-crash
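
For what it's worth, a back-of-the-envelope check supports the experts quoted by Bloomberg. Assuming a speed of roughly 40 mph and ordinary braking figures (my assumptions, not numbers from the article), a vehicle that detects an obstacle 100 m out has room to stop well short of it:

# Rough stopping-distance check (assumed figures, not from the article):
# ~40 mph travel speed, 1.5 s of reaction/processing time, and moderate
# braking at 4.5 m/s^2. Harder braking would shorten the distance further.
MPH_TO_MS = 0.44704

def stopping_distance_m(speed_mph, reaction_s=1.5, decel_ms2=4.5):
    v = speed_mph * MPH_TO_MS
    reaction_dist = v * reaction_s           # ground covered before braking begins
    braking_dist = v ** 2 / (2 * decel_ms2)  # ground covered while braking
    return reaction_dist + braking_dist

print(round(stopping_distance_m(40), 1))     # ~62 m, well inside a 100 m LIDAR range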
 
It is developing tech, so there are going to be teething problems. However, the vehicle should have had a better chance than any human of spotting that, despite what some of the driving gods here claim.

Human pedestrian totally to blame imho, no excuse for not seeing the car.
 
Article on Ars about the crash site.
TL;DR: the scene is a lot brighter than the Uber video makes it look. An XC90 using dipped beams illuminates far enough ahead to give 4 seconds of reaction time at the speed they were going.

https://arstechnica.com/cars/2018/0...victim-came-from-the-shadows-dont-believe-it/

Isn't that exactly the point I was making yesterday, when I got my head bitten off?

It was obvious from the original video that there was plenty of street lighting and that it was just the exposure on the camera making it look darker than it was. There are street lights on both sides of the road every few meters.
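
For what it's worth, a quick sanity check of that 4-second figure (my own arithmetic, assuming a speed of roughly 40 mph, which I believe is close to what was reported):

# How much illuminated road does 4 seconds of reaction time require?
# Assumed speed of ~40 mph; the exact figure in the Ars piece may differ.
MPH_TO_MS = 0.44704

speed_ms = 40 * MPH_TO_MS                   # ~17.9 m/s
reaction_window_s = 4.0
print(round(speed_ms * reaction_window_s))  # ~72 m of dipped-beam reach needed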
 
Here are some statements from a NY Times article that just appeared tonight that support what I have been saying about Uber for the last three days:

Uber's self driving cars were nowhere near as good as its competitors' and its systems were in fact struggling to perform adequately. A crash was inevitable.

In fact the NY Times obtained an internal Uber document and found:

1. Uber's new CEO (he replaced Travis Kalanick last August) wanted to eliminate the self driving car programme at Uber but was persuaded that it was in the company's interest to keep it going.
2. Uber's AVs were not living up to expectations for many months. "The cars were having trouble driving through construction zones and next to tall vehicles, like big rigs. And Uber’s human drivers had to intervene far more frequently than the drivers of competing autonomous car projects."
3. "Waymo, formerly the self-driving car project of Google, said that in tests on roads in California last year, its cars went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble. As of March, Uber was struggling to meet its target of 13 miles per “intervention” in Arizona, according to 100 pages of company documents obtained by the NY Times".
4. To accelerate their AV miles and give the appearance of success, Uber decided to remove one of the two test drivers previously in each vehicle so that single test drivers could be spread across more vehicles and rack up miles. In other words, safety be damned. Uber claims the second driver was not there for safety but to enter data, a task that fell to the single remaining driver once the second was removed. In other words, less safety. By the way, Waymo removed its second driver in Phoenix in 2015 after more than six years of testing, but still uses two drivers when it enters a new location. And recently Waymo has felt its programme in Phoenix is so advanced that it has removed all test/safety drivers from its self driving cars there.
5. Uber was put under pressure to deliver a driverless service by the end of this year and their new CEO was expected to visit Phoenix next month to ride in one of their self driving cars in a project internally called "Milestone 1. Confidence". That visit will not take place now. Recall that last month, Uber agreed to settle a long-simmering legal case brought against them by Waymo.
6. Uber's safety driver was looking down, and her hands were not where they were supposed to be, namely hovering above the wheel in case the car needed to slow down suddenly. So not only was Uber's car not road ready according to internal documents, but its safety driver was not following company policy.
7. A year ago, Uber arrived in Phoenix with more than 400 employees and more than 150 self driving cars. In October they decided to eliminate the second driver in the car to accelerate their programme, and this decision was taken over the objections of some of their safety drivers. These drivers were worried that it would be difficult to stay alert after hours of monotonous driving.
8. Comparing the two programmes at Waymo and Uber, the Times notes: "Uber also developed an app, mounted on an iPad in the car’s middle console, for drivers to alert engineers to problems. Drivers could use the app anytime without shifting the car out of autonomous mode. Often, drivers would annotate data at a traffic light or a stop, but many did so while the car was moving, said the two people familiar with Uber’s operations. Mr. Kallman said it designed the app to meet government safety guidelines for in-car software to minimize distractions.

Waymo had a different solution when it moved to a single safety driver. It added a button on the steering wheel for drivers to create an audio explanation when they took the car out of autonomous mode.

Not all drivers followed Uber’s training. One was fired after falling asleep at the wheel and being spotted by a colleague. Another was spotted air drumming as the autonomous car passed through an intersection, according to the two people familiar with Uber’s operations."

Leaving aside the implications of this tragedy and the legal issues facing Uber, the reputation of Uber's self driving programme has been severely damaged, perhaps beyond repair. The fast and loose culture that the new CEO had hoped to bring under control is still a huge problem for them to solve. Had the new CEO followed his initial decision to exit the pursuit of creating their own self driving vehicle, this tragedy would never have happened.

What strategic advice would you give Uber at this stage? Mine would be: close it down and enter a partnership with Waymo (an Uber shareholder) to provide an Uber ride hailing service using Waymo's technology as soon as Waymo decides it is ready to offer it commercially. Recently Waymo filed for a commercial license to operate a ride hailing service with their own app in Phoenix and this was approved in January. I would also recommend that Uber let Waymo choose which employees, including Uber's engineers, they want to retain. And Uber should focus their attention on limiting the reputational and legal damage they will surely face as a result of this tragedy which could have been avoided.

The Times cites a Gartner analyst who suggests that other competitors might be given the benefit of the doubt with a crash. Not so with Uber.

https://www.nytimes.com/2018/03/23/...column-region&region=top-news&WT.nav=top-news
 
You might notice my recent post, and my comments, about a New York Times article that relies upon 100 pages of internal Uber documents obtained from two sources with knowledge of Uber's operations.

One additional item you might find interesting, in view of your comment about the street lighting, is the photo recreation of the street in the New York Times article. Recall that the Tempe police chief said the pedestrian had appeared suddenly out of dark bushes? Well, the NY Times photo showing where the crash occurred clearly shows that, in order to arrive at the point of the accident, the pedestrian had already walked across three lanes of traffic, with the accident happening in the fourth lane. Certainly not a sudden appearance from dark bushes!
 
Having read the story/comments before watching the vid, I was a bit surprised: the pedestrian looked very hard to see with dipped beams until the car got quite close (I assume it was obeying the speed limit). The first parts to become visible are the reflective shoes, but they might be mistaken for road markings initially(?).
I think there is a good chance the pedestrian would have been hit by a human driver, but it does seem weird that the car didn't brake at the last moment once the legs became visible.
 
The NY Times story seems to answer your questions. First, the car should never have had to rely on the "dipped beams". It is a $250,000 machine decked with sensors, radar, cameras and LIDAR, which should have identified the pedestrian long before she came into view of the safety driver. At the very least, the car's software should have caused it to slow down, if not stop. The car did neither. The car's advanced software and hardware failed because Uber's technology is below the standard required for driving on a public road. Second, the human safety driver was not obeying rules set by Uber, namely to hold her hands above the wheel, ready to intervene if a hazardous situation arose.

In a rush to win over a dubious CEO who thought their self driving programme should be abandoned, Uber cut corners, and the crash looks to have been inevitable.

By the way, there is a further point to make about the video you watched. It is video from Uber's dash cam inside the vehicle, and it is of very poor quality. Ars Technica published an article yesterday showing videos taken by locals in Tempe, Arizona, and the street is very well lit. All this will be revealed in discovery when the lawsuit against Uber commences. It seems unlikely that there will be any chance of a cheap cover-up of the facts.
 
Agreed, I thought your earlier point about the street lighting was perceptive.

On an earlier point:
Also, if said AVs were always to slow down to a crawl or come to a dead stop in a 30 mph or 40 mph limit (as was the case in Tempe) as soon as they are near pedestrians, "just to be on the safe side in case they step out", then traffic in general will be even slower in places than it is currently, and some of the perceived benefits of AVs are instantly null and void, in that they should (in theory) allow traffic to move faster in cities and towns than it currently does in most urban areas. Plus, human drivers following will never expect a vehicle to come to a sudden, complete stop in front of them in a situation where it is perfectly reasonable to stay at a steady 40 mph, which would cause a load of collisions.

Do the AI systems identify and provide feedback on the behaviour of pedestrians and other road users? E.g. we would categorize pedestrians based on many factors (clothing/build/age/fitness/intent/integrated behaviour such as glances/posture/gait) and then cover the brake, moderate speed appropriately, and use pre-emptive avoidance.
...so will Uber be transparent in disclosing the failure mechanism? If, for example, the AI system had decided this was just a regular cyclist, "no threat", versus an erratic pedestrian pushing a bike?
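
Purely as an illustration of the kind of categorization and pre-emptive speed moderation being asked about here, a hypothetical sketch is below. The class, the risk factors and the thresholds are all invented for illustration; this is not how Uber's (or anyone's) actual stack works.

# Hypothetical sketch only: assign a tracked road user a rough risk level and
# moderate speed pre-emptively. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class TrackedRoadUser:
    kind: str                 # e.g. "pedestrian", "cyclist"
    distance_m: float         # current range to the object
    closing_speed_ms: float   # positive when paths are converging
    erratic: bool             # e.g. mid-road, pushing a bike, unpredictable gait

def target_speed_mph(user: TrackedRoadUser, current_mph: float) -> float:
    if user.closing_speed_ms <= 0:
        return current_mph                      # paths not converging: keep speed
    time_to_conflict_s = user.distance_m / user.closing_speed_ms
    if time_to_conflict_s < 2.0:
        return 0.0                              # imminent conflict: brake to a stop
    if user.erratic or time_to_conflict_s < 5.0:
        return min(current_mph, 20.0)           # cover the brake, slow pre-emptively
    return current_mph

# An erratic pedestrian pushing a bike triggers a slowdown even at some range:
print(target_speed_mph(TrackedRoadUser("pedestrian", 60.0, 8.0, True), 40.0))  # 20.0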
 
The lawsuit will require Uber to be "transparent" with respect to all "mechanisms". The NY Times article, which draws on internal Uber documents, mentions numerous situations that Uber's systems were having trouble handling. Any attempt at a cover-up will be met with a strong rebuke by the courts. It will all come out over time.

Another complication is suggested in this recent link from Brad's blog, Brad being a former employee of Google. In this extensive post, written before the NY Times exposé linked above, he mentions that he has heard Uber may have been testing the vehicle with the LIDAR turned off completely. As you may know, it has been reported that Tesla has been testing its systems without the use of LIDAR. But if that were the case here, Uber should at least have had its LIDAR systems on as back-up.

http://ideas.4brad.com/it-certainly-looks-bad-uber
 
Velodyne, the manufacturer of the LIDAR hardware used by Uber, now denies that its LIDAR malfunctioned in any way. Velodyne's LIDAR must be integrated with Uber's other hardware (sensors, radar, cameras, etc.) and software systems in order to interpret the data that Velodyne's hardware generates. Velodyne says it is baffled as to what happened, but that we should look elsewhere, because its LIDAR did not malfunction.

Velodyne says: "our Lidar doesn’t make the decision to put on the brakes or get out of her (the pedestrian's) way.”

Bottom line message: look no further than Uber.

https://www.bloomberg.com/news/arti...self-driving-uber-defends-tech-after-fatality
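
Velodyne's point, as I read it, is about where responsibility sits in the stack: the LIDAR unit only returns range measurements, and it is the integrator's perception and planning software that has to turn those returns into a braking decision. A hypothetical sketch of that separation (the functions and thresholds below are invented, not Uber's or Velodyne's actual interfaces):

# Hypothetical layering sketch: the sensor only emits raw returns; detecting an
# obstacle and deciding to brake happens in the integrator's own software.
def lidar_returns():
    # stand-in for the sensor driver: (bearing_deg, range_m) pairs
    return [(-0.5, 45.3), (0.0, 45.2), (0.5, 45.1), (30.0, 80.0)]

def perceive_obstacles(returns, fov_deg=15.0, max_range_m=100.0):
    # integrator's perception layer: keep returns in the forward corridor
    return [r for r in returns if abs(r[0]) < fov_deg and r[1] < max_range_m]

def plan(obstacles, brake_range_m=50.0):
    # integrator's planning layer: the braking decision lives here, not in the sensor
    return "BRAKE" if any(rng < brake_range_m for _, rng in obstacles) else "CONTINUE"

print(plan(perceive_obstacles(lidar_returns())))  # BRAKE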
 
The connection between Uber's system failure and its attempt to steal Waymo's LIDAR technology is drawn by this blog author at Curbed, in a story titled "It's time to delete Uber from our cities", which argues that "Uber's self driving program never played by the rules and now our safety is at risk".

The Waymo technology that Uber was trying to steal is the technology that makes Waymo self driving cars much safer than any other AV on the road today. Waymo describes it as follows:

"One of the most powerful parts of our self-driving technology is our custom-built LiDAR — or “Light Detection and Ranging.” LiDAR works by bouncing millions of laser beams off surrounding objects and measuring how long it takes for the light to reflect, painting a 3D picture of the world. LiDAR is critical to detecting and measuring the shape, speed and movement of objects like cyclists, vehicles and pedestrians."

https://www.curbed.com/transportation/2018/3/23/17153200/delete-uber-cities
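
The ranging principle in that description is ordinary time-of-flight: distance is half the pulse's round-trip time multiplied by the speed of light. A minimal worked example (generic physics, nothing Waymo-specific):

# Time-of-flight ranging: a pulse returning after ~667 nanoseconds has
# travelled to an object roughly 100 m away and back.
SPEED_OF_LIGHT_MS = 299_792_458  # metres per second

def range_from_round_trip_m(round_trip_s):
    return SPEED_OF_LIGHT_MS * round_trip_s / 2

print(round(range_from_round_trip_m(667e-9), 1))  # ~100.0 m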
 
Following up my comment on categorizing pedestrians, an interesting recent paper on the topic:

Virtual Reality based Study to Analyse Pedestrian Attitude towards Autonomous Vehicles

It shows how far autonomous vehicles are from the interaction we have when simply using a zebra crossing.
This bit is interesting:

1.1 Problem statement
Research shows that pedestrian-driver interaction at a crossroad relies heavily on eye-contact between the pedestrian and the human driver [25, 26]. Other signals such as gaze, body movements and posture helps the driver understand pedestrian's intent and in turn helps the pedestrian understand the driver's intention [26]. There is a tacit transaction taking place at road crossings where the pedestrian and driver interact to decide about who crosses the road first. The pedestrian can deduce the intention of the driver based on the driving behaviour without explicit communication between the two. The purpose of this study is to understand how a pedestrian would perceive vehicle behaviour in a situation where no driver is present. The pedestrian might have to rely on other factors to understand vehicle intention. This insight about pedestrian behaviour could be used to design interactions for autonomous vehicles. The following research question has been framed:

2.3 Existing solutions
Concepts for autonomous vehicle's interaction with pedestrians have been developed and prototyped by few companies and research institutes. Most of the concepts involve some form of visual communication, either in the form of LED displays or projectors. Smiling Car (Figure 4) is a concept developed by Semcon, where a self-driving car interacts with pedestrians by smiling. When the vehicle detects a pedestrian, a display in front of the car lights up to depict a smile trying to communicate that the car will stop for the pedestrian [29]. Mercedes's F 015 Luxury in Motion (Figure 4) communicates with pedestrians using LED displays outside its body. F 015 lets the pedestrian know that the car has noticed them and illuminates their path with projected lights to guide them [19]. The projectors are also able to display informative messages according to the situation (Figure 4, bottom row). AEVITA [22] concept
 
Going forward, the interaction between pedestrian and AV will surely be a major focus of study, research and effort. I believe much has been done in this area but more is needed. Semcon's smiles and Mercedes' LED lights are useful. I recall a Waymo patent that attempts to soften the material/composite of the front of a self driving vehicle so that if a pedestrian or cyclist is struck, the car's composite acts like a shock-absorber to soften any blow.

Part of the problem, I believe, is the current laws, especially in the US, that define with amazing precision exactly what can be allowed to drive on public roads. Recall that Waymo first considered building vehicles with no steering wheel or brakes but abandoned the idea when it realized that legislation would have to change first (a lengthy process). Now its vehicles have a steering wheel and brakes, but the passenger is prevented from using them, leading to interior design inefficiencies.

An evolving issue.
 
I will now stop reading and contributing to this thread, as the main contributor is so anti-Uber and pro-Waymo that there can never be an open or equal discussion or debate.
 