Autonomous Vehicles

Google's parent Alphabet appears to be building a transportation company of the future through a series of investments in far-flung projects loosely tied together by a transportation theme. In addition to self-driving vehicles, it has invested in electric scooters, flying cars, and SpaceX (rocket launches).

Alphabet was the most active corporate investor in the US, with more than 100 deals made both directly and through its three venture capital subsidiaries: CapitalG, GV, and Gradient Ventures.

Google Maps is the most common way users access Google's transportation features. According to CNBC, "More than 55 percent of smartphone users in the U.S. use Maps, according to comScore, and it has more than one billion monthly active users worldwide relying on it for walking, vehicle, or public transportation directions." Google Maps also incorporates Waze, and Alphabet's subsidiary Sidewalk Labs is actively thinking about the future of transportation in cities.

https://www.cnbc.com/2018/07/03/alphabet-transportation-investments-projects-overview.html
 
Memory chips, a key component of self-driving vehicles, have seen significant price declines in 2018.

After a historic rise in 2017, prices this year of two major types of memory chips—NAND and DRAM—have fallen 37% and 16%, respectively, according to semiconductor sales tracker DRAMeXchange.

Samsung reported results for the prior quarter today and attributed its smaller profit growth to the decline in memory prices. Supply has now overtaken demand, especially with weakness evident in smartphone sales, another major end market for memory chips.
 
Will Uber and Alphabet develop AVs in partnership, or will Uber partner elsewhere?

Alphabet was a relatively early investor in Uber and added to its equity position when the Waymo v. Uber trial ended in an out-of-court settlement. Google also has an equity stake in, and a US AV partnership with, Lyft, Uber's principal competitor.

Meanwhile, SoftBank of Japan, which holds equity stakes in many transportation companies globally (including a 15% stake in Uber and a 19% stake in GM Cruise), may wish to broker a deal between Uber and GM Cruise. What would each bring to a partnership? GM makes the robot (the driverless system), owns the data, and manages the fleet. Uber has the network in place to accelerate GM's efforts in utilization. Uber recently announced that it plans to resume testing AVs in Pittsburgh after the fatal crash a few months ago. Meanwhile, GM has installed 18 DC fast chargers in San Francisco's Embarcadero district, a further step in its commercialisation of robo-taxis.

As Waymo is attempting to achieve with AV orders of approximately 82,000 vehicles from Fiat Chrysler and Jaguar, EV and AV infrastructure works better with fleets (a point successfully made by Uber and Lyft). Fleets will accelerate the development of EV infrastructure, such as charging stations and grid improvements, and AV infrastructure, such as communication networks, 5G, spectrum, and supporting regulations. EV charging station utilization can also be far higher with ride-sharing fleets than with individually owned EVs (retail customers). An EV network will want to optimize charger locations, usage patterns, and duty cycles to maximize battery life and reusability.

GM's President recently left the Lyft Board of Directors, suggesting a lower likelihood of a strategic relationship.

Future tie-ups in this space are worth watching, perhaps even one between Uber and GM.
 
The Verge has published a fascinating article on why we are still a long way from true autonomous vehicles (a so-called AI roadblock) and the difficulties in getting there.

Will new edge cases keep popping up and cause the robotic system to fail to identify the right solution? As the Verge puts it: "Will self-driving cars keep getting better, like image search, voice recognition, and the other AI success stories? Or will they run into the generalization problem like chat bots? Is autonomy an interpolation problem or a generalization problem? How unpredictable is driving, really?"

https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber
 
Interesting chat with a Jaguar engineer today regarding the I-Pace contract with Waymo; it may well have hit a roadblock.

Waymo is insisting that the vehicles pass a series of tests approximately 1,000 percent tougher than any test a current production vehicle is designed to pass, or is required by regulation to pass, in order to be sold anywhere.

JLR is now worried the deal may fall through, as the vehicles will not be up to Waymo's spec and, without huge modifications, never could be.
 

While we are still only speculating about the deal's ultimate future, it is nonetheless a fascinating tidbit about how Waymo and JLR each see their respective roles and functions. Thanks for sharing.

I can understand that, in supplying Waymo's AV program, JLR has not previously experienced Waymo's critical and absolute need for safety. The rule book has not been written by the states, because they do not want to crimp innovation, so it falls upon the "driver" (Waymo, etc.) to establish its own standard of safety. Following the Uber fatality and the Tesla Autopilot incidents, Waymo surely sees itself as needing to be leagues ahead of anyone else in passenger safety, and is laying down what it expects in order to achieve that. If JLR is able to work with Waymo and help it meet its high standards, it will have established its own reputation as an AV supplier, perhaps second to none. This is the true nature of a partnership.

Studies have shown that the majority of the population still does not trust an AV to be safer than a human-driven vehicle, and the fear factor can only be overcome by many millions, perhaps billions, of miles of safe AV driving. This presents the classic chicken-and-egg dilemma of which comes first: an extreme passenger-safety track record or consumer willingness to experiment with AVs.
 
Well, so far it doesn't look safer. There have been a number of deaths and crashes already, and there are still only a very small number of AVs on the road.



The accepted metric for comparing the safety of autonomously controlled cars with human-controlled cars is the number of fatalities per 100,000,000 (one hundred million) miles.

Currently, human-driven vehicles have a death rate of approximately 1.25 deaths per 100 million vehicle miles, or 12.5 deaths per one billion vehicle miles.



The latest data is that the Tesla fleet has now travelled over 3 billion miles in all, over 1.3 billion of them in Autopilot mode (as close to autonomous as you will currently get).


There have been 4 deaths reported involving autonomous vehicles: 3 drivers (all in Teslas in Autopilot mode, and in every case the death was attributed to human error, i.e. not paying attention) and one pedestrian.



So that equates to 4 deaths per 1.3 billion miles, or about 3.08 deaths per one billion autonomous-vehicle miles: only about 24.6% of the human-driven rate, making AVs roughly four times safer than having a human behind the wheel.
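The arithmetic above is easy to check with a short snippet. Note that the inputs are the post's own estimates (4 deaths, 1.3 billion Autopilot miles, 1.25 deaths per 100 million human-driven miles), not official statistics:

```python
# Rough fatality-rate comparison using the figures quoted in this post
# (the post's estimates, not official statistics).

human_rate = 1.25 / 100e6      # deaths per mile, human drivers
av_deaths = 4                  # reported Autopilot-related deaths
av_miles = 1.3e9               # estimated miles driven on Autopilot
av_rate = av_deaths / av_miles # deaths per mile on Autopilot

per_billion_human = human_rate * 1e9  # 12.5 deaths per billion miles
per_billion_av = av_rate * 1e9        # ~3.08 deaths per billion miles

print(f"Human: {per_billion_human:.2f} deaths per billion miles")
print(f"AV:    {per_billion_av:.2f} deaths per billion miles")
print(f"Ratio: {per_billion_av / per_billion_human:.1%}")  # ~24.6%
```

Whether Autopilot miles (mostly highway, with a human supervising) are comparable to all human-driven miles is a separate question, taken up in the replies below.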


And after all that, AVs will only ever get better.

Whereas after more than a hundred years of human development behind the wheel of a car, we have not improved at all.

The cars have improved considerably over that time, and had humans evolved at the same rate, or even at the rate AVs are evolving, we should already have zero vehicle deaths and thus never need AVs.

But we have not; we are as stupid as ever, and we get distracted, tired, and plain idiotic behind the wheel of a vehicle, so we do need AVs to stop us killing ourselves unnecessarily.
 

I mentioned previously that it would be necessary for AVs to travel hundreds of millions, or perhaps billions, of miles in order to demonstrate their safety and reliability in terms of fatalities and injuries. But to do so would take years, and I feel there is a moral imperative, as Entai has suggested, to release them on public roads, since the record of human drivers is accepted as poor. This is why I suggested in my earlier reply that the speculated push and pull between Waymo and JLR is so very important to both of them: both need to find innovative methods of demonstrating safety and reliability. We just cannot afford to wait for these perhaps billions of miles to otherwise take place (the chicken-and-egg problem).

Therefore I refer to a study by Rand Corporation.
1. "To demonstrate that fully autonomous vehicles have a fatality rate of 1.09 fatalities per 100 million miles (R=99.9999989%) with a C=95% confidence level, the vehicles would have to be driven 275 million failure-free miles. With a fleet of 100 autonomous vehicles being test-driven 24 hours a day, 365 days a year at an average speed of 25 miles per hour, this would take about 12.5 years." Testing to rates comparable to, or better than, human drivers would, according to RAND, take many years more.
2. A different form of pilot studies needs to be created to overcome the obvious problems listed in item 1. Such pilot studies would need to involve "public-private partnerships in which liability is shared among developers, insurers, the government, and consumers. Simultaneously, the technology will evolve rapidly, as will the social and economic context in which it is being introduced. In fast-changing contexts such as these, regulations and policies cannot take a one-shot approach. Therefore, in parallel to creating new testing methods, it is imperative to begin developing approaches for planned adaptive regulation."

While not entirely satisfactory, this suggests room for a compromise solution that will be deemed safe enough for the public to feel comfortable in their use of AVs.

https://www.rand.org/content/dam/rand/pubs/research_reports/RR1400/RR1478/RAND_RR1478.pdf
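RAND's 275-million-mile figure can be reproduced from the standard zero-failure demonstration formula: to show, with confidence C and no observed failures, that the failure rate is no worse than r, you need ln(1/(1-C))/r miles. A quick sketch, using the rate and fleet parameters RAND quotes:

```python
import math

# Zero-failure demonstration: miles needed to show the fatality rate
# is no worse than `rate`, with confidence `C`, given zero fatalities.
rate = 1.09e-8   # target: 1.09 fatalities per 100 million miles
C = 0.95         # 95% confidence level

miles_needed = math.log(1 / (1 - C)) / rate   # ~275 million miles

# RAND's illustrative fleet: 100 vehicles, 24 h/day, 365 days/yr, 25 mph
fleet_miles_per_year = 100 * 24 * 365 * 25
years = miles_needed / fleet_miles_per_year   # ~12.5 years

print(f"{miles_needed / 1e6:.0f} million failure-free miles, "
      f"~{years:.1f} years")
# → "275 million failure-free miles, ~12.5 years"
```

A single observed fatality resets the demonstration, which is why RAND argues for pilot studies and adaptive regulation rather than waiting for the miles to accumulate.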
 
Yes, but there are human drivers driving in places where AVs simply can't. Is this just public-road deaths or all vehicle deaths? And is it global, including places with basically no rules of the road? I'd like to see how they navigate around an Indian city...

Driving on a mostly clear road at a sensible speed, you're probably never going to die in a Western country. But in an AV you don't know whether it's just going to randomly drive into something it didn't notice (like a frickin' fire engine), which has happened more than once. There is always the chance that the AV screws up in a perfectly safe setting and kills you; people don't want to place their lives in the hands of a piece of code.
 


So no one ever gets on an aeroplane, then?

You place your life in the "hands" of a piece of code every time you fly.

Yes, there are pilots on board, but no controls on modern planes are directly linked to the flight control surfaces; it is all fly-by-wire, with computers actually doing the flying for the entire journey, including takeoffs and landings.

The pilots have the controls in their hands, but those controls are only switches instructing the computers to do something when the control is in a certain position.

As you say, at any moment the code in the computer could decide to do the exact opposite of what the pilot is asking for, and no one would be able to do anything about it.

On every modern plane, you could leave the pilot at home and the plane would do it all itself very easily.

A friend of a friend is a British Airways pilot, and she says that on quite a few occasions she has set the plane on autopilot for the entire journey, including takeoff and landing, without touching any control for the entire trip.
 

On a plane there are human pilots who can take over if the autopilot goes wrong (and they do sometimes go wrong!). Also in the air there aren't fire engines to crash in to or cyclists to run over, it's mostly empty space...
 


And on several occasions pilots have tried to take over when the autopilot went wrong and been unable to: the computers locked everything out, the plane dived into the ground and crashed, and 200 to 300 or more died, not just one or two.

There was a study commissioned by the Federal Aviation Administration which concluded that "pilots sometimes rely too much on automated systems and may be reluctant to intervene". The study added that many pilots "lack sufficient or in-depth knowledge and skills to properly control their plane's trajectory, should they need to fly manually, due to inadequate training methods".

One report, which studied over 200 plane crashes across a 40-year period, found that in around two-thirds of those crashes autopilot issues were a major contributing factor and the pilots were unable to control the plane manually.
 