Autonomous Vehicles

But as humans we still haven't adapted to stop making the same errors we always have; at least as AV/AI evolves, it should get better with every generation.

Although humans themselves won't be evolving (as drivers) with each generation, we do take action off the back of experience: changing legislation to influence behaviour, making cars safer, evolving driving instruction and tests, identifying accident blackspots, and so on, all with the goal of better outcomes for drivers. This doesn't make a 17-year-old a better driver than a 17-year-old 30 years ago, but it does provide a better framework for them to operate in. Errors become less impactful, or are discouraged from occurring, because we try to reduce dangerous or risk-taking behaviour such as drink driving, speeding and mobile phone use.
 
To some extent, but in general we do a really bad job of it.

Take a poor road layout: the normal approach is to put a speed camera nearby to deter speeding, and maybe drop the limit.
We don't, for example, force the nearby landowner to remove the hedge causing a viewing obstruction, fix the negative-camber corners on the road, fill in the potholes in a timely manner, or ensure road markings are kept clear, etc.
 
The best IT systems have an expected uptime a helluva lot higher than 99%, though; the expected downtime is orders of magnitude lower than 1%.
That said, I agree that there will be accidents, and part of the issue will be that when people die as a result there will be a big backlash against the technology, perhaps disproportionate to the scale of the problem compared with other killers.
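
For a sense of what those extra "nines" mean in practice, here's a quick back-of-the-envelope sketch (Python, purely illustrative) converting uptime percentages into expected downtime per year:

```python
# Back-of-the-envelope: what different availability figures mean in practice.
# Converts an uptime percentage into expected downtime per year.
HOURS_PER_YEAR = 24 * 365

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.3%} uptime -> ~{downtime_hours:.2f} hours of downtime per year")

# 99% uptime still allows ~87.6 hours down per year; each extra "nine"
# cuts that by a factor of ten.
```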
Humans have this little advantage called "life experience". It allows us to approach new and unexpected situations without (a) shutting down/aborting or (b) failing catastrophically because we don't recognise or interpret the danger.

AI/machine learning is focused on a task. The machine has no "life experience" to fall back on when it encounters a situation it cannot interpret. In such cases it might choose to shut down as safely as possible (i.e. stop), or it might make a catastrophic decision because it simply doesn't understand and can't interpret something genuinely new.

That's the biggest difference. Humans can handle new situations (which are never completely new, but may be sufficiently different from previous experience to be called "new") by applying related experiences. Machines lack other "experiences" beyond the focused task they are built to solve. If something crops up that cannot be interpreted, there is much more scope for failure, some of which could easily be fatal in this application.

What's more, some of these failure modes might take years to discover if they are fringe cases. I would expect (in my ignorance) multiple disasters a year with autonomous cars as the various failure modes come to light.

Edit: Lastly, do you trust the mega-corps to have your safety in mind? Really?

https://www.theguardian.com/technology/2016/jan/12/google-self-driving-cars-mistakes-data-reports

I'm not sure what the motivation is behind the companies actively pushing the technology, tbh, besides prestige and the competitive nature of these firms. But Google lobbied to be able to withhold data about accidents and "AI disengagement" events from the authorities. They seem to have been forced into disclosing these, but they get to decide themselves which to report and which to keep secret, and it seems they choose to keep the vast majority of such incidents unreported.

Do you trust Google to act with your best interests in mind? Your safety? I know I don't.
 
Most of those road-layout problems would likely cause varying degrees of trouble for autonomous vehicles as well. Humans can improve by reducing drink driving, speeding and mobile phone use; AVs can't improve in those ways, because presumably they are programmed not to speed anyway, and the other two points aren't relevant to them.
 
The discussions over the last few days have been very thoughtful and have raised a number of questions about safety, AI, AV goals and AVs vs humans, and now, with your comments, questions about the motivations of AV companies such as Google/Waymo. On that last point, you cite a Guardian article from January 2016 about incident disclosure. While relevant, I see your comment and the Guardian article as missing the bigger picture.

I start with the basic assumption that safety is a major, unsolved human problem: road traffic deaths globally run at 1.3 million per annum, and 94% of those deaths are the result of human driving error. So why did Google get involved? Any search of self-driving technology history will tell you that Google considers it a corporate mission to attempt to solve large-scale, unsolved human problems. Road traffic deaths are one such problem they believe is soluble; life expectancy is another (see Verily, its life sciences effort). Will all their efforts succeed? Probably not. Is it worth allocating huge corporate resources to these unmet human needs? I believe so. Will they attempt to make a profit in the process? Of course. Do they have a competitive advantage? I believe so: a longer operating history than most competitors, their track record and miles driven, their virtual testing experience, and their proven ability to analyse data and apply AI/ML techniques.
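
To put that scale in perspective, quick arithmetic on the figures quoted above (the 1.3 million and 94% numbers come from the paragraph; the calculation is just illustration):

```python
# Rough scale of the problem, using the figures quoted in the post above.
annual_road_deaths = 1_300_000   # global road traffic deaths per annum
human_error_share = 0.94         # share attributed to human driving error

deaths_from_human_error = annual_road_deaths * human_error_share
print(f"~{deaths_from_human_error:,.0f} deaths per year attributed to human error")
# ~1,222,000 deaths per year attributed to human error
```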

Your comments refer to the ability of humans to exercise judgement in fringe cases vs the ability of technology to perform in such cases. Do we know exactly where the ranking of the two abilities stands today? Not really, as much of the information is still corporate secret. While we suspect that humans still hold the edge, especially in fringe cases, that gap appears to be narrowing. How safe must an AV be to be approved for wider use? Must it be twice as good as a human driver? Ten times as good? What we do know is that Waymo has operated its growing fleet of vehicles in autonomous mode in the US for over 4 million miles of "driving", with more testing being done every day in varying conditions: snow, fog, ice, rain, inner city, etc. We also know that Waymo's number of disengagements over those 4 million miles is the lowest of any company's AV effort, and that Waymo has simulated driving experience of more than 3.5 billion miles, where edge cases are tested and retested routinely. Waymo is now testing its vehicles on the road in 24 US cities, including operating a Level 4 ride-hailing service with no driver in the front seat in Chandler, Arizona. Yesterday, I noted that Waymo and the City of Atlanta, Georgia announced that Waymo's Chrysler Pacifica minivans have begun 3D mapping Atlanta for AV road testing to begin soon, making Atlanta Waymo's 24th test city.
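
On comparing companies' disengagement records: the usual yardstick from the California DMV reports is miles driven per disengagement. Purely as an illustration of how that comparison works, here is a sketch; the counts below are made-up placeholders, not figures from any real report:

```python
# Illustrative only: California DMV permit holders must report
# "disengagements" (handovers to a human safety driver). A common
# comparison metric is miles driven per disengagement.
# The numbers below are hypothetical, not real report data.
fleet_reports = {
    "company_a": {"miles": 4_000_000, "disengagements": 63},
    "company_b": {"miles": 500_000, "disengagements": 400},
}

for name, report in fleet_reports.items():
    rate = report["miles"] / report["disengagements"]
    print(f"{name}: {rate:,.0f} miles per disengagement")
```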

I expect Waymo to introduce its technology at a gradual pace, in ride-hailing fleets in controlled environments at first, to give the public and regulators increasing confidence. Waymo states that safety is its primary goal. When its AI/ML technology is combined with its mission of dramatically reducing road traffic deaths, I see good possibilities. Will traffic deaths still occur once AVs are introduced? Sadly yes, especially with human drivers still part of the equation. I see Level 4 automation as achievable in controlled environments for ride-hailing in the next couple of years, with Level 5 AVs still some years away, and the prospect of consumers being able to purchase Level 5 AVs further away still.

http://www.ajc.com/news/local/gridl...y-improve-road-safety/WbKQNSFQrJt7bBJWtUr2aM/

Looking at the partnerships Waymo has already announced reveals something more about its strategy: with Fiat Chrysler, to produce the Pacifica minivan with Waymo's AV technology built in at the factory; with Avis, to service its AVs in Phoenix and Chandler, Arizona; with AutoNation, to provide parts for its AVs; and with Trov, an insurance company backed by Munich Re, to insure passengers in its ride-hailing fleets.

I expect much more progress in 2018. Stay tuned.
 
OK, humans have "life experience", but in the vast majority of cases these are purely individual life experiences. Humans do not communicate with each other that much to share those experiences, except in very local familial or associative situations; they do not tend to spread their individual knowledge amongst all other humans worldwide.

The AIs in AVs do, and/or will.

Teslas already send information about each vehicle's local "experiences" of new situations and previously unknown instances to a central hub, where it is processed and sent back to all vehicles linked to the hub. So all Teslas, wherever they are in the world, get the same knowledge and "life experiences" and together build a knowledge base of how to deal with every situation they come across. If a car in Australia comes across a previously unknown situation and deals with it successfully without endangering its occupants, then within minutes, or at most an hour or so, every Tesla in the world will know how to deal with that situation, or anything approximating it, should they find one similar.
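
Tesla's actual data pipeline is proprietary, so purely as a toy illustration of the fleet-learning idea described above (all class and method names here are hypothetical), the hub-and-fleet loop might look something like this:

```python
# Toy sketch of fleet learning: each car reports novel "experiences" to a
# central hub, which redistributes the pooled knowledge to the whole fleet.
# All names are hypothetical illustrations, not any manufacturer's API.
from dataclasses import dataclass, field


@dataclass
class Hub:
    known_scenarios: set = field(default_factory=set)

    def upload(self, scenario: str) -> None:
        self.known_scenarios.add(scenario)

    def sync(self, car: "Car") -> None:
        car.experience |= self.known_scenarios


@dataclass
class Car:
    name: str
    experience: set = field(default_factory=set)

    def encounter(self, scenario: str, hub: Hub) -> None:
        if scenario not in self.experience:
            self.experience.add(scenario)
            hub.upload(scenario)  # one car's novel event...


hub = Hub()
sydney, oslo = Car("sydney"), Car("oslo")
sydney.encounter("kangaroo on motorway", hub)
hub.sync(oslo)  # ...becomes every car's "life experience"
print("kangaroo on motorway" in oslo.experience)  # True
```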

The long-term plan of all AV manufacturers is to link them all, so they all learn together and share their knowledge and experiences; it is in the best interests of all.

A human learns their "life experiences" over their lifetime (70-80 years) and a few hundred thousand miles of driving their own car in their own little area.

AVs' machine-learning AIs will learn massively more in just a few months, over several hundred million miles, across all sorts of terrain all over the world, and will be able to apply related knowledge gained from all other AVs to any given situation and know how to solve it safely. And if a situation comes to light that the vehicle is not sure of, it will just do exactly what any human should: put the brakes on, come to a stop safely and under complete control, and work out what to do next; not what a fair few humans would do in a scary situation, which is panic and escalate a simple situation into one that causes an accident.
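
That "brake to a controlled stop when unsure" behaviour is roughly what the standards literature calls a minimal risk manoeuvre. A minimal sketch of such a confidence-gated fallback, with the threshold value purely assumed:

```python
# Hypothetical sketch of a confidence-gated fallback: act on the planner's
# decision only when perception confidence is high enough; otherwise
# execute a controlled stop rather than guessing.
CONFIDENCE_THRESHOLD = 0.95  # assumed tuning value, not from any real system


def choose_action(planned_action: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return planned_action
    # Unrecognised situation: stop safely, under control, and reassess.
    return "minimal_risk_stop"


print(choose_action("continue_in_lane", 0.99))  # continue_in_lane
print(choose_action("continue_in_lane", 0.40))  # minimal_risk_stop
```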
 
AVs' machine-learning AIs will learn massively more in just a few months, over several hundred million miles, across all sorts of terrain all over the world, and will be able to apply related knowledge gained from all other AVs to any given situation and know how to solve it safely
Wow. In just a few months of AVs being operational they will know how to solve any given situation safely?

I suspect that is a position of faith rather than evidence.
 
No, I didn't say:

Wow. In just a few months of AVs being operational they will know how to solve any given situation safely?

I should have been clearer: by the time full Level 5 AVs and AIs are in mainstream use, yes, I reckon that won't be far off.

Then, as every new AV comes online, it has the knowledge of every one before it, plus all the knowledge from all the simulators etc.

However, as that situation is still another 30-40+ years away, and Waymo cars already have many millions of miles of knowledge, and Teslas many millions more, add in another 30+ years of knowledge gained from all the manufacturers, simulators and cars on the roads by then, and we will be looking at knowledge gained over many trillions of miles. So, by the laws of probability, 99.99999% of every possible scenario will have been encountered and a solution known.
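
Whether trillions of miles really gets you to "99.99999% of every possible scenario" can be sanity-checked with a simple model: if an edge case crops up independently about once every N miles, the probability of having seen it at least once in M fleet miles is 1 - exp(-M/N). A quick sketch, with the rates assumed purely for illustration:

```python
# Probability of having observed a rare scenario at least once, assuming it
# occurs independently about once every `miles_between` miles (a simplifying
# Poisson assumption, not a claim about real traffic).
from math import exp


def seen_at_least_once(fleet_miles: float, miles_between: float) -> float:
    return 1 - exp(-fleet_miles / miles_between)


# A one-in-a-billion-miles event vs a trillion fleet miles: near certainty.
print(f"{seen_at_least_once(1e12, 1e9):.7f}")   # 1.0000000
# But a one-in-ten-trillion-miles event is still probably unseen.
print(f"{seen_at_least_once(1e12, 1e13):.3f}")  # 0.095
```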
 
OK. 30-40 years is a lot more realistic a timescale than some expectations I've heard lately :) There are a lot of people who think this technology is just around the corner, but I think 30+ years is much more realistic.
 
I think you are being far too conservative in your assessment of when AVs (using the power of artificial intelligence) will become commonplace on roads. Initially, I see ride-hailing in controlled environments as the most likely first commercial step: Waymo's ride-hailing AVs in Chandler, Arizona, for example.

In a recent speech, Google's CEO stated that AI is "one of the most important things that humanity is working on. It's more profound than, I don't know, electricity or fire", adding that people learned to harness fire for the benefit of humanity but also needed to overcome its downsides. Pichai also said that AI could be used to help solve climate change issues, or to cure cancer.

Google says it is an "AI first" company now. Here is an interesting link:

https://www.theverge.com/2018/1/19/...ial-intelligence-fire-electricity-jobs-cancer
 
There's nothing in that quote that says anything beyond AI "being important for our future"... it's aspirational stuff. No timelines.

Historically, we've always been much too optimistic in predicting how the future will look. Remember, in the 70s they thought we'd all be living in space by now.

As for curing cancer... everybody would herald that as a fantastic achievement if and when it happens. If AI has a part to play, then great. But I'd be extremely wary of anyone saying, for example, that AI will cure cancer in the next two decades. Same with cars. If/when it happens, it will be a fantastic technical accomplishment. I'll be impressed. I just don't see that future coming in the next 10 years.
 
This technology is on the roads now, not just around the corner.

We have had Level 1 for a few years already: adaptive cruise control, self-parking, lane assistance. These are all Level 1 autonomous features, with a small degree of AI.

All Teslas with Autopilot are Level 2 vehicles; they have been around for a while too, again with a small degree of AI.

The new 2018 Audi A8 is the first production car to have Level 3 autonomy. At the push of a button, the A8's AI Traffic Jam Pilot manages starting, steering, throttle and braking in slow-moving traffic at up to 60 km/h on major roads where a physical barrier separates the two carriageways. When the system reaches its limits, the driver is alerted to take over the driving. This car will be on our roads this year in its hundreds. Granted, in some areas regulations and laws will need to change to allow these cars on the roads, but it will happen this year. The AI on these cars is getting more complex, although it is still not at a level where it can take over totally without any human involvement.

Level 4 automated cars can drive themselves with a human driver on board. The car takes control of the starting, steering, throttle and braking, as well as monitoring its surroundings in a wide range of environments and handling the parking duties.
When the conditions are right, the driver can switch the car to autonomous mode, then sit back, relax and take their eyes off the road. When the vehicle encounters something that it cannot read or handle, it will request the assistance of the driver.
However, even if the driver does not intervene and something goes wrong, the car will continue to manoeuvre autonomously. These cars are truly self-driving, and the Google/Waymo self-driving vehicle has been operating at this level of autonomy for a few years. These are the ones being rolled out as ride-hailing services as we type.

Those levels of autonomous vehicle we already have in production, albeit on a small scale with low-level AI, and the human still has the option to take over should they feel it necessary.

At Level 5 the vehicle needs no human control at all. It doesn't need pedals, or a steering wheel, or even a human on board. The car is fully automated and can do all driving tasks on any road, under any conditions, whether there's a human on board or not.

Cars at this level are currently in research, and the technology and the complexity of the AI need to catch up before they are fully workable, but I do think that within 5 to 10 years (so yes, just around the corner) this level of vehicle will be trialled on the roads, again provided legislation and laws are changed to allow it.

That brings us back to what I originally said: I think it will take 30-40 years before full Level 5 vehicles are in mass production by a number of manufacturers, so that there are tens, if not hundreds, of thousands of vehicles at this level on the roads, which is what will make them mainstream.
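
For reference, here are the levels walked through above, condensed into one-liners (simplified paraphrases of the SAE J3016 definitions, not the full text):

```python
# Compact summary of the SAE automation levels discussed above
# (simplified one-line paraphrases, not the full J3016 definitions).
SAE_LEVELS = {
    0: "No automation: the human does everything",
    1: "Driver assistance: e.g. adaptive cruise control OR lane keeping",
    2: "Partial automation: steering AND speed, driver must supervise",
    3: "Conditional automation: car drives in defined conditions, human takes over on request",
    4: "High automation: no human fallback needed within its operating domain",
    5: "Full automation: any road, any conditions, no human controls required",
}

for level, description in sorted(SAE_LEVELS.items()):
    print(f"Level {level}: {description}")
```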
 
Yes... 30-40 years for a full "driverless/autonomous vehicle" solution.

When people say "AV is just around the corner", they don't mean trials with a Google employee ready to take over at a moment's notice. I'm sure it hasn't escaped anyone here that such trials are already under way. Give me some credit :p

When people state that "AV is nearly ready", they don't mean trials; they mean Level 5 solutions. And for that, 30-40 years sounds much more reasonable.

E.g. driverless lorries are only truly driverless if there isn't a driver in the cab ready to take over; the whole point is to remove the driver. And that situation isn't "just around the corner", nor is a proper unrestricted driverless taxi service, etc. (one where speed isn't limited to 35 km/h and which is allowed to go anywhere).
 
I said 30-40 years for hundreds of thousands of level 5 vehicles to be on the roads.

However, I firmly believe there will be trials of Level 5 (with people in them to take over if necessary) within 5 years, and vehicles on the roads (with no one in them at all) within 10 years at the most.
 
They will run into a major problem they need to overcome, which will set them back a very long time. They haven't even started testing them on hazardous roads or in extreme weather conditions, only on nice, dry, open roads in California. Even then there were incidents. 5 years is very optimistic.
 
We have had 3 almost-full-Level-5 cars running round our tracks without issue for 18 months now. We could be testing on the open road, but government law and legislation will not allow it, so we can only test on our private tracks and roads. The time (5 years or so) is what it will take to get those laws changed; then we can get them onto public roads the day after the law is changed, once 3D mapping of more roads is complete.

The cars we are testing run alongside other human-driven vehicles on our tracks without incident. We fully 3D-mapped all our circuits almost 4 years ago, when we started liaising with manufacturers and the government on AV work.

The cars we have are tested in all weathers and on almost every possible surface condition, including off-road and gravel roads, icy surfaces and full snow, at our sister testing facilities.

All this testing is being done right now, this moment, with no humans in the cars at all. Everything is monitored remotely and there is an instant kill switch if needed, which to date has never been used.

So yes, I see trials of this sort of vehicle on public roads within 5 years (once laws are changed), and Level 5 vehicles on sale to the public within 10 is a definite possibility.

There is nothing major holding it back; the technology is here now, it just needs to be developed some more in some areas.
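
Purely as an illustration of the remote monitoring and instant kill switch described above (the names and timings are hypothetical, not the poster's actual system), a vehicle-side watchdog might be sketched like this:

```python
# Hypothetical sketch of a vehicle-side watchdog: if the remote monitoring
# station stops sending heartbeats, or sends an explicit kill command, the
# vehicle executes a controlled stop. Timings are illustrative only.
import time

HEARTBEAT_TIMEOUT_S = 0.5  # assumed limit before treating the link as lost


def watchdog(last_heartbeat: float, kill_requested: bool) -> str:
    if kill_requested:
        return "emergency_stop"   # operator hit the kill switch
    if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT_S:
        return "controlled_stop"  # link lost: stop safely
    return "continue"


# Example: heartbeat received 0.1 s ago, no kill command -> keep driving.
print(watchdog(time.monotonic() - 0.1, False))  # continue
```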
 
A very interesting personal contribution to the AV thread. I am curious whether you work for a major company or a small company involved in AVs: a car manufacturer or a tech company?

I am particularly interested in learning why you have not tested your AVs on the road in those US states that permit it, e.g. Nevada.

This link in PC Mag caught my eye. Do you share this view about a "God View" eye in the sky, particularly for the edge cases where the vehicle is unsure what to do? How do you see teleoperation being implemented?

http://uk.pcmag.com/news/92963/why-self-driving-cars-will-require-a-god-view-eye-in-the-sky
 