The ongoing Elon Twitter saga: "insert demographic" melts down

In these future driverless cars there will likely be no controls for you to take over with. You just sit there and tell it where you want to go.

In which case the manufacturers will have to be liable for any accidents.

Unless some massive new change in legislation comes in for how car insurance and liability works etc.

As said earlier, it's going to be a complete legal minefield.
 
Yes, manufacturers will be liable if their technology catastrophically malfunctions resulting in injury or death, but how is that different from a plane falling out of the sky because Boeing designed a system poorly? Most accidents won't be a result of poor driverless technology, because we'll adapt the environment to suit them; people driving cars is just terrible. For example, if you're sat on a motorway in a long queue of traffic, when the first car starts moving there's a delay before the second car starts, then the third, and fourth, then the fifth driver was distracted so he doesn't react for 10 seconds. This all causes such poor traffic flow, and it will be improved tenfold when you take humans out of the equation. It's crazy that you all drive on the road and see these sorts of things yet lack the imagination to see how computers will make roads so much better.
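As a back-of-the-envelope illustration of that queue effect, here's a toy sketch (Python; the reaction times and queue length are made-up assumptions, and the "broadcast" start assumes connected cars can pull away together):

```python
# Toy queue-discharge model: how much longer a stopped queue takes to get
# moving when each driver only reacts after the car in front moves.
# All numbers here are illustrative assumptions, not measured data.

def last_car_start_sequential(delays):
    """Human-style start: each driver reacts only once the car ahead
    has started moving, so per-car delays accumulate down the queue."""
    return sum(delays)

def last_car_start_broadcast(shared_delay):
    """Automated-style start: a connected platoon can pull away together
    after a single shared delay."""
    return shared_delay

human_delays = [1.0] * 20   # ~1s reaction per driver, 20-car queue
human_delays[4] = 10.0      # the distracted fifth driver from the example
print(f"Humans:    last car moving after {last_car_start_sequential(human_delays):.1f}s")
print(f"Automated: last car moving after {last_car_start_broadcast(0.2):.1f}s")
```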
 
Yes, manufacturers will be liable if their technology catastrophically malfunctions resulting in injury or death, but how is that different from a plane falling out of the sky because Boeing designed a system poorly? Most accidents won't be a result of poor driverless technology, because we'll adapt the environment to suit them; people driving cars is just terrible. For example, if you're sat on a motorway in a long queue of traffic, when the first car starts moving there's a delay before the second car starts, then the third, and fourth, then the fifth driver was distracted so he doesn't react for 10 seconds. This all causes such poor traffic flow, and it will be improved tenfold when you take humans out of the equation. It's crazy that you all drive on the road and see these sorts of things yet lack the imagination to see how computers will make roads so much better.

Like I've said before in this thread, my experience with technology has highlighted how prone to failure it is and how it rarely works as intended 100% of the time.

It WILL make a lot of mistakes and there will still be collisions. It's therefore interesting to discuss who is liable.
 
Yes, manufacturers will be liable if their technology catastrophically malfunctions resulting in injury or death, but how is that different from a plane falling out of the sky because Boeing designed a system poorly?

1) how many Boeings are in the air worldwide every day? A lot. How many cars are on the roads worldwide every day? A lot more.

2) how many aircraft collisions have there ever been? A bunch. How many car collisions have there ever been? So many more.

3) how will cars from manufacturer A react in a situation and how will cars from manufacturer B react? Identically? Doubtful, unless they're programmed identically.

We're gonna need legislation that's not just on a par with the hoops aircraft manufacturers jump through but exceeds them. We're gonna have to test these systems past the point of absurdity to make absolutely certain that they are going to play nice with each other.

You blithely state that manufacturers will be liable - I don't know how you can make that claim with any level of certainty. Not unless laws are in place that force the issue. Until then, no insurer is going to go anywhere near a driverless car.
 
You blithely state that manufacturers will be liable - I don't know how you can make that claim with any level of certainty. Not unless laws are in place that force the issue. Until then, no insurer is going to go anywhere near a driverless car.
I would suggest that we won't see automated cars until manufacturers are indemnified. That may mean criteria they have to meet and expectations placed on the "driver". Legislation will drive that and I'm sure the automotive lobby will help define it.
 
Yes, manufacturers will be liable if their technology catastrophically malfunctions resulting in injury or death, but how is that different from a plane falling out of the sky because Boeing designed a system poorly? Most accidents won't be a result of poor driverless technology, because we'll adapt the environment to suit them; people driving cars is just terrible. For example, if you're sat on a motorway in a long queue of traffic, when the first car starts moving there's a delay before the second car starts, then the third, and fourth, then the fifth driver was distracted so he doesn't react for 10 seconds. This all causes such poor traffic flow, and it will be improved tenfold when you take humans out of the equation. It's crazy that you all drive on the road and see these sorts of things yet lack the imagination to see how computers will make roads so much better.

We actually have very nearly matching views on this. I agree with most of what you say, and partly just feel to hell with regulation, get these cars on the roads as soon as possible, as I feel they are safer than many bad drivers. If it can replace a 90-year-old's driving I'd much rather it do that, while the confident drivers continue the old-fashioned way.

One thing I think you're wrong about is planes. It's just silly to compare them. In particular, how many checks do planes require daily? A whole team of professionals is checking the planes. That doesn't happen with cars. They're just so far apart that you can't compare them, just as you can't compare automated trains to cars either.

I can't wait for automated vehicles, I wish it was something we had now; it's the biggest change to society that I can see coming up. No longer will you need to even own a vehicle, or at least not have a big drive: you can have a vehicle drop you off at home and go park itself at an indoor parking lot, calling it to your place however long before you need it. No longer will roads be full of vehicles parked up in the way. Old people will be a lot more mobile. You can stop owning vehicles and go to a subscription-based option, with services that maintain the vehicles and keep them safe and clean. Driveways can go back to being front gardens, or houses can even be built with minimal front space but a much larger garden in the rear. There are just so many advantages.
 
We actually have very nearly matching views on this. I agree with most of what you say, and partly just feel to hell with regulation, get these cars on the roads as soon as possible, as I feel they are safer than many bad drivers. If it can replace a 90-year-old's driving I'd much rather it do that, while the confident drivers continue the old-fashioned way.

All fun and games saying "to hell with regulation", right up until it's your car that's been thrutched by a driverless one and the insurers are in the third year of a court case to determine who's actually liable ;)
 
I would suggest that we won't see automated cars until manufacturers are indemnified. That may mean criteria they have to meet and expectations placed on the "driver". Legislation will drive that and I'm sure the automotive lobby will help define it.

This for me is one of the concerns, and it would be a terrible idea to indemnify them; they should be held responsible for their software decisions.
 
I would suggest that we won't see automated cars until manufacturers are indemnified. That may mean criteria they have to meet and expectations placed on the "driver". Legislation will drive that and I'm sure the automotive lobby will help define it.

I imagine in this case it will be the car itself that will have to be insured, and not any people or the owner etc., i.e. no people are held liable in the case of an accident caused by the vehicle when it is driving autonomously. **** knows how they will work out liability in an accident though.

Who knows, maybe there will be a whole new insurance industry/branch dedicated to insuring cars as a lifetime package from new, which the manufacturers have to pay for/include in the cost of the vehicle. This would be good, in that it would be a great incentive to make a car that isn't prone to accidents, as any vehicle that is shown to cause accidents will no doubt cost the manufacturer more to insure and/or make it more expensive for the consumer to buy (leading to fewer sales if a competitor is otherwise the same but cheaper because it doesn't have as many accidents).

I guess it's all workable, but boy is there going to have to be a huge amount of work on the legislation and laws surrounding all of this.

I just don't think autonomous driving will take off if people cannot sit back as a passenger and relax (knowing they have no liability). Otherwise everyone may as well still drive, as it offers little advantage to the customer.
 
One thing I think you're wrong about is planes. It's just silly to compare them. In particular, how many checks do planes require daily? A whole team of professionals is checking the planes. That doesn't happen with cars. They're just so far apart that you can't compare them, just as you can't compare automated trains to cars either.

Yeah, but that's because if your car breaks down it will likely roll to a stop; a plane operates in an environment that is quickly deadly to humans (35,000 feet), and if a plane has a mechanical problem it obviously doesn't just roll to a stop like a car will in a lot of cases - it will lose energy and need somewhere to land very quickly in a good scenario, and in a bad one it will hit the ground like a dart and explode. Plus a plane can be carrying hundreds of people and flying over potentially populated areas, whereas a car will usually only have a couple. Plus if a plane kills everyone on board it can damage the entire airline industry through people being afraid to fly. There are just much different safety standards for a lot of reasons.

I can't wait for automated vehicles, I wish it was something we had now; it's the biggest change to society that I can see coming up. No longer will you need to even own a vehicle, or at least not have a big drive: you can have a vehicle drop you off at home and go park itself at an indoor parking lot, calling it to your place however long before you need it. No longer will roads be full of vehicles parked up in the way. Old people will be a lot more mobile. You can stop owning vehicles and go to a subscription-based option, with services that maintain the vehicles and keep them safe and clean. Driveways can go back to being front gardens, or houses can even be built with minimal front space but a much larger garden in the rear. There are just so many advantages.

Agreed.
 
For the simple reason that when a driverless vehicle kills or seriously injures someone through its mistake (and it will) the public aren't going to be happy with "but but but it's safer than humans".

That some people are going to be unhappy that an accident has occurred is not a huge barrier to overcome, certainly not to the point where vehicles need to meet some impossible criterion like being "infallible".

Railway signals are automated, and we still sometimes have rail accidents; that there isn't necessarily someone to lock up isn't a reason to bring back individual human signalmen to operate railway signals. Not that locking people up is necessarily needed for many accidents in the first place anyway - it's just an utterly absurd criticism you've come up with.

You seem to be confusing omniscient with infallible. The former means all-knowing. The latter means that, with all the information available, it makes the correct decision every single time. I don't think that is too much to ask of a machine trusted to drive vehicles at 70mph in all conditions.

Edit: oh, and are you going to show me this automation you mentioned that fits the criteria I mentioned?

Nope, I'm not confusing the two. I also pointed out that they can't see around corners; that doesn't mean I think the ability to see around corners = infallible. You're being way too simplistic here. I'm highlighting that it would be impossible for vehicles to be completely infallible: there isn't necessarily a "correct" decision for every scenario, and they're also constrained by the laws of physics. Stop treating tech as some magical black box; the reality is your requirement will never be satisfied, and that's not an issue.

Edit - re: your question, what automation I mentioned? Automated cars already exist in San Francisco, so you've got that example. The broader point I made re: automation doesn't come with those additional requirements (they're somewhat arbitrary); rather it's just highlighting that automation is already here, and that it sometimes results in injuries/deaths hasn't been a barrier to adoption. Software and mechanical failures occur, the laws of physics still apply, other road users exist, weather still occurs, and people, animals and fallen trees can get into the path of a moving car etc.
 
Does the data actually support this?

And I'm not saying it does, I'm just asking for evidence, because maybe the data above hasn't accounted for the overall number of cars on the road.

But the video I posted earlier in the thread mentioned Elon stating that Tesla FSD has done 150 million miles.

Tesla has 11.3 deaths per 100 million miles driven.

The average for human drivers is 1.35 deaths per 100 million miles driven.
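For what it's worth, taking those quoted figures at face value (none of them verified here), the arithmetic works out like this:

```python
# Sanity check of the figures quoted above, taken at face value.
fsd_miles = 150e6    # claimed Tesla FSD miles
tesla_rate = 11.3    # quoted deaths per 100 million miles
human_rate = 1.35    # quoted human average, same units

implied_deaths = tesla_rate * fsd_miles / 100e6
print(f"Implied deaths over 150M FSD miles: {implied_deaths:.0f}")      # ~17
print(f"Quoted rate vs human average: {tesla_rate / human_rate:.1f}x")  # ~8.4x
```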

That's a different company; Waymo is fully automated and is Google's vehicle arm, and Tesla doesn't provide those taxis. Teslas can have automation but it's not necessarily in use; if you want to look at the impact of automation then look at when it is actually being used.
 
It doesn't have to be 100% one or the other either. I personally don't see how we have autonomous cars on B roads in Europe anytime soon. I know a lot of the issues with the current crop of cars are due to the wireless devices used; the 5G/6G stuff coming over the next decade will massively improve the data being transmitted between cars, making them much safer, but even then I struggle to see how cars will spot the hedge that hasn't been cut or the pothole that's formed recently. But autonomous cars which can do 90% of the legwork for you on motorways and a lot of A roads - that won't be far away with the newer cars/tech, I don't think.
 
In which case the manufacturers will have to be liable for any accidents.

Unless some massive new change in legislation comes in for how car insurance and liability works etc.

As said earlier, it's going to be a complete legal minefield.

I think it's more of a mixture: they ought to be liable for the sorts of accidents where a human driver would have been liable as a result of some decision he/she made.

Like a child running into the road: if a driver wasn't paying attention or was speeding and hits the child then he/she is liable, but in some cases the driver may have been travelling <= [speed limit], slammed on the brakes and still hit the child. It was physically impossible to stop in time and the child ran out from some obscured position (hedge, parked car etc.).

It's a similar deal with automated cars: if the car failed to detect the child within a reasonable time and apply the brakes, that's a different scenario to the collision being practically unavoidable.
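To put numbers on the "physically impossible to stop" case, here's a rough stopping-distance sketch (Python; the speed, reaction time and deceleration are illustrative assumptions):

```python
# Stopping distance = reaction distance (v * t_react) + braking distance (v^2 / 2a).
v = 30 * 0.44704   # 30 mph converted to m/s
t_react = 1.5      # assumed human perception-reaction time (s)
a = 7.0            # assumed hard-braking deceleration on dry tarmac (m/s^2)

reaction_dist = v * t_react
braking_dist = v ** 2 / (2 * a)
print(f"Total stopping distance: {reaction_dist + braking_dist:.1f} m")  # ~33 m
# A child emerging from behind a parked car 15m ahead is unavoidable
# at this speed, no matter who or what is driving.
```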

Obvs there can be situations where no one is at fault, some damage occurs due to debris in the road, maybe a tree branch falls on the car etc. And in some cases the driver/owner still has some potential fault - did they maintain the car properly, did they take it out in stormy/icy conditions after ignoring warnings not to drive etc.

Dealing with liability, including big payouts for low-probability events, is already a solved problem though, and has been for centuries, via insurance; you'd perhaps have to tweak how the policies operate. If it's an automated-driving-only policy then the concept of a no-claims bonus, based on the policyholder being assumed to be one of the good drivers, is flawed... everyone with that model of car has the same "AI driver". Your insurance policy perhaps changes based on your location and driving habits, where you're typically using the car etc.

It doesn't necessarily need a change in legislation (that isn't to say there won't be changes); rather, you could bundle insurance into some form of monthly charge. Manufacturers are technically liable, say, but they offset that by requiring a policy/charge that covers the cost of that liability anyway.
 
Dealing with liability, including big payouts for low-probability events, is already a solved problem though, and has been for centuries, via insurance; you'd perhaps have to tweak how the policies operate. If it's an automated-driving-only policy then the concept of a no-claims bonus, based on the policyholder being assumed to be one of the good drivers, is flawed... everyone with that model of car has the same "AI driver". Your insurance policy perhaps changes based on your location and driving habits, where you're typically using the car etc.

Yeah, that's why (as mentioned in the other post) there could be automated driving policies that come with the car itself. As you say though, that wouldn't cover the differing risk in areas/where it is kept at night etc. I guess there could be two policies, perhaps in tandem, or (if the cars don't come with insurance for any mistakes/errors caused by the automated driving) one component of the policy would just be a fixed rate based upon the automated driving ability of the make/model/version of the vehicle (which I guess would be derived from accident data of similar systems etc.).

Edit: come to think of it, that would likely be too complicated :p. Probably better to just have one policy but with a component based on the risk data associated with the autonomous driving part of the car. However, they would have to not increase premiums just for you if the accident was caused by the autonomous driving system; that would have to be added to the risk data for everyone who owns that vehicle, I guess.
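Something like that "one policy, two components" idea could look like this (a toy sketch; all the rates and factors are invented for illustration):

```python
# Toy premium model: a flat autonomy component shared by everyone with the
# same make/model/software version ("same AI driver"), plus a personal
# component that varies by owner. All rates are invented.

def annual_premium(model_risk_rate, annual_miles, area_factor):
    autonomy_component = model_risk_rate * annual_miles / 1000  # per 1,000 miles
    personal_component = 120 * area_factor  # where it's kept at night, etc.
    return autonomy_component + personal_component

# Two owners of the same model share the autonomy rate but not the rest.
print(annual_premium(model_risk_rate=4.0, annual_miles=8000, area_factor=1.0))   # 152.0
print(annual_premium(model_risk_rate=4.0, annual_miles=15000, area_factor=1.8))  # 276.0
```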
 
That some people are going to be unhappy that an accident has occurred is not a huge barrier to overcome, certainly not to the point where vehicles need to meet some impossible criterion like being "infallible".

That is great for the family of the victim. Sorry, you won't get justice for your dead child because it's a software glitch.

Railway signals are automated, and we still sometimes have rail accidents; that there isn't necessarily someone to lock up isn't a reason to bring back individual human signalmen to operate railway signals. Not that locking people up is necessarily needed for many accidents in the first place anyway - it's just an utterly absurd criticism you've come up with.

Railway signals are nothing remotely like cars interacting on roads :cry: and they rely on physical signals from both trackside and train to work correctly. I'm talking about software here and nothing else. Not hardware failures, just software. If a camera or sensor fails, that can and will happen, but that isn't what we are talking about.

If you crash into someone because you weren't paying attention and they are seriously injured/killed, that is driving without due care and attention and carries a max jail term of 5 years. I find it baffling that you think serious injury/death aren't treated by the state as the serious thing that it is.

Nope, I'm not confusing the two. I also pointed out that they can't see around corners; that doesn't mean I think the ability to see around corners = infallible. You're being way too simplistic here. I'm highlighting that it would be impossible for vehicles to be completely infallible: there isn't necessarily a "correct" decision for every scenario, and they're also constrained by the laws of physics. Stop treating tech as some magical black box; the reality is your requirement will never be satisfied, and that's not an issue.

You clearly are. No system is going to be expected to see around corners :cry: or you are just creating a strawman. You are the one talking about cars with god-like powers.

Software, when presented with all the information, makes a decision; if it makes the wrong decision then it's fallible. If it is advanced enough and makes the right decision every time then it is infallible. I don't care about the laws of physics - that is a ridiculous thing to bring into this; everything is bound by the laws of physics and no one will expect a system to do something that physical laws don't allow.

Edit - re: your question, what automation I mentioned? Automated cars already exist in San Francisco, so you've got that example. The broader point I made re: automation doesn't come with those additional requirements (they're somewhat arbitrary); rather it's just highlighting that automation is already here, and that it sometimes results in injuries/deaths hasn't been a barrier to adoption. Software and mechanical failures occur, the laws of physics still apply, other road users exist, weather still occurs, and people, animals and fallen trees can get into the path of a moving car etc.

So you can't provide an example of automation that is even close to removing the human from millions of cars using the roads on a daily basis? Why not just say that.

I don't care about mechanical failure, for the second time. Hardware breaks; that is expected.
Weather: yes, if there is sheet ice there is little any system could do. However, if there is fog/rain and the car drives at 60mph and ploughs into someone/something then that isn't acceptable and is a failure.
If it sees a fox in the road, swerves to miss it and mounts the pavement hitting someone, that is a failure. No human should be making that choice.
No one would expect a system to be able to stop if a tree falls directly in front of the car; that would be stupid. However, if the tree is clearly visible then yes, it should stop, because a human would stop. Just because it isn't human doesn't mean it gets a pass on such obstacles. It has to be able to deal with all eventualities where it is physically possible to do something.
 
Edit: come to think of it, that would likely be too complicated :p. Probably better to just have one policy but with a component based on the risk data associated with the autonomous driving part of the car. However, they would have to not increase premiums just for you if the accident was caused by the autonomous driving system; that would have to be added to the risk data for everyone who owns that vehicle, I guess.

They could self-insure tbh... they're big enough companies. The AI-driver risk aspect would be constantly evolving too as the software is updated, but with many drivers they'll rapidly have good price estimates for it and could reduce insurance costs. They could for sure have variable components to the policy based on mileage, high-crime areas, types of roads driven on etc.
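That "constantly evolving" risk estimate is basically just a fleet-wide claims rate re-estimated as miles accumulate; a minimal sketch (the monthly figures are invented):

```python
# Fleet-wide claims rate per million miles, re-estimated monthly as data
# accumulates. The monthly miles/claims figures are invented.
claims, miles = 0, 0.0
for month_miles, month_claims in [(2e6, 3), (2.5e6, 2), (3e6, 2)]:
    miles += month_miles
    claims += month_claims
    print(f"After {miles / 1e6:.1f}M miles: {claims / (miles / 1e6):.2f} claims per million miles")
```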

That is great for the family of the victim. Sorry, you won't get justice for your dead child because it's a software glitch.

But that's already the case; most road accidents don't result in prosecutions, and we accept there is a risk in using roads. If there's a big pile-up because of fog on some motorway and several people die then you can't take the weather to court.

Of course if there's some negligence or recklessness then maybe you can prosecute people involved in the construction of automated vehicles, it depends on the situation.

Railway signals are nothing remotely like cars interacting on roads :cry: and they rely on physical signals from both trackside and train to work correctly. I'm talking about software here and nothing else. Not hardware failures, just software. If a camera or sensor fails, that can and will happen, but that isn't what we are talking about.

That they're not cars isn't particularly relevant; the point is that they're an automated system that can fail... and that is a software example. Hardware failures are part of what we're talking about though: if you want these cars to be infallible then limitations with the sensors, varying quality of cameras etc. are an issue too.

Your objection here is really unclear tbh. You claim that these cars need to be "infallible", then you backtrack and only want to consider software making a "mistake", but you don't seem to understand that in some driving scenarios there is a variety of decisions that could be taken and argued over, and which one was "correct" becomes entirely subjective.

That you could have ethical dilemmas over the decision taken in an emergency to minimise risk across some other vehicle(s), a pedestrian and the car/occupants itself, with completely differing opinions on what action should have been taken to minimise loss, in itself means your "infallible" criterion is flawed, as it can't be met. There isn't necessarily a universal "correct" opinion on what the right action is for every given scenario... let alone the inevitable tail-end issues/glitches, where some accident occurs that the software/AI didn't handle well because of a very specific set of circumstances.
 
I think they only need to be "infallible" if you think about insurance and liability in their current form. If that changes, as we've been discussing, they don't need to be infallible, because no tech is. They will simply have to have data to support that they are safer (i.e. less likely to cause an accident) than humans, and insurance that puts liability on the autonomous driving system (if that is the cause of the RTA).
 
Yup, it should be obvious that the adoption of these will be based on them massively reducing deaths, not some impossible criterion of infallibility.

I mean, imagine road deaths can be reduced by, say, 99% in future. The notion that we'd not want this because there are still 1% of cases where deaths occur - some involving situations where the software was forced into an accident scenario and needed to take a decision to minimise risk with no particularly good options available, and a few others occurring because of some convoluted issue it wasn't able to handle well, where a particular set of circumstances caused a crash that arguably ought to have been avoidable - is silly.
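To give that a sense of scale, here's a quick sketch using the 1.35 deaths per 100 million miles figure quoted earlier in the thread; the annual mileage figure is a rough illustrative assumption, not a sourced statistic:

```python
# Rough scale of a hypothetical 99% reduction, using the human fatality
# rate quoted earlier in the thread. Annual mileage figure is approximate.
human_rate = 1.35        # deaths per 100 million miles (quoted above)
annual_miles = 3e12      # ~US annual vehicle miles travelled, rough figure

deaths_now = human_rate * annual_miles / 1e8
print(f"Status quo: ~{deaths_now:,.0f} deaths/year")        # ~40,500
print(f"After a 99% reduction: ~{deaths_now * 0.01:,.0f}")  # ~405
```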

"Infallibility" or rather lack of it isn't going to be a barrier to the adoption of these rather the risks of accidents will be reduced so significantly that it will become completely absurd for jurisdictions to not allow them. If anything, in the future the debate will more likely be about why humans haven't been banned from driving yet.
 