The ongoing Elon Twitter saga: "insert demographic" melts down

All fun and games saying "to hell with regulation", right up until it's your car that's been thrutched by a driverless one and the insurers are in the third year of a court case to determine who's actually liable ;)
Thing is, that does essentially happen now.

If my future kid were hit and killed by a drunk driver, a careless driver, or a grandma with poor vision and slow reactions (take your pick), do you think I won't be asking why automated driving isn't a thing?

I was in a minor crash a few years back: a lady pulled out and I drove into the side of her car. All her fault, and insurance paid for my car etc. (though all these years later I'm still waiting for the personal injury payment; they offered £400 last week and that was quickly rejected). Still, I wonder whether existing automated driving capability would have reacted quicker than I did, but it unquestionably would have reacted better than she did and not pulled out so blindly.
 
Yup, it should be obvious that the adoption of these will be based on them massively reducing deaths, not some impossible criterion of infallibility.

I mean, imagine road deaths can be reduced by, say, 99% in future. The notion that we'd not want this because deaths still occur in the remaining 1% of cases is absurd: some of those will involve the software being forced into an accident scenario where it has to take a decision to minimise risk and there are no particularly good options for the AI to take, and a few others will occur because of some convoluted issue it couldn't handle well, where a particular set of circumstances caused a crash that arguably ought to have been avoidable.

Yeah, and with use, no doubt circumstances will crop up that haven't been thought about, so things can be improved in future cars/software or whatever.

I'd be happy to have a fully autonomous vehicle, as long as it was proven they were statistically safer (by a decent margin) than a human driver AND I wasn't held legally liable or faced increased insurance premiums or penalties for any RTA caused by the system.
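As an aside, here's a minimal sketch (Python, with entirely made-up crash counts and mileages) of what "statistically safer by a decent margin" could mean in practice: compare crash rates per mile and ask whether the whole confidence interval, not just the point estimate, sits below your margin.

import math

# Hypothetical fleet figures: crashes and miles driven.
human_crashes, human_miles = 4500, 1_000_000_000
auto_crashes, auto_miles = 120, 50_000_000

# Point estimate: automated crash rate as a fraction of the human rate.
rate_ratio = (auto_crashes / auto_miles) / (human_crashes / human_miles)

# Normal approximation to a 95% CI for the log ratio of two Poisson counts.
se = math.sqrt(1 / auto_crashes + 1 / human_crashes)
low = math.exp(math.log(rate_ratio) - 1.96 * se)
high = math.exp(math.log(rate_ratio) + 1.96 * se)

print(f"rate ratio {rate_ratio:.2f}, 95% CI ({low:.2f}, {high:.2f})")

# "Decent margin" might mean the whole interval is below, say, half the
# human crash rate, rather than the point estimate merely dipping under it.
print("decently safer" if high < 0.5 else "not yet proven decently safer")

With these invented numbers the point estimate is about half the human rate, but the interval still pokes above 0.5, so the "decent margin" test fails; that's the sort of bar a regulator could set.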

If fully autonomous cars are allowed, but the law still states you have to be ready to take control at any point and you remain liable, I'd rather just drive myself. I think that would also be a dangerous way to go, as it could make manufacturers more complacent/less careful (if they know the driver will always still be liable for any accidents).
 
@Jono8 that latter part is basically the case already: some cars can already drive themselves on, say, motorways, but currently they're officially not fully automated and a driver does need to be ready to take control.

A thought re: insurance (or indeed some monthly contractual charge in lieu of insurance if manufacturers need to cough up for accidents).

Consider two equally skilled drivers, identical twins maybe: one commuting in a rural area and one commuting in London. Over 10 years the London one may well have had several accidents and a higher premium on that basis, through loss of no-claims and/or increases in premium because of accidents. (Obviously storing a car in London means a higher premium to start with, too.)

So when you've got other road users who are not automated and limited adoption of automated cars, it may well be that an individual having an accident does impact things more, especially if they're one of only a few in their area. Obviously, as adoption of the cars increases, you've got more data, and indeed the general risk of road travel can reduce as the proportion of automated vs human-driven cars increases.

With greater adoption, having an accident in an area with many automated cars and regular drivers needn't impact your insurance/charge much at all as an individual; it's just adding to the vast pool of data and is a mere blip, and your insurance is the result of all of that data, still based on, say, location and hours driven. But I guess if you live in a remote area and things are still sparse, then an accident could still have an impact.
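For illustration, a toy sketch (Python; the numbers and the naive pooling are invented, not how any insurer actually prices) of that "mere blip" point: folding one accident into a small local claims pool visibly shifts the observed rate, while the same accident barely moves a huge pool.

def pool_rate(pool_claims, pool_years, your_claims=0, your_years=10):
    # Claims per driver-year once your record is folded into the pool.
    return (pool_claims + your_claims) / (pool_years + your_years)

# Sparse area, few automated cars: one accident shifts the rate by ~20%.
print(f"{pool_rate(5, 500):.4f} -> {pool_rate(5, 500, your_claims=1):.4f}")

# Mass adoption: the same accident is lost in the noise.
print(f"{pool_rate(10_000, 5_000_000):.4f} -> "
      f"{pool_rate(10_000, 5_000_000, your_claims=1):.4f}")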
 
Musk has gone full Partridge…

Link (paywalled).
 
But that's already the case: most road accidents don't result in prosecutions; we accept there is a risk in using roads. If there's a big pile-up because of fog on some motorway and several people die, you can't take the weather to court.

Of course, if there's some negligence or recklessness then maybe you can prosecute people involved in the construction of automated vehicles; it depends on the situation.

Why don't you think that most road deaths carry blame?

That they're not cars isn't particularly relevant; the point was that they're an automated system that can fail, and that is a software example. Hardware failures are part of what we're talking about too: if you want these cars to be infallible, then limitations of the sensors, varying quality of cameras etc. are an issue as well.

Your objection here is really unclear, tbh. You claim that these cars need to be "infallible", then you backtrack and only want to consider software making a "mistake", but you don't seem to understand that in some driving scenarios there are a variety of decisions that could be taken, which one was "correct" can be argued over, and it becomes entirely subjective.

That you could have ethical dilemmas re: the decision taken in an emergency to minimise risk between some other vehicle(s), a pedestrian and the car/occupants itself, with completely differing opinions on what action should have been taken to minimise loss, in itself means your "infallible" criterion is flawed, as it can't be met. There isn't necessarily a universal "correct" opinion on what the right action should be for every given scenario... let alone the inevitable tail-end issues/glitches where some accident occurs that the software/AI didn't handle well because of some very specific set of circumstances.
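To make that concrete, a toy sketch (Python; the scenario, harm estimates and weights are all invented) of why there's no universal "correct" action: the option a cost-minimising planner picks flips with the entirely debatable weights you assign to occupant, pedestrian and other-vehicle harm.

# Estimated harm probabilities per option: (occupants, pedestrian, other car).
options = {
    "brake hard in lane": (0.30, 0.00, 0.40),
    "swerve toward kerb": (0.10, 0.25, 0.05),
    "swerve into oncoming lane": (0.20, 0.00, 0.60),
}

def best_action(w_occupant, w_pedestrian, w_other):
    # Pick the option with the lowest weighted expected harm.
    def cost(risks):
        occ, ped, oth = risks
        return w_occupant * occ + w_pedestrian * ped + w_other * oth
    return min(options, key=lambda name: cost(options[name]))

print(best_action(1.0, 1.0, 1.0))  # equal weights -> "swerve toward kerb"
print(best_action(1.0, 5.0, 1.0))  # pedestrians weighted 5x -> "brake hard in lane"

Neither weighting is objectively wrong, which is exactly why "the software chose incorrectly" can be a matter of opinion rather than fact.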

No, you seem to be struggling with what I am saying, but I have been clear from the beginning. I am talking about the software and only the software. I would hope regulations would force the companies to have certain hardware on board to supply the software with all the information about what is happening around the vehicle that it requires to make the best choice available. From a logical standpoint there is probably always a right choice; it's when you add human emotion that things become blurred.

So with your train-switching example: if the software chose, with every system working correctly, to move the switches to put two trains into a head-on collision, that is bad, and you would hope that the system would immediately be taken offline and the company responsible for the software would be looking down the barrel. You seem to be coming from a position of "well, if humans had been in charge it might have happened twice, so meh".

 
Thing is, that does essentially happen now.

If my future kid were hit and killed by a drunk driver, a careless driver, or a grandma with poor vision and slow reactions (take your pick), do you think I won't be asking why automated driving isn't a thing?

I was in a minor crash a few years back: a lady pulled out and I drove into the side of her car. All her fault, and insurance paid for my car etc. (though all these years later I'm still waiting for the personal injury payment; they offered £400 last week and that was quickly rejected). Still, I wonder whether existing automated driving capability would have reacted quicker than I did, but it unquestionably would have reacted better than she did and not pulled out so blindly.

I've no problem with driverless cars in the future, but I'd want to see a much higher standard than humans. Some seem ready to just roll them out because in SF 100 cars are driving around, even though they are having incidents and those incidents are being kept from the public. I can't stand our government, but at least they don't let corporations beta-test such things on our roads and then keep the data from the public sharing those roads.
 
Why don't you think that most road deaths carry blame?

Eh?

No, you seem to be struggling with what I am saying, but I have been clear from the beginning. I am talking about the software and only the software. I would hope regulations would force the companies to have certain hardware on board to supply the software with all the information about what is happening around the vehicle that it requires to make the best choice available. From a logical standpoint there is probably always a right choice; it's when you add human emotion that things become blurred.

That's flawed.
 
I've no problem with driverless cars in the future, but I'd want to see a much higher standard than humans. Some seem ready to just roll them out because in SF 100 cars are driving around, even though they are having incidents and those incidents are being kept from the public. I can't stand our government, but at least they don't let corporations beta-test such things on our roads and then keep the data from the public sharing those roads.

That's fine; no one is objecting to wanting to see a higher standard than humans. It's the silly notion that they'd need to be infallible that's being objected to.
 
Musk: I hate Jews, they control the world and let all the immigrants in at the southern border

Advertisers on Twitter: We are leaving

Musk: **** you, no one can blackmail me with money!

The Banks: Don't ask us for money with your antisemitic views; good luck getting a loan anywhere! It's a shame all your businesses are cash-flow hogs; if only Twitter didn't require huge amounts of debt to avoid bankruptcy...

Musk: I'm now an aspiring Jew, look, I visited the camp of that thing that I said didn't happen
 
Anyone here in the lucky 10?


I've heard of Mr Beast and was aware he's one of the most popular social media vloggers, but ~$265k from one old video put on X is insane... And apparently that's rubbish compared to YouTube! :eek:
 
That's fine; no one is objecting to wanting to see a higher standard than humans. It's the silly notion that they'd need to be infallible that's being objected to.

I think the software should be infallible, or as damned close to it as is possible. I don't think that is too much to ask. These companies want us to place life-and-death trust in their software; I think they should have to produce that standard of product.
 