737-800 down in China

Just out of interest - if an A.I. is good enough to operate an aircraft in the same manner as a human (and I have no doubt that at some point in the future A.I. will become sophisticated enough to do so), then surely one of the failure modes of such an A.I. (because it will not be infallible) may well include the A.I. equivalent of 'committing suicide'.

Edited for grammar.
 
It's not 100,000 scenarios, it's 100,000 simulations' WORTH of scenarios. Basically, an AI pilot can have the equivalent of millions of hours of flying-time experience in all sorts of conditions which a human pilot will never have experienced.

This sounds like the fantasy of what people want AI to be.

Anything in charge of machinery is nothing more than a script that repeats decisions that were made a long time ago by humans.

Might as well say a CNC machine has millions of hours of experience behind it. It absolutely does, but not in any kind of sentient way.
 
Because AI will do a better job of anything that humans can do, as long as it has access to the information needed. It can be programmed with 100,000 simulations' worth of scenarios and know how to react better than humans, and it has faster reaction times. This isn't even a question for anyone who understands how powerful AI is.
Even with all the redundancy on planes, stuff still breaks, so there is problem number one. Pilots are highly trained, and it is uncommon for planes to crash during landing even in poor weather conditions, so the AI isn't adding much there. I guess an AI would be able to land where a human pilot would execute a go-around or reroute to another destination. Reaction times are not that relevant here: planes are so large that it takes time for inputs to actually affect what is happening.

Then there is the issue of the simulations. Wind simulation and turbulence modelling are incredibly hard and resource-intensive, so unlike a chess engine, a computer won't be able to blitz through hundreds of thousands of flight simulations per day. You would also essentially need to type-rate these AIs for use on different planes. Then there is the question of how accurate the scenarios a person comes up with actually are.

Essentially you would be trading one type of problem for another.

Also, removing the pilots actually takes away one of the barriers in the Swiss cheese failure model - one that is also used for a number of other tasks that an AI simply could not do.
 
Back to the issue of this thread, which is an apparent pilot suicide: it's important to come up with a solution which is proportionate to what is effectively a black swan event, and likely to be effective against the problem. Suggestions of A.I. pilots or deep vetting of flight crew are massive overkill, unworkable, and unlikely to provide any protection against this event in any case.

Since the Germanwings accident it is now necessary to screen flight crew for psychological issues before employment in a European airline - having recently returned to flying after Covid, I've just undergone such screening. Personally, it struck me as a box-ticking exercise, and I could not for the life of me see how it could identify me as a suicide risk, but then again what do I know.

Probably more importantly, the last two airlines I've worked for have implemented a mental health support network for all employees, and specifically for flight crew in the form of a specially trained pilot peer support group. How effective this is I don't know, but it strikes me as a definite move in the right direction and likely to be the way to go in stopping such events.
 
This sounds like the fantasy of what people want AI to be.

Anything in charge of machinery is nothing more than a script that repeats decisions that were made a long time ago by humans.

Might as well say a CNC machine has millions of hours of experience behind it. It absolutely does, but not in any kind of sentient way.

You don't actually need "sentience" though, a computer can perfectly understand every variable much better than a human and react accordingly. I don't really need to prove anything either because this will just become the normal way planes operate in the next 20-40 years. Pilots will just be there as a back up.
 
You don't actually need "sentience" though, a computer can perfectly understand every variable much better than a human and react accordingly.

As long as the computer (which must never physically fail) is fed accurate info from multiple sensors in a minimum of quad-channel redundancy, and uses software with no zero-day bugs.

I agree with you that AI is the future; I just disagree on the time scales involved. I also agree that even with AI a human would still need to be involved, as a pilot can look out of the window and disregard every single sensor readout to land manually if they want, should the computers start misbehaving.
 
You don't actually need "sentience" though, a computer can perfectly understand every variable much better than a human and react accordingly. I don't really need to prove anything either because this will just become the normal way planes operate in the next 20-40 years. Pilots will just be there as a back up.
Considering that you made this suggestion due to the suspected pilot suicide, at this point your reason for adding this incredibly advanced AI is redundant if you still have a pilot that can override the AI.

You will also still need to have two pilots as backup, as the workload is simply too much for a single pilot. And these pilots will need real-world experience to keep their skills sharp. So now we've reached a point where this AI adds cost but no benefit.
 
You don't actually need "sentience" though, a computer can perfectly understand every variable much better than a human and react accordingly. I don't really need to prove anything either because this will just become the normal way planes operate in the next 20-40 years. Pilots will just be there as a back up.

Planes already have automated procedures right now and have done for decades.

Automation of every single scenario without needing human intervention is the "AI" fantasy, and it would require enormous confidence that every single scenario is accounted for.

Why are you stating confidence in such automation while also saying it would require a human pilot anyway?
 
Considering that you made this suggestion due to the suspected pilot suicide, at this point your reason for adding this incredibly advanced AI is redundant if you still have a pilot that can override the AI.

You will also still need to have two pilots as backup, as the workload is simply too much for a single pilot. And these pilots will need real-world experience to keep their skills sharp. So now we've reached a point where this AI adds cost but no benefit.

A pilot can also take over a plane remotely lol. This is like people arguing against tractors taking over the work of humans.
 
Every time you get on a plane now you will be waiting for the captain to come on the speaker and say: hello everyone, today you are all going to die.
Goodbye.

It's just so easy to do - how many have committed suicide like this in, say, the last 20 years?
 
A pilot can also take over a plane remotely lol. This is like people arguing against tractors taking over the work of humans.
There are quite a few issues with remoting into the plane, including lag and potentially not being able to get a proper feel for the plane. Also, if they need to remote in, that implies an issue with the system, so how would they get reliable flight data with which to fly the plane?

Tractors tend not to kill 100+ people when they go wrong.
 
There are quite a few issues with remoting into the plane, including lag and potentially not being able to get a proper feel for the plane. Also, if they need to remote in, that implies an issue with the system, so how would they get reliable flight data with which to fly the plane?

Tractors tend not to kill 100+ people when they go wrong.

We already have pilots killing 100+ people; an A.I. only has to be less faulty than a pilot. Again, you're arguing against technology taking over the role of humans. It's silly - of course it will happen. Incremental improvements in A.I. will make humans obsolete in tasks like this.
 
Back to the issue of this thread, which is an apparent pilot suicide: it's important to come up with a solution which is proportionate to what is effectively a black swan event, and likely to be effective against the problem. Suggestions of A.I. pilots or deep vetting of flight crew are massive overkill, unworkable, and unlikely to provide any protection against this event in any case.

Since the Germanwings accident it is now necessary to screen flight crew for psychological issues before employment in a European airline - having recently returned to flying after Covid, I've just undergone such screening. Personally, it struck me as a box-ticking exercise, and I could not for the life of me see how it could identify me as a suicide risk, but then again what do I know.

Probably more importantly, the last two airlines I've worked for have implemented a mental health support network for all employees, and specifically for flight crew in the form of a specially trained pilot peer support group. How effective this is I don't know, but it strikes me as a definite move in the right direction and likely to be the way to go in stopping such events.

I did the assessment with a psychologist; it does indeed seem like a box-ticking exercise, done to cover themselves should anything happen. To be honest I question my sanity every day for continuing in the ‘profession’ that is piloting, but that's a different story.
 
We already have pilots killing 100+ people; an A.I. only has to be less faulty than a pilot. Again, you're arguing against technology taking over the role of humans. It's silly - of course it will happen. Incremental improvements in A.I. will make humans obsolete in tasks like this.
You are talking like this is a regular occurrence. There have been 8 potential suicide crashes in commercial aviation history, and of those 8 only 4 have been confirmed to be suicide. In contrast, there have been more hijackings, and your solution is to add a remote system which essentially makes hijacking planes easier.

Actually, I'm pointing out the problems in your argument that need to be overcome for it to work.
 
You are talking like this is a regular occurrence. There have been 8 potential suicide crashes in commercial aviation history, and of those 8 only 4 have been confirmed to be suicide. In contrast, there have been more hijackings, and your solution is to add a remote system which essentially makes hijacking planes easier.

Actually, I'm pointing out the problems in your argument that need to be overcome for it to work.

Why? I'm not designing a system; I'm saying it is inevitable that A.I. will be flying planes one day. You don't need to tell me the flaws, because people smarter than us both will work them out.
 