Heathrow total shutdown

No, it's all pure nonsense; the management is just useless, it's that simple.

Previously I thought that the infrastructure was trash; well, it turns out there are redundancies and backups.

Exactly, and there is no way it should have taken so long to switch over to them.

The fact that they didn't means they were not even aware of the problem, which is a disgrace.
 
Put down the keyboard, chaps, you are making yourselves look rather silly.

You simply have no clue as to the systems and processes which underpin the airport's power supply, how they operate, or any other considerations which need to be worked through before an airport can reopen.
You're right - I don't. I don't care about the how, because it's not my job.

What I can say with confidence is that it absolutely should not have happened.

I'll not call it incompetence just yet - I'll wait for the enquiry for that, but I do have a strong suspicion.

Let's just say it was Brize Norton instead - would people still think it was okay for the UK's largest military airport to be out of action for 24 hours due to a single transformer? I suspect not. And I suspect it couldn't happen either, as should be the case for Heathrow.
 
Brize Norton is a completely different kettle of fish.

Even if it was out of action for any period of time, you wouldn’t know about it.

Let's also be clear: aircraft could still land at Heathrow during this period because all those systems are backed up; they just had no capacity to process passengers. That wouldn't be an issue at a military airfield.

So let’s just say it’s a false equivalence…
 

Also worth pointing out that Brize processes a minuscule fraction of the people that Heathrow does, and as we control all aspects of it (baggage, cargo, security etc) we can (and do) uproot it all and move elsewhere if needed, and do it all manually.

The two are not comparable.
 
Exactly, and there is no way it should have taken so long to switch over to them.

The fact that they didn't means they were not even aware of the problem, which is a disgrace.

Put down the keyboard, chaps, you are making yourselves look rather silly.

You simply have no clue as to the systems and processes which underpin the airport's power supply, how they operate, or any other considerations which need to be worked through before an airport can reopen.

You might have thought that they'd have an app for it. Updated and ready to go.
 
Brize Norton is a completely different kettle of fish.

Even if it was out of action for any period of time, you wouldn’t know about it.

Let's also be clear: aircraft could still land at Heathrow during this period because all those systems are backed up; they just had no capacity to process passengers. That wouldn't be an issue at a military airfield.

So let’s just say it’s a false equivalence…

Look, as a current airline captain I can tell you this is not the case. Our flight manuals contain a table (and procedures) which guide us through exactly what effect failed or downgraded equipment at the arrival airport has on our ability to land there. There is no provision in there for making an approach at an airport that is running solely on backup power. Therefore, in an aviation context, it is not safe to do so.

The only time I would consider making an approach under such circumstances would be after I had declared an emergency and absolutely needed to land, and if I did so and something went wrong, I could expect a deeply unpleasant level of scrutiny over my actions. I also personally doubt an ATC tower would clear anyone to make an approach under such circumstances either. (I daresay I'd get the CAP413 version of 'on your head be it' - whatever that is.)

Also worth pointing out that Brize processes a minuscule fraction of the people that Heathrow does, and as we control all aspects of it (baggage, cargo, security etc) we can (and do) uproot it all and move elsewhere if needed, and do it all manually.

The two are not comparable.

Well, they're comparable in a lot of ways except scale, the main one being that it's utterly unacceptable for either to be out of action for 24 hours. They require exactly the same services regarding the provision of facilities to aircraft and pretty much do everything LHR does (and more). But the main point I'm trying to make (and I'm sure you'll agree) is that it would be unacceptable for Brize to be out of action for 24 hours, as it is for LHR.

And just to re-emphasise my original point, which is from an aviation perspective: the loss of an airport such as LHR puts undue pressure on the ATC system in the UK as a whole. That's not to say the result is unsafe, but it reduces resilience in the system and is therefore less safe. It is much easier for additional shocks to the system (such as the closure of another airport) to render the whole system unsafe.

I daresay similar things are true for other pieces of critical infrastructure in the UK, from hospitals to power generation, etc. And frankly it should never be acceptable.
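The "failed or downgraded equipment" table the captain describes is essentially a lookup from equipment status to the operational consequence. A minimal sketch in Python - the statuses and outcomes below are entirely made up for illustration; real tables are aircraft- and operator-specific:

```python
# Illustrative only: a flight-manual downgraded-equipment table as a lookup.
# A None entry models "no provision exists" - i.e. the approach is not
# authorised, mirroring the backup-power case described in the thread.
EQUIPMENT_TABLE = {
    "ils_glideslope_out": "non-precision minima apply",
    "approach_lights_out": "RVR minima raised",
    "standby_power_only": None,  # no provision -> approach not authorised
}

def approach_guidance(condition: str) -> str:
    """Return the table's guidance, or flag that no provision exists."""
    if condition not in EQUIPMENT_TABLE:
        return "condition not listed - consult operations"
    guidance = EQUIPMENT_TABLE[condition]
    if guidance is None:
        return "no provision - approach not authorised (emergency only)"
    return guidance

print(approach_guidance("standby_power_only"))
```

The key design point matches the post: the table is a closed list, so a condition with no entry (or a None entry) is not a judgment call - it simply isn't authorised.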
 
There is no provision in there for making an approach at an airport that is running solely on backup power. Therefore, in an aviation context, it is not safe to do so.

Interesting, and this raises more questions:

  • Would you even know if the airport / ATC was on UPS? I'd have thought that, in order to prevent panic and anxiety, this would only be communicated if absolutely necessary.
  • How do you know what is / isn't backed up? I reckon the air-side systems are all backed up by UPS and the non-critical things aren't. If there's even a small risk of an incident as a result of the landing lights/comms/ATC systems/whatever not being backed up, then that's a massive risk.
  • Again, these all ignore whether these systems have been properly maintained and are actually fit for purpose - which I rather strongly suspect isn't the case for the non-critical ones.
 
Interesting, and this raises more questions:

  • Would you even know if the airport / ATC was on UPS? I'd have thought that, in order to prevent panic and anxiety, this would only be communicated if absolutely necessary.
  • How do you know what is / isn't backed up? I reckon the air-side systems are all backed up by UPS and the non-critical things aren't. If there's even a small risk of an incident as a result of the landing lights/comms/ATC systems/whatever not being backed up, then that's a massive risk.
  • Again, these all ignore whether these systems have been properly maintained and are actually fit for purpose - which I rather strongly suspect isn't the case for the non-critical ones.
Not an airline...

But I'd imagine that information would be sent out immediately, because it is a major safety concern: it means the airport is one step away from a complete shutdown/loss of power.
There are ways to send messages direct to aircraft without using voice. IIRC all modern commercial aircraft have digital text messaging systems (and have had for a long time) that can be used for things like weather updates at an airport or urgent communications. And airline pilots aren't likely to panic over the idea that an airport is unavailable - they deal with airports becoming unavailable on a regular basis, due to everything from weather, to incidents on the runway (anything from birds or debris reported through to a full-on crash), to equipment failure, or even people breaching security.
 
OK so here's a bit of actual technical knowledge surrounding the whole shutdown, what should have happened and potentially why it all went wrong.

In an ideal scenario, the system would indeed have automatically swapped over to another supply. The reason this didn't happen could be extremely simple, or it could be extremely complicated. The most likely cause is that the failover kit didn't work correctly; this kit comes in a variety of flavours, each with its own potential failure modes.

Further to this, when the failover switches activate, they need smoothing to ensure that the power supplies are in sync, or you risk damaging the kit. This is called synchronisation. It works by monitoring the sine waves of the supplies and ensuring that they match before the load is swapped over. If they don't match and the switching takes place, you risk crossing the phases of the three-phase supplies, and that's when things go bang. Like properly bang. This process usually takes a second or so but can take longer. So that the load doesn't drop off during this very short window, another system is put in place - usually a battery bank or a rotary UPS, which carries the load on stored energy.
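The synchronisation check described above boils down to comparing frequency, voltage and phase angle against tight tolerances before a changeover is permitted. A rough sketch of that logic, with illustrative tolerances rather than real switchgear settings:

```python
# Hedged sketch of a sync check: two supplies may only be paralleled when
# frequency, voltage and phase angle all agree within tolerance. The
# tolerance values are invented for illustration, not real settings.
def in_sync(freq_a, freq_b, volt_a, volt_b, phase_a_deg, phase_b_deg,
            max_freq_diff=0.1, max_volt_frac=0.05, max_phase_diff_deg=10.0):
    """Return True only if both supplies match closely enough to swap over."""
    # Wrap the angle difference into [-180, 180] so 359 deg vs 1 deg = 2 deg
    phase_diff = abs((phase_a_deg - phase_b_deg + 180) % 360 - 180)
    return (abs(freq_a - freq_b) <= max_freq_diff
            and abs(volt_a - volt_b) / volt_a <= max_volt_frac
            and phase_diff <= max_phase_diff_deg)

print(in_sync(50.0, 50.02, 400.0, 398.0, 0.0, 4.0))   # well matched supplies
print(in_sync(50.0, 50.0, 400.0, 400.0, 0.0, 120.0))  # phases crossed: blocked
```

The phase-wrapping line is the subtle bit: a naive subtraction would treat 359° and 1° as 358° apart and wrongly block a transfer that is actually fine.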

As you can tell by now, we're already looking at three major potential failure points, each of which contains hundreds of smaller failure points of its own, which is why they are subject to extremely strict maintenance.

Now let's assume that something has gone wrong, and the system has failed. The reason for this could be a fluke, lack of maintenance, poor system management, lack of management, inexperienced or inadequately trained staff or contractors, etc. The investigation will reveal all that in due course.

Anyway. The supply has now failed, and the failover systems have failed to switch to another supply for whatever reason. This power supply will likely have many interlocked safety systems connected to it, whose purpose is to turn off other circuits or systems to prevent damage or injury to staff or other systems. Depending on how the system is configured, this could cause a chain reaction, which may be what led to the whole airport shutting down. For reference, this whole process will likely have taken less than a couple of seconds.

So now you're sat with a dead airport, nobody knows why, everyone's panicking and you need to get it back online. Your first thought will be to simply swap to another supply and voila, you're singing and dancing. The reality is that you have no idea what caused the power to fail, so you start investigating. The first thing you do is call UKPN to see if the issue is on their end. They inform you of the transformer fire, but you know you have more supplies available to you, so you simply switch over to one of those, right? No, because the inrush current on a system the size of Heathrow will be enough to take out an entire substation. What'll happen if you do that is that hundreds or even thousands of safety devices will activate, tripping the power to thousands of circuits across the airport. There will be no rhyme or reason to it either; they will just pop at random, so you'll need to check every single circuit in the airport and manually switch them back on.

This is of course not ideal, so you do a controlled restart instead. This means physically walking the entire airport and turning off all the distribution boards or LV panels, as stipulated in your emergency startup procedures - assuming, of course, that these are in place, current and applicable, and that your staff are all competent and trained to the level of knowing what to do.

This process, also known as load shedding, will take hours and many staff. Once you've turned everything off, you can swap over to the secondary incoming supply, and then slowly start reinstating all the circuits one at a time, so you're not hammering the hell out of a supply that likely hasn't been used in years.
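The reasoning behind the staged restart - each board's brief inrush must fit within the supply's headroom before the next board is closed - can be sketched roughly like this. All the numbers, names and the inrush model are invented for illustration:

```python
# Hedged sketch of a staged re-energisation. Each newly closed board is
# assumed to draw a transient inrush of a fixed multiple of its running
# load; the transient must fit within the supply limit on top of the
# steady load already restored. All figures are illustrative.
INRUSH_MULTIPLE = 6          # assumed transient draw vs. steady load
SUPPLY_LIMIT_KW = 5000

def staged_restart(boards, already_on_kw=0):
    """Return boards in a safe re-energisation order (smallest loads first)."""
    steady = already_on_kw
    order = []
    for name, load_kw in sorted(boards.items(), key=lambda b: b[1]):
        if steady + load_kw * INRUSH_MULTIPLE > SUPPLY_LIMIT_KW:
            raise RuntimeError(f"{name}: inrush would exceed supply limit")
        order.append(name)
        steady += load_kw   # after the transient, only the steady load remains
    return order

boards = {"terminal_lighting": 300, "baggage_hall": 500, "hvac": 700}
print(staged_restart(boards))
```

Note the contrast with closing everything at once: these three boards draw only 1,500 kW steady, but a simultaneous inrush of six times that would blow straight past the limit - which is the whole argument for walking the airport and bringing circuits back one at a time.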

Once you've reinstated all the power, you then need to check the entire electrical infrastructure, as you'll have loads of switches that will need manual resetting, and most likely all the generators will still be running as these often require manual shutdowns after activation.

This is just the engineering side of things. Now the other teams need to jump into action. The security teams will need to bring their CCTV systems online, as well as their access control kit (door locks, swipes, main PCs, etc). The cleaning crew will need to do a full tour of the airport to make sure that any leaks or spillages are cleaned up (sensor taps are spectacular at staying on in the event of a power failure, for example), and the UKBA will need to get all their systems online - body scanners, passport scanners, PCs, the lot. This again will take hours.

All of the above makes a ton of assumptions, most notably that everything starts up perfectly and no further works are required, which for a system the size of Heathrow has about the same odds as winning the lottery. Not to mention how all the ancillary systems respond to power outages - I'd have thought that things like body scanners might not like having their power randomly killed, but then these might have their own local UPS systems, who knows.

The above is a very, very broad guesstimate of what happened. I don't know their systems, staff, maintenance regimes, etc, so I can only apply what I do know, and fill in the gaps with a lot of assumptions.

Only once the above is all completed can the investigation begin, which is likely what's happening now.

What I will say is that the whole reinstatement might well have gone like clockwork. Given the scale of the shutdown and the work involved in getting all the systems back online, I must admit that turning it all around in that short a time is quite impressive.
 
Jesus wept, poor Jimmy. That’s horrific….can’t believe they showed that on TV for children to see.

I also remember it :D

No doubt this style of advert is no longer acceptable as it's deemed "too traumatising" for kids :cry:
 
You simply have no clue as to the systems and processes which underpin the airport's power supply, how they operate, or any other considerations which need to be worked through before an airport can reopen.
The question is not how they work but whether the systems and processes were adequate and being adhered to.
 
Put down the keyboard, chaps, you are making yourselves look rather silly.

You simply have no clue as to the systems and processes which underpin the airport's power supply, how they operate, or any other considerations which need to be worked through before an airport can reopen.

That's the thing, we don't have to. The airport, however, should - and from the time it took, it sounds as if they'd never practised this, or something completely left-field happened. Many of us here have worked for many years in IT or cyber; DR and resilience practices are generally a lower priority than they should be, even at CNI sites.

Bottom line is we should be sensible and wait for the actual report. Not that the press will let this slip.
 
What's the betting all the info on the relevant procedure was not memorised or available in hard copy, but only accessible using the company computers... currently without power :)
 
@Diddums Say a few months pass and Heathrow has been running on the "backup" substation, and the one that caught fire has now been fixed and works. Do they switch back to that substation, or does the "backup" substation now become the primary?
 