Heathrow total shutdown

Put down the keyboard chaps, you are making yourselves look rather silly.

You simply have no clue as to the systems and processes which underpin the airport's power supply, how they operate, or any other considerations which need to be worked through before an airport can reopen.

Actually, I do have a clue. My brother is an airport security specialist. I am fully aware of the procedures they go through, the disaster events they train for and how they train. My observation here is that Heathrow should have anticipated this sort of problem and automated the process of power switchover. It should have been done in the event of a terrorist attack, but it also has the benefit that it would have dealt with a basic equipment failure like this.
 
My brother is an airport security specialist.
Don't think that means he has a clue about consistent delivery of power.
have anticipated this sort of problem and automated the process of power switchover.
How do you anticipate a large-scale substation going up in flames, when the likelihood is below 1%?

basic equipment failure like this.
A substation is basic equipment? Lol wut.

It should have been done in the event of a terrorist attack
So during a terrorist attack, let's switch over to the other substation and continue operating, even with the threat of more terrorism?

Actually, I do have a clue.
Do you? Or is it your brother, who doesn't have a clue?
 
Actually, I do have a clue. My brother is an airport security specialist. I am fully aware of the procedures they go through, the disaster events they train for and how they train. My observation here is that Heathrow should have anticipated this sort of problem and automated the process of power switchover. It should have been done in the event of a terrorist attack, but it also has the benefit that it would have dealt with a basic equipment failure like this.

If it had been a simple shutdown of the equipment I'd be more inclined to agree, but seeing as the thing caught fire and burned itself to the ground for reasons we still don't know, I don't think anyone would suggest that simply swapping over power supplies would be a good idea without doing even a basic investigation first. You could potentially cause even more damage or take out the next substation, especially if it WAS terror related.

You can’t prepare for every eventuality, but I think getting everything back up and running in 24 hours or so is fairly impressive.
 

All the same people saying "they should have been set up to automatically switch over" would quite happily be posting "why does it automatically switch over, they should have a process for checking the systems" if it was Heathrow's infrastructure that had caused the fault, automatically failed over to another supply (or two), and blown that up too.
 
The point is that nothing about this is National Grid's fault; substation fires are an inevitable part of electricity transmission, albeit thankfully uncommon. The only learning from NG, once they have shown they were maintaining the S/S in line with allowed maintenance funding, should be training Heathrow's power engineers to properly configure and use their own systems.

If Heathrow's systems did not have redundancy from a specific S/S built into them then someone has definitely dropped the ball. You can't tell me that their continuity plan for a substation outage is to close the airport for 24 hours.

Edit: John Pettigrew and his corporate comms team will be absolutely THRILLED that Thomas Woldbye went to sleep the night of. This is a massive win in terms of deflecting attention, and a really stupid decision, which makes you wonder what other stupid decisions Heathrow's top brass have made.
 
Edit: John Pettigrew and his corporate comms team will be absolutely THRILLED that Thomas Woldbye went to sleep the night of. This is a massive win in terms of deflecting attention, and a really stupid decision, which makes you wonder what other stupid decisions Heathrow's top brass have made.
A stupid decision from the layman's perspective, but a considered decision nevertheless. The most difficult part of airport disruption is restarting ops and getting back up to the normal tempo. Many airports will have it as part of their contingency arrangements to keep the top brass on ice whilst the initial wind-down occurs, delegating to a suitably senior and more technically capable subordinate, as it will be entirely out of their hands to influence anything. They can then be wheeled out fresh for the regeneration, which may take 48hrs or more.
 

I'd rather my CEO stayed the hell out of the way if I had a major infrastructure issue to sort.
 

You're clearly operational people; it's a bit naïve not to consider the importance of visible leadership from a PR perspective if nothing else.
 
Having worked in the building maintenance game for pretty much all my life, I have worked for many, many building management clients. I'd say that 25% of what they do is productive time, and 75% is arse covering, finger pointing, blagging, hoping and an inordinate amount of faffing. If ever you want to see the pinnacle of incompetence and uselessness, have a look at the facilities management game in your workplace. I'm 99.999% confident that the maintenance company will have either submitted a quote, flagged the risk or done whatever due diligence they need to do to make sure that the responsibility has been passed on to the client, which will be Heathrow's management team, who will now have to answer as to why this happened.

Someone's head's gonna roll, mark my words.
 
You're clearly operational people; it's a bit naïve not to consider the importance of visible leadership from a PR perspective if nothing else.

You clearly don't know how many problems they cause with incompetent attempts at micromanagement and promises based upon their whims and uneducated opinions!
 
I think comments from people who have clearly never been part of delivering highly complex solutions are rife here.

You can test until you're blue in the face; simply put, eventually you get caught out by a failure. I've worked at data centres with their own substations (so big they had blast walls) with truly geographically diverse inward power routing, and equally at much smaller affairs with good backup solutions. You can't protect against every single failure, and the scale and complexity of a system like LHR (which is surely absolutely swamped with legacy kit all over the place) shouldn't be underestimated.

Ultimately, sometimes you simply get caught out.

When working at an SME we had a horrendous failure. Our inbound power to one of the DCs failed, and the backup didn't kick in. Our UPS ran out after 15 minutes and we were dark at that site for nearly 3 days, with someone eventually getting switchgear parts from Switzerland and having them motorcycle-couriered to us.

We'd done numerous tests, never failing them. We'd had independent companies come in and run them to avoid confirmation bias in our testing plans.

Ultimately, what caught us out was a rat (yes, as in with a tail) deciding to crawl into the switchgear and kill itself across two separate banks of switching. You couldn't make it up.
 

Ah, the old rat short. Been there before!
 
When working at an SME we had a horrendous failure.

I'll share my favourite story too then.

The client at the time was JP Morgan; their headquarters were at 125 London Wall. I was a mere plumber boy back then. The building had two main busbars rising all the way up the building, and each floor had its own ATS to swap supplies in the event that one failed.
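(For anyone who hasn't come across an ATS: the changeover logic is roughly along these lines. This is a purely illustrative sketch, with made-up names and thresholds, and nothing to do with the actual kit in that building.)

Code:
NOMINAL_V = 230.0   # nominal supply voltage
WINDOW = 0.10       # +/-10% band before a supply is treated as failed

def healthy(voltage):
    """A supply counts as healthy if it's within the tolerance window."""
    return abs(voltage - NOMINAL_V) <= NOMINAL_V * WINDOW

def ats_select(bus_a_v, bus_b_v, feeding_from):
    """Decide which busbar a floor should be fed from.

    Stay put if the current supply is fine (no pointless changeovers);
    swap automatically only if it fails and the other busbar is healthy.
    """
    volts = {"A": bus_a_v, "B": bus_b_v}
    if healthy(volts[feeding_from]):
        return feeding_from
    other = "B" if feeding_from == "A" else "A"
    return other if healthy(volts[other]) else feeding_from

# Busbar A collapses while B is fine -> the floor quietly swaps to B,
# which is why nobody outside the engineering teams noticed anything.
print(ats_select(0.0, 229.0, feeding_from="A"))  # "B"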

And one did. Friday afternoon, of course. The ATSs all did their job and nobody was even aware of what was happening apart from the engineering teams. The management team had a Nigerian chap on the cards as a consultant. One of the most arrogant, overpowering, must-have-the-last-word know-it-all idiots I've ever met. He was widely hated, and his ability to lie his way through his entire career was quite commendable.

Anyway, we all head down to the LV switchroom to see what's gone on, and one of the breakers had gone bang and was smoking in the switchroom. All fairly contained at this point. The fire alarm system was a double-knock, so the supervisor radioed security to cancel the alarm and isolate the head. Investigations began.

Mr Nigeria comes down in a panic, sees the smoke in the room and hits the massive red EPO on the wall, and in doing so kills the other busbar. Bang, the entire building dead. Trading floors which traded hundreds of millions of pounds a day, all dead. The panic and tension was palpable.

It was at this moment I decided to leave; you could hear a pin drop.

Suffice to say we never saw that consultant again.
 

Absolutely wild. Who on earth does something like that? I do find people like that eventually get found out; it's just a matter of time.

I worked somewhere with another (sort of) similar thing, in the sense that it involved power and smoke. I was driving in one Saturday afternoon and got a call: 'We've got no power.' Ridiculous, I thought.

Arrived at the office, lights on, zero issue, clowns. Went up to the data centre... all I could hear was the ticking of metal... and it was *hot*. Ah.

I recalled we had some 'non-disruptive' power testing going on downstairs. I nipped down, and as I came out of the stairwell there was the most incredible smell of burning plastic.

When they'd turned the switch off - because the incoming power feed cable was 'too hot to touch' and they 'wanted to check what was going on' - it pretty much instantly combusted and left a molten mess on the wall and floor.

The funny thing? No fire alarm triggered.

They did a hell of a job to get the power back on again in 2 hours, and by mid-afternoon we had all systems back online. Beyond impressive.

Maybe we need a thread of 'the worst P1s of my life'
 
I was flying back from Miami when this happened. We got redirected to Madrid as many airports can't service A380s; 8 hours slumming about the airport before corporate travel got me a flight to Gatwick, and a 12-hour total delay in getting home.
 

My friend's mother-in-law from Rio was diverted to Madrid too. She landed Friday night and they put her up in a 5-star hotel with free amenities and food. She had a great 24hrs at the hotel, I was told, lol.
 
Just watching a bit of the questions in parliament.

Apparently the transformer site that caught fire had 3 transformers: 2 active and 1 backup (by the sounds of it any one is enough, but they use 2 so a failure of one doesn't cause an outage and gives time to switch in the backup).

Heathrow does indeed have 3 connections to the grid, but they power different parts of the airport and, importantly, they are not all the same capacity - they are something like 40MW, 30MW, and 70MW. The fire happened at the biggest one, which fully powered one terminal, partly powered a second, and powered hundreds of other buildings/parts of the airport, including passenger tunnels, fuelling systems and, importantly, key parts of the more advanced safety systems (active fire monitoring etc.; normal fire safety was still working, but by the sounds of it they have active fire detection* that required advanced systems across the site), as well as some of the automatic valves for the refuelling facilities (which I'm guessing fail "safe" and shut all the valves).
So by the sounds of it, if the main supply went down it might not have been a case of the other two incoming supplies having enough power to simply switch over, as neither was the same capacity as the one that went down; at the very least it sounds like they would have had to break the local power network in half, connecting half to each of the remaining supplies.
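A quick back-of-the-envelope sketch (purely illustrative: the 40/30/70MW figures are just the rough numbers quoted above, and the 60MW load on the failed intake is entirely my assumption) shows why "just switch over" doesn't really work:

Code:
# Rough intake capacities quoted above; the load figure is an assumption.
intake_capacity_mw = {"biggest (failed)": 70, "second": 40, "third": 30}
load_to_rehome_mw = 60  # assumed draw through the failed intake at the time

remaining = {name: cap for name, cap in intake_capacity_mw.items()
             if name != "biggest (failed)"}

# No single remaining intake can take the whole load on its own...
print(any(cap >= load_to_rehome_mw for cap in remaining.values()))  # False

# ...and both together only cover it if they were sitting mostly idle, which
# is why the stranded part of the network has to be split and re-fed in halves
# rather than simply "switched over".
print(sum(remaining.values()) >= load_to_rehome_mw)  # True, ignoring their existing load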




*The guy mentioned that the likes of the fire alarms would still sound, but I'm guessing they might have thermal imaging/thermal sensors on equipment as well as the "break glass" and smoke detector systems.
 