Heathrow total shutdown

I remember reading years ago how the IRA had plans to blow up a number of critical substations around London. It could have blacked out the City of London for months. Guess it shows how susceptible the power grid is!
 
To add a bit of context, this is our HV ring at the Natural History Museum. The four red lines at the top are the incoming supplies from UKPN (formerly the London Electricity Board). These are both live at all times and aren't connected to each other. The "open point", circled in red, is what we move around to service certain substations whilst keeping the affected areas to a minimum. We move it by opening and closing switches. Each time we do this, it involves writing a switching schedule, and this needs approval from a Senior Authorised Person. Then, in order to carry out the switching itself, we need the HVAP himself, a safety person, and all the ops side needs to be managed (that's me). It's a LOT more complicated than simply flicking some switches.

If one of those supplies fails, we lose all the power on that one side, but we can close the open point and power the whole museum off the other supply, and then we still have some backup generators for the life critical systems in the event that both fail.

If one fails, we need to determine why it failed first, which involves speaking to UKPN. If the fault is on their end, we simply switch over and we're all singing and dancing. If the fault was caused by us (or the Science Museum, the Victoria and Albert Museum or Imperial College, as they're all on the same HV ring) then we need to isolate the fault before switching over. Best case scenario, you're looking at 4-6 hours of management, comms and diagnostics before this can be done; worst case scenario, you find a significant fault which needs to be repaired, and that can take weeks or even months depending on the fault, the cause and the insurance implications.
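
If it helps to picture how the open point works, here's a rough toy model in Python. The feeder and substation names are made up and it ignores protection, fault levels and load entirely; it's only the connectivity logic, not how the real network is actually configured.

from collections import deque

def energised(edges, sources):
    """Return the set of nodes reachable from any live source over closed switches."""
    adj = {}
    for a, b, closed in edges:
        if closed:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    seen, queue = set(sources), deque(sources)
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Ring: Feeder_A -- Sub_1 -- Sub_2 -- [open point] -- Sub_3 -- Sub_4 -- Feeder_B
def ring(open_point_closed=False, isolated=()):
    edges = [
        ("Feeder_A", "Sub_1", True),
        ("Sub_1", "Sub_2", True),
        ("Sub_2", "Sub_3", open_point_closed),  # the normally-open point
        ("Sub_3", "Sub_4", True),
        ("Sub_4", "Feeder_B", True),
    ]
    # "Isolating" a substation = opening the switches either side of it.
    return [(a, b, c and a not in isolated and b not in isolated) for a, b, c in edges]

subs = {"Sub_1", "Sub_2", "Sub_3", "Sub_4"}

# Normal running: each half of the ring is fed from its own side.
print(energised(ring(), {"Feeder_A", "Feeder_B"}) & subs)

# Feeder_B lost: Sub_3/Sub_4 go dark until the open point is closed.
print(energised(ring(), {"Feeder_A"}) & subs)
print(energised(ring(open_point_closed=True), {"Feeder_A"}) & subs)

# Fault at Sub_4: isolate it first, then close the open point to restore the rest.
print(energised(ring(open_point_closed=True, isolated={"Sub_4"}), {"Feeder_A"}) & subs)

Closing the open point is what lets one healthy feeder pick up the whole ring, and isolating the faulted section first is why the diagnostics have to happen before any switching.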




This is my bread & butter, ask away if you have any questions :)
 
I flew back into Heathrow from Hong Kong yesterday afternoon. Glad we got in before all this kicked off! (especially as we are due to fly out on holiday next week - thankfully from Gatwick on the outbound leg)
 

I remember being at BT Martlesham Heath back in the 1990s.

One day we turned up at our building (SSTF), which had battery backup in the roof for the servers. A chair was propped against the security door to keep it open, and all the power on the site was out. The site has two power entry points, one on each side, for redundancy.

It turns out that the site's two power supplies, several km upstream, combine into one, and a single line then carries on from there. Someone managed to shove a JCB bucket through that single power line, resulting in a total outage.
 
You would have thought that it would all be automated.
Don't think you can do that. If something on your site causes an upstream trip, switching to a different upstream is just going to trip that one too. Hence the need to investigate the cause of the trip before taking corrective action.
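
To put the same point another way, here's a trivial Python sketch (everything in it is invented for the example): the only case where closing onto the alternate supply actually helps is when the fault is upstream of your own intake.

def transfer_to_alternate(fault_location, alternate_healthy=True):
    """Return True if the changeover holds, False if the alternate trips as well."""
    if not alternate_healthy:
        return False
    if fault_location == "upstream":
        # Fault is on the DNO's side of the intake: the alternate feeder
        # doesn't see it, so the transfer holds.
        return True
    # Fault is on site: the alternate feeder now supplies the same fault
    # and its protection operates, so both supplies are lost.
    return False

for where in ("upstream", "on_site"):
    ok = transfer_to_alternate(where)
    print(f"fault {where}: transfer {'holds' if ok else 'trips the alternate too'}")

Which is why the cause gets investigated (and isolated if it's local) before anyone closes onto the other supply.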
 
You would have thought that it would all be automated.

It can be - typically sites / facilities where power is critical and loss of power is a major issue would have two electricity feeders, each with their own transformer(s) to step down the power (assuming there's no on-site power generation). If there's an issue on one feeder / transformer it should automatically switch to the other, but it can depend on exactly where the issue is.

As someone said previously, the substation appeared to have two transformers in close proximity and damage occurred to both - if they were meant to provide redundancy they should have fire walls between them... see NFPA 850 ;)
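
For anyone curious what that auto-changeover logic roughly looks like, here's a simplified Python sketch. Real schemes live in protection relays and transfer switches with proper interlocks and intertripping; all the names and thresholds below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Feeder:
    name: str
    volts: float           # measured busbar voltage, per unit
    breaker_closed: bool
    lockout: bool = False  # set by protection when it trips on a local fault

def auto_changeover(duty: Feeder, standby: Feeder, undervolt=0.8):
    """Transfer load to the standby feeder if the duty feeder is dead,
    unless an interlock says the fault is on our side of the intake."""
    if duty.volts >= undervolt:
        return f"{duty.name} healthy, no action"
    if duty.lockout:
        return f"{duty.name} lost with protection lockout: manual investigation required"
    if standby.volts < undervolt:
        return "both feeders dead: start standby generation"
    duty.breaker_closed = False      # open the dead incomer first (break-before-make)
    standby.breaker_closed = True    # then close onto the healthy supply
    return f"transferred to {standby.name}"

a = Feeder("Feeder A", volts=0.0, breaker_closed=True)
b = Feeder("Feeder B", volts=1.0, breaker_closed=False)
print(auto_changeover(a, b))   # upstream loss of A -> transfer to B

a2 = Feeder("Feeder A", volts=0.0, breaker_closed=False, lockout=True)
print(auto_changeover(a2, b))  # A tripped on a local fault -> no auto transfer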
 
It screams of all eggs in one basket. As others have mentioned, why not have two incoming power supplies at opposite ends of the site? It's just ridiculous that Heathrow has had to close because of a single point of failure.
 
Thanks - why don't you have a pre-written switching schedule rather than having to recreate it each time?

Because the nature of these places means there's never 100% managed control. The client could have another contractor in doing some works and they bump something, or a switch can trip and trigger a backup supply without anyone knowing, etc. A lot can happen without anyone being aware of it, so we need to validate everything every time before starting. On top of this, there are staff changes, and even then, due to how infrequently this kit is worked on, the engineers need a refresher each time. The thing is, with this kind of kit, when it goes wrong it's always spectacular; there's no such thing as too much caution.
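
Purely to illustrate the "validate everything every time" point (not how a real schedule is actually written up or executed), a toy version in Python might look like this, with every device name and field invented:

SCHEDULE = [
    # (step, device, action, state the network is expected to be in before acting)
    (1, "CB-SUB2-RING", "open",  {"CB-SUB2-RING": "closed"}),
    (2, "OPEN-POINT",   "close", {"CB-SUB2-RING": "open", "OPEN-POINT": "open"}),
]

def run_schedule(schedule, live_state, approved_by=None):
    if not approved_by:
        raise PermissionError("schedule not approved by a Senior Authorised Person")
    for step, device, action, expected in schedule:
        # Stop the job if reality has drifted from what the schedule assumed
        # (another contractor, a tripped breaker, a backup supply cutting in, etc.).
        drift = {d: (want, live_state.get(d)) for d, want in expected.items()
                 if live_state.get(d) != want}
        if drift:
            raise RuntimeError(f"step {step}: network not as expected {drift}, stop and re-survey")
        live_state[device] = "open" if action == "open" else "closed"
        print(f"step {step}: {action} {device} - done")

state = {"CB-SUB2-RING": "closed", "OPEN-POINT": "open"}
run_schedule(SCHEDULE, state, approved_by="SAP")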

This is one of the less explosive reactions to an incorrect switching operation:



EDIT:

This is the video they show us on the training courses as there's so much wrong here it makes for great training material:

 