Times when you wanted to cry

...some numpty had plugged a cable directly from one wall port to another.

I might be getting the wrong end of the stick here, but someone took out your entire network by patching two switch ports together?

Some idiot did this in my second year of secondary school, 12 years ago, bringing down the whole (admittedly much smaller) network. Can you seriously still cause such a catastrophic problem, over a decade later, on significantly more expensive hardware installed by professional engineers, by making the same stupid mistake?
 
Ah, reminds me of the old Novell Token Ring at college... instant end to the lesson, just take a termination off.. :D
 
My boss dropped our primary production print server out of the back of the pool car, and we then had to spend the whole weekend trying to source a new motherboard for it... needless to say, he paid for that one...

Stelly
 
Office move-arounds are the biggest nightmare when IT are the last people to be informed. One time senior management organised a desk move over a Saturday, and I was to come in on the Sunday to patch the telephone extensions - about 30 minutes of work in total. To my disgust, when I came in on the Sunday there was a mound of network cable on the office floor, with a note next to it saying they were "too tired" to cable each desk. Took me hours to sort the mess.
 
I might be getting the wrong end of the stick here, but someone took out your entire network by patching two switch ports together?

Some idiot did this in my second year of secondary school, 12 years ago, bringing down the whole (admittedly much smaller) network. Can you seriously still cause such a catastrophic problem, over a decade later, on significantly more expensive hardware installed by professional engineers, by making the same stupid mistake?

It is alarmingly possible in most networks. The people who installed them don't deserve the word professional, though (unless it's "professional idiot"). It's relatively simple config on modern switch hardware to prevent this happening, and even if it does happen it shouldn't destroy your entire network...
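
For what it's worth, the standard fix on managed switches is BPDU guard (ideally with broadcast storm control as well) on every access port, so a looped patch lead just shuts that one port down instead of melting the whole network. Purely as a rough sketch - the file name, config layout and interface naming below are my own assumptions, not anything from the site in question - here are a few lines of Python that scan a saved Cisco-style config for access ports that are missing BPDU guard:

```python
# Rough sketch: audit a saved, Cisco-style running-config for access ports
# that lack BPDU guard (the feature that err-disables a port the moment a
# looped cable sends spanning-tree BPDUs back in). The config file path and
# layout are assumptions for illustration only.
import re
import sys


def find_unguarded_ports(config_text: str) -> list[str]:
    """Return names of access ports with no 'spanning-tree bpduguard enable'."""
    unguarded = []
    # Cisco-style configs separate sections with lines containing just '!'
    for block in re.split(r"\n!\r?\n", config_text):
        match = re.match(r"interface (\S+)", block.strip())
        if not match:
            continue
        is_access = "switchport mode access" in block
        has_guard = "spanning-tree bpduguard enable" in block
        if is_access and not has_guard:
            unguarded.append(match.group(1))
    return unguarded


if __name__ == "__main__":
    # e.g. python audit_bpduguard.py switch01-confg  (a saved 'show running-config')
    with open(sys.argv[1]) as f:
        for port in find_unguarded_ports(f.read()):
            print(f"{port}: access port with no BPDU guard")
```

A single global "spanning-tree portfast bpduguard default" does much the same job on Cisco kit, and other vendors have their own equivalents; the point is it's a one-off config job, not exotic hardware.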
 
You may not all be aware, but somehow someone managed to dig through a major BT backbone 32m below ground level, causing massive disruption to services in East London (so I'm told). We have a facility on the Essex/London border which managed to escape the effects of this. However, at 4:55 last night our leased line went down (which I have skillfully worked around, cos I'm a bit of a genius like that) and not long after the voice ISDN30 went down too.
To my knowledge no one has gone digging through any more cables, so one assumes that someone somewhere made a bit of a faux pas during the repairs. That, or the traffic was routed over kit that just couldn't handle it...

So far it's been down for 16 hours, which is going for a record even for BT :)
I wouldn't normally want to cry about BT balls-ups, I'm used to them. But at 5 to 5, with home time in sight... I did that time.
 

http://forums.overclockers.co.uk/showthread.php?t=17995630

I'll give BT a pass on this one; it's not their fault and they've got a hell of a job on.

1,800 inaccessible fibres to resplice once they've managed to run fibre round the problem in surface ducts. They've been giving good updates and keeping us informed. We lost a BT Central platform (with maybe 850 users on it) in the outage, plus a few other circuits, but as I actually bothered to design a resilient network, everything is still working fine.
 
We haven't received any updates :/ I left the talking to our Director of IT while I set about bypassing the problem. He had to wrestle an answer out of them. They initially tried to fob us off with "our system is reporting a problem with End User Equipment"... which is utter balls, cos copper NTEs don't show alarm lights if you take the X.21 off.

I appreciate it's not their fault it broke and that they have a big job on, but it takes no effort at all to ring up and say "we're sorry, we need to take the line down to do maintenance". What annoys me most is that they break our network and leave us to find out, then when we report it broken they try to tell us it's our problem when they almost certainly know otherwise.
 

There were 70,000 residential lines down, and police and emergency services without their normal links. I think they had better things to do than phone, to be honest.

You can argue it either way, but if you rely on a circuit and don't have automatic sub-second failover then that's not the carrier's fault; every circuit will have a fault one day and you need to be ready for it. It's not even expensive any more.
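
Genuine sub-second failover is a router-level job (something like VRRP between a pair of CPE routers, or BFD with a dynamic routing protocol), but just to illustrate the principle, here's a deliberately crude watchdog sketch. Every address in it is made up, and a real deployment would detect the failure far faster than a once-a-second ping:

```python
# Deliberately crude illustration of automatic circuit failover: probe the far
# end of the primary circuit and swap the default route to a backup next-hop
# when the probe fails. Real sub-second failover lives on the routers (BFD,
# VRRP, dynamic routing); every address below is invented for the example.
import subprocess
import time

PROBE_TARGET = "198.51.100.1"  # hypothetical far end of the primary circuit
PRIMARY_GW = "192.0.2.1"       # hypothetical primary next-hop
BACKUP_GW = "192.0.2.254"      # hypothetical backup next-hop (e.g. an RF link)


def probe_ok(target: str) -> bool:
    """One ICMP echo with a 1 second timeout; True if the far end answers."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", target],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def set_default_route(gateway: str) -> None:
    """Point the default route at the given next-hop (Linux iproute2, needs root)."""
    subprocess.run(["ip", "route", "replace", "default", "via", gateway], check=True)


def main() -> None:
    on_backup = False
    while True:
        healthy = probe_ok(PROBE_TARGET)
        if not healthy and not on_backup:
            set_default_route(BACKUP_GW)
            on_backup = True
            print("primary circuit down - failed over to backup")
        elif healthy and on_backup:
            set_default_route(PRIMARY_GW)
            on_backup = False
            print("primary circuit restored - failed back")
        time.sleep(1)


if __name__ == "__main__":
    main()
```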

Now fair enough, we're an ISP and hand BT the best part of £500k a month for all the various services we have. They're normally pretty middle of the road; they usually phone when a LES/WES goes down, otherwise not so much. I deal with worse carriers unfortunately (cough, Global Crossing - are you listening?).

I cannot see a better way they could have dealt with this: they've given regular status updates by email, and there's been a conference call for service providers three times a day so far this week with updates. They've investigated all the alternatives and brought in appropriate mobile kit. They can't realistically phone every man and his dog with a 2Mb circuit when an incident of this scale happens...
 

They have automated ways to do it - SMS or email that could deal with 70,000 users within hours if they could be at all bothered. The lack of contact doesn't bother me as much as the denial that anything was wrong. They clearly have massive issues in the area and still claimed our kit was causing us to have no leased line, no ADSL and no ISDN30.
This is the same MO we always get, hence we've bought RF units to replace them, on the principle that if it should break, failover WILL work and we can fix the issue in a timely fashion.
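
The mechanics of a bulk notification really are trivial once the carrier knows which circuits are affected; the hard part is their fault-management data, not the sending. Purely as a toy illustration (the CSV layout, SMTP relay and wording are all invented, not a description of anything BT actually runs):

```python
# Toy sketch of an automated outage mail-out. The CSV layout, the SMTP relay
# and the message text are all invented for illustration; this is not a
# description of any carrier's real notification system.
import csv
import smtplib
from email.message import EmailMessage

SMTP_RELAY = "smtp.example.net"  # hypothetical internal mail relay


def notify_affected(csv_path: str) -> None:
    """Read circuit_id,contact_email rows and send one templated email per row."""
    with smtplib.SMTP(SMTP_RELAY) as smtp, open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            msg = EmailMessage()
            msg["From"] = "outages@example.net"
            msg["To"] = row["contact_email"]
            msg["Subject"] = f"Service incident affecting circuit {row['circuit_id']}"
            msg.set_content(
                "We are aware of a major cable break currently affecting circuit "
                f"{row['circuit_id']}. Engineers are on site and updates will follow."
            )
            smtp.send_message(msg)


if __name__ == "__main__":
    notify_affected("affected_circuits.csv")  # hypothetical export from fault management
```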
 
I've nicked this off another forum.

A network manager finds that for some reason someone has decided to move one of his servers.

[image: server1.JPG]


Does indeed look like an odd place for a server to be, especially one that weighs about 30kg.

Let's look at another shot.

[image: server2.JPG]


Now, I think it might be a good thing it doesn't have a UPS.
 
Lol, those cabinets only have a 25kg loading; some evil is prolonging its life up there.
If you look closely in the first pic you can actually see the top is further away from the wall than the bottom :D
 
They have automated ways to do it - SMS or email that could deal with 70,000 users within hours if they could be at all bothered. The lack of contact doesn't bother me as much as the denial that anything was wrong. They clearly have massive issues in the area and still claimed our kit was causing us to have no leased line, no ADSL and no ISDN30.
This is the same MO we always get, hence we've bought RF units to replace them, on the principle that if it should break, failover WILL work and we can fix the issue in a timely fashion.

What would be the point of emailing 70,000-odd end users who more than likely don't have a connection anyway? Besides, they'd just be adding to any existing congestion caused by the network outage.

The outage did appear to affect odd areas; maybe the tests Mr Call Centre Support Person ran hit issues caused by the outage in East London and gave him the impression it was your kit.

In all fairness to BT, I think they had it fixed in a pretty good time considering what they were up against..
 