Show Us Your Racks

DRZ

Soldato
Joined
2 Jun 2003
Posts
7,453
Location
In the top 1%
262625_561912348834_1554590927_n.jpg


Mostly populated with stuff like this:

397011_561912378774_1022282633_n.jpg


But also some of this:

154448_561756725704_2012499155_n.jpg

(work-in-progress shot; they're now fully racked/built and the racks fully reassembled)
 

DRZ

Soldato
Joined
2 Jun 2003
Posts
7,453
Location
In the top 1%
Shwing!!!!

Yeah, they are pretty decent switches.

Without going into too much detail for obvious reasons, the basic layout is those two 7010s (with the just-released Supervisor 2E) with a number of 5548UPs utilising FabricPath. Off those 5Ks hang a bunch of 2232PPs for 10GbE DC access ports.
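
To give a flavour of it, the FabricPath side is only a few lines of NX-OS on each participating switch. This is just a rough sketch - the switch-ID, VLAN and port numbers are made up for illustration:

    ! Enable FabricPath and give the switch a unique fabric ID (hypothetical)
    install feature-set fabricpath
    feature-set fabricpath
    fabricpath switch-id 11

    ! Core-facing links run in fabricpath mode instead of classic trunking
    interface Ethernet1/1
      switchport mode fabricpath

    ! VLANs must be set to fabricpath mode to be carried across the fabric
    vlan 100
      mode fabricpath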

I also have 20-30 3750Xs stacked around the place which aggregate into a VDC.

The WAN side of things aggregates into a 4507R+E which takes part in our larger BGP community.
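
Nothing exotic on that box itself, for the record - joining a BGP community is bog-standard IOS, along these lines (the ASNs and addresses below are placeholders, not our real ones):

    ! Hypothetical sketch of BGP on the 4507R+E WAN aggregation
    router bgp 64512
     neighbor 192.0.2.1 remote-as 64513
     neighbor 192.0.2.1 description Upstream peer
     address-family ipv4
      network 198.51.100.0 mask 255.255.255.0
      neighbor 192.0.2.1 activate
     exit-address-family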

Elements of this are already in production and the performance uplift is absolutely staggering. Bandwidth has never really been an issue for us, generally speaking (the 10Gb was mainly for cable consolidation above anything else), but the latency improvements from some of this kit have to be seen to be believed.
 
Soldato
Joined
18 Oct 2002
Posts
4,040
Location
Somewhere on the Rainbow
Nice kit.

We looked at the 7Ks but they just didn't offer us enough over the 6509s we put in to justify the price. Nice bits of kit though, plus we would have had to reinforce one of the floors to handle the weight! Running the 6509s with Nexus 5548s and 2232s also.
 

DRZ

Soldato
Joined
2 Jun 2003
Posts
7,453
Location
In the top 1%
The 6509s are no longer in the DC division; they're part of Borderless Networks now. Sign the right NDAs and Cisco will tell you where the 6500 is going... Fantastic switches for services and for huge campus LANs, but in the DC space the 7K platform has too many relevant features, either shipping already, coming, or NDA-protected but exciting (although you'll need the Sup 2/2E to harness those). VDCs removed the requirement for another layer of switches, so it was a trade-off between VSS plus another layer of switches and vPC/VDCs. vPC and VDC won out for us (and we don't use any of the services offered in blades for the 6500, or at least we don't use Cisco for those services...)

I guess you're not doing too much FCoE or planning to use OTV?

EDIT: At 265 kg per switch, even two in a rack shouldn't overload your floor?
 
Associate
Joined
12 Mar 2006
Posts
376
DRZ said:
"The 6509s are no longer in the DC division; they're part of Borderless Networks now... I guess you're not doing too much FCoE or planning to use OTV?"

Ahh, the data centre world, with more acronyms than you can shake a stick at. *Off to Google I go* :(

Care to explain VSS/VDC?

EDIT: Ahh, VSS, yes, I've looked that up before. Multi-chassis EtherChannels with active/active supervisors.
 

DRZ

Soldato
Joined
2 Jun 2003
Posts
7,453
Location
In the top 1%
VSS is more or less as you said: multiple switches appear, and are configured, as one stack. vPC (virtual Port Channel) is the Nexus equivalent, but the switches need individual configuration. Implementations of HSRP/VRRP are vPC-aware now.
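
For the curious, the guts of a vPC pair on NX-OS look roughly like this, configured separately on each of the two switches (the domain ID, addresses and port-channel numbers here are just for illustration):

    ! Enable vPC and pair the two switches (hypothetical values)
    feature vpc
    vpc domain 10
      peer-keepalive destination 10.0.0.2 source 10.0.0.1

    ! The peer link carries vPC control traffic and bridges the pair
    interface port-channel 1
      switchport mode trunk
      vpc peer-link

    ! A normal port-channel towards a downstream device becomes vPC 20
    interface port-channel 20
      switchport mode trunk
      vpc 20

The downstream switch or server just sees one ordinary port-channel, even though its links land on two separate chassis.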

VDC (Virtual Device Context) is switch virtualisation. Each VDC has its own configuration as if it were a separate switch (down to logins etc.) and "owns" a number of ports in much the same way a VLAN would. I'll be running BGP between my VDCs and connecting them at L3 - you actually do need to cable them together as if they were separate switches; you can't logically link them in the backplane. In some ways that's a shame, in other ways it's fantastic.
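
As a rough illustration (the VDC name, port ranges and ASNs below are invented), carving out a VDC and running BGP inside it looks something like this:

    ! From the default VDC on the 7K: create the context and give it ports
    vdc AGGREGATION id 2
      allocate interface Ethernet1/1-8

    ! switchto is an exec command - it drops you into the new context,
    ! which has its own configuration, logins and processes
    switchto vdc AGGREGATION

    ! Inside the VDC, routing is configured as on any standalone switch
    feature bgp
    router bgp 65001
      neighbor 10.1.1.2 remote-as 65002

And as above, that 10.1.1.x link between VDCs is a real cable between front-panel ports, not anything internal to the backplane.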
 
Soldato
Joined
18 Oct 2002
Posts
4,040
Location
Somewhere on the Rainbow
When we were speccing things up, the Nexus 7K was the new kid on the block and didn't offer as much as it does now. We were able to swap the supervisor cards in the 6509s to 2Ts before the order was built, and we were the first customer ship in the UK.

We are doing FCoE across the Nexus kit we have and it works well for us; no plans for OTV though. To be fair, we don't really use the full features of the 6509s; even they are a bit overkill (they replaced a pair of Catalyst 6500s running CatOS which were 12-13 years old, and even those were hardly breaking a sweat). I know the 6509 isn't in the DC portfolio now, but unless you are a big data-shifting organisation or a service provider, they are still more than capable for a lot of medium/large organisations.

The issue with the weight was that it's an "upstairs" server room with an old raised floor; it would have meant structural work on the supporting roof and replacing the raised-floor supports and tiles.

Not sure there are any more NDAs Cisco could ask me to sign, I think they already own both kidneys and 1 lung...... :D
 
Soldato
Joined
18 Oct 2002
Posts
4,040
Location
Somewhere on the Rainbow
Should ask: do you have any issues with heat on your Nexus 5548s or 2232s? The company that helped the lads install it recommended leaving a 1U gap between them in the rack, as they can get very hot when fully loaded.
 
Soldato
Joined
30 Dec 2004
Posts
4,681
Location
Bromley, Kent
We don't have heat issues (5548s and 2248s) - certainly nothing we or the devices have noticed. One of our core 5548s did crash the other day with an NX-OS bug though... Time to upgrade the estate!!
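
For anyone facing the same thing, the estate upgrade itself is the usual NX-OS install dance - something like the below, with the image names as placeholders for whatever release you land on:

    ! Copy the new images onto the switch first (hypothetical file names/server)
    copy scp://admin@10.0.0.5/n5000-uk9-kickstart.x.y.z.bin bootflash: vrf management
    copy scp://admin@10.0.0.5/n5000-uk9.x.y.z.bin bootflash: vrf management

    ! install all runs the compatibility checks and tells you up front
    ! whether the upgrade will be disruptive or hitless (ISSU)
    install all kickstart bootflash:n5000-uk9-kickstart.x.y.z.bin system bootflash:n5000-uk9.x.y.z.bin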

- GP
 
Soldato
Joined
10 Jan 2010
Posts
5,319
Location
Reading
Looks like the DC I was working at a couple of weeks ago... apart from the fact you couldn't take a phone into the building without an alarm going off. All that biometric security is a PITA.

Spent my day in the lab working on a dozen N2Ks for top-of-rack switches. Not sure if I'll be touching the 7Ks or not; I think there are about four for this customer.

Plenty of equipment........ but ran out of power cables haha :D

"Should ask: do you have any issues with heat on your Nexus 5548s or 2232s?"
Can't say ours run too hot at all.

I opened a rack earlier with the two 7Ks in it, though; now that was hot!
 
Soldato
Joined
18 Oct 2002
Posts
4,040
Location
Somewhere on the Rainbow
Just had the guys in today to do an IOS and device discovery from a partner; no doubt it will tell us we are miles out of date on versions. We're just in the process of 'negotiating' with Cisco for another upgrade to two old Catalysts, with the same VSS design going in, but we've got 10 racks to kit out on top. We've been putting 2232s in pairs in the top of rack and running a fair bit of FCoE across them; not looking forward to licensing the 5548s though!

Anyone played with the 3548s yet?
 

DRZ

Soldato
Joined
2 Jun 2003
Posts
7,453
Location
In the top 1%
They are mostly bog-standard 600mm racks. The end racks are much bigger to take the 7Ks though.

AC units are chilled water in-row RCs. We get quite a lot of free cooling out of them owing to the fact the UK is quite cold. PUE is very low and I'm working on getting it lower!

The 5548s are fine temp-wise (be sure to order them with the correct airflow!). I'd say the delta temps are pretty low - much cooler than the newer rack-mount servers, which put out very hot, very slow-moving air. I guess that's what having 384GB of RAM in 1U will do though.
 

DRZ

Soldato
Joined
2 Jun 2003
Posts
7,453
Location
In the top 1%
"...we've been putting 2232s in pairs in the top of rack and running a fair bit of FCoE across them; not looking forward to licensing the 5548s though! Anyone played with the 3548s yet?"

I presume you're just licensing the FETs and gaining the full benefit of FCoE on the 2232s? Certainly a good strategy to cluster your FCoE stuff around as few 2ks as possible, although if FCoE is ubiquitous you might not have much choice.

All that need for block-level storage... Must be a lot of database stuff?
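
For anyone following at home, the FCoE plumbing on the 5548 side is roughly the below - the licence unlocks "feature fcoe", and the VSAN/VLAN numbers and interfaces here are made up:

    ! Enable FCoE on the 5548 (requires the storage licence)
    feature fcoe

    ! Dedicate a VLAN to carry the FCoE VSAN
    vlan 200
      fcoe vsan 200
    vsan database
      vsan 200

    ! Bind a virtual FC interface to the server-facing port on the 2232
    interface vfc 101
      bind interface Ethernet101/1/1
      no shutdown
    vsan database
      vsan 200 interface vfc 101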
 
Soldato
Joined
18 Oct 2002
Posts
4,040
Location
Somewhere on the Rainbow
Yeah, just licensing the 5548s for FCoE, which allows us to use the 2232s in the racks. We have also been trialling doing away with our FC switches for the SANs and just running it all over the Nexus kit; it works a treat so far (can't tell you the full ins and outs, I just sign the cheques! :D).
 
Associate
Joined
9 Jan 2010
Posts
739
Location
Sunny Brizzol
My new little home 16U cabinet setup, which I built myself. It's still a WIP, but so far it looks like this and consists of the following kit:

LEFT SIDE
. Virgin Media "SuperHub" set in modem-only mode (I'm on the 60Mb package)
. APC power switch (turns power on/off to 8 outlets via a web GUI)
. Draytek Vigor 2955 firewall/router with DMZ mode enabled
. D-Link managed 16-port gigabit switch with dual SFP uplinks
. Couple of HP blanking plates

RIGHT SIDE
. VMware ESXi 5.1 server: i5 CPU, 16GB RAM, dual NICs, running 6 VMs
. 14TB NAS (10TB usable space): i3 CPU, 16GB RAM, teamed dual Intel NICs

ON TOP
. 2-port Belkin USB KVM
. 17" Dell monitor
. Small USB keyboard & mouse
. Canon network printer/scanner/fax machine
. APC Back-UPS Pro 900 UPS

There's a full build log here of the servers and cabinet if you are interested - http://forums.overclockers.co.uk/showthread.php?t=18453923

nas50.jpg


nas53.jpg
 