How much breaks when vCenter Server is not running?

Traffic between two ports on a 2K absolutely goes via the parent 5K. You configure the 2K's ports on the 5K as if they were line card ports, which is essentially what they are: remote line cards.
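To make that concrete, here's roughly what it looks like on the parent switch once a 2K is brought up as a FEX (the FEX number and interfaces below are placeholders, not anything from a real setup):

feature fex
! hypothetical fabric link from the 5K down to the 2K
interface ethernet1/1
  switchport mode fex-fabric
  fex associate 100
fex 100
  description rack-01-top
! the 2K's host ports then appear on the 5K as remote interfaces
interface ethernet100/1/1
  switchport access vlan 10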

The L2/L3 performance thing is on the Cisco site somewhere; I'll see if I can find it later. You're limited in how many 2Ks you can connect as well, dropping from 24 to 16 when the 5K is doing L3 (it used to be lower still in previous software releases, I think). There are other things like having to configure everything twice, which you can mostly get around with the config-sync / switch-profile feature, but I'm not entirely certain about the details of that.
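If you do go down the config-sync route, a minimal sketch looks something like this, with the profile name and peer address made up for illustration:

! enable CFS over IP so the two 5Ks can exchange the profile (from config mode)
cfs ipv4 distribute
! then from config-sync mode on either peer
config sync
switch-profile dc-access
  sync-peers destination 10.0.0.2
  ! anything configured inside the profile is pushed to both peers on commit
  interface port-channel100
    switchport mode trunk
  commit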

What you end up with is a massive bulk of fibre spanning your aisles, hooking back up to 5Ks, which in turn uplink to the 7Ks or whatever is doing your L3 stuff. Probably not a 6500 these days as they are in the Borderless Networks space, not the DC space. Regardless, the effective lack of in-unit switching is almost a crippling limitation of the 2Ks. It requires you to at least double the uplink bandwidth you were planning for your switches if your layout dictates a lot of in-rack traffic. It is this sort of thing that takes the Nexus range up a level in terms of planning and down a peg in terms of scale, especially if you're comparing it to end of row arrangements where backplane bandwidth figures are enormous.

ToR vs EoR is a big old argument...
Talk about assumptions being the source of all evil. I just assumed the 2000s were capable of switching traffic on their own, but you are absolutely right (I had to go off and read about it). The consensus seems to be that it's not that big a deal, because in the real world you generally don't have masses of traffic going east/west; most of it is north/south, in which case the Nexus 2K not being able to switch traffic on its own isn't a big problem.

It still changes my perception significantly, as my assumption always was that the Nexus architecture was an amazing decentralised way of running your network.
 

:)

In our case, east-west is a big consideration and we have essentially designed around the problem by spreading traffic flows across switches wherever possible and maxing out the uplink bandwidth. You have the choice of doing static port pinning to reserve northbound bandwidth, but I stuck with LACP to eliminate some risk.
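For anyone weighing the same choice, the two fabric-attachment styles look roughly like this on the parent 5K; the FEX numbers and interfaces are invented for illustration:

! static pinning: each host port is tied to one fabric uplink, so bandwidth is
! predictable but a failed uplink takes its pinned host ports down with it
fex 101
  pinning max-links 4
interface ethernet1/1-4
  switchport mode fex-fabric
  fex associate 101

! port-channel fabric: host traffic is hashed across all uplinks, so a failed
! uplink just shrinks the bundle
fex 102
  pinning max-links 1
interface ethernet1/5-8
  switchport mode fex-fabric
  fex associate 102
  channel-group 102
interface port-channel102
  switchport mode fex-fabric
  fex associate 102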

Our big hitters in the form of ESXi and XenServer sit on opposite 2Ks to the SAN, FCoE-connected SQL sits on a different pair again, and so on. I've slightly offset the extra cost with the fact that 2Ks are a storage port licence multiplier, meaning I don't have to worry at all about my FCoE port count.

The 5Ks are doing what they do best - ultra-low-latency, high-throughput L2 switching. They are aggregated along with all the legacy gigabit stuff into a pair of 7K VDCs. We then have a core VDC which does all the routing. Nothing is stressed, everything behaves and the performance is pretty magnificent. 7K VDCs are a whole world of gotchas and caveats with the various types of line card and port allocations and so on. Tread carefully!
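To give a flavour of the VDC carve-up (done from the default/admin VDC on the 7K; the names, IDs and port ranges are purely illustrative):

vdc aggregation id 2
  allocate interface ethernet3/1-16
vdc core id 3
  allocate interface ethernet4/1-8
! note: on some line cards ports can only be moved between VDCs in whole
! port groups - one of the gotchas mentioned above
! each VDC is then managed as if it were a separate switch
switchto vdc core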

Like I think I said before, design it properly around the caveats and you'll have a stellar LAN :)
 
Oh, and to add that with the top-end 2232PP FEX you're limited to an effective maximum 160 Gbps of backplane. Given the FEX can top 640 Gbps of bandwidth, you're oversubscribed once you go beyond the 8th host port, or 4:1 at worst.
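To spell out the arithmetic behind those figures (the 640/160 numbers are the full-duplex versions):

32 host ports x 10 Gbps = 320 Gbps host-facing (640 Gbps full duplex)
8 fabric uplinks x 10 Gbps = 80 Gbps northbound (160 Gbps full duplex)
320 / 80 = 4:1 worst-case oversubscription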

I'm not able to directly contrast that with an end of row design off the top of my head, but I'm not sure there's enough in it (other than maybe cost per port) for the same port count and oversubscription rate. You'd get 128 ports per 5548 assuming you used the expansion module to uplink. 5596s would do more obviously, I think two maxed 5596s would equal a fully stocked 7010 and then you're into east/west traffic vs budget discussions.
 
Not sure I've ever seen a NIC overheat no matter what the brand :eek:

Me either until this happened :eek:

The massive heatsinks on these 10GE cards always did concern me!

I work with hundreds of HP servers, and have never seen a NIC overheat, so I would suggest it is a one-off hardware malfunction (could be the thermal sensor is faulty for all you know). It is far better to operate with all HP components, as they are tested and supported together, e.g. you can boot off a single ISO and update every single firmware on the system, drivers are released in tested bundles, you will only need a single ESXi driver bundle (the one from HP), etc.

That's what we're aiming for, albeit down the Dell road. We do a lot of HP too and so far I'm preferring their approach as it's a bit more open as to what works together. Out of interest, what NICs are you using? Assuming you're using rackmounts and not Gen8s with NC560s or something...


That's what our network team is working on - we're getting a new room later this year and they're looking at getting the full 7/5/2K design in. Currently we've just got 2Ks > 5Ks > 4500, trying to keep the Nexus storage-only but failing due to us pushing on with VMware (or trying to...) :)

Wasn't aware of the VDC stuff, that's quite cool! Out of interest (and probably getting a bit OT), where did you learn about the Nexus stuff? Was it a course or just a lot of reading?
 
Lots of reading, lots of doing. We bought a couple of 5Ks to plug a critical need, very much a trial-by-fire scenario, but if you know your networking and you know IOS fairly well you can quickly translate that to NX-OS. Obviously there are the unique things you need to learn, like VDCs and vPC. FCoE is a cakewalk if you know the faintest thing about FC :)
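For anyone making the same IOS-to-NX-OS jump, a bare-bones vPC skeleton on one of a 5K pair looks something like this (domain ID, addresses and interfaces are placeholders, and the mirror config goes on the peer):

feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
! peer-link between the two 5Ks
interface port-channel1
  switchport mode trunk
  vpc peer-link
! a downstream device dual-homed to both 5Ks
interface ethernet1/20
  channel-group 20 mode active
interface port-channel20
  switchport mode trunk
  vpc 20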

I got distracted and started studying for the CCIE Data Center track but I want to concentrate on the CCIE R&S first.
 