Quite a bit of FUD in this thread... dvSwitches aren't going to solve world hunger, but they aren't the devil, either. A user has an issue with what to me sounds like incompatible/unsupported cards -- this has nothing to do with whether dvSwitches are suitable or not.
What is reinforced by the various points of view (some valid, some misguided) is that virtual infrastructure requires careful planning, and it is critical to use supported hardware and software.
Not sure you're aiming that at me, but my opinion is based on running enterprise-class environments. I don't want my vCenter going belly-up (for any reason) to be able to cripple anything. I could nuke my vCenter box right now and, with perhaps the sole exception of Veeam, none of my production systems would be impacted. This guy can't say the same thing (sadly).
dvSwitches clearly have benefits to someone, or no-one would use them. I see them, like a lot of other things in my work, as incremental improvements to our lives as sysadmins that, when designed, implemented and managed properly, are "a good thing". In my case, I value the simplicity of managing a single entity across many hosts, rather than having to manage each host individually.
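For what it's worth, the "single entity" point shows up nicely on the API side too. Here's a rough pyVmomi sketch (the vCenter hostname and credentials are placeholders, not from anyone's real environment) that lists every dvSwitch known to vCenter and the hosts attached to it in one query, rather than walking each host's networking config individually:

```python
# Rough sketch: enumerate dvSwitches and their member hosts via pyVmomi.
# Connection details below are placeholders -- adjust for your own vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# One container view covers every dvSwitch in the inventory.
dvs_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in dvs_view.view:
    members = [m.config.host.name for m in dvs.config.host]
    print(f"{dvs.name}: {len(members)} hosts -> {', '.join(members)}")

Disconnect(si)
```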
The original issue appears to be faulty/incompatible/unsupported cards that are causing the entire environment to go belly-up -- how is that the fault of dvSwitches? I'm not defending the use of dvSwitches, but this particular environment needs urgent attention, and it's not the dvSwitches that need the attention.
How a CNA failure could cause such a cluster-wide failure is the point. That failure should be confined to a single host (unless he is unlucky, of course), which HA should be able to tolerate.
I don't know how you've taken the meaning of my posts, but I'm not saying he (or anyone) should abandon dvSwitches. I'm saying that there are things you can do with them that you shouldn't. I'd categorise management access, vCenter and storage traffic as things you shouldn't put on them (at the moment).
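To make that concrete, a quick audit sketch (pyVmomi again, placeholder credentials, purely illustrative) that flags vmkernel interfaces living on a distributed switch rather than a standard vSwitch; that's the sort of thing I'd want to know before a vCenter outage, not after:

```python
# Sketch: flag vmkernel (vmk) interfaces bound to a distributed switch.
# Connection details are placeholders for your own vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in host_view.view:
    for vnic in host.config.network.vnic:  # vmkernel adapters (vmk0, vmk1, ...)
        on_dvs = vnic.spec.distributedVirtualPort is not None
        where = ("dvSwitch" if on_dvs
                 else f"standard vSwitch portgroup '{vnic.portgroup}'")
        print(f"{host.name} {vnic.device}: {where}")

Disconnect(si)
```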
Interesting feedback. In the environment where I had 5k/2k/1kv I honestly don't know if the 5ks had L2 or L3 line cards, so I can't comment. I haven't worked there for a year, so can't ask the Networks guys that set it up. We did plenty of benchmarking, and were blown away by the performance (e.g. vMotions were completing in less than 10 seconds).
Not sure about your comment that you can't "sprinkle them about" because they are not switches. What do you mean by them not being switches? Traffic between two hosts on a single 2000 on the same VLAN won't go up to the 5000 and back down, will it? Careful consideration is a given in any case, but otherwise I don't follow.
My view as a non-networking professional is that the Nexus architecture is a massive improvement over the old Catalyst 6500 model of a centralised giant switch with Cat5e running everywhere. I much prefer the "top of rack" approach to networking, as it keeps the cabling within the datacentre to a minimum. Put another way, the Nexus system is like a distributed 6500, with the 2000s as the 6500 blades and the 5000s as the 6500 supervisor blades, but located near the equipment they are connecting.
Traffic between two ports on a 2K absolutely goes via the parent 5K. You configure the ports on the 5K as if they were line cards, which is essentially what they are: remote line cards.
The L2/L3 performance thing is on the Cisco site somewhere; I'll see if I can find it later. You're limited in how many 2Ks you can connect as well, dropping from 24 to 16 (the limit used to be lower in previous software releases, I think). There are other things, like having to configure everything twice (you can get around that with some config-sharing features, but I'm not entirely certain about that).
What you end up with is a massive bulk of fibre spanning your aisles, hooking back up to the 5Ks, which in turn uplink to the 7Ks or whatever is doing your L3 stuff. Probably not a 6500 these days, as they are in the Borderless Networks space, not the DC space. Regardless, the effective lack of in-unit switching is almost a crippling limitation of the 2Ks. It requires you to at least double the uplink bandwidth you were planning for your switches if your layout dictates a lot of in-rack traffic. It is this sort of thing that takes the Nexus range up a level in terms of planning and down a peg in terms of scale, especially if you're comparing it to end-of-row arrangements, where backplane bandwidth figures are enormous.
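To put a rough number on the "double your uplinks" point (back-of-envelope only, and the traffic figures below are made up for illustration): because a 2K does no local switching, traffic between two servers on the same FEX still crosses the fabric uplinks to the parent 5K and back, so the uplinks have to carry in-rack traffic on top of whatever leaves the rack.

```python
# Back-of-envelope FEX uplink sizing. All traffic figures are assumptions.
def fabric_uplink_gbps(intra_rack_gbps: float, north_south_gbps: float) -> float:
    """A Nexus 2K has no local switching, so server-to-server traffic within
    the rack still traverses the fabric uplinks to the parent 5K and back.
    The uplinks therefore carry intra-rack traffic as well as traffic
    leaving the rack."""
    return intra_rack_gbps + north_south_gbps

tor_switch = 10.0  # a real ToR switch only sends north-south traffic upstream
fex = fabric_uplink_gbps(intra_rack_gbps=10.0, north_south_gbps=10.0)
print(f"ToR switch uplinks: {tor_switch} Gbit/s, 2K fabric uplinks: {fex} Gbit/s")
# -> the uplink requirement doubles when in-rack traffic matches outbound traffic
```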
ToR vs EoR is a big old argument...