Does anyone here do hyperconverged?

Soldato
Joined
31 Dec 2003
Posts
4,655
Location
Stoke on Trent
Just wondering if anyone's bought into any hyperconverged infrastructure or been tempted to?

I am currently in a weird job where we get sporadic "pockets of cash", which could lend themselves well to plugging in a new hyperconverged stack to boost capacity... but it's a big shift from a traditional stack of servers plus a SAN.

Had a look at Nutanix and it does seem cool.
 
Associate
Joined
28 Feb 2008
Posts
472
Location
Northamptonshire
In my previous job we were pretty early adopters of Nutanix, back in 2013. We were a reseller, but also used it for our VDI clusters and a server cluster. At the time I was in 3rd Line Support and commissioned the environment.

Hyperconverged, in my opinion, is a double-edged sword. It's nice to have: a convenient, simple-to-use environment that can scale effectively. However, people get hung up on HCI and "must have it" even when it doesn't make financial sense.

I work in pre-sales, and the number of times I see a major requirement for "Hyperconverged Infrastructure" or "All Flash" is crazy - especially when an audit of the existing workloads shows a small, very static environment doing a few thousand IOPS.

Nutanix is a great system - it has come on leaps and bounds since I was looking after those production environments, and it's a market-leading product - but I've seen so many bad implementations, and people having to double their investment because they "had to have it". I can say the same for other HCI platforms, so I'm not picking fault with the concept; it just needs to be done properly.
 
Soldato
Joined
18 Oct 2002
Posts
4,898
We’ve looked at it, and from a technical perspective it’s quite attractive to us; it potentially suits our business model (we sell fully managed hosted services). Finance people always want us to work out a cost per GB of RAM, cost per vCPU and cost per GB of storage. That’s difficult when you sank £120k on a SAN two years ago for 100 customers, you’re now running 150 with capacity to spare, and you’ll need to buy a new one next year - or maybe you could get another year out of it. How much does a blade chassis bay cost? Technically it’s 1/14 of the cost of the chassis - but what if I’ve only got 8 blades populated at the time? If all my chassis are full and I get a new customer who needs two blades, how do we account for the cost of a new chassis? It’s a real ongoing headache for us and it just gets worse as we grow.
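The blade-bay arithmetic above is easy to sketch. Here's a back-of-envelope Python example - the chassis price and bay count are made-up illustrative numbers, not quotes from any vendor:

```python
# Illustrative figures only - a hypothetical 14-bay chassis.
CHASSIS_COST = 5_000.0  # assumed purchase price in GBP
BAYS = 14               # bays per chassis

def cost_per_bay(populated_blades: int) -> float:
    """Amortise the chassis over the blades actually installed,
    rather than the theoretical bay count."""
    return CHASSIS_COST / populated_blades

theoretical = CHASSIS_COST / BAYS  # "technically" 1/14 of the chassis
actual = cost_per_bay(8)           # but with only 8 blades populated...

print(f"per bay, fully populated: £{theoretical:.2f}")
print(f"per bay, 8 blades:        £{actual:.2f}")
```

The gap between the two numbers is exactly the accounting headache: the effective per-unit cost keeps moving as the chassis fills up.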

Move it to Azure or AWS? That’ll be £200k a month!

The problem is, it’s just too expensive and the numbers don’t stack up. Service Provider licensing is particularly expensive for Nutanix.
 
Soldato
Joined
18 Oct 2002
Posts
4,533
Have done countless hyper-converged deployments over the years - they've definitely become a lot more common over the past two years in particular. I've dealt primarily with ScaleIO and vSAN based solutions (VxRail / Dell Ready Nodes / Cisco UCS).

I've not really encountered any issues with any of the solutions. All have performed similarly to a traditional converged deployment, so I wouldn't hesitate to recommend either model - they both have their place. It's not a huge shift either; I'd argue that hyper-converged greatly simplifies a datacentre: no requirement for dedicated storage teams in many cases, reduced network complexity (retired FC switches, simplified switch configs, etc.), scale-out designs, and so on.

Can't speak for MS / Nutanix / HP solutions though....
 
Soldato
Joined
9 Dec 2007
Posts
10,492
Location
Hants
MS Hyper-V with S2D has been good to us so far. Running several clients' off-prem infrastructure on it and (knock wood) it's not really skipped a beat.
 

Associate
Joined
23 Mar 2005
Posts
1,024
Depends whether you consider vendor lock-in an issue - we've been offered a better initial price on kit because of it. It also depends on how much coupling storage with compute makes sense for your workloads. Personally it made more sense for us to keep them separate, as we can now add whichever vendor is cheapest for 1U compute nodes; blades make no financial sense these days compared to five years ago anyway.
 
Soldato
Joined
15 Sep 2009
Posts
2,895
Location
Manchester
We do CI rather than HCI for the most part with FlexPod, but we also have a number of vSAN deployments out there on UCS blades. We love both solutions - although as a VMware fanatic I give the edge to vSAN, NetApp has massively grown on me over the last few months.
 
Soldato
Joined
25 Oct 2002
Posts
2,622
I've been using Nutanix for a few years now and overall it's very slick. The issue I have is maintenance: taking a node offline reduces the entire cluster to a degraded state, where anything else stopping at the same time will result in the cluster shutting down. That makes what would traditionally be trivial maintenance tasks feel extremely risky. When the cluster shuts down, all access to storage is halted; the VMs themselves continue to run, but without any access to their disks. Once the issue is resolved, the cluster automatically resumes storage access and the VMs get their disks back, with varying degrees of success - when this has happened, our Windows VMs have typically just carried on as if nothing had happened, while our Linux VMs have usually remounted their disks read-only and needed a reboot and an fsck to get going again.

In our experience, it may have been better to build multiple smaller clusters with HA services split across them rather than a single large cluster - or, if you do go with a single large cluster, to increase the redundancy factor to 3 so you can sustain up to two components being offline (though I imagine either option would increase costs significantly).
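The redundancy-factor trade-off comes down to simple replica arithmetic. A hedged sketch - this is the general replication rule, not a Nutanix API:

```python
def tolerable_failures(replication_factor: int) -> int:
    """With RF copies of each data block on distinct nodes, the data
    stays available as long as at least one copy's node survives - so
    the cluster can lose RF - 1 components before storage goes offline."""
    if replication_factor < 1:
        raise ValueError("replication factor must be >= 1")
    return replication_factor - 1

print(tolerable_failures(2))  # RF2: a node in maintenance leaves no headroom
print(tolerable_failures(3))  # RF3: survives a second concurrent failure
```

This is why RF2 maintenance feels risky: the one failure the cluster can absorb is already "spent" on the node you deliberately took down.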
 
Associate
Joined
30 Jul 2007
Posts
1,248
IMO one of the big advantages of VMware vSAN is that you can procure storage with the latest technologies (e.g. PCIe 4.0 NVMe) as soon as they come to market, without the delay or the premium that storage vendors add while adopting them. That translates to better price/performance.
All-flash vSAN does require 10Gbit networking between the hosts, mind you.
vSAN's degraded performance (e.g. when a host is out) is not as robust or predictable as a traditional SAN's, but in many cases the headroom is large enough that it's irrelevant.
 
Associate
Joined
3 Oct 2007
Posts
795
Just starting a deployment of cross-site VMware vSAN now.
Our existing SAN is filling up and on extended support, so spending money on additional disk enclosures feels very much like pouring money down the drain.

The vSAN is going in with the understanding that we also need to actually progress our long-kicked-about cloud strategy. If we half-arse going to cloud, then the vSAN investment will have been a lot of wasted money and effort, and there will be ~£600k spent on replacement SANs in a couple of years' time.

Any of our infrastructure servers that we expect to remain onsite (patching/AV, domain controllers, etc.) will be the first to go to the vSAN - after that, an assessment of each system's space usage and future prospects will be done before migration.
 
Soldato
Joined
15 Sep 2009
Posts
2,895
Location
Manchester

We've deployed multiple vSAN stretched clusters and they're fantastic. We're currently building out a full-on SDDC with the full VMware stack, including NSX-T - can't wait to get my hands dirty with it all. vSAN also has its place in high-cloud-usage businesses: I deployed a stretched cluster of about 8 nodes across two sites to run a lot of stuff which wasn't necessarily suitable for cloud migration, and ran VDI on it too.
 