Pooled VDI

Yes, that was my understanding. In a situation where 100 VMs are built from a master image, you would have to patch the master image and then recreate all 100 VMs (weekly?).

RDP / RemoteFX / 1Gb network to clients

We are using some R&D money to buy a Dell R740 server (2x 12-core, 192GB RAM, AMD x2 GPU).

In a Hyper-V cluster environment, I assume you point the RDS platform to the HV cluster name rather than the individual VDI hosts?
 
The main reasons I'm looking into VDI:

1. Hardly any front-line IT staff left within the department, and the ones left are not there by choice.
2. Desktop machines are 10 years old. £650 Dell OptiPlex vs £250 VDI terminal with a 5yr warranty + multimedia pack.
3. Infrastructure is ready; we just need a few high-spec servers and we can deploy tomorrow.
4. The two-year Windows 10 support cycle will kill us, so VDI would be a huge time saver.
5. User Profile Disks are available to Windows 10 machines, and we already use them for RDS.
6. VM snapshots reverting back to the master image would be a lifesaver in the event of a 0-day virus attack (all servers and storage can be back within an hour; desktops can't).
7. Collections would be excellent for us, as we have vastly different builds per department and some of them are quite hard to configure.
8. Roaming laptops. We have thin client laptops where users can roam between departments (say, temp staff). The laptops can have the collection of the department they are working in, then the next day the laptop can be moved to another department and instantly has that department's collection on it.

My plan is to create a new SCCM gold image based on 1803 and, within an SCCM task sequence, configure it to deploy the required software and config based on which department is going to use it. We'll use this setup to build 10 master images for the 10 VDI collections. That way, we have the ability to quickly rebuild/update everything.

I'm keen to roll out a small-scale test to see what it's like in the real world. If it's not great, I'll give up on the idea, but I want to have a go. My test server is ready with 10 VMs running on it and seems OK. Silly me left RD Web open and users started logging in and using them lol

Assume 1 vCPU and 4GB dynamic RAM is OK for usual internet/office-based work?
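
For a rough sanity check on that sizing, here's a quick back-of-envelope calc (a minimal Python sketch; the vCPU overcommit ratio, host RAM reservation and average dynamic-memory figure are assumptions for illustration, not measured numbers):

```python
# Rough per-host VDI density check. All ratios and overheads are assumed, not measured.

def vdi_density(physical_cores, host_ram_gb,
                vcpu_per_vm=1,
                max_ram_per_vm_gb=4,       # the 4GB dynamic-memory ceiling from the post
                avg_ram_per_vm_gb=2.5,     # assumed average with dynamic memory ballooning
                vcpu_overcommit=4.0,       # assumed vCPU:pCPU ratio for light office work
                host_reserved_ram_gb=16):  # assumed headroom for the parent partition
    """Return (cpu_limited, ram_worst_case, ram_typical) VM counts for one host."""
    usable_ram = host_ram_gb - host_reserved_ram_gb
    cpu_limited = int(physical_cores * vcpu_overcommit / vcpu_per_vm)
    ram_worst = int(usable_ram / max_ram_per_vm_gb)    # every VM ballooned to its max
    ram_typical = int(usable_ram / avg_ram_per_vm_gb)  # VMs sitting at the assumed average
    return cpu_limited, ram_worst, ram_typical

if __name__ == "__main__":
    # R740 as specced in the thread: 2x 12-core, 192GB RAM
    print(vdi_density(physical_cores=24, host_ram_gb=192))  # -> (96, 44, 70)
```

On those assumed numbers, RAM is the ceiling well before CPU (44-70 VMs vs 96), which is why extra RAM tends to buy more than extra cores for this kind of workload.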

My only sticking point at the minute is what to do about AV on the pooled clients. We have SCCM CB running, which fully supports pooled clients, but I'm not sure of the best settings to use for Endpoint Protection.
 
I saw this today. Worth a read.
http://www.theregister.co.uk/2018/05/02/why_bother_with_virtual_desktops/

From my experience, proper VDI is never cheaper than desktops. It'll seem to work fine on an old server with 10 clients, but if you were going to buy that server new, it'll need lots of cores, 256GB+ RAM and SSDs - that's a £7-£9k server before any software and licensing.

Damn, that's everything I wanted to hear....great link

I have £15k per server to spend; storage is already sorted via a Compellent SAN.

My thinking was an R740 - 2x 12-core 3GHz Xeons, 192GB RAM, AMD x2 GPU. That came in within budget with a little room to spare, so I can add more RAM, which is never a bad thing with VDI, and perhaps lower the clock speed and get more cores, say 2x 2.6GHz 16-core. It's not just the cost savings driving this, but the lack of a fully manned IT department, which struggles to support our estate.

Hopefully (and I'll need to check) we are already covered in terms of our licences, as we buy a user package via our MS reseller which covers us for SCCM/RDS and, I think, VDI.

I'm going to order that server soon as a bit of R&D.

Happy to report my findings if anyone is interested

:edit: Think our reseller can squeeze in 2x 22-core Xeons and 256GB RAM.
 
The Compellent is a mix of flash and 10k disks. To start with, we'd only get 20-30 VMs running from one server, just to see how it scales.
Software will be built into the master image. The regular desktops use App-V but the VDI ones won't.

We're starting small to see how it goes.
 
I have a few thoughts which spring to mind when reading your questions

1. Staff savings... there are virtually no staff to manage the desktops, but there are a few within the infrastructure team who can look after the backend. Having VDI sessions roll back when users log out would be great here.
2. Cost savings isn't much of a priority, but it does seem easier to secure capital investment for servers/storage than for desktop machines, for some strange reason?
3. Everything you have listed is already taken care of. Most of what you asked about is automated, so deploying apps and creating build images is just a click of a button. New network, new SANs, new blades for RDS.
4. We have a new SCCM 1802 platform which, in fairness, is working really well.
5. Compellent SAN, flash/10k... we have an order in for a second shelf with 10k disks, but I will look into its current performance and see about perhaps using a spare 15k SAN or buying a new all-flash Compellent for VDI.
6. We have a 50/50 mix of thin clients and full Windows desktops... to be honest, I'd be happy with 50% thin client, 25% VDI, 25% local PC. I'd say approx. 700-1000 VDI terminals in total, with maybe up to 60% online at the same time (rough host-count maths in the sketch below this list).
7. I'd say a max of 10 collections (hopefully a lot less when we really start looking into it).
8. True, but we are working with the IGEL dev team in Germany to create firmware for us so we can profile the terminals to a specific collection. This is working, but they need to integrate it into the firmware, as it's additional code we had to stick in whilst testing.
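
Putting rough numbers on point 6, here's a quick sketch of how many hosts that concurrency implies. The ~70 VMs per host is an assumption carried over from the earlier sizing question, and the N+1 spare host is also just an assumption:

```python
# Rough host-count estimate for the estate. Per-host density and the N+1 spare are assumptions.
import math

def hosts_needed(total_terminals, concurrency=0.6, vms_per_host=70, spare_hosts=1):
    """Estimate concurrent sessions and Hyper-V hosts needed, plus spare capacity."""
    concurrent = math.ceil(total_terminals * concurrency)
    return concurrent, math.ceil(concurrent / vms_per_host) + spare_hosts

if __name__ == "__main__":
    for terminals in (700, 1000):
        concurrent, hosts = hosts_needed(terminals)
        print(f"{terminals} terminals -> ~{concurrent} concurrent -> ~{hosts} hosts")
```

So on those assumptions the estate lands somewhere around 7-10 hosts, which at the £15k-per-server budget is roughly £105k-£150k of compute before licensing.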

Additional points. Remote working will be left as-is through our RDS Gateway/Fortinet SSL gateway. The only way users will be able to access VDI is through VDI terminals; VDI collections will have RD Web disabled.
I want to use my R&D budget to buy two servers as a trial :) and if it's not right I'll use them for our HV cluster.

I think a second Compellent SAN will be ordered within the next 12 months anyway, as I want to configure our physical servers (we only have about 10) to boot from the current SAN. This fits in nicely with our DR/BC plans.

Currently we have a blade server running 20 VDI sessions from our standard Windows 10 SCCM build. Looks OK - CPU doesn't look high and the SAN doesn't look to have changed much either. Things will no doubt change when we scale up, but the server it's on is quite old (IIRC, 2x 6-core Xeons, 128GB RAM).

I'm in a meeting today with a few outside consultants. I'll go over our entire estate and see what they recommend.
 