Pooled VDI

You generally wouldn't patch individual desktops; you patch the golden image and then update the pool from it.

Performance is often better in VDI for the disk subsystem, RAM, network and so on, but the user experience can be destroyed by overly aggressive security policies. It also depends on the application in question and the protocol used to present the desktop to the end user: PCoIP vs RDP vs ICA vs HTML5 all have different characteristics, even more so depending on latency and available bandwidth.
 
Yes that was my understanding. In a situation where 100 VMs are built from a master image, you would have to patch the master image and then recreate all 100 VMs (weekly?)

RDP / RemoteFX / 1Gb network to clients

We are using some R&D money to buy a Dell R740 server (2 x 12-core, 192GB RAM, AMD x2 GPU).

In a Hyper-V cluster environment, I assume you point the RDS platform at the Hyper-V cluster name rather than the individual VDI hosts?
 
We've had VDI for around 6 years; it supplied about 500 Win7 desktops to various locations. We are now decommissioning it and replacing it with standard desktops.

We patch the golden image every 3 months (unless a major security flaw is found) and use agentless antivirus.
 
We've had VDI for around 6 years; it supplied about 500 Win7 desktops to various locations. We are now decommissioning it and replacing it with standard desktops.
Interested to know what happened, why are you reverting to physicals?
 
Interested to know what happened, why are you reverting to physicals?

When we first installed VDI, we had a very scrappy desktop infrastructure, it wasn't well managed with SCCM etc, but now we have a decent, fully managed desktop that everyone is happy with. Also the current VDI infrastructure is EOL and replacing it with another enterprise solution would cost much more than rolling out 500 new desktops.
 
When we first installed VDI, we had a very scrappy desktop infrastructure, it wasn't well managed with SCCM etc, but now we have a decent, fully managed desktop that everyone is happy with. Also the current VDI infrastructure is EOL and replacing it with another enterprise solution would cost much more than rolling out 500 new desktops.
Solid reasons.
 
The main reasons I'm looking into VDI:

1. Hardly any front-line IT staff left within the department, and the ones left are not there by choice.
2. Desktop machines are 10 years old. £650 Dell OptiPlex vs £250 VDI terminal with a 5-year warranty and multimedia pack.
3. Infrastructure is ready; we just need a few high-spec servers and we can deploy tomorrow.
4. The two-year Windows 10 support cycle will kill us, so VDI would be a huge time saver.
5. User profile disks are available to Win10 machines; we already use them for RDS.
6. VM snapshots reverting back to the master image would be a lifesaver in the event of a 0-day virus attack (all servers and storage can be back within an hour; desktops can't).
7. Collections would be excellent for us; we have vastly different builds per department and some of them are quite hard to configure.
8. Roaming laptops. We have thin-client laptops where users can roam between departments (say, temp staff). A laptop can have the collection of the department they are working in; the next day it can be moved to another department and instantly has that department's collection on it.

My plan is to create a new SCCM gold image based on 1803, and within an SCCM task sequence configure it to deploy the required software and config based on what department is going to use it. We'll use this setup to build 10 master images for the 10 VDI collections. That way, we have the ability to quickly rebuild/update everything.

I'm keen to roll out a small-scale test to see what it's like in the real world. If it's not great, I'll give up on the idea, but I want to have a go. My test server is ready with 10 VMs running on it and seems OK. Silly me left RD Web open and users started logging in and using them lol

Assume 1 vCPU and 4GB dynamic RAM is OK for usual internet/office-based work?
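As a rough sanity check on that per-VM spec against the proposed R740 (2 x 12-core, 192GB RAM): the sketch below estimates per-host density. The overcommit ratio and host RAM reserve are assumptions for light office workloads, not vendor guidance.

```python
# Rough per-host VM density estimate for a 2x12-core, 192GB host.
# Ratios below are assumptions for office/internet work, not vendor figures.

physical_cores = 2 * 12
host_ram_gb = 192

vcpus_per_vm = 1
ram_per_vm_gb = 4          # dynamic RAM ceiling per VM

vcpu_overcommit = 6        # assumed 6:1 vCPU:pCPU ratio for light workloads
host_reserve_gb = 16       # assumed RAM held back for the hypervisor/parent

cpu_limit = physical_cores * vcpu_overcommit // vcpus_per_vm
ram_limit = (host_ram_gb - host_reserve_gb) // ram_per_vm_gb

vms_per_host = min(cpu_limit, ram_limit)
print(f"CPU-bound limit: {cpu_limit} VMs")
print(f"RAM-bound limit: {ram_limit} VMs")
print(f"Plan for roughly {vms_per_host} VMs per host")
```

With these assumptions RAM is the binding constraint long before CPU, which matches the advice later in the thread that GHz matters far less than RAM and IOPS.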

My only sticking point at the minute is what to do about AV on the pooled clients. We have SCCM CB running, which fully supports pooled clients, but I'm not sure of the best settings to use for Endpoint Protection.
 
I saw this today. Worth a read.
http://www.theregister.co.uk/2018/05/02/why_bother_with_virtual_desktops/

From my experience, proper VDI is never cheaper than desktops. It'll seem to work fine on an old server with 10 clients, but if you were going to buy that server new, it'd need lots of cores, 256GB+ RAM and SSDs: that's a £7-£9k server before any software and licensing.
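A quick back-of-envelope per-seat comparison using the figures quoted in this thread (£7-9k server, £250 terminal, £500 desktop). The density and per-seat licensing figures are assumptions for illustration only:

```python
# Per-seat cost sketch using numbers from this thread.
# vms_per_host and licensing_per_seat are assumptions, not quotes.

server_cost = 8000          # midpoint of the £7-9k server estimate above
terminal_cost = 250         # VDI terminal price quoted earlier in the thread
vms_per_host = 40           # assumed density for office workloads
licensing_per_seat = 100    # assumed per-seat RDS/VDA-style licensing

vdi_per_seat = server_cost / vms_per_host + terminal_cost + licensing_per_seat
desktop_per_seat = 500      # cheap desktop figure used later in the thread

print(f"VDI:     ~£{vdi_per_seat:.0f} per seat")
print(f"Desktop: ~£{desktop_per_seat} per seat")
```

Even with generous density assumptions the two come out close, before counting SAN capacity, networking and engineer time, which is consistent with the "you don't do VDI to save money" point made below.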

Damn, that's everything I wanted to hear....great link

I have £15k per server to spend; storage is already sorted via the Compellent SAN.

My thinking was an R740: 2 x 12-core 3GHz Xeons, 192GB RAM, AMD x2 GPU. That came in within budget with a little room to spare, so I can add more RAM, which is never a bad thing with VDI, or perhaps lower the clock speed and get more cores, say 2 x 2.6GHz 16-core. It's not just the cost savings driving this, but the lack of a fully manned IT department, which struggles to support our estate.

Hopefully (and I'll need to check) we are already covered in terms of licences, as we buy a user package via our MS reseller which covers us for SCCM/RDS and, I think, VDI.

I'm going to order that server soon as a bit of R&D.

Happy to report my findings if anyone is interested

:edit: I think our reseller can squeeze in 2 x 22-core Xeons and 256GB RAM.
 
How many clients, and how many of those servers? Is your Compellent all-flash? GHz isn't so important; they don't need 3GHz. For snappy VDI it's all about RAM and IOPS.

Application deployment is also a real issue: you can't use Software Center to deploy, so applications all have to be streamed or built into the golden image.

Yeah, definitely don't do it to save money. There were rare times when our VDI totally failed, and that meant a whole building down.
 
The Compellent is a mix of flash and 10k disks. To start with, we'd only run 20-30 VMs from one server, just to see how it scales.
Software will be built into the master image. The regular desktops use App-V, but the VDI ones won't.

We're starting small to see how it goes.
 
The main reasons I'm looking into VDI:

1. Hardly any front-line IT staff left within the department, and the ones left are not there by choice.
2. Desktop machines are 10 years old. £650 Dell OptiPlex vs £250 VDI terminal with a 5-year warranty and multimedia pack.
3. Infrastructure is ready; we just need a few high-spec servers and we can deploy tomorrow.
4. The two-year Windows 10 support cycle will kill us, so VDI would be a huge time saver.
5. User profile disks are available to Win10 machines; we already use them for RDS.
6. VM snapshots reverting back to the master image would be a lifesaver in the event of a 0-day virus attack (all servers and storage can be back within an hour; desktops can't).
7. Collections would be excellent for us; we have vastly different builds per department and some of them are quite hard to configure.
8. Roaming laptops. We have thin-client laptops where users can roam between departments (say, temp staff). A laptop can have the collection of the department they are working in; the next day it can be moved to another department and instantly has that department's collection on it.

1. Staff effort savings will be a long way down the road, if ever.
2. I can guarantee that VDI will not be cheaper in the short to medium term, and it's unlikely to be cheaper in the long run.
3. That's a vast oversimplification. How are you going to handle user profiles, data migrations, desktop policy, application packaging, build creation and testing, etc.?
4. Debatable whether you would save that much vs a properly implemented SCCM infrastructure.
5. Where are the disks? It doesn't take much to impact VDI desktop performance.
6. To an extent. You are unlikely to have a completely virtual desktop estate; the cost/effort of moving the last 10% of "unusual" users onto VDI can become prohibitive.
7. That fact alone would put me off. The big gains from VDI come from having fewer builds/collections, not more.
8. Well yes, but it's not tied to the device, it's tied to the user.

We went into VDI 6 years ago and at peak had about 2000 users on it - we are going back to physical now that data centre hardware is reaching end of life:

- You don't do VDI to save money. Anyone who tells you that is lying. Even in a perfect scenario (99% utilisation, small application stack) the savings are going to be limited to fewer desktop engineers.
- Data centre hardware to run this kind of thing is NOT cheap. If you need the flexibility and have small application stack then it can make sense but buying £500 desktops is much cheaper especially with the power of today's CPUs - office apps can run well for many years and PCs are cheap to repair/replace.
- Remember that changing desktop builds will require a recompose. Can your SAN handle this? Do you have a window where this is possible, or are your quiet windows used to perform SAN backups? Can the SAN handle the recovery from a complete outage where VDI is trying to create hundreds of desktops? What is the business impact of all the other data on that SAN being slower while this happens?
- Application stack makes a HUGE difference. Our organisation has over 500 applications in use in various combinations by 2000 users. Packaging and app virtualisation was a nightmare. Low-hanging fruit is easy, but even in 5 years we never finished all of them, and you WILL find some that won't work. At all.
- Bear in mind implementation will all be in tandem with your current setup. You have to carry out Business As Usual the whole time the same engineers need to package, build and test VDI desktops.
- How are you going to handle VPN, off-site working, Apple devices, data centre outages, etc.? VDI alone required a £100k upgrade for our network team.

In short - my advice is don't go there unless you NEED the flexibility and can afford the outlay/upheaval involved. SCCM can do all you want with far less outlay.
If you're sure it's the route for you, then you need 100% buy-in from ALL of the company, and that will mean some users having to compromise. Even a small set of exceptions will destroy any security or administration gains.
 
I have a few thoughts which spring to mind when reading your questions

1. Staff savings... there are virtually no staff left to manage the desktops, but there are a few within the infrastructure team who can look after the back end. Having VDI sessions roll back when users log out would be great here.
2. Cost savings aren't much of a priority, but it does seem easier to secure capital investment for servers/storage than for desktop machines, for some strange reason.
3. Everything you have listed is already taken care of. Most of what you asked about is automated, so deploying apps and creating build images is just a click of a button. New network, new SANs, new blades for RDS.
4. We have a new SCCM 1802 platform which, in fairness, is working really well.
5. Compellent SAN, flash/10k... we have an order in for a second shelf of 10k disks, but I'll look into its current performance and consider using a spare 15k SAN or buying a new all-flash Compellent for VDI.
6. We have a 50/50 mix of thin clients and full Windows desktops... to be honest, I'd be happy with 50% thin client, 25% VDI, 25% local PC. I'd say approx. 700-1000 VDI terminals in total (with maybe up to 60% online at the same time).
7. I'd say a max of 10 collections (hopefully a lot less once we really start looking into it).
8. True, but we are working with the IGEL dev team in Germany to create firmware for us so we can profile the terminals to a specific collection. This is working, but they need to integrate it into the firmware, as it's additional code we had to add while testing.
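The terminal and concurrency figures above translate into a host count fairly directly. A sketch, assuming a per-host density of 40 VMs (an illustrative figure, not something measured here), with one spare host for N+1 resilience:

```python
import math

# Host-count sketch for the estate described above:
# up to 1000 VDI terminals with roughly 60% concurrent use.
# vms_per_host is an assumed density, to be validated in the pilot.

terminals = 1000
concurrency = 0.60
vms_per_host = 40           # assumption; confirm against pilot measurements

concurrent_vms = math.ceil(terminals * concurrency)
hosts_needed = math.ceil(concurrent_vms / vms_per_host)
hosts_with_spare = hosts_needed + 1   # N+1 so one host failure is absorbed

print(f"{concurrent_vms} concurrent VMs -> {hosts_needed} hosts "
      f"({hosts_with_spare} with N+1)")
```

The point of the N+1 line is the one made elsewhere in the thread: when that many users sit on so few boxes, a single host failure takes out a large chunk of the estate unless there's spare capacity.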

Additional points: remote working will be left as-is through our RDS gateway/Fortinet SSL gateway. The only way users will be able to access VDI is through the VDI terminals; VDI collections will have RD Web disabled.
I want to use my R&D budget to buy two servers as a trial :) and if it's not right I'll use them for our HV cluster.

I think a second Compellent SAN will be ordered within the next 12 months anyway, as I want to configure our physical servers (we only have about 10) to boot from the current SAN. This fits in nicely with our DR/BC plans.

Currently we have a blade server running 20 VDI sessions from our standard Windows 10 SCCM build. It looks OK: CPU doesn't look high, and the SAN doesn't look to have changed much either. Things will no doubt change when we scale up, but the server it's on is quite old (IIRC, 2 x 6-core Xeons, 128GB RAM).

I'm in a meeting today with a few outside consultants. I'll go over our entire estate and see what they recommend.
 
Take a good look at per user cost - it's unlikely to be very cost efficient at only 25% of your workforce.

We used existing desktop computers for access. There's no point in buying VDI terminals if you already have a desktop on the desk; it only needs to launch a window, so it will last until you have a hardware fault, which could mean 20 years. It also means more flexibility, in that anyone can use that "seat" in the office.

Creating build images and app packages for VDI and keeping them up to date is very different from SCCM. We had one engineer full time on just that and one more on general VDI administration. This is on top of the server team looking after the back end.

Don't underestimate the server team effort required for updating. It's vastly more complex than server infrastructure updating, as all components need to be compatible. In fact, this admin overhead is one reason we bought Nutanix for a very small (under 50) group of users for whom the VDI advantages outweigh the downsides/cost (all the hardware/update headaches are now out of our hands).

Don't underestimate how much users have to shift their thinking with non-persistent desktops.

Check software licensing. Some suppliers still don't account for VDI properly and will charge based on how many users it is available to rather than believing your figures about how many actually use it. Adobe in particular, I believe, are real ****s about it, though our desktop team would know better.

With today's servers you're unlikely to run into CPU issues. RAM and disk performance are king.
 
The biggest issue with VDI is your IT maturity. How experienced and mature are you and your team? When so many users are concentrated on such a small patch of infrastructure, any misstep is catastrophic and has enormous consequences. Nobody that I know (including myself) who has deployed VDI thinks much of it. It's fine, but there are no clear wins, just pros and cons like anything else.
 
I know I sound very down on it, but I actually like various bits of it. As above, it's simply another technology that has upsides and downsides.

The biggest advantage of VDI for us is that it got us started with App-V, and beyond the small group of VDI users we have left, that is the only bit we're keeping.
 