Some noob Server questions

Hi,

Although I'm computer literate and more than able to set up and configure standard Windows computers, I have never really been "into" servers, so my knowledge is limited.
That said, I have managed to set up the below.

Currently running a Dell PE2900 with 2008 R2 (not virtualised)
It is used for AD, DHCP, DNS and file/printer sharing.

I have purchased a second-hand R720 from eBay, and when it arrives I would like to set it up with Hyper-V this time to learn more about it.
It includes 2 x 300GB SAS drives, and I plan to swap the drives from the current server over (if suitable).

Current setup: the OS is on RAID1 using 2 x 76GB 15k SAS drives.
All files/backups/swap file/server backup are on a RAID5 array made up of 4 x 300GB 10k SAS drives.
Then I have a 300GB drive as a hot spare for that array, and a spare 146GB drive as a hot spare for the OS (mainly because it was lying around doing nothing).


As I have never virtualised before, I am unsure how best to partition the drives for it, or how you would normally arrange drives for that.
I have read up on SATA, NL-SAS and SAS and understand the differences, but I'm still not sure what I need as storage in this case. I am currently using 50-60% of the RAID5 array, so it would be nice to be able to increase this, but new drives are (budget-wise) definitely out of the question.
I could pop a 2TB SATA drive in for some non-important data that isn't accessed regularly.
I'm also not sure whether I need 10k or 15k SAS drives. I know the benefit of the faster drives is better IOPS, but do I really need better IOPS, and what benefit would that give me for my uses above?

I have just found some cheapish new sealed 600GB 15k drives on eBay, which I would guess come with no warranty.
 
Generally, the first bottleneck you will encounter in home lab virtualisation is IOPS.

If you're running the services you listed on a single virtualised server then you can expect performance to be roughly the same as what you currently get - maybe a smidge less for virtualisation overhead.
If you're looking to split those services out to their own individual servers, then you might start to encounter performance issues. A saturated disk can manifest simply as slow performance or, if it gets really severe, as the CPU maxing out and virtual drives disconnecting.

It's impossible to really predict how many servers is too many, but as you haven't stated how much memory is in the server, that may well be the limit you hit before you bump up against any disk performance issues.
 
2 x hex-core E5-2640
48GB RAM
8 x 3.5" Hot swap bays

How do you normally install the different VMs?
Do you have one boot drive and partition it for however many VMs you want to run, or do you have separate drives, or does VMware/Hyper-V handle that?
Will 75GB be enough, or should I be using a larger drive now?
I assume you always run RAID1 for the boot drive, as I do now?

I also assume that all the VMs could be set up to access the storage drive and each run their own share from it if I so wished.
Would I benefit from 15k drives on the storage partition?
Currently the pagefile is on the storage drive. With VMs, am I going to have an individual pagefile for each virtual OS? I guess if that is the case I would probably want to consider increasing the storage size, as three or four 48GB pagefiles are going to eat into my storage somewhat.

I am not looking to split the services on the current machine into different VMs (at least not for now). In the first instance I am looking to gain experience actually installing Windows in a VM environment, and then perhaps use a separate VM to play around with settings without worrying about breaking things.
Maybe at some point I'll have a mess around with running an Exchange server.
 
My experience is limited, but unless your VMs require huge amounts of RAM and drive space, I really don't see any issues with the server itself. However, I'd throw an SSD into the equation to store the VMs on. The difference in speed and general responsiveness over an HDD is very noticeable, no matter how many RPM it can do.
 
I've only ever used VMware, so your mileage may vary with Hyper-V.

How do you normally install the different VMs?
Do you have one boot drive and partition it for however many VMs you want to run, or do you have separate drives, or does VMware/Hyper-V handle that?
Will 75GB be enough, or should I be using a larger drive now?
I assume you always run RAID1 for the boot drive, as I do now?

Current accepted wisdom is to install and boot the VMware ESXi hypervisor off a USB stick or SD card - it's a little slow to boot, but it all sits in memory, so once booted it makes no difference to performance.

You could keep your current RAID configuration and have a 75GB datastore for your OS virtual disk and the RAID5 datastore for everything else.
A disk in a virtual environment is just a file, so it's down to you where you place the disks - OS drive on your faster RAID1 disks seems sensible to me.
Our standard OS drive is 40GB and is ample in 90% of cases. 75GB should be fine to start with, and if it isn't, it's very easy to migrate the virtual disk to another datastore.
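
If you do end up on Hyper-V rather than ESXi, the PowerShell equivalent of that layout is roughly the below - I'm a VMware person, so treat it as a sketch, and the VM name, paths and sizes are just placeholders (assuming the RAID1 volume is C: and the RAID5 volume is D:):

# Sketch only - OS disk on the fast RAID1 volume, data disk on the RAID5 volume
New-VM -Name "TestServer01" -MemoryStartupBytes 4GB -NewVHDPath "C:\VMs\TestServer01\os.vhdx" -NewVHDSizeBytes 75GB

# Separate data disk on the RAID5 volume, attached to the same VM
New-VHD -Path "D:\VMs\TestServer01\data.vhdx" -SizeBytes 200GB -Dynamic
Add-VMHardDiskDrive -VMName "TestServer01" -Path "D:\VMs\TestServer01\data.vhdx"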

I also assume that all the VMs could be set up to access the storage drive and each run their own share from it if I so wished.
Would I benefit from 15k drives on the storage partition?
Currently the pagefile is on the storage drive. With VMs, am I going to have an individual pagefile for each virtual OS? I guess if that is the case I would probably want to consider increasing the storage size, as three or four 48GB pagefiles are going to eat into my storage somewhat.

Pretty much - the storage drive would just become a datastore, and you can create virtual disks within it for as many servers as you can support.

Any increase in overall IOPS will help, but I'd probably go with what you have - if you try to run a second server and start having issues, it might be worth the investment.

In virtual environments, as in physical ones, if you end up actually having to use a paging file your performance is going to absolutely tank - so much so that VMware actually includes the ability to use an SSD as a page-file caching drive to help alleviate the problem in environments where more memory has been allocated than is actually available.
 
I would seriously look at those 73GB SAS drives and debate replacing them with small SSDs.
By running VMs you are going to increase IO contention (even if it only manifests during boot and shutdown).
And SSDs handle contention so much better than mechanical drives, even if they are 15K rpm ones.
 
Server arrived and setup is progressing slowly. Windows updates on 2012 R2 and 2008 R2 are taking an absolute age :(
I originally set it up with 2 x 300GB SAS drives in RAID1 and have installed Hyper-V and all the VM files on there too.
I didn't understand how this worked before I set it up, but it is now much clearer - the VM actually just becomes a file on the drive.
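
As an aside, it turns out you can see exactly where each VM's files ended up from PowerShell on the host - something like the below (output will obviously depend on your own VM names and paths):

# List each VM's virtual disk files and configuration location on the Hyper-V host
Get-VM | Get-VMHardDiskDrive | Select-Object VMName, Path
Get-VM | Select-Object Name, ConfigurationLocation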

I have managed to install OMSA on the Hyper-V host as well so that I can access hardware information.

The next step: I have installed 4 x 600GB SAS drives in a RAID5 array, which is currently formatting with diskpart.
How do I pass this through so that it can be used by a VM?
Would I be right in thinking that I need to create a virtual drive on it, then add that virtual drive to the VM?
And once I do pass it through, I assume that no other VM will be able to access any files created on it?


Then, moving back to something someone mentioned further up the thread: separating the different roles.
What is the general best practice in terms of separating server roles?
Should I have individual VMs for DHCP/DNS/AD/file server?


Sorry for the noob questions - this is a learning curve for me. :)
 
Server arrived and setup is progressing slowly. Windows updates on 2012 R2 and 2008 R2 are taking an absolute age
I'd advise downloading the latest 'cumulative security and quality update' and installing that first - it'll likely supersede a lot of updates, so you'll have far fewer patches to install overall. The install mechanism tends to hog a whole CPU core to itself, so if you've only given them a single vCPU, general performance will tank during updates.
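
If you grab the rollup as an .msu from the Microsoft Update Catalog, you can install it from an elevated prompt with wusa - the filename below is just an example standing in for whatever package you download:

# Install a downloaded cumulative update (.msu) silently, without an automatic reboot
wusa.exe "C:\Updates\windows8.1-kb9999999-x64.msu" /quiet /norestart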

How do I pass this through so that it can be used by a VM?
Just to clear up some VM jargon: pass-through in a virtual sense is when you pass direct control of a physical device through to a virtual machine - whether that's a graphics card, a TV capture device or, as you've suggested, a storage adapter.
There are lots of reasons to want to pass video devices through, but far fewer for storage devices - and I don't think you actually want to do that in this case, as it would mean the drives attached to the storage adapter could only be used by a single VM.

As far as I know, attached storage in Hyper-V will be accessible like any drive in a Windows installation, so you can just format the drive, give it a drive letter and it's available for use. Again, I'm a VMware user, so I'm unfamiliar with the exact process.
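
From what I understand of the Hyper-V side, the PowerShell for that is roughly the below - the disk number, drive letter, paths, names and sizes are all placeholders, so check them against your own box first:

# Bring the new RAID5 virtual disk online, partition it and format it
# (disk number 1 and drive letter E are placeholders - confirm with Get-Disk first)
Get-Disk
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E
Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "VMData"

# Then create a virtual disk on that volume and attach it to an existing VM
New-VHD -Path "E:\VHDs\fileserver-data.vhdx" -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName "FileServer01" -Path "E:\VHDs\fileserver-data.vhdx"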

What is the general best practice in terms of separating server roles?
For a production environment, best practice in my opinion is resilience first, then separation. There's no point separating all the roles neatly onto their own servers if, the first time the DNS server goes down, you lose access to everything anyway.
If the number of servers were absolutely fixed, you'd be better off combining DHCP and DNS and running them on two servers.
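
As an example of the resilience side: if you do end up with two servers carrying DHCP, failover between them is built into 2012 R2 and is more or less a one-liner from PowerShell - the server names, scope and secret below are made up:

# Rough sketch of 2012 R2 DHCP failover between two servers (all names and values are placeholders)
Add-DhcpServerv4Failover -ComputerName "SRV01" -Name "SRV01-SRV02-Failover" -PartnerServer "SRV02" -ScopeId 192.168.1.0 -LoadBalancePercent 50 -SharedSecret "ChangeMe"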

In a lab environment I see no issue with chucking it all on a single server.
Fileshares definitely shouldn't be on a domain controller. For testing it'll be fine, but I wouldn't be hosting the family's media on it.
 
It's in a testing environment at the moment but will soon be in production, so I might as well set it up with best practice.
Having done a bit of reading, it seems most people suggest separating DHCP from the domain controller (though it's not a necessity), keeping DNS on the same server as AD because AD relies on DNS, and finally, as you suggested, separating out the file share server.
The number of servers is not set, as the server I bought actually came with a 2008 R2 Datacenter COA on it, so I can pretty much install as many servers as I want to.

As far as using the additional storage space goes, is the "best" way simply to create a virtual HDD on it and let the VM use that?
 