Ideas for new server setup

Ok. You need to mount the HDD in the same way that you would in any other Linux OS: create a mount point and an entry in /etc/fstab on the Proxmox host. For instance, for a single HDD formatted as ext4 you might create a mount point of /mnt/storage and an fstab entry like:

/dev/sdb /mnt/storage ext4 defaults 0 0

I would use the UUID of the HDD instead of the device name if possible, though, since names like /dev/sdb can change between boots.
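To find the UUID (the device name here is just whatever lsblk shows for your drive):

blkid /dev/sdb

The fstab entry then becomes something like (UUID made up for illustration):

UUID=2f1a3b4c-aaaa-bbbb-cccc-0123456789ab /mnt/storage ext4 defaults 0 0

Once that's done, reboot and the drive should be mounted.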

Once the HDD is mounted, click on the Datacentre folder in the left-hand pane in Proxmox, click Add, and you can then define a folder on the HDD as a repository for VMs, containers, backups or whatever.
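If you prefer the command line, I believe the equivalent is pvesm (the storage name and content types here are just examples):

pvesm add dir hdd-storage --path /mnt/storage --content images,backup,iso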

The above is just for a simple single HDD. You may want to look into using btrfs (what I use) or LVM so that you could just add another HDD later on and not have it appear as a separate device.
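For illustration, adding a second disk later looks something like this in each case (device, volume group and logical volume names are hypothetical):

# btrfs: add the device to the mounted filesystem, then rebalance
btrfs device add /dev/sdc /mnt/storage
btrfs balance start /mnt/storage

# LVM: add the disk to the volume group, then grow the LV and the filesystem
pvcreate /dev/sdc
vgextend vg_storage /dev/sdc
lvextend -l +100%FREE /dev/vg_storage/lv_storage
resize2fs /dev/vg_storage/lv_storage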
 
Just as general advice, I wouldn't use BtrFS as storage for VM images, due to fragmentation and severe performance degradation. See: https://btrfs.wiki.kernel.org/index.php/Gotchas

There don't seem to be similar warnings for ZFS from what I have briefly seen; however, seeing as it's also a copy-on-write filesystem, the same should apply. Indeed, this article does point to a reduction in performance, as you'd expect:
http://blog.delphix.com/uday/2013/02/19/zfs-write-performance/

In short, OP, if you want to play around with this, you have two options:

  • Use LVM, as Buffalo2102 stated. XFS and ext4 are common options, with the XFS caveat being that you cannot shrink the filesystem. This is ideal if you require your virtual machines to be portable.
  • If you don't require portability and want to play around with filesystems, you could use VirtFS, often called filesystem passthrough (see the sketch after this list). This may involve having a single normal storage device for /boot, not sure. This would give you all the benefits of BtrFS at the expense of manageability and portability. To say this is a little bit pointless when you're just playing around with VMs is an understatement, and I have no idea if Proxmox supports this in the web UI, however some people are desperate to use CoW filesystems exclusively for whatever reason.
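For the curious, a minimal sketch of what VirtFS looks like at the QEMU level (the shared path and mount tag are made up, and I don't know whether Proxmox exposes any of this):

# on the host, as part of the QEMU command line:
-virtfs local,path=/srv/share,mount_tag=hostshare,security_model=mapped

# in the guest, mount the tag as a 9p filesystem:
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host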
 
Good points lamboman. I don't actually run my VMs from the btrfs volume, they run from the local storage on the SSD. I do store my VM backup images on the btrfs volume though (and elsewhere).
 
What would be the best file system for a RAID array? In the not too distant future I would like to be able to add more disks and create a RAID array without losing data.
 
RAID is at block level, not filesystem level, so it doesn't make any difference. However, you wouldn't use a CoW filesystem on top of RAID or LVM, as CoW filesystems have these functionalities built in.

Is this just for VM storage? If not, how much data/how many drives do you think you'll end up using? Storage strategy is a deceptively complicated topic.
 
The setup I'm trying to create is slightly complicated (and I probably haven't explained it well :D).
I'm trying to create a server which can record and stream TV/media to other devices in the house while also supporting future IoT/security monitoring software and game servers. However, I was wondering if virtualisation is the key to segmenting the server to make the setup easier to manage.

Also, I was hoping to add drives in the future just by slotting them in; however, I realise that if I were to go RAID/ZFS etc. this may not be achievable :)
 
Virtualisation, containerisation - both areas to explore. Virtualisation will be easier at the expense of heavier resource usage; however, if you have a powerful system and expect to see low loads, this will be nothing to worry about.

Purely due to its maturity, a ZFS RAID-Z2 or RAID-Z3 setup (depending on drive count - common sense, as with regular RAID) would probably be best. Use an SSD for VM image storage, and use VirtFS to share filesystems so that working data is stored on the host and gains the benefits of ZFS.
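A minimal sketch of creating such a pool (pool name and devices are hypothetical; in practice you'd use /dev/disk/by-id paths rather than sdX names):

zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg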
 
Also, since Server 2012/R2 doesn't allow my server to be turned on via WOL (it lacks S3/S4 power states), and I want to use a TV tuner to record media to the server, would you have any recommendations on how I could set this up in a Linux VM in Proxmox? I don't want excess overhead using up resources by running a Windows VM.
 
What tuner?

I use a KWorld USB tuner in a Debian Jessie VM running Tvheadend. I can then watch TV on any other device using VLC as a front end. I can also set up scheduled recordings from anywhere via the Tvheadend web GUI.
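In case it helps, you can pass a USB tuner through to a VM on Proxmox with qm (the VM ID and vendor:product ID here are placeholders; lsusb on the host shows the real ID):

qm set 100 -usb0 host=1234:5678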
 
Currently I'm using a cheap Hauppauge ministick, but if I could get the concept to work (centralised media device/TV streamer via XBMC etc.) then I would upgrade to a tuner with DVB-T2 support if possible.
 
This table http://www.linuxtv.org/wiki/index.php/DVB-T_USB_Devices shows which devices are supported natively by the Linux kernel, and some that can be used after installing additional drivers. If your tuner doesn't appear on the list, it doesn't necessarily mean that it won't work; you really need to find out which chipset it is based on.
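A couple of commands will usually identify it:

lsusb                 # note the tuner's vendor:product ID, then search for it on the linuxtv wiki
dmesg | grep -i dvb   # shows whether the kernel recognised it and which driver/firmware it wants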

Like most of these things with Linux, you need to do some research. Some say it's a pain but I enjoy the challenge and the learning. And that's the great thing about using virtual machines - if you mess it up, just remove the VM and spin another one up.
 
Excellent stuff. To be honest I don't mind messing with it a bit, but for the TV tuner I would prefer one that works OOTB, to make it easier to configure other areas of the VM :)
 
Okay, a quick update, as I'm confused about where to go.
I've spent a while messing with Proxmox and have found that it's pretty good, but it doesn't offer the easy file storage I need (quickly add/remove storage etc.), so I was wondering if anyone else had any ideas, as I'm stumped. Would unRAID be a good option, as it would allow for VMs and quick expansion of storage arrays?
 
When you say quickly add/remove storage, do you mean just mounting/unmounting a USB stick, for example? At the command line you can just do the following:

lsblk - list block devices
sudo mount /dev/sdX /mnt - mount the device
sudo umount /mnt - unmount it

You may need to create another mount point if Proxmox uses /mnt for permanent storage (it shouldn't - that's technically incorrect, as /media is what should be used for removable media nowadays).
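For example, to follow that convention (sdX1 being whatever partition lsblk reports for the stick):

sudo mkdir -p /media/usb
sudo mount /dev/sdX1 /media/usb
sudo umount /media/usb    # when you're done with it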
 
I was thinking more along the lines of expanding storage pools, e.g. increasing the size of a RAID array easily (I know this is not possible, so I was wondering if there is a similar feature in a different OS).
 
This is what I was talking about when it comes to ZFS - it allows you to add drives to your pool; however, you do of course face potentially lengthy rebuilds.
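For what it's worth, you grow a raidz pool by adding a whole new vdev rather than individual disks, e.g. (pool and device names hypothetical):

zpool add tank raidz2 /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm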

Another option that functions at file level, as opposed to block or filesystem level, is SnapRAID (http://www.snapraid.it/).
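Its configuration is pretty simple; a minimal /etc/snapraid.conf sketch (all paths hypothetical) looks something like this, after which you run snapraid sync on a schedule:

parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/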
 
They both look like they would satisfy the requirements, but I also need to try to make this usable by other people (just the NAS side), while keeping the virtualisation side of things as well :confused:
 
Sorry, I was just throwing ideas at you.

ZFS is your best choice. SnapRAID isn't a live solution - it'll run a job at intervals. Good for static data, but of little use for your requirements and probably not worth the hassle.

https://pve.proxmox.com/wiki/Storage:_ZFS#Administration suggests that ZFS is purely a command-line solution, unsurprisingly. When you say that it'll need to be usable by other people, what do you mean? I'm guessing just Samba/CIFS access, in which case the underlying filesystem has no effect either way.
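For reference, sharing over Samba is just a normal smb.conf entry regardless of the filesystem underneath (the share name, path and user below are made up):

[media]
   path = /tank/media
   read only = no
   valid users = yourusername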

As previously mentioned, use a standard filesystem for virtualisation, and ZFS for all direct data storage (i.e. not VM images). Sharing would work as previously mentioned (shared directories via VirtFS).

Honestly, your requirements exceed those of the majority of home NAS users. It might be easier to get yourself comfortable in the command line and configure this manually rather than using Proxmox - heck, install CentOS with a GUI and use virt-manager via VNC just like you would on your desktop if you don't want to touch the command line unless absolutely necessary.

I've personally found these pretty-GUI solutions fairly frustrating unless you use them exactly as intended. You'll likely find the same to be true.
 