I am guessing you have Proxmox installed on an SSD, with a separate HDD for additional storage?
Yep.
Just as general advice, I wouldn't be using BtrFS as storage for VM images, due to fragmentation and huge performance decreases. See: https://btrfs.wiki.kernel.org/index.php/Gotchas
Ok. You need to mount the HDD in the same way that you would mount it in any other Linux OS. You need to create a mount point and an entry in /etc/fstab on the Proxmox host. For instance, for a single HDD formatted as ext4 you might create a mount point of /mnt/storage and an fstab entry like:
/dev/sdb /mnt/storage ext4 defaults 0 0
I would use the UUID of the HDD instead of the device, if possible though. Google is your friend. Once that's done, reboot and the drive should be mounted.
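To sketch the UUID route: run blkid /dev/sdb to print the drive's UUID and filesystem type, then use that instead of the device name in fstab. With a made-up placeholder UUID it would look like:
UUID=3f2ab1c4-9d8e-4f00-8c1a-123456789abc /mnt/storage ext4 defaults 0 0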
Once the HDD is mounted, click on the Datacentre folder in the left-hand pane in Proxmox, click Add, and you can then define a folder on the HDD as a repository for VMs, containers, backups or whatever.
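If you'd rather script it than click through the GUI, Proxmox also has a pvesm command-line tool that should do much the same thing - roughly like this, where hdd-storage is just an example storage ID:
pvesm add dir hdd-storage --path /mnt/storage --content images,backup - register the folder as directory storage for VM images and backups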
The above is just for a simple single HDD. You may want to look into using btrfs (what I use) or LVM so that you could just add another HDD later on and not have it appear as a separate device.
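To give an idea of the btrfs route for growing later, assuming /mnt/storage is already a btrfs filesystem (device names are just examples - adjust for your system):
btrfs device add /dev/sdc /mnt/storage - add the new disk to the existing filesystem
btrfs balance start /mnt/storage - optionally spread existing data across both disks
btrfs filesystem show /mnt/storage - confirm both devices are now part of it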
What would be the best file system for a RAID array, as in the not too distant future I would like to be able to add more disks and create a RAID array without losing data?
RAID is at block level, not filesystem level, so it doesn't make any difference. However, you wouldn't use a CoW filesystem on top of RAID or LVM, as CoW filesystems have these functionalities built in.
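As a rough illustration of the "built in" point, with btrfs the redundancy is created in the filesystem itself rather than layered on mdadm or LVM (again, device names are placeholders):
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc - mirror both metadata and data across the two disks
mount /dev/sdb /mnt/storage - mounting either member device brings up the whole filesystem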
The setup I'm trying to create is slightly complicated (and I probably haven't explained it well).
Is this just for VM storage? If not, how much data/how many drives do you think you'll end up using? Storage strategy is a deceptively complicated topic.
Also, since Server 2012/R2 doesn't allow my server to be turned on via WOL as it lacks S3/S4 power states, and I want to use a TV tuner to record media to the server, would you have any recommendations on how I could set up a system to do this in a Linux VM in Proxmox? I don't want excess overheads using up resources by running a Windows VM.
Virtualisation, containerisation, both areas to explore. Virtualisation will be easier at the expense of heavier resource usage, however if you have a powerful system and expect to see low loads, this will be nothing to worry about.
Purely due to its maturity, a ZFS RAID-Z2 or RAID-Z3 setup (depending on drive count - the same common sense as with ordinary RAID) would probably be best. Use the SSD for VM image storage, and use VirtFS to share filesystems from the host so that working data lives on the host and gains the benefits of ZFS.
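A rough sketch of what that pool creation would look like - the pool name tank and the six /dev/sd* devices are just examples (ideally you'd use /dev/disk/by-id/ names):
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg - six-disk RAID-Z2, survives any two disk failures
zfs create tank/media - a dataset you could then share out to the VMs
zpool status tank - check pool health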
What tuner?
Currently I'm using a cheap Hauppauge Ministick, but if I could get the concept to work (centralised media device/TV streamer via XBMC etc.) then I would upgrade to a tuner with DVB-T2 support if possible.
I use a kworld USB tuner in a Debian Jessie VM running tvheadend. I can then watch tv on any other device using vlc as a front end. I can also set up scheduled recordings from anywhere via the tvheadend web gui.
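For reference, the two moving parts are passing the USB tuner through to the VM and then pointing a client at tvheadend. A sketch, where the VM ID 100 and the 1234:5678 vendor:product ID are placeholders (lsusb shows the real ID), 9981 is tvheadend's default web port, and the exact playlist path may vary by tvheadend version:
qm set 100 -usb0 host=1234:5678 - pass the tuner through to VM 100 by its USB ID
vlc http://tvheadend-host:9981/playlist - open tvheadend's channel playlist in VLC on another machine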
Excellent stuff. To be honest I don't mind messing with it a bit, but for the TV tuner I would prefer one that works OOTB to make it easier configuring other areas of the VM.
This table http://www.linuxtv.org/wiki/index.php/DVB-T_USB_Devices shows which devices are supported natively by the Linux kernel and some that can be used with installation of additional drivers. If your tuner doesn't appear on the list, it doesn't necessarily mean that it won't work. You really need to find out what chipset it is based on.
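Finding the chipset is usually a quick check from the Linux side:
lsusb - lists USB devices with their vendor:product IDs, which is what the linuxtv wiki keys off
dmesg | grep -i dvb - shows whether the kernel recognised the tuner and whether it's asking for extra firmware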
Like most of these things with Linux, you need to do some research. Some say it's a pain but I enjoy the challenge and the learning. And that's the great thing about using virtual machines - if you mess it up, just remove the VM and spin another one up.
Okay, a quick update as I'm confused on where to go. I've spent a while messing with Proxmox and have found that it's pretty good, but it doesn't offer the easy file storage I need (quickly add/remove storage etc.), so I was wondering if anyone else had any ideas as I'm stumped. Would UNRaid be a good option, as that would allow for VMs and quick expansion of storage arrays?
When you say quickly add/remove storage, do you mean just mounting/unmounting a USB stick, for example? In the command line you can just do the following:
lsblk - list block devices
sudo mount /dev/sdX /mnt - mount device
umount /mnt - unmount device
You may need to create another mount point if Proxmox uses /mnt for permanent storage (it shouldn't, it's technically incorrect, that's what /media should be used for nowadays).
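So for a removable drive a tidier sketch would be:
sudo mkdir -p /media/usb - create a dedicated mount point
sudo mount /dev/sdX1 /media/usb - mount the first partition of the stick
sudo umount /media/usb - unmount it when you're done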
I was thinking more along the lines of expanding storage pools, e.g. increasing the size of a RAID array easily (I know this is not possible, so I was wondering if there was a similar feature like this in a different OS).
This is what I was talking about when it comes to ZFS - it allows you to add drives to your pool, however you do of course require potentially lengthy rebuilds.
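Roughly, you grow a ZFS pool either by adding another whole vdev or by replacing each disk in a vdev with a bigger one and letting it resilver (that's the lengthy rebuild). Adding a vdev, with made-up device names, looks like:
zpool add tank mirror /dev/sdd /dev/sde - add a new two-disk mirror vdev to the existing pool tank
zpool list tank - the extra capacity shows up straight away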
Another option that functions at file level, as opposed to block or filesystem level, is Snapraid (http://www.snapraid.it/).
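For comparison, a minimal Snapraid setup is just a config file plus periodic syncs - the paths below are assumptions for a two-data-disk layout:
In /etc/snapraid.conf:
parity /mnt/parity/snapraid.parity
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
Then run:
snapraid sync - compute/update parity for the current contents of the data disks
snapraid scrub - periodically verify the data against the parity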
They both look like they would satisfy the set requirements, but I also need to try and make this usable by other people (just the NAS side), while keeping the virtualisation side of things as well.
Sorry, I was just throwing ideas at you.