New server: To Proxmox or not to Proxmox.

Associate
Joined
3 May 2018
Posts
604
Coming to the end of a rather nice time spent hiding from the high-price market with Dell eWaste machines, I decided to buy something made this decade.

Going out: Dell OptiPlex, i7-4490, 16GB DDR3.
Built now: Asus Prime B550-Plus, Ryzen 5 5600G, 32GB DDR4-3600.

Storage will just move over, but most of the "online" spinners in the USB enclosure can move into the case and go native SATA.

I did upgrade, however, after an SSD failure was the last straw: dual 1TB NVMe boot drives in RAID1.

Initially I just installed Ubuntu Server on the new boot drives. It installed seamlessly with software RAID1 and LVM volumes.

But then YouTube recommended me an online course on Proxmox and I got to thinking.

The server is about 90% containerized already. The only native services are file shares and SSH. Its other main load is running dev VMs. I am a career developer, but at home I spread myself really thin and it's more manageable to build VMs for specific dev environments. Besides, I can then access them from any PC in the house, even the living room TV, which has been used to fix some lighting automation while watching Netflix.

There are "custom" message bus microservice applications (my home automation) which might not play nice with Proxmox. It's currently docker-compose, but I am considering moving it to k8s via "kind". That is a pretty complicated automated docker setup process....

... but then again, people do keep reminding me that I can always just run Ubuntu Server as a VM and do my custom non-Proxmox docker foolery there. In fact, a likely 'easiest path' migration to Proxmox would be to just run an Ubuntu Server VM, set it up as a clone of my actual server and... take it from there.
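If I do go the kind route mentioned above, the bootstrap I'd be automating is roughly this shape (cluster name, config file and image names below are made up; just a sketch, not my actual setup):

```python
#!/usr/bin/env python3
"""Rough sketch of the 'kind' bootstrap I'd be automating.
Cluster name, config file, images and manifest dir are placeholders."""
import subprocess

CLUSTER = "home-automation"        # hypothetical cluster name
KIND_CONFIG = "kind-config.yaml"   # node/port mappings would live here
MANIFEST_DIR = "k8s/"              # converted docker-compose services

def run(*cmd: str) -> None:
    """Run a command and fail loudly so the bootstrap stops on errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create the local cluster inside docker.
run("kind", "create", "cluster", "--name", CLUSTER, "--config", KIND_CONFIG)

# 2. Side-load locally built images so no registry is needed yet.
for image in ("mybus/broker:dev", "mybus/lights:dev"):   # placeholder images
    run("kind", "load", "docker-image", image, "--name", CLUSTER)

# 3. Apply the converted manifests.
run("kubectl", "apply", "-R", "-f", MANIFEST_DIR)
```

Which is still a lot more ceremony than `docker-compose up`, hence the dithering.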

So what are the downsides?
 
Associate
OP
Joined
3 May 2018
Posts
604
Oh dear. First red flag. No RAID during install.

Went digging as to why. The answer...

"We don't think you should use it. Because we use some really stupid O_DIRECT mechanism by default which causes irrelevant miss match between mirrors in garbage sectors."

If I wanted to be told HOW to run my server, I would use Apple.
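For what it's worth, the "mismatch" they are worried about is just a counter md exposes after a check scrub; something like this will show it (a quick sketch; array names will differ per box):

```python
#!/usr/bin/env python3
"""Quick look at md mirror mismatch counters after a 'check' scrub.
Array names are whatever ends up under /sys/block on your box."""
from pathlib import Path

for md in sorted(Path("/sys/block").glob("md*")):
    level = (md / "md" / "level").read_text().strip()           # e.g. raid1
    mismatch = (md / "md" / "mismatch_cnt").read_text().strip()
    print(f"{md.name}: level={level} mismatch_cnt={mismatch}")

# To trigger the scrub that updates mismatch_cnt (as root):
#   echo check > /sys/block/md0/md/sync_action
```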
 
Associate
OP
Joined
3 May 2018
Posts
604
I installed Proxmox on it and had a play last night. It's not bad. Not as annoying as I thought. A lot easier than VirtualBox.

The container hosting is, well, very minimal. So a "DockerHost" VM is fine.

No need for ECC RAM. While it is a server, it's not mission critical and a bit flip in some memory is unlikely to cost me millions of pounds, so no.
 
Soldato
Joined
29 Dec 2009
Posts
7,175
Oh dear. First red flag. No RAID during install.

Went digging as to why. The answer...

"We don't think you should use it. Because we use some really stupid O_DIRECT mechanism by default which causes irrelevant miss match between mirrors in garbage sectors."

If I wanted to be told HOW to run my server, I would use Apple.

Correction, no mdraid :) ZFS is still possible, and certainly recommended if you're not using ECC.
You can always install Proxmox VE on top of Debian too if you want more control over partitions.
 
Associate
OP
Joined
3 May 2018
Posts
604
Yea, I set it up with a ZFS mirror on the boot pair.

I went ahead and trialed it for a few evenings. Basically installed a bunch of VMs, setting up a trial server. I think I like it.

What I'm less sure of is how much hassle it will be if and when it fails for some reason. I suppose it does have more backup options than I'm used to, but it's recovering from a Proxmox boot failure that sounds less appealing. Mirrors will happily mirror disc corruption and mistakes.

Proxmox containers took me a bit by surprise; not what I was expecting coming from docker/k8s. To Proxmox a container (LXC) is really just a VM with kernel-level virtualisation rather than a hypervisor. It has a disk and persistence, and you can SSH to it and install stuff. "Heavyweight containers?" My 16GB system runs 25 docker containers; however, I don't think it would run 25 containers of the Proxmox style.
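For comparison, spinning one of these up from the host shell is basically a one-liner per container; roughly like this (VMID, template and sizes are placeholders):

```python
#!/usr/bin/env python3
"""Sketch of creating a Proxmox (LXC) container via the 'pct' CLI.
VMID, template, storage and sizes are placeholders for my setup."""
import subprocess

VMID = "210"
TEMPLATE = "local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst"  # assumes template already downloaded

subprocess.run([
    "pct", "create", VMID, TEMPLATE,
    "--hostname", "test-ct",
    "--cores", "2",
    "--memory", "1024",                        # MiB - much chunkier than a docker container
    "--rootfs", "local-zfs:8",                 # 8 GiB root disk on the NVMe pool
    "--net0", "name=eth0,bridge=vmbr0,ip=dhcp",
    "--unprivileged", "1",
], check=True)

subprocess.run(["pct", "start", VMID], check=True)
```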

Last night got completely derailed. I started with my dockerhost and decided it would be nice to have a container registry rather than launching docker straight from the development folder (sin!); it would be nice to put some basic process in place. That rapidly spiralled into "Oh wait, did you say GitLab has a self-hosted instance?", so I was still sitting there at 1am trying to get my own GitLab set up in docker.
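The registry bit at least is tiny next to GitLab; something like this gets a private registry running on the dockerhost (a sketch using the docker Python SDK; port and paths are placeholders, no auth/TLS, LAN only):

```python
#!/usr/bin/env python3
"""Minimal private registry on the dockerhost via the docker SDK (pip install docker).
Port and volume path are placeholders; no auth/TLS here, LAN use only."""
import docker

client = docker.from_env()

client.containers.run(
    "registry:2",                              # the stock Docker registry image
    name="registry",
    detach=True,
    ports={"5000/tcp": 5000},                  # push/pull as <host>:5000/<image>
    volumes={"/srv/registry": {"bind": "/var/lib/registry", "mode": "rw"}},
    restart_policy={"Name": "always"},         # survive dockerhost reboots
)

# Then tag-and-push instead of running straight out of the dev folder:
#   docker tag myapp:dev dockerhost:5000/myapp:dev
#   docker push dockerhost:5000/myapp:dev
# (other docker daemons need dockerhost:5000 in insecure-registries for plain HTTP)
```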

I have installed:
Windows 10 Pro Dev VM w/ test software
Ubuntu Server - Dockerhost inc. GitLab
Ubuntu Server - Fileserver

The thing is, as soon as the case arrives the disks are being wiped and I'm starting again. I have free run to play and break it until Parcel Force decide to deliver the case they have had for 2 days now.

Storage. The hardest part of designing this is the split between cheap, big, noisy, power-hungry, slow drives (HDD) and fast, expensive, small, silent drives (SSD). The M.2 storage is faster again.

So I figured I would split the pools that way, as SSD caching is not worth attempting: too many risks.

Pool disks:
local-zfs - 1TB M.2 ZFS mirror - all VM boot disks and live data disks.
fast-zfs - a pool of about 2TB of SSDs (basically the largest of the available SSDs; the smallest go to thin clients for boot disks).
slow-zfs - a pool of the "online" HDDs, about 14TB.
vault-raid1 - 2TB RAID1 pair of 2TB HDDs - specific, explicit, directory/rsync-based backups.
local-backups - 2TB single HDD - local-zfs backups of boot/root/VM disks etc.

The only backups from slow-zfs are specific files/folders covered by the rsync backups. If I lose that pool entirely I lose a LOT of replaceable media. Anything of my own origin is hopefully in the vault.

I can then create volumes based on the "type" or "category" of stuff and assign them to either the fast or the slow pool. As I invest in more SSDs I can slowly move volumes from slow-zfs to fast-zfs.
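The per-category volume idea, and the later shuffle from slow to fast, should just be datasets plus snapshot/send/receive; roughly this (dataset names are made up):

```python
#!/usr/bin/env python3
"""Sketch of per-category ZFS datasets and later migration between pools.
Dataset names are made up; snapshot + send/receive is the standard ZFS way to move them."""
import subprocess

def zfs(*args: str) -> None:
    print("+ zfs", " ".join(args))
    subprocess.run(["zfs", *args], check=True)

# Carve the categories out as datasets so they can move independently later.
for name in ("media", "photos", "vm-scratch"):        # placeholder categories
    zfs("create", f"slow-zfs/{name}")

# Later, when more SSD space exists: snapshot, replicate to the fast pool, swap over.
zfs("snapshot", "slow-zfs/photos@migrate")
subprocess.run(
    "zfs send slow-zfs/photos@migrate | zfs receive fast-zfs/photos",
    shell=True, check=True,
)
# After checking the copy, retire the old dataset:
#   zfs destroy -r slow-zfs/photos
```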

One thing I need to explore, maybe tonight, is "power saving". I need to check that if I have a pool on a spinning disk and I export/unmount it, or shut down the VM using it, the disk can be powered down with hdparm.
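The shape of the check I have in mind is roughly this (pool and device names are placeholders, and the pool-to-disk mapping would really need parsing out of zpool status):

```python
#!/usr/bin/env python3
"""Rough sketch: if a spinning-rust pool is not imported, spin its member disks down.
Pool name and member devices are placeholders; run as root."""
import subprocess

POOL = "slow-zfs"
MEMBERS = ["/dev/sda", "/dev/sdb"]     # would really come from parsing 'zpool status'

def pool_imported(pool: str) -> bool:
    # 'zpool list <pool>' exits non-zero if the pool isn't imported.
    return subprocess.run(["zpool", "list", pool],
                          capture_output=True).returncode == 0

if not pool_imported(POOL):
    for dev in MEMBERS:
        # -y asks the drive to enter standby (spin down) immediately.
        subprocess.run(["hdparm", "-y", dev], check=False)
else:
    # Otherwise set an idle timeout so the drives park themselves:
    # -S 241 = 30 minutes in hdparm's odd encoding.
    for dev in MEMBERS:
        subprocess.run(["hdparm", "-S", "241", dev], check=False)
```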
 
Associate
OP
Joined
3 May 2018
Posts
604
I believe TrueNAS Scale can host VMs and docker containers.
I might have a play with TrueNAS while I have the server in "lab mode".

Do you need to run other virtual machines?

Just use Ubuntu Server natively.

To go that approach I can just do a "lift and shift" of the current server OS. There, docker is managed manually with docker-compose (developer style) and VMs run in headless VirtualBox.

It's just much less "integrated" than something like pmox. Creating VMs, allocating disk space for them, using XLaunch to run the VBox config wizard, making RDP work at the right resolution etc. kinda puts you off just "spinning one up for a quick test". In pmox you can spin up a VM from a clone, test something and delete it again a lot more easily and a lot faster.
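That clone-test-delete loop is essentially three commands on the Proxmox host; roughly (template VMID and numbers are placeholders):

```python
#!/usr/bin/env python3
"""The throwaway-VM loop on the Proxmox host via the 'qm' CLI.
Template VMID, new VMID and name are placeholders."""
import subprocess

TEMPLATE_VMID = "9000"     # a pre-built Ubuntu template, hypothetical
TEST_VMID = "1234"

def qm(*args: str) -> None:
    subprocess.run(["qm", *args], check=True)

# Full clone so the test VM doesn't depend on the template's disk.
qm("clone", TEMPLATE_VMID, TEST_VMID, "--name", "quick-test", "--full")
qm("start", TEST_VMID)

# ... poke at it over SSH / the console, then throw it away:
qm("stop", TEST_VMID)
qm("destroy", TEST_VMID, "--purge")
```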

The whole idea is to embrace a bit more abstraction/virtualisation at the lower levels. VMs are part of that, but also the dynamic disk pools aspect.

At work I am best described as a "power consumer" of such highly virtualised environments. I use K8s, GitHub DevOps, AWS etc. in enterprise environments. (In one recent case, a native Hadoop/Spark cluster with over 5,000 nodes running on over 3,500 blades, totalling (for the quota slice I could see) over 12,000 cores and 400TB of memory. I ran an SQL query for some test data and it took 5 minutes to execute. The query report, however, said the total CPU time was... just over a year. Storage accessed was measured in petabytes.)

Right now work is slow and I'm not sick to the back teeth of looking at the overly secured versions of all these setups, so I can stomach a bit of "home lab" play.

The anchor I keep having to attach to myself here is "Home server", NOT LAB! I can play to my heart's content with stuff on a grander scale on a DIFFERENT BOX. The server I am building here should sit in the corner, collect dust and be forgotten about; once set up it should do its job with minimal intervention and ZERO 3am "Ah, damn it" moments.

I can follow up with a lab, maybe, if there is capacity on the same hardware in VMs, but I have other options for "lab" work which require less horsepower than an in-service server.
 
Soldato
Joined
14 Jun 2004
Posts
5,454
At work I am best described as a "power consumer" of such highly virtualised environments. I use K8s, GitHub DevOps, AWS etc. in enterprise environments. (In one recent case, a native Hadoop/Spark cluster with over 5,000 nodes running on over 3,500 blades, totalling (for the quota slice I could see) over 12,000 cores and 400TB of memory. I ran an SQL query for some test data and it took 5 minutes to execute. The query report, however, said the total CPU time was... just over a year. Storage accessed was measured in petabytes.)
BUT can it play Crysis....

That's a lot of power there.

Just play around. You might also want to consider XCP-ng.
It's like Xen but free and has centralised management.

One downside to Proxmox is the lack of good centralised management options.

Have you looked at https://www.jenkins.io/ & Terraform? I guess you have.
 
Associate
Joined
16 May 2008
Posts
2,488
Location
Bristol
I run my Proxmox VMs on consumer-grade NVMe SSDs without any redundancy, but use Proxmox Backup Server to do a daily backup to a fat external SATA drive.
It's cheap and simple; if a drive goes I'll lose at most a day's work, which is fine for my use case.
 
Soldato
Joined
14 Jun 2004
Posts
5,454
Proxmox is one of the few hypervisors making hardware passthrough somewhat easy, as well as hiding from the guest OS that it's a virtual machine.
example from 3 years back
Gives a good amount of flexibility on hardware use as well.
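The usual recipe is just a couple of qm set calls plus the hidden CPU flag; roughly this, going by the standard passthrough guides (VMID and PCI address are placeholders):

```python
#!/usr/bin/env python3
"""Sketch of the usual GPU passthrough setup on a Proxmox host via 'qm set'.
VMID and PCI address are placeholders; IOMMU must already be enabled in BIOS/kernel."""
import subprocess

VMID = "101"
GPU = "01:00"        # whole device (all functions) at this PCI address, hypothetical

def qm(*args: str) -> None:
    subprocess.run(["qm", *args], check=True)

qm("set", VMID, "--machine", "q35")                       # q35 machine type for PCIe passthrough
qm("set", VMID, "--hostpci0", f"{GPU},pcie=1,x-vga=1")    # hand the GPU to the guest
qm("set", VMID, "--cpu", "host,hidden=1,flags=+pcid")     # hide the hypervisor from the guest
```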
 