lol I was mainly after a system to store all my files in, which is quite a few TB (in the 100s), and I don't want them rotting away. Everything else was an afterthought. I have a Windows 10 PC that's sharing files 24/7, which is why I was asking about a VM server, and I wanted to set something up that stops ads at the router, which is why I mentioned Pi-hole. I'll probably be using a hell of a lot of WD Gold HDDs. I brought up TrueNAS as it's the first one that came to mind and I like the UI. I pay the power bill. I only had the idea to pick up a Xeon because they're so cheap on eBay.
What I'm looking for in a server:
The ability to hold a lot of HDDs
I was thinking RAID 6
I will be storing music, program files, video files, every single retro game ever to come out, etc.
Yes, I want to be able to copy them to the server and then play them off it
I've never used Plex myself, so I probably won't use that
The Windows PC just shares files
ECC RAM
I don't really have a budget
I'm not sure if you understand the implications here, so let me be absolutely clear: running large disk arrays on-premises will cost you a small fortune in power, both directly and indirectly. Using the published UK average electricity price, it's £248/yr +VAT +CCL to run 100W of local server 24/7, and that's going up significantly again later in the year.
Before you go any further, please understand that your average disk pulls 5-7W spun up at idle; add 50% for heavy load. So let's say you build a modest system with 24 drives (ignoring the obscene price of server chassis in recent years): that's, for simplicity's sake, 175W of idle drives. Chuck in 60W for an idle system ('cheap' Xeon servers on eBay can be anything up to 3-6x that) plus a pair of 4i HBAs, and you're likely looking at 250W+ just to sit idle and run nothing. To put that in context, your electricity bill is now £620.50 +VAT +CCL higher each year, and that's without actually doing any form of media management (honestly, the fact you don't seemingly use a media manager/front end horrifies me). Let's say you discover the joys of media automation and start running some background services, so your CPU/IO is actually being used: 60W becomes 110W and your bill is now £744.60 +VAT +CCL. Now add in the cost to purchase the hardware, the spare drives, the UPS, and the administration costs of dealing with backups (RAID is NOT a backup), and the upfront spend could easily run into thousands, while the ongoing costs (which will go up significantly again later this year) are going to be getting on for a grand a year. You also have the noise and the need to cool it all in the summer - my office hit 26°C this week with only a laptop running; if the rack was on, I'd need to drag the AC unit into service and have it running 24/7, and again, that's not cheap.
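If you want to sanity-check those figures yourself, here's a rough sketch of the maths (the ~28p/kWh unit rate is just what the £248/yr-per-100W figure implies, excludes VAT and CCL, and will vary with your tariff):

```python
# Rough annual running-cost estimate for a constant 24/7 load.
# Assumption: ~£0.283/kWh, implied by the £248/yr-per-100W figure above (ex VAT/CCL).

HOURS_PER_YEAR = 24 * 365        # 8,760 hours
PRICE_PER_KWH = 0.283            # GBP per kWh, assumed UK average unit rate

def annual_cost(watts: float) -> float:
    """Annual electricity cost in GBP for a constant load of `watts`."""
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * PRICE_PER_KWH

for load in (100, 250, 300):     # baseline, idle 24-bay server, server under light load
    print(f"{load}W 24/7 -> ~£{annual_cost(load):.0f}/yr +VAT +CCL")
```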
So have you considered whether you can achieve this better a different way? I have off-site servers using remote unlimited storage that cost me less per month than powering up my rack for a few days: zero set-up costs, zero up-front costs, and the hardware is maintained the same day if it dies. This includes backup and versioning. Of course, I am limited to my connection's upload speed when getting my data to the off-site storage, but pulling it back can be as fast as line speed (read: near gigabit for me), so other than extra latency it's not really significantly different from reading off a local NAS, especially if that NAS needs to spin up a drive.
If you choose to ignore the above, then read on...
For as long as I can remember it's been about £/MB or GB or TB, and nobody cared about power or cost per bay. Now (and more so going forward) those are getting more important. Fewer, higher-density drives can be more favourable than smaller drives with a lower £/TB. Most people have a random selection of drives based on whatever had the best £/TB at the time they needed more space, so something like UnRAID works well here. You use the largest drive for parity and the rest are storage. You can then take up to one drive failure with no data loss; if another drive dies, you only lose the data on that single drive and the rest are easily mounted/read. If you want dual parity like RAID 6, you add two parity drives. In read terms you're limited to the network speed unless you've dropped the coin on 2.5/10Gb. Going RAID 6 requires you to have matching drives and you're still limited to the same network bottleneck. ZFS gives you better data security, but expanding vdevs is a BALL ACHE at best, and again you really need drives of the same capacity.
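To make the mixed-drive point concrete, here's a rough capacity sketch (the drive sizes are purely hypothetical, and this ignores filesystem overhead and any reserved space):

```python
# Usable capacity with a mixed bag of drives (hypothetical sizes, in TB).
# UnRAID-style: the largest drive(s) become parity, everything else is usable as-is.
# Traditional RAID 6: every member is effectively limited to the smallest drive,
# and two drives' worth of capacity goes to parity.

drives_tb = [18, 14, 14, 10, 8, 8]   # example collection bought at different times

def unraid_usable(drives, parity_drives=1):
    drives = sorted(drives, reverse=True)
    return sum(drives[parity_drives:])          # largest drive(s) sacrificed to parity

def raid6_usable(drives):
    return (len(drives) - 2) * min(drives)      # all members treated as the smallest

print("UnRAID, single parity:", unraid_usable(drives_tb), "TB usable")
print("UnRAID, dual parity:  ", unraid_usable(drives_tb, 2), "TB usable")
print("RAID 6, mixed sizes:  ", raid6_usable(drives_tb), "TB usable")
```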
If you want to go the data-integrity route, ZFS is superior, BUT you really, really need to understand what you're buying into and what the expansion path looks like; it will hurt you down the line if you don't, and exit strategies tend to be expensive/time consuming. RAID 6 is pretty pointless from a performance perspective in a home environment, and if you want dual parity, UnRAID can do that - you obviously have separate 3-2-1 backups anyway. The files you mention tend to be small, and your clients are generally going to be limited to gigabit without going 2.5/10Gb. ECC is almost always pointless in a home environment; its use was often justified by a selective quote from a NAS distro developer, and the full quote unsurprisingly gave important context that's often ignored because people haven't actually read it, they've just regurgitated it because that's what someone told them, so it must be true. The money can often be better spent on much more beneficial things, like a UPS that will usually see more use... I can count on two fingers how many ECC events have happened in my own experience over the last two decades, let alone ones where ECC was actually useful (one finger at best).
All ASRock X/B-series AMD boards supported ECC; with other OEMs it was hit and miss despite the chips supporting it, though as pointed out above it's chip-dependent. Some AMD boards will even run without a GPU at all, as I found in the quest to free up some slots.