Early threadripper worth it for Unraid now?

Currently have a Dell T20 with a Xeon E3-1225 v3 and 16GB of ECC DDR3 running my Unraid server, with 6 x 4TB 3.5" HDDs plus 4 x 2.5" SSDs for cache and specific uses, e.g. VMs.

I have a quad-port Intel gigabit LAN card (for pfSense) plus a flashed LSI RAID card with 2 x SAS connectors for 8 SATA drives, so I can run 12 drives in total. More would be better, as currently not all of the drives are in the main array. I expect I'd need a discrete GPU, so I need at least 3 PCIe x8 slots, ideally more.

The server is used for Plex, VMs, game servers etc.

I'd like to move to a more spacious case and have a little more CPU grunt and memory, so there's less starting and stopping of the VMs and game servers.

Gen 1 Threadripper looks very cheap right now, especially the 1920X. Paired with a low-end X399 board and some ECC RAM, it looks like a great platform for a new server build. The 2920X is around £100 more expensive for around 10% extra performance, so I'm not sure it's really needed.

I assume this would give me roughly 3700X/3800X multi-core performance, though significantly lower single-core.

A 3700X + X570 would be more expensive and limited to 2 DRAM channels with far fewer PCIe lanes. I could get a cheaper B450/X470 board, but these lose some of the SATA ports and often have Realtek LAN. As ECC UDIMMs generally top out around 2666MHz, memory bandwidth could be an issue for the VMs.
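For a rough sense of the dual- vs quad-channel gap, here's a quick back-of-envelope sketch in Python (theoretical peak figures only; sustained bandwidth inside VMs will be noticeably lower):

```python
# Theoretical peak DDR4 bandwidth: channels * 8 bytes per transfer * MT/s.
# Purely illustrative figures, not measurements.

def peak_bandwidth_gb_s(channels: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for 64-bit-wide DDR4 channels."""
    return channels * 8 * mt_per_s / 1000

configs = {
    "AM4, dual-channel DDR4-2666 (ECC UDIMM)": (2, 2666),
    "AM4, dual-channel DDR4-3200 (non-ECC)":   (2, 3200),
    "X399, quad-channel DDR4-2666 (ECC)":      (4, 2666),
}

for name, (channels, speed) in configs.items():
    print(f"{name}: ~{peak_bandwidth_gb_s(channels, speed):.1f} GB/s peak")
```

Whether the roughly 2x gap matters depends entirely on whether the VMs are actually bandwidth-bound.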

Any other options?
 
I asked myself a similar question only a few weeks ago. TBH TR prices have further to fall; new silicon is imminent and that will push prices down even further.

For context, I run 3 servers, 2 Unraid and an ESXi. The main Unraid box is a Ryzen 1700, 32GB, ASUS C6H with an H310, an M2000 GPU and an Intel P1000 2T NIC. My plan was to consolidate (almost) everything to one box.

For me (and your usage may be different), the difference in single-core performance between Ryzen and TR tipped it towards Ryzen: it's more power efficient, has an upgrade path to 16c/32t parts and is due at least one more refresh. ECC is supported in hardware if you need it (BIOS dependent). A cheap X370 board has 8 x SATA and 1 x NVMe (some have two, but the second is usually SATA-only), and the H310 gives another 8 ports. SSDs go to the onboard ports for TRIM support and SATA 6Gb/s speeds, mechanical drives to the H310; both it and the P1000 2T can happily run in x1 slots (which may require modification to open the end of the slots). That plus a 1700 or 2700 is a lot cheaper than TR, ECC support is present in the chipset (BIOS dependent) and the upgrade path to a Ryzen 3xxx/4xxx is clear. I have suffered no ill effects and haven't run out of PCIe lanes. You can buy a board + 1700 CPU and cooler for well under £200 used; you pay quite a lot more for TR and you aren't getting massively more for your money with your stated usage.

Storage-wise, I'd suggest carefully considering why you need lots of drives, or whether you'd be better off with fewer, larger drives and/or cloud-based storage for things like media if you have a reasonably fast connection. I personally quite like Supermicro chassis; the 3U/4U options have decent storage via backplanes.

Memory bandwidth-wise, it really depends on what you're doing, but nothing you've said suggests high memory bandwidth is a major concern.
 
For me (and your usage may be different), the difference in single-core performance between Ryzen and TR tipped it towards Ryzen: it's more power efficient, has an upgrade path to 16c/32t parts and is due at least one more refresh.

I'm confused by this statement. TR, as we know, has the best-binned silicon after perhaps some select Epyc chips, so by default it carries the highest clocks and is the most efficient silicon in the stack, unless we're talking about load TDP? For example, the highest-clocked, best-performing 1st gen 8-core/16-thread part for single-core performance is the 1900X. I don't particularly disagree with anything else you've said, but platform-wise, even considering future CPU upgrades, the TR platform does have significant advantages over Ryzen if you're running many storage or PCIe-connected devices. I won't lie, the platform is still pretty pricey though; it's great for VM work, which is what I bought into it for.
 
I'm talking about real-world usage in a mixed environment at the price point of the 1920X that the OP referenced, which means even some of the 3xxx parts are in play. The OP mentions the 1920X, a first gen 12c/24t part at 3.5GHz (4GHz turbo) and 180W TDP (ouch). While you can find some benchmarks that favour those extra cores, a 2700X (3.7GHz stock, 4GHz turbo, 110W) usually has it beat; the higher core clock and pulling 70W less under full load are clearly more efficient. Cinebench may suggest otherwise, but it's only relevant to Cinebench etc. Even using my normal workload of transcoding, a 1920X pulls 20K in CPU Mark and a 2700X just under 17K. The former has 50% more cores/threads, so why do you only get 17.6% more performance at stock? Single-core performance also suffers on TR, 2024 vs. 2185. The final nail is price: I can pick up a top-end X370 board for £80 (£78 for the Taichi I grabbed in Sept), and you can easily double that for X399, then chuck in slow quad-channel RAM for TR vs. Ryzen running faster RAM; the benefits of Ryzen with faster RAM are well documented. Gaming-wise, it's always been Ryzen; TR is not suited to that kind of workload.
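Putting the figures quoted above into a quick sketch (the CPU Mark, single-core and TDP numbers are just the ones from this post, not re-benchmarked; treat it as illustrative only):

```python
# Back-of-envelope comparison using the numbers quoted in this post.
parts = {
    #            cores, threads, cpu_mark, single_core, tdp_w
    "TR 1920X": (12, 24, 20000, 2024, 180),
    "R7 2700X": ( 8, 16, 17000, 2185, 110),
}

c1, _, m1, s1, w1 = parts["TR 1920X"]
c2, _, m2, s2, w2 = parts["R7 2700X"]

print(f"Extra cores:                  {c1 / c2 - 1:.0%}")
print(f"Extra multi-thread CPU Mark:  {m1 / m2 - 1:.1%}")
print(f"Single-core ratio (TR/2700X): {s1 / s2:.2f}x")
print(f"CPU Mark per TDP watt:        1920X {m1 / w1:.0f} vs 2700X {m2 / w2:.0f}")
```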

TR makes sense where you have a genuine need for high core-count density, aren't power constrained and/or absolutely require the extra PCIe lanes. Nothing the OP has said suggests that applies here. Perhaps your VM work differs from my own, but when I sat down and ran the numbers (and was fortunate enough to have a TR-based server to 'play' on), it just didn't live up to its billing, let alone justify what it would have cost me to build one vs. something more efficient. YMMV.
 
Lots to think about, thanks Avalon.

I currently have 6 x 4TB in the array, 2 x parity + 4 x storage. I'd like to replace the 2 parity drives with 8TB or greater, which would give me the other 2 x 4TB straight back into the pool.
As time goes by I'd then slowly retire drives, replacing them with bigger ones to add storage. I'm not so keen on really large drives as it already takes 12 hours for a parity check with 4TB drives (rough numbers below).
8 drives is about the max I want to go to with this type of system.
The other drives, off the board, are 2 x cache plus a VM/scratch drive.
When I had a quick look at cloud storage, it seemed a bit of a grey area: in some cases it's not really clear what 'unlimited' means, or the storage providers are ones I've never heard of and could disappear overnight.
I'd probably still want to keep 2 copies even if it was copied to the cloud, as if the provider goes bump and I'm left with my local copy, I'm back to one copy.
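As a rough sanity check on parity-check times vs. drive size (assuming a flat average read speed, which real drives don't hold across the whole platter, so take it as a ballpark):

```python
# Unraid parity checks read every disk end to end, so the time is set by the
# largest drive. Flat average speed assumed; ~95 MB/s roughly matches the
# reported 12 hours on a 4 TB drive.

def parity_check_hours(drive_tb: float, avg_mb_s: float = 95) -> float:
    return drive_tb * 1e12 / (avg_mb_s * 1e6) / 3600   # decimal TB and MB

for size_tb in (4, 8, 14):
    print(f"{size_tb} TB drive: ~{parity_check_hours(size_tb):.0f} h per check")
```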

The controller I have is an LSI rebrand, possibly HP, though it was listed as PCIe 2.0 x8. I'd have thought anything less than PCIe 2.0 x4 would be a bottleneck, as 1 PCIe 2.0 lane is 500MB/s while a single HDD can hit north of 150MB/s these days.
When it's doing a parity check, all drives are accessed together. The LAN card is PCIe 2.0 x4, though it's unlikely I'd fully load more than 1 or 2 ports with internal file transfers.
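A quick back-of-envelope on the HBA link itself (the per-lane and per-drive figures are assumptions, not measurements):

```python
# Does the flashed LSI card's slot bottleneck a parity check, when all the
# spinning drives are read simultaneously? Assumed throughput figures only.

PCIE2_LANE_MB_S = 500   # ~usable bandwidth per PCIe 2.0 lane
HDD_SEQ_MB_S = 180      # outer-track sequential rate of a modern 4 TB drive

def headroom_mb_s(lanes: int, drives: int) -> float:
    return lanes * PCIE2_LANE_MB_S - drives * HDD_SEQ_MB_S

for lanes in (1, 2, 4, 8):
    h = headroom_mb_s(lanes, drives=8)
    verdict = "fine" if h >= 0 else "bottleneck"
    print(f"PCIe 2.0 x{lanes} with 8 HDDs: {h:+.0f} MB/s headroom -> {verdict}")
```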

I had considered a 2700/2700X, though the cheap deals on motherboards no longer seem to exist. An X370 Taichi is double the price you paid, and an X470 Taichi is dearer still.
In that case the 1920X + motherboard is around £100 more expensive than a 2700X + mid-range board. There are some cheaper X370 boards about, but I'd need to look at quality and ECC compatibility, which is sketchy for a lot of brands, and once NVMe drives are included you're lucky to get 2 x PCIe x8 plus one or more x1 slots.


Plenty to think on.
 
Cloud storage = Google; they pretty much own this market and I get the feeling they're not going anywhere anytime soon. As to 'unlimited', the largest account I personally know of is measured in PBs, not TBs, so while they obviously do have some limits (400k files per team drive, 750GB/day upload, 10TB/day download on standard drives), capacity is constantly expanding at a rate that exceeds usage and the limits rarely bite in practice. Not bad for £8/m, but other options exist. Have a look at the Unraid forums; it's a well-documented process and has been for years.
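For what it's worth, those per-account caps also put a floor on how long an initial seed or a full restore would take, regardless of your line speed (library sizes below are just examples):

```python
# Minimum transfer times imposed purely by the quoted per-day caps
# (750 GB/day upload, 10 TB/day download); local line speed ignored.

UPLOAD_CAP_GB_PER_DAY = 750
DOWNLOAD_CAP_TB_PER_DAY = 10

for library_tb in (8, 20, 40):
    seed_days = library_tb * 1000 / UPLOAD_CAP_GB_PER_DAY
    restore_days = library_tb / DOWNLOAD_CAP_TB_PER_DAY
    print(f"{library_tb} TB library: initial seed >= {seed_days:.0f} days, "
          f"full restore >= {restore_days:.1f} days")
```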

Board-wise, take my C6H as an example: it has an Intel I211-AT onboard and 3 physical x16 slots, though realistically that's 2 x PCIe 3.0 x8 and 1 x PCIe 2.0 x4 if everything is populated. It has 8 usable SATA 6Gb/s ports and an M.2 NVMe slot. You said you only want to run 8 drives plus SSDs (it gets fun at 20+). Obviously you run the SSDs on the native controller, as the LSI won't pass TRIM commands unfortunately (the HP H220 did, iirc, and was PCIe 3.0). Your PCIe 2.0 card is ideal in the PCIe 2.0 x4 slot: no bandwidth limit with 8 mechanical drives, and as the drives are SATA rather than SAS they run at SATA 3 speeds. That leaves two physical x16 slots plus the x1 slots (which share bandwidth with the 2.0 x4 slot). Either way you're able to run a GPU at full speed (if required; the C6H boots without a GPU) and have up to 16 drives plus an NVMe SSD on Ryzen.

I had a quick look and could find X370 boards from £70 and X470 boards for under £90. You can also get B450 boards with Intel NIC(s) and 4 memory slots from ASRock/ASUS/Gigabyte/MSI etc., and a quick look shows they're under £60. Ryzen prices seem to have bounced back slightly, but bargains can still be found, particularly on the MM.

Nothing that's been raised so far suggests you'll benefit in any obvious way from going with TR, which is what you asked. If you want one, have some future scenario you've not mentioned that might benefit, or get a stupidly cheap deal, then have at it. Your money, your choice :)
 
So after a bit more digging, and a timely Gamers Nexus video published yesterday, I now agree that Threadripper is not good value for my needs, so there's no rush to buy.
I have been looking for suitable AM4 motherboards. There are plenty of inexpensive boards, but very few with any evidence of ECC support, Intel LAN and a reasonable PCIe layout (x8/x4/x4, x8/x8/x4 etc.).
The cheapest seems to be the ASUS PRIME X370-PRO, but there's only 1 Transcend ECC kit on the Zen QVL list, nothing on the Zen+ QVL list, and very mixed forum comments, mostly from 2017. The C6H has no ECC support, which leads to the X370/X470 Taichi, as no other potential ECC boards have more than x8/x8/x1; it seems to be mostly ASRock and a few ASUS.

The ASRock Rack X470D4U looks interesting, if a little expensive, though no more so than a Threadripper board. x8/x8/x4, ECC, 6 x SATA, 2 x M.2, 2 x Intel NIC, IPMI. It's only for Zen+ or Zen 2, which is fine.
No unnecessary frippery, so it should be good for power consumption.


I've looked at Google storage; Google Drive now seems to have been replaced by Google One, with tiers at 2TB for £8 per month, 10TB for £80 per month and 20TB for £160 per month... so ouch.
There is an 'unlimited' G Suite tier for £8pm, but the small print says 1TB.

Anyhow, thanks for the detailed replies on TR, a different perspective helps.
 
I think I sit in the camp where the core count for what I needed to run was high, as were the IO and CPU load on some database servers where I'd be spinning up entire VM estates. I needed a rig where I could comfortably play with 20+ TB estates during the day as well as fire up a game in the evenings. I ended up with a 1950X, 64GB (8 x 8GB) G.Skill 3466, 6TB of NVMe storage (3 x 2TB FireCuda drives), 48TB of spindle storage, and it runs anything from 1 to 4 GPUs (right now it's got 3 in it: a Radeon VII, a Vega 64 and an RX 550). A lot of what I run on mine is more corporate IO-heavy workloads such as indexers like IDOL 10, lots of big data analytics, stuff like that.

As for the OP: if you can get away without TR, as Avalon says, and you don't need that core count, then for sure the AM4 platform seems a more reasonable shout. If you see yourself needing masses of fast storage (the C6H, for example, cannot run 2 x NVMe at full speed, whereas X399 would be more than happy running 3 of them) and GPUs, and it's genuinely only £100 more for an X399 board like the Taichi, then you're not exactly losing out by going X399.
 
So basically you’re saying if I’d made a YouTube video you’d have taken my advice sooner :D

What's the use case for ECC? I ask as someone who has specified it in business for 25-odd years and seen near-zero examples of it logging an error and fixing it. Even the often (mis)quoted post from the FreeNAS team doesn't actually suggest it's required, merely that it's another thing that can be used to further reduce risk; if you're dropping a few grand on hardware, UPS, licences etc., then it's a nice-to-have. In reality memory failures are rare, and conditions where a bit flip occurs and is fixed by ECC are even rarer in my personal experience. On business-critical data where my livelihood and ability to trade is on the line, I specify it; for everything else, redundancy and 3 backups are generally ample, but YMMV.

G Suite Business still offers unlimited for under £8/m; it's been that way for years and the single-user clauses have never been enforced. You need a domain, but those range from free to a few quid a year.

Completely different criteria to the OP. As I said, if you actually have a need for high core count (and let's not forget Ryzen is at 16c/32t) and additional PCIe lanes, then a case could be made for TR. People that usage scenario applies to generally don't run Unraid, and because they are usually replicating work environments at home, they're already intimately familiar with the hardware requirements.

The C6H point on NVMe doesn't strike me as accurate. The Ryzens discussed bring 20 PCIe 3.0 lanes to the party at 985MB/s per lane, and the X370 chipset adds 4 PCIe 2.0 lanes at 500MB/s each. ASUS produce the Hyper M.2 x4, a quad-NVMe PCIe 3.0 card; it'll run 4 NVMe drives at full speed in an x16 slot, and the single onboard slot makes 5 in total, all running at full speed. Now, as most people realised a long time ago, the headline speed of NVMe vs. AHCI is largely irrelevant in most normal usage scenarios, so it's theoretically possible to run a pair of Hyper M.2 adapters with 4 NVMe drives each, topping out at roughly 1.97GB/s per drive in x8 mode. If you really like NVMe storage you could throw another single-drive PCIe adapter in the other slot and again have 2GB/s on 4 PCIe 2.0 lanes, for a total of 10 NVMe drives. Ryzen doesn't support NVMe RAID in the same way TR does, but for something like Unraid or any other software-based solution that's irrelevant anyway. Oh, and you still have 8 onboard SATA ports for either spindles or AHCI SSDs.
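Roughly, in numbers (985MB/s per PCIe 3.0 lane and 500MB/s per 2.0 lane as above; the layout is the hypothetical maxed-out one just described, not a tested build):

```python
# Hypothetical 10-drive AM4 NVMe layout from the post above, back-of-envelope only.
PCIE3_LANE_MB_S = 985
PCIE2_LANE_MB_S = 500

layout = [
    # (description, drives, lanes per drive, MB/s per lane)
    ("Onboard M.2 slot (PCIe 3.0 x4)",        1, 4, PCIE3_LANE_MB_S),
    ("2x Hyper M.2 cards in x8/x8 (4 each)",  8, 2, PCIE3_LANE_MB_S),
    ("Single adapter in chipset PCIe 2.0 x4", 1, 4, PCIE2_LANE_MB_S),
]

total_drives = 0
for desc, drives, lanes, per_lane in layout:
    per_drive_gb_s = lanes * per_lane / 1000
    total_drives += drives
    print(f"{desc}: {drives} drive(s) at ~{per_drive_gb_s:.2f} GB/s each")
print(f"Total NVMe drives: {total_drives}")
```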
 
The point I was making with the C6H, which incidentally I do think is a great board, was that without additional hardware such as the PCIe card there are certain limitations in terms of what's available on the board itself. On top of that, if you're going to go and buy a PCIe card that carries the ASUS tax, then you're completely negating that £100 cost saving you made going AM4.

At the end of the day I think a fairly decent case can be made for either platform and both will do the job without issue. For me, at a £100 difference, I think I'd still take TR4, because ultimately it is the better, more flexible platform from a pure I/O perspective.
 
Personally I swore I'd never buy another ASUS product years ago; the company has been a joke for at least a decade. I needed something shortly after launch and a friend had reviewed a C6H (the OP would believe them, they make YouTube videos :D ) and owed me a favour... TBH I'm pretty sure they got a laugh out of knowing how much I hated ASUS, as they now seemingly try to shoehorn ASUS products into my life at any opportunity.

At £100 I can perhaps see the argument, but not when the actual difference is at least twice that. The Hyper M.2 x4 is £41.30; I paid £161 for an X370 Taichi and a Ryzen 1700; the cheapest X399 board I can see today on a well-known auction site is £199; and the cheapest UK-based seller of a TR 1920X is £249.07, and that ignores cooling, RAM and ongoing costs. In the OP's case it's worse than that, as they haven't said they want or need extra NVMe, so it's £161 vs. £448.07. Even today an X370 board + 2700X is only £216.16 BIN, so that's a £231.91 premium for what we both seemingly agree will give the OP no real performance benefit and possibly performs worse in his scenario. It just doesn't add up.
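Totting the quoted prices up (used/BIN prices at the time of writing, straight from this post; cooling, RAM and running costs excluded):

```python
# Platform cost comparison using the prices quoted in this post.
am4_paid   = {"X370 Taichi + Ryzen 1700 (what I paid)": 161.00}
am4_today  = {"X370 board + 2700X (BIN today)": 216.16}
x399_today = {"cheapest X399 board": 199.00, "TR 1920X": 249.07}

def total(build: dict) -> float:
    return sum(build.values())

print(f"AM4 as bought:           £{total(am4_paid):.2f}")
print(f"AM4 today (2700X):       £{total(am4_today):.2f}")
print(f"X399 + 1920X today:      £{total(x399_today):.2f}")
print(f"TR premium vs AM4 today: £{total(x399_today) - total(am4_today):.2f}")
```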
 
I guess ECC is a personal choice. I do have a WHS 2011 server running on a 10-year-old AM3 motherboard with 4GB of DDR2 RAM; it was used with XP MCE for a couple of years, then I went with WHS and moved the box out of the lounge.
It's been running ever since, though for the last couple of years it's just been running backup processes for other machines and is firewalled off from everything else. With some more drive bays I'd either virtualise it or move to an alternative backup solution.
It doesn't have ECC and the world hasn't ended yet.

The case in my head is:
The server will run 24/7 with rare reboots; Unraid often gets over 180 days of uptime.
It's likely to be in use for 6 or more years, possibly 10 like the WHS box.
Unbuffered ECC RAM isn't that much more expensive; I found new Samsung 16GB ECC UDIMMs at £75-£90 depending on speed.
This server will have at least 32GB on a smaller process node, so more opportunity for failure.
Faster desktop RAM at XMP settings may be more likely to fail over a long service life.
Going with slower (non-XMP) RAM negates most of any performance advantage of non-ECC memory.
A 'server' or premium board is likely to be more reliable.

The majority of the duties are non-critical.
These days I prefer the box to just run quietly in the background with no issues.
Given the cost of case, PSU, drives etc., ECC isn't that much extra.

More a question of why not?

If it saves me a day of troubleshooting or restoring 20TB from backups over its life, I'd consider it money well spent.


As for other capabilities, it's mainly a NAS that runs VMs, a few local game servers and Plex (with supporting apps), so I don't need super-fast storage or masses of number crunching.
The existing 4-core (no HT) CPU is feeling a bit tight with the VMs I run, 2 of which are Windows, so more cores and 32/64GB of RAM should set me up for a good while.
 