Plex server with old xeon c602

Soldato
Joined
29 Dec 2002
Posts
7,260
I'm not denying any of that per se (power figures will vary per system), simply saying you need to look at the bigger picture in terms of hardware features :)

Even taking into account a £100 difference in power costs per year (13p/kWh is high, but still..), you're conveniently forgetting the new option will cost £300-£400 more upfront.
If we then assume a lifespan of 5 years (optimistic IMHO) & similar price differentials each time you upgrade, over a 10 year period you're looking at a cost of £20 p/a extra or thereabouts.

To me that's worth it :)
Again though, this by no means suits everyone and makes vast amounts of assumptions.
Do your own sums on the costs / benefits.

13.3p/kWh all-in is cheap - up till a few months ago it was the cheapest combined deal including tax/standing charge available to me on the market (hint: it's the comparison value, not the unit cost). It does vary by region though; if you live next door to a power station, your transit costs are lower.

As to the £3-400 more, you seem to be conveniently ignoring the cost of your hardware when new. Comparing used parts to used parts: my chip was £64, the X99 board £68, 8GB of RAM was £50, and a cooler/case/PSU brought the total to £220ish (though most of those were from my parts shelf from previous builds, so in effect I only paid out for the CPU/board/RAM). It saves just under £110/yr (assuming you do nothing with yours and I run Plex and other related dockers) vs your DinoServ. So unless they paid you several hundred pounds to take it away and run it at idle for 5 years, or you really, really love IPMI/ECC, I'm going with 1366 being dead to anyone who doesn't get free power/hardware. Then again, if you actually paid anything for the box and pay your own power bills, that's just silly :D
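For anyone who wants to check the sums behind that saving, here's a rough sketch - the wattages are my assumptions (a 1366 box idling vs a modern build idling), so plug in your own:

```python
# Rough annual running-cost comparison (assumed figures, not measured ones)
KWH_PRICE = 0.133        # 13.3p/kWh, the tariff discussed above
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts):
    """Annual cost in pounds of a constant draw at the tariff above."""
    return watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

old_idle = 123  # assumed 1366 Xeon idle draw, in watts
new_idle = 29   # assumed modern low-power build idle draw, in watts

print(f"saving: £{annual_cost(old_idle) - annual_cost(new_idle):.2f}/yr")
# saving: £109.52/yr -- i.e. "just under £110/yr"
```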
 
Soldato
Joined
18 Jan 2006
Posts
3,098
Location
Norwich
Unit comparison value is another kettle of fish again, and misleading as hell to start quoting when you're talking about total cost figures :rolleyes:
You're conveniently forgetting that the difference between two servers is *always* going to come down to the unit price if we assume both are running 24/7 (the standing charge will only be paid once no matter how many kWh are used, and can really be ignored as other domestic usage will already cover it, assuming the server is a luxury)

My £3-400 figure was based upon a current-ish Ryzen system with used parts, which was what all the power argument was over.
Some basic figures from stuff available over the last couple of months on the MM:
  • Mid-range AM4 board - £100
  • Ryzen 5 chip - £150
  • 32GB DDR4, mid-range stuff with no fancy heatspreaders etc. and no clocking needed - assume £50-100 per 8GB pair, so £200 or so all-in
That gives us a reasonable used figure of about £450 or so.
We can assume that PSU, case, disks etc. are equivalent between the two systems.

The 1366 chip, board and 16GB of RAM cost me £70 a year ago.
I swapped the RAM for 24GB ECC, which cost me a further £40. (Also got rid of the original 16GB for £40, so cost neutral, but ignore that for the minute......)
Total cost £110.

That gives me a figurative saving of £340. Even rounding the Ryzen prices down to give £300, I'm already in the ballpark stated without buying new. Swap the RAM for fast stuff, not the cheap midrange and we're well over it.....
 
Soldato
Joined
29 Dec 2002
Posts
7,260
I think you may have missed the point slightly on two quite important aspects:

When comparing power efficiency the unit price would be the same if both servers were installed in the same location; that's how you compare them. The unit price is therefore largely irrelevant, as they both use the same value. Unless you're the kind of person who thinks you evaluate efficiency by how much a tenner's worth of leccy will get you vs someone who is on a completely different tariff.

You wouldn't generally build a mid-high end Ryzen system for Plex; you would (as I pointed out several posts ago) use an Intel chip with a decent iGPU that supports hardware transcoding, as it's the more efficient and quiet/cool solution.

Also you keep comparing your used DinoServ with new hardware. How much were Supermicro knocking them out for back in the day? It was into 4 figures from memory, but as it was about 8-9 years back I wouldn't swear to an exact figure, and you still don't know what CPUs you have - I picked something efficient to be nice. Also you seem to have missed the op's requirements and focused on your own; does the op need more than 4-8GB to run basic Ubuntu+Plex? If so I must be doing it wrong - my remote VPS only has 4GB and runs a heck of a lot more than Plex. I also run PMS on a Pi3, offline; with properly curated media it's fine, so why are you pushing a spec that's inappropriate? It's almost as if you're doing it to skew the numbers :D

For reference I paid £230 for one of my R1700s with a B450 not long after launch (used); the C6H build isn't really comparable as it (may have) included parts from review samples that were 'loaned' after initial review for 'long term' testing. The i5 6500 (used) build I just put together for someone was a £175 build (Z270A Pro, 8GB Crucial with heat spreaders and a 240GB SSD). It's capable of more transcodes than your DinoServ and uses even less power than my Xeon; if you actually use it as intended (e.g. as a Plex server), it likely pays for itself in savings in the first year and is cooler/quieter/more efficient/easier to live with.

For Plex use, a DinoServ makes absolutely no financial sense unless it’s free and/or you don’t pay or really hate whoever does pay the power bill. It may make fractionally more sense for lab use, but other (better) options exist for that also.
 
Soldato
Joined
18 Jan 2006
Posts
3,098
Location
Norwich
You're spectacularly missing my entire set of points once again.....

Not once have I said that the 1366 board is ideal for any given usage, but rather I'm pointing out that it's got an entirely different performance level and set of features to those you insist on comparing it to whenever you're abusing it.
If we compare it to the modern equivalent with a similar (About 10% greater, as I've repeatedly pointed out) performance profile, your claimed disadvantages are far less, as I've demonstrated above.

Your i5 build was marginally cheaper than the example costings, but that was only a very quick fudge (and I'd guess you got a very good deal on at least one component).
Bump the RAM to 32gb, and we're at close to the level I posted, so I think the figures were fair.


FWIW now I'm at home and can get to a terminal, my chip is a single hex-core Xeon X5670.
CPU Mark:
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+X5670+@+2.93GHz&id=1307

Taking the i5 from you above:
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-6500+@+3.20GHz&id=2599


I'm hitting ~10% more peak performance.
If we assume adding a further 16gb of RAM (to match my board) costs £100, your solution has cost me about £200 more for less performance.
Going by your figures on electric, it'll take me at least another year to even break even. If we assume I keep the setup unchanged for 5 years, that's another £300 total.
It's therefore cost me ~£60 p/a extra over 5 years for 10% greater performance, hardware RAID, ECC RAM, IPMI etc.
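Rough sums behind that, for anyone who wants to sanity check - the £100/yr power premium is my rounded assumption from the figures thrown around in this thread:

```python
# Sketch of the break-even maths above (assumed, rounded figures)
upfront_saving = 200   # £ the 1366 route cost me less than the i5 route
power_premium = 100    # £/yr assumed extra electricity for the Xeon
years = 5

extra_total = power_premium * years - upfront_saving
print(extra_total, extra_total / years)  # 300 60.0 -> ~£60 p/a over 5 years
```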

I'll still take that deal thanks, and for that matter IMHO the true figure will be much closer to the £20 p/a I mooted.

Whether it's right for the OP is another kettle of fish entirely, but 1366 = bad is a far too simplistic view.
Take a balanced cost-benefit view of your own needs.
 
Soldato
Joined
29 Dec 2002
Posts
7,260
If you don't use ECC, especially with unraid, bad memory can silently corrupt your data and parity won't help you.

So ECC is important to me.

True, but please refer to the first rule of mass data storage... RAID is not a backup.

You're spectacularly missing my entire set of points once again.....

Scroll up to the first post; the first word in the title is 'Plex' and the second word is 'server'. If your reply isn't specifically related to running PMS on a server, then I've got to ask which of us has 'spectacularly' missed the point? That brings us to....

Not once have I said that the 1366 board is ideal for any given usage, but rather I'm pointing out that it's got an entirely different performance level and set of features to those you insist on comparing it to whenever you're abusing it.
If we compare it to the modern equivalent with a similar (About 10% greater, as I've repeatedly pointed out) performance profile, your claimed disadvantages are far less, as I've demonstrated above.

So this whole time you've ignored the op's requirements and that my replies specifically relate to those requirements and Plex usage - a program that can use hardware transcoding to transcode more efficiently, and therefore doesn't need to run at full power or make use of the 2K/1080 AV transcode metric? Well that explains part of why you keep posting nonsense. For a moment I thought you were the founding member of InGen, or a card-carrying fully paid up Denver the Last Dinosaur fan, or perhaps had 'Not the Mamma' tattooed somewhere strange; heck, you could even be John Hammond's illicit love child. Either way, 32nm 1366-based Xeons are 10 y/o, horribly inefficient technological dinosaurs (hence DinoServ) that require nigh on 300W to deliver what a modern system can do in 20% of that (or half your idle power consumption), and still have CPU power in reserve due to the iGPU doing the heavy lifting - something your Xeon simply can't do.

Source: https://www.anandtech.com/show/3817/low-power-server-cpus-the-energy-saving-choice/7

Your i5 build was marginally cheaper than the example costings, but that was only a very quick fudge (and I'd guess you got a very good deal on at least one component).
Bump the RAM to 32gb, and we're at close to the level I posted, so I think the figures were fair.

Stop. It's a Plex server build for 2 remote users and some local playback; the op doesn't need more than 8GB (and that's a pretty generous number - 4GB would do). Just because you want to try and justify your pov, that doesn't mean the op should put 32GB in or that we need to be 'fair'; 24GB+ of it will likely never be used. Price wise that build is at market rates - prices were about right, the board was possibly £10 cheaper than it should be on the used market, RAM was at retail. I could have done it £20 cheaper if I'd had time, but I ended up moving it on to a friend for Rift usage, and Christmas is the wrong time to start messing about with deliveries other than Prime.

FWIW now I'm at home and can get to a terminal, my chip is a single hex-core Xeon X5670.
CPU Mark:
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+X5670+@+2.93GHz&id=1307

Taking the i5 from you above:
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-6500+@+3.20GHz&id=2599

I'm hitting ~10% more peak performance.
If we assume adding a further 16gb of RAM (to match my board) costs £100, your solution has cost me about £200 more for less performance.
Going by your figures on electric, it'll take me at least another year to even break even. If we assume I keep the setup unchanged for 5 years, that's another £300 total.
It's therefore cost me ~£60 p/a extra over 5 years for 10% greater performance, hardware RAID, ECC RAM, IPMI etc.

I'll still take that deal thanks, and for that matter IMHO the true figure will be much closer to the £20 p/a I mooted.

Whether it's right for the OP is another kettle of fish entirely, but 1366 = bad is a far too simplistic view.
Take a balanced cost-benefit view of your own needs.

You remember how I was nice and picked energy efficient processors to compare to? The X5670 is even more power hungry, and again with the random and unnecessary RAM? So basically you would like me to ignore the op's question - in essence the whole purpose of the thread, and the advice I provided based on those requirements - and just say nice (but untrue) things about your 1366 'Denver' and his buddies? Well, as I said at the start, unless you get free power, DGAF about whoever pays the bill, or perhaps need to heat a room (and are likely already hearing impaired), then a 1366-based Xeon is a great choice... that's about as positive as I can be about 10 y/o hardware that should be taken out and shot to put it out of the world's misery. Would a hug make you feel any better? The X5670 isn't awful in raw CPU Mark terms, but it sucks hard from a power perspective; a low end chip with a recent iGPU will eat it for breakfast in Plex transcode terms, and it's just a really poor choice at this stage in the game (which funnily enough is exactly what I said and we were discussing). 123W idle, 299W load vs something that will do the same workload at half your idle power figure and have lower CPU utilisation, as well as a resale value in 5 years. Who wants a 15 y/o Xeon, or the £716.52 (idle) to £1,746.60 (max) power bill over 5 years? And that's assuming power prices don't go up every single year between now and then... Who am I kidding? You've got more chance of me saying nice things about Denver :D
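For what it's worth, those 5-year numbers roughly check out at the 13.3p/kWh tariff quoted earlier, assuming the box runs 24/7 at a constant draw (the load figure lands a few pounds off, so presumably a fractionally different rate or wattage was used):

```python
def five_year_cost(watts, pence_per_kwh=13.3, years=5):
    """Cost in pounds of a constant draw, 24 hours a day, over the period."""
    kwh = watts / 1000 * 24 * 365 * years
    return kwh * pence_per_kwh / 100

print(f"idle: £{five_year_cost(123):.2f}")  # idle: £716.52
print(f"load: £{five_year_cost(299):.2f}")  # load: £1741.79 (quoted as £1,746.60)
```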

Now go ahead and post further nonsensical comparisons with 256GB of RAM added to the i5 if it makes you feel better, it's 'stolen by Christians' time after all and good will to all men... but NOT 1366 Dinosaurs :D
 
Soldato
Joined
18 Jan 2006
Posts
3,098
Location
Norwich
I really don't know where you're pulling the power figures for the X5670 from, but as far as I can see they're for a dual X5670 system, not a single CPU system....
Take off at least a third from the figures, probably much closer to half to get the real figures, which correlate with what I've been saying throughout the thread.
Reddit thread with a bunch of Dual X5670 figures in:
https://www.reddit.com/r/homelab/comments/86q1dg/r710_power_usage/

Another Reddit thread comparing a Ryzen5 with an X5650, with some numbers in the linked Google spreadsheet:
https://www.reddit.com/r/Amd/comments/76uaod/ryzen_5_1600_upgrade_from_a_xeon_x5650/
Remember with this one we can dump a decent amount off the power consumption for the power hungry Radeon graphics, and they then roughly marry up with my numbers....


Further, as a side point, since you're now pushing hardware transcoding as the ultimate panacea: depending on the final transcode destination, HW transcoding can be noticeably worse quality.
TBQH this really depends on how close you look and the quality of the final TV / renderer, but it's not the cure-all you think it is either.
https://www.reddit.com/r/PleX/comments/74g3a8/just_a_reminder_that_hardware_acceleration_may/
Plenty more around the internet if you care to go digging on software vs hardware encoding.


Again though, I've never said that a 1366 Xeon or for that matter 24gb of RAM is necessarily ideal for any given use case, but rather that the feature set and performance levels are entirely different to that of the low-end consumer kit you insist on pushing when making the comparison....
Oddly enough, when I compare a set of similar specs / performance but in the current consumer kit you prefer, the price comparison is roughly as I make it.....
All you're really doing at this point is demonstrating that once again you're conveniently ignoring every point I make, please stop :rolleyes:
 
Soldato
Joined
29 Dec 2002
Posts
7,260
Have another read of what I posted; I clearly referenced exactly where the power numbers came from and provided you with the link - specifically Anandtech, which clearly shows the power numbers are for a SINGLE CPU (with 24GB no less).

The reddit post you reference has 150W idles; that's 27W (or about the entire idle draw of the i5) higher than the numbers I used to produce those running costs, so in fact it makes the calculation even less favourable - peak is 50W lower though. Given the likelihood of the server sitting idle the majority of the time in this scenario, that's worse, as higher idles are guaranteed to cost you more.

The other reddit post you link to is a completely different CPU (an X5650, not the X5670 you have or we've been discussing all along) and again has a 150W idle. I'm not sure why you're now trying to compare a lower power CPU to a Ryzen that I haven't referenced; that's a bit random even for you.

Now you've obviously tried to do some research on hardware transcoding, but dig a little deeper. Firstly, direct play will pretty much always be superior to transcoding; by its very nature transcoding can't add quality. However you seem not to be familiar with HW transcoding and its quirks. For example (and I did point this out earlier in the thread), Plex specifically state newer iGPUs; a 6th gen iGPU is significantly better than older generations (and basically identical to 7th gen other than h265 decode). Also you have overlooked Intel's Linux driver history and VAAPI support being upgraded in newer drivers - the quality issues are usually based on old hardware and/or old drivers. Screen dump/zoom may look worse, but have you seen the horrors of modern broadcast standards in the UK? As you don't seem to like clicking links, here is the source and the quote:

https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/

‘The following are required in general for Hardware-Accelerated Streaming, regardless of your operating system:

  • A recent Intel CPU meeting these requirements:
    • 2nd-generation Intel Core (Sandy Bridge, 2011) or newer (we recommend 5th-gen Broadwell or newer for the best experience; Sandy Bridge, in particular, is known to sometimes have poor visual output on some systems)’
I'm not saying HW transcoding is the second coming of Christ, but modern iGPU transcoding is night and day compared to the old hardware you are trying to suggest provides noticeable degradation. NVENC for example is quite popular (Quadro users, or unlocked 9/10 series with Ubuntu drivers). Can you tell the difference if you sit and do screen grabs/zoom in? Yes. Will you notice sat at a normal distance? Probably not, and if the media is curated properly, you won't need to transcode anyway.

As to the consumer kit point, I push an appropriate spec for the op's needs - by your own admission something you aren't doing - so that begs the question: why are you pushing a solution you admit is unsuitable for the op's stated usage? Is it just because you're butt hurt that someone on the internet said your 1366 build is a power guzzling dinosaur? :D
 
Last edited:
Soldato
Joined
18 Jan 2006
Posts
3,098
Location
Norwich
Read the methodology and all of the Anandtech report if that's what you want to treat as gospel...
It's a fully populated dual CPU board, and oddly enough the numbers then marry up exactly :rolleyes:

The hardware configuration page doesn't make that the clearest in the world from a brief look, but that's the case if you read the whole thing.


Neither for that matter did I say hardware encoding is bad; try reading the post instead of trolling.
Even the current Intel iGPU drivers have a degree of quality loss relative to the software encoded equivalent, which you're tacitly admitting.... This heavily depends on the scaling algorithms selected for the software side of things and the final render quality.


I'm not pushing anything. The OP needs to take a balanced view depending on his actual requirements, not what some random on a forum thinks he should have.
In order to do that, I'm pointing out the fallacy of a lot of the myths being peddled here, and suggesting the OP makes his own choice based upon the actualities, not scaremongering.

You on the other hand are simply demonstrating that you don't know enough about the options you're dismissing out of hand.
 
Soldato
Joined
29 Dec 2002
Posts
7,260
Even if we use the power numbers for the other systems, it's still a 150W idle 24/7; they still suck even more than the ones I used at the start and are 5x the idle of the i5 I suggested.

As to transcoding, you said it 'can be noticeably worse quality' - I agreed (apparently somehow tacitly, in your opinion?) that transcoding in general will provide lower quality, and that HW transcoding on old hardware with old drivers is noticeably worse, which is why I advocated a 6th gen chip which has the additional hardware profiles and - as we're talking about this now and not over a year ago - improved drivers that mitigate the historic issues.

This is a thread about a Plex server, op wants to do two remote transcodes max and a local direct play, that leaves one simple question:

Are you recommending your spec, with its 150W idle, for the op?

If not then shush; the whine of butt hurt 1366 ownership is almost as annoying as the noise of cooling the darn things - something I do have direct knowledge of from years ago (I've run dual CPU builds since the C300A on a P6DBE). It's dead Jim.... Let it go, let it go... Do you need a hug? ;)
 
Associate
Joined
28 Jan 2005
Posts
1,836
Location
Lymington
I went from a 2 x 2680 v2 Xeon setup to a Ryzen 2700x. I have two libraries, one is 4K HDR and is not shared, the second is 1080P. In terms of raw power, the Xeons were faster, a total overkill though. Since switching, the office is noticeably cooler and my electricity bill is lower. My upload speed is only 50mbps and I have a 10mbps cap, 5 x stream max, the 2700x handles this without breaking a sweat. Trying to transcode 4K HDR is not possible so it's pointless trying. Keep a 60GB 4K HDR remux for yourself and serve up a 4 GB Web-DL to the peasants, that is what I do.
 
Caporegime
Joined
18 Oct 2002
Posts
25,289
Location
Lake District
How much less are your bills?

Just worked out that our dual 6 core Xeon at work only costs about 4.1p an hour to run, and that's on business rates (and including an n36l) - really doesn't make sense changing it for anything better.
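As a sketch of how an hourly figure like that falls out - both numbers below are my assumptions (the post only gives the 4.1p/hour result), so this is just one combination of draw and business unit rate that fits:

```python
# Hypothetical draw (W) and unit rate (p/kWh) reproducing ~4.1p/hour
def pence_per_hour(watts, pence_per_kwh):
    return watts / 1000 * pence_per_kwh

print(round(pence_per_hour(370, 11.0), 1))  # 4.1
```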
 
Soldato
Joined
5 Oct 2009
Posts
13,839
Location
Spalding, Lincs
How much less are your bills?

Just worked out that our dual 6 core Xeon at work only costs about 4.1p an hour to run, and that's on business rates (and including an n36l) - really doesn't make sense changing it for anything better.

My dual 6 core Xeon (X5670) server with 7 drives idles at 120W; it's really not all that bad. Doesn't really go past 160W in use.
 
Soldato
Joined
29 Dec 2002
Posts
7,260
How much less are your bills?

Just worked out that our dual 6 core Xeon at work only costs about 4.1p an hour to run, and that's on business rates (and including an n36l) - really doesn't make sense changing it for anything better.

Generally, if you already have the hardware doing a job and it's under-utilised, it's a no-brainer to run additional services on it (assuming they won't impact the other services), be that a virtualised Plex set-up or something else - especially in a business environment (read: run it till it dies, can't do the job, impacts the business, or there's a significant saving from upgrading).

That's a very different scenario to starting from scratch and choosing to buy power hungry, hot, noisy and comparatively ancient hardware just because it's cheap. That said, if you are deaf, need to heat your property, don't mind higher power bills, want a Plex server, and literally only have £30-60 to spend on all of the above, then an R710 isn't the worst thing you could buy to tick those particular boxes.
 
Soldato
Joined
29 Dec 2002
Posts
7,260
Not a fan tbh - if it still exists and has its own client software in 12 months' time, then it may be worth consideration.

In the meantime however it's been interesting providing feedback/testing directly to the Emby dev(s) regarding improving Live TV for my use case.

That did make me smile.

So you've used it then?

Perhaps if a certain project team didn't serve notice on Christmas Eve for code violations, or deliberately break clients for JF servers, the reworking of the GPL client both projects used would be ready for JF. The irony wasn't wasted on some, in that Emby went closed source to avoid dealing with its own non-compliance with the licence it used. Either way, please just keep distracting them, as less drama benefits everyone ;)
 
Don
Joined
19 May 2012
Posts
17,188
Location
Spalding, Lincolnshire
That did make me smile.

So you've used it then?

Perhaps if a certain project team didn't serve notice on Christmas Eve for code violations, or deliberately break clients for JF servers, the reworking of the GPL client both projects used would be ready for JF. The irony wasn't wasted on some, in that Emby went closed source to avoid dealing with its own non-compliance with the licence it used. Either way, please just keep distracting them, as less drama benefits everyone ;)

Nah I haven't used it, but I've kept up with the drama on reddit.

Certainly don't agree with the shift from open source to closed source (including the previous breach of GPL), but equally don't disagree with them choosing not to support a forked server with their clients.

Jellyfin may turn out to be great, but I can't see it at the minute. Emby's strategy/development is certainly flawed as well, but IMO it's a huge step better than Plex, with their various clients that are all completely different (e.g. still no grid view TV on most clients) and their "forcing" of premium media (Tidal & web shows) rather than actually improving the core product
 
Soldato
Joined
29 Dec 2002
Posts
7,260
That's my problem; I've done XBMC (now Kodi), Plex, MediaBrowser and Emby. Each has moved in a direction that reached my cringe threshold, other than Kodi (the 'partner' status for Vu - who ignore the GPL as and when it suits them, and have done for years - is an ongoing irritation for an organisation that makes so much of its open source credentials, but South Korean companies aren't known for observing licensing or patents).

Plex in effect forced many people to Emby: the logging, the remote authentication, the pushing of additional premium partner channels without an opt-out, the upcoming move to re-selling subscriptions to yet more 3rd party services, and the rolling out/pulling of TV grid view and Cloud services. In contrast Emby was (at the time) open, community driven software that didn't feel like selling your soul to Satan a little more with each update. Then came the device count restrictions, followed by the GPL licence issues. Having been burnt earlier by Plex, this had alarm bells ringing.

My issue isn't what Emby did - clearly they are entitled to - but how and when they chose to do it. Everyone expected client compatibility to be broken sooner rather than later, but Emby felt that they needed to do so on Christmas Eve. What I found interesting was that there was a very trivial fix for JF to get round it - a move that would have potentially escalated hostility - and the JF devs chose not to use it. Instead they've pushed on with the Cordova GPL client, which solves the problem long term; builds for most platforms are available and progressing rapidly.

Either way, we have another option now, and although it's very early days, it's one that has made some very positive steps and avoided being drawn into drama for the sake of it. Let's just hope it stays that way :)
 