*** AMD ThreadRipper ***

So when are the motherboards going to be available? I thought I was waiting for an EK block but it looks like the boards will be a longer wait :(

Zenith was in stock at OCUK today but sold out very quickly. ASRock Pro Gaming is in stock at another retailer. Many sites are expecting stock tomorrow for Gigabyte Gaming 7 boards with ASRock Taichi shortly after.

EK blocks are shipping on the 18th (to customers as well as retailers), so if you haven't ordered direct through EK, I guess you'll be getting the block around the 22nd/23rd.
 
Interesting point made by someone else on another forum today.

It's basically a decision between raw performance and multitasking capability when choosing between Intel and AMD.

With products like Threadripper, the strength is, as stated in that video, being able to do background tasks whilst gaming without harming the gaming experience. However, the cost of being able to do that is that peak performance is lower, because adding cores adds TDP, heat and complexity, which in turn reduces the scope for maxed-out per-core performance. This guy claimed that if we had single core chips today, we could hit 6GHz with less power usage than a typical 4 core chip.

Sadly, when we rely on reviewers, they usually only benchmark high budget games (which tend to support multiple threads better than lower budget games, which usually support 2 cores at most), and we always see things like Cinebench running, which doesn't reflect what 99% of consumers use their PC for. However, it is usually relevant to people like streamers and reviewers, since they do things like encoding YouTube videos and streaming whilst gaming.

Now for me personally, I do multitask, but not to the point where background tasks need lots of grunt or I even care about them. I consider 2-4 core CPUs to be the sweet spot; 1 core is too low, as one runaway process can make the OS unresponsive, and I consider anything over 8 cores pretty silly for the consumer market, but then I remember products like Intel's enthusiast chips and AMD's Threadripper are not aimed at the mainstream market.

As an example, if they benchmarked any of the PC FF13 ports, they would likely find zero benefit between a 2 core chip and 4, 8, 12 or 16 core chips, given the games are all single threaded for rendering with a secondary backup thread for housekeeping tasks. Most indie games don't utilise more than 2 cores, and indie games are 95% of the Steam library. Most desktop apps do not have great multi-core capabilities.

On the flip side, big budget games from the big publishers such as Blizzard, Activision, Microsoft and EA tend to utilise more cores, and these games tend to be what gets benched. I do occasionally play a big budget game, but it's only a small fraction of my gaming time. I think the last big budget game I played on the PC for any great length of time was Dragon Age: Inquisition, and that did load up all 4 of my cores fairly evenly, which for me is unusual to see from a game. :)
 
The thing is, lower budget games may well only be optimised for 1 or 2 cores, and these games are usually not very graphically taxing on your GPU. You're very unlikely to see much of a difference between, say, a 7700K and a Ryzen 1600, unlike the AAA titles that are much more demanding on the CPU and GPU, especially if you are using a greater-than-1080p and/or high refresh rate display.

The truth is, the majority of people are likely going to be unable to distinguish between a 7700K and a Ryzen in the real world, unless you are one of those who requires very high refresh rates.

Look at it like this: a 4+ core CPU will be able to do everything a 4 core or less CPU can do, whilst giving you much more room to grow into as and when your needs or games/applications require it. It gives you headroom rather than being at the limits of what it is capable of.
 
Games? You simply wouldn't buy Threadripper or Skylake-X for games; that's not what they are on the market for.

Just a couple of examples of what they are for.

[Attached benchmark charts: VeraCrypt, Excel, Handbrake, Premiere]


https://www.techspot.com/review/1465-amd-ryzen-threadripper-1950x-1920x/
 
I know, I still don't know why games keep getting discussed in a HEDT thread. As long as gaming performance is acceptable (which it is on Threadripper - it's still a very good CPU for gaming), that's as far as the discussion goes if you want to game.
 
Yeah, Threadripper doesn't sound like it's for you.

So you think most consumers' desktop usage fits what Cinebench is testing?

Because you make it sound like I am the odd one out. :)

I also forgot to mention context switching, which is horribly evil.

Basically, every time a task is moved from one CPU core to another, there is a penalty. That penalty can be huge.

On Linux and BSD it's not too bad; on Windows it's horrific, because Windows will rapidly switch single core tasks between different cores for a reason only Microsoft knows. This switching adds overhead.

Using FF13: Lightning Returns as an example, since I did in-depth testing on that game: when I parked 2 of my cores, the game ran faster and overall CPU usage was about 150% (a core and a half).
With all 4 cores unlocked, the game ran a bit slower and overall CPU usage was about 300% (3 cores); basically one and a half cores were tied up by kernel context switching, nothing else. This is not a big problem if the game itself is designed to assign specific tasks to specific cores. What happened is basically, as I said earlier, the rendering is single threaded and should just be using one core, but the Windows CPU scheduler rapidly moves it from one core to another, and Task Manager will appear to show about 25% usage on each core plus another 35% or so as kernel usage (context switching); as Task Manager only reports average usage, it hides the spikes.

Playing FF13 LR on a 16 core chip I dread to think what the context switching would be like :p
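(For anyone wanting to repeat that experiment without parking cores in the BIOS, below is a rough C++ sketch of the same idea using the Win32 process affinity API - the two-core mask is just illustrative. In practice you'd do this from a small launcher, since child processes inherit the mask, or simply use Task Manager's "Set affinity" or "start /affinity" instead.)

```cpp
// Hedged sketch: restrict the current process to logical cores 0 and 1,
// roughly what "parking" the other cores achieved in the test above.
// The mask value is illustrative; pick cores to suit your own topology.
#include <windows.h>
#include <cstdio>

int main() {
    DWORD_PTR mask = 0x3; // bits 0 and 1 -> logical cores 0 and 1
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("Process limited to cores 0 and 1\n");
    // ... launch or run the workload here; the scheduler can no longer
    // bounce its threads across the remaining cores, so migration
    // (context switch) overhead should drop.
    return 0;
}
```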
 
I know, I still don't know why games keep getting discussed in a HEDT thread. As long as gaming performance is acceptable (which it is on Threadripper - it's still a very good CPU for gaming), that's as far as the discussion goes if you want to game.
It's brought up because Intel keeps getting slammed on here for low core counts, as if they're holding back the market or something.

I am trying to point out anything more than 4 cores for the mainstream market is pointless.

Those benches are mostly irrelevant to the typical gamer or desktop user.

If you buy a CPU just to get high scores on a bench, then fair enough.

The only benchmark that is perhaps even partially realistic is Passmark.

https://www.cpubenchmark.net/desktop.html

Threadripper is still top, but the gap is not over-inflated; it's more realistic.
 
Passmark is synthetic, very synthetic, and people also tend to submit overclocked results.

What's "realistic" are real world applications, Adobe, Blender, Handbreak, EXCell, AES Encryption.... thats what most reputable reviewers benchmark.
 
So you think most consumers' desktop usage fits what Cinebench is testing?

Because you make it sound like I am the odd one out. :)

I also forgot to mention context switching, which is horribly evil.

Basically, every time a task is moved from one CPU core to another, there is a penalty. That penalty can be huge.

On Linux and BSD it's not too bad; on Windows it's horrific, because Windows will rapidly switch single core tasks between different cores for a reason only Microsoft knows. This switching adds overhead.

Using FF13: Lightning Returns as an example, since I did in-depth testing on that game: when I parked 2 of my cores, the game ran faster and overall CPU usage was about 150% (a core and a half).
With all 4 cores unlocked, the game ran a bit slower and overall CPU usage was about 300% (3 cores); basically one and a half cores were tied up by kernel context switching, nothing else. This is not a big problem if the game itself is designed to assign specific tasks to specific cores. What happened is basically, as I said earlier, the rendering is single threaded and should just be using one core, but the Windows CPU scheduler rapidly moves it from one core to another, and Task Manager will appear to show about 25% usage on each core plus another 35% or so as kernel usage (context switching); as Task Manager only reports average usage, it hides the spikes.

Playing FF13 LR on a 16 core chip I dread to think what the context switching would be like :p

Not to say there aren't some issues with context switching, etc., but that sounds like a problem with FF13 - most games will plonk their main logic thread on physical core 0 or 1, and the rendering thread, if separate, on core 1 or the highest core, and Windows won't touch it. There were some issues at one time with rendering threads and SMT, but these don't happen any more.
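(For illustration only, a minimal sketch of that pattern in C++: a render thread pinning itself with the Win32 SetThreadAffinityMask call. The choice of core 1 is an assumption; real engines pick cores from the detected topology.)

```cpp
// Hedged sketch: a game-style render thread pinning itself to one logical
// core so the Windows scheduler stops migrating it between cores.
// Core index 1 is illustrative only.
#include <windows.h>
#include <cstdio>
#include <thread>

void render_loop() {
    // Bit 1 of the affinity mask -> logical core 1.
    if (SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << 1) == 0)
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
    // ... per-frame rendering work would run here, staying on core 1 ...
}

int main() {
    std::thread render(render_loop); // game logic stays on the main thread
    render.join();
    return 0;
}
```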
 
I am trying to point out anything more than 4 cores for the mainstream market is pointless.

Just as well Threadripper isn't aimed at the mainstream then.

Those benches are mostly irrelevant to the typical gamer or desktop user.

Just as well Threadripper isn't aimed at the typical gamer or desktop user then.
 
It's brought up because Intel keeps getting slammed on here for low core counts, as if they're holding back the market or something.

I am trying to point out anything more than 4 cores for the mainstream market is pointless.

Those benches are mostly irrelevant to the typical gamer or desktop user.

If you buy a CPU just to get high scores on a bench, then fair enough.

The only benchmark that is perhaps even partially realistic is Passmark.

https://www.cpubenchmark.net/desktop.html

Threadripper is still top, but the gap is not over-inflated; it's more realistic.
Remember how SSDs hugely improve the user experience, far more than just upgrading a CPU, for example? One of the reasons for that is that everything is compressed these days for bandwidth reasons. Decompressing files can be heavily threaded, which could be why some claim extra cores give you that "snappiness" people always crave. Installing applications should be faster for the same reason.
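(A toy sketch of that idea, assuming zlib is available and using made-up file names: one decompression task per file, so spare cores actually get used.)

```cpp
// Hedged sketch: decompress several .gz files in parallel, one task per
// file, which is roughly why spare cores can make install/extract feel
// snappier. Requires zlib (link with -lz); the file names are made up.
#include <zlib.h>
#include <cstdio>
#include <future>
#include <string>
#include <vector>

// Decompress one gzip file into a scratch buffer; returns the number of
// decompressed bytes, or -1 on error.
static long gunzip_size(const std::string& path) {
    gzFile in = gzopen(path.c_str(), "rb");
    if (!in) return -1;
    char buf[1 << 16];
    long total = 0;
    int n;
    while ((n = gzread(in, buf, sizeof buf)) > 0)
        total += n;
    gzclose(in);
    return total;
}

int main() {
    const std::vector<std::string> files = {"a.gz", "b.gz", "c.gz", "d.gz"};
    std::vector<std::future<long>> jobs;
    for (const auto& f : files) // each file decompresses on its own core
        jobs.push_back(std::async(std::launch::async, gunzip_size, f));
    for (size_t i = 0; i < files.size(); ++i)
        std::printf("%s -> %ld bytes\n", files[i].c_str(), jobs[i].get());
    return 0;
}
```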

Extra cores aren't just about heavily threaded applications though; they're also about having lots of stuff running at once. Given all the bloat people tend to have on their machines these days, they should help a lot. Hell, I try to keep a clean machine and still have around 10 icons in my notification area. :mad:

Also, it wasn't just a lack of "moar coars" that was the problem. The other issue was the total lack of advancement in 6-8 years (depending how you look at it), and that is solely due to lack of competition. It's not a coincidence that Coffee Lake will include the first 6c/12t mainstream Intel chip after Ryzen launched, and that X299 significantly upped both the maximum core count (well, it will eventually) and the core count at each price bracket after Threadripper was announced. Even if you ignore core count, AMD provides way better value for money right now - 4c/4t for £100 and 4c/8t for £150 decimates anything Intel has to offer right now. Yes, Intel has faster chips for double the price, but most people (I would hope) understand the law of diminishing returns. AMD provides a longer term platform, has chips with better thermals and lower power usage, and on top of that, Intel's cheaper chips aren't overclockable.
 
Passmark is synthetic, very synthetic, and people also tend to submit overclocked results.

What's "realistic" are real world applications, Adobe, Blender, Handbreak, EXCell, AES Encryption.... thats what most reputable reviewers benchmark.
Not realistic, because a typical user doesn't sit there encrypting files and running Blender? Really.

Passmark is more realistic because it's closer to what real world usage is. Of course it's not popular, because it doesn't paint a great picture for high core count processors, so it doesn't get used by reviewers due to the damage to marketing. :)

4c/4t for £100 and 4c/8t for £150 decimates anything Intel has to offer right now. Yes, Intel has faster chips for double the price, but most people (I would hope) understand the law of diminishing returns. AMD provides a longer term platform, has chips with better thermals and lower power usage, and on top of that, Intel's cheaper chips aren't overclockable.

You're missing the single most important metric: core clock speed. I cannot decide on value until I know that metric. I do agree though that the low end Ryzen chips are good for the market. Not as good as Intel chips, but much better priced.

In a 6 year period Intel's progress has been OK, not great but OK. The problem is they release CPUs too frequently and there is only a small jump between each release.

But if you compare Sandy Bridge to, say, Skylake, there is a definite noteworthy improvement.
 
You're missing the single most important metric: core clock speed. I cannot decide on value until I know that metric. I do agree though that the low end Ryzen chips are good for the market. Not as good as Intel chips, but much better priced.
Intel's base clocks improved a lot after Sandy Bridge IIRC. The Core i5-7400 runs at 3.3-3.5 GHz, compared to 3.1-3.4 GHz for the R3 1200 (which can be overclocked, remember).

In a 6 year period Intel's progress has been OK, not great but OK. The problem is they release CPUs too frequently and there is only a small jump between each release.

But if you compare Sandy Bridge to, say, Skylake, there is a definite noteworthy improvement.
Yes, but I'd expect a noteworthy improvement after 4 years. You're right that they pumped them out too frequently when they didn't have much justification to do so, just to make more money from the "new shiny". Plus, new, incompatible chipsets every goddamn year is not a way to keep your customers happy.
 
I know, I still don't know why games keep getting discussed in a HEDT thread. As long as gaming performance is acceptable (which it is on Threadripper - it's still a very good CPU for gaming), that's as far as the discussion goes if you want to game.

You and many others have a very misconstrued perception of what market you're talking about. Gaming is very much a focal point. Hell, if it's not, then motherboard vendors have things very wrong. Simply because AMD releases a CPU with 16 cores doesn't change the demand for these things. It's posturing to suit the argument or justification.

Believe me, gamers are interested in TR as much as anyone else. Which users do you think get samples outside of media? Content creators, mostly. Guess what content? Yep, that's right, gaming ;).

If you want to use it as a workstation, that's more than OK - it's excellent for that. Just don't become confused as to what sells. It's certainly not Blender.
 
You and many others have a very misconstrued perception of what market you're talking about. Gaming is very much a focal point. Hell, if it's not, then motherboard vendors have things very wrong. Simply because AMD releases a CPU with 16 cores doesn't change the demand for these things. It's posturing to suit the argument or justification.

Believe me, gamers are interested in TR as much as anyone else.

Aye, I'm not sure why gaming keeps getting shot down as a discussion point for HEDT. You could apply the same argument to the productivity side as well: TR does this better, X299 does that, and that's as far as the discussion goes. The forums would be very boring then, of course.
 