
AMD THREADRIPPER VS INTEL SKYLAKE X

And yet clearly that's not what's going on ^^^^

Or rather, a 4GHz 8-core Ryzen draws the same power as a 4GHz 10-core Intel.

Yes.

You've probably just hit on why these CPUs are not as efficient as the Ryzen chips. To stay ahead, Intel are going for outright MHz rather than power efficiency.
It will be interesting to see how that plays out, given that for people running a lot of these, 50% more power for much less than that in performance makes far less sense.
 
And yet clearly that's not what's going on ^^^^



Yes.

You've probably just hit on why these CPUs are not as efficient as the Ryzen chips. To stay ahead, Intel are going for outright MHz rather than power efficiency.

That's the law of diminishing returns though, correct?

Although if you look at power efficiency, the architecture itself is very efficient; if you clock the 10-core at the same clocks as the 8-core, they are neck and neck. Meaning the 8-core Intel should have slightly lower draw than the 8-core Ryzen at identical clocks, right?

Obviously that goes out the window for all CPUs once you start shoving 1.3/1.4V through them.
 
Looking at the 7900X reviews, it seems that frequency for frequency (not sure on voltage) they offer pretty much the same performance per watt, i.e.:

6950X @ 4.4GHz: 368W system power
7900X @ 4.6GHz: 378W system power
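Taking the figures above at face value, a rough perf-per-watt comparison can be sketched like this. The "score" here is a placeholder assuming roughly linear scaling with clocks across ten cores, not a measured benchmark result:

```python
# Rough perf-per-watt comparison using the system-power figures quoted above.
# Score = clock * cores is an illustrative assumption, not a real benchmark.
chips = {
    "6950X @ 4.4GHz": {"watts": 368, "score": 4.4 * 10},
    "7900X @ 4.6GHz": {"watts": 378, "score": 4.6 * 10},
}
for name, d in chips.items():
    print(f"{name}: {d['score'] / d['watts']:.4f} score/watt")
```

Under that crude assumption the two land within about 2% of each other, which matches the "pretty much the same" reading.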

Yeah so that's a failure, no?

It's a newer (better?) architecture on a refined/better process (14nm+ vs vanilla 14nm), yet it shows zero perf/W increase?

Also, as I noted in my edit, a performance DECREASE in gaming? Broadwell-E at 4.4GHz is faster than Skylake-X at 4.6GHz? That's got to be wrong? Thermal throttling?
 
It's still short of the 7900X though, I think? A bit simplistic, but given the performance per core, if you added two cores to the 1800X you would still be almost 4% short of the 7900X. And based on a horrid, nasty use of a coefficient representing the efficiency of the process at higher voltage/frequency or core count, the 1800X would be at almost 230W, or worse, at the same performance as the 7900X.
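A hypothetical back-of-envelope version of that scaling, using the per-core fps figures quoted later in the thread. The 1800X system power and the efficiency coefficient are illustrative assumptions, not measurements:

```python
# Back-of-envelope: scale an 8-core 1800X to 10 cores and compare to the 7900X.
perf_per_core_1800x = 10.8625   # fps/core figure quoted later in the thread
perf_per_core_7900x = 11.27

perf_10c_ryzen = perf_per_core_1800x * 10   # 1800X plus two extra cores
perf_7900x = perf_per_core_7900x * 10
shortfall = 1 - perf_10c_ryzen / perf_7900x
print(f"10-core Ryzen estimate trails the 7900X by {shortfall:.1%}")  # ~3.6%

power_1800x = 180   # assumed 1800X system draw in watts (illustrative)
coef = 1.02         # assumed efficiency penalty at higher core count
power_10c = power_1800x * (10 / 8) * coef
print(f"Estimated 10-core Ryzen system draw: {power_10c:.0f}W")  # ~230W
```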

The problem is that Intel's 14nm is very mature, while what AMD is using isn't, so there is probably more room for GF/Samsung to improve things on their side, and that is something you have seen AMD do in the past.

Even ignoring all that, Threadripper is most likely going to be 12 or 16 cores, I suspect, so you could be running a 16-core Ryzen CPU at 3.0GHz–4.0GHz for similar power to the Core i9 7900X.
 
That's the law of diminishing returns though, correct?

Although if you look at power efficiency, the architecture itself is very efficient; if you clock the 10-core at the same clocks as the 8-core, they are neck and neck. Meaning the 8-core Intel should have slightly lower draw than the 8-core Ryzen at identical clocks, right?

Obviously that goes out the window for all CPUs once you start shoving 1.3/1.4V through them.

Yes, beyond a certain point MHz costs you a lot more power than actual compute threads.

And yes, the Intel chip at the same MHz and thread count may well use less power, but that's not what Intel are going for. They know, and clearly we now know, that Intel need the MHz because they don't have the advantage in IPC or threads.
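A minimal sketch of the point above, assuming the usual dynamic-power approximation P ∝ cores × V² × f and that higher clocks need higher voltage. All numbers are illustrative:

```python
# Why MHz costs more than cores beyond a point: dynamic power scales roughly
# as cores * V^2 * f, and extra frequency usually demands extra voltage.
def relative_power(cores, freq_ghz, volts):
    return cores * volts ** 2 * freq_ghz

base = relative_power(8, 4.0, 1.20)           # 8 cores at 4GHz
via_clocks = relative_power(8, 5.0, 1.40)     # +25% frequency, more voltage
via_cores = relative_power(10, 4.0, 1.20)     # +25% cores, same clocks

print(f"+25% throughput via clocks: {via_clocks / base:.2f}x power")  # ~1.70x
print(f"+25% throughput via cores:  {via_cores / base:.2f}x power")   # 1.25x
```

Under these assumed voltages, chasing the extra 25% through clocks costs roughly 70% more power, while adding cores costs only the proportional 25%.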
 
Yeah so that's a failure, no?

It's a newer (better?) architecture on a refined/better process (14nm+ vs vanilla 14nm), yet it shows zero perf/W increase?

Also, as I noted in my edit, a performance DECREASE in gaming? Broadwell-E at 4.4GHz is faster than Skylake-X at 4.6GHz? That's got to be wrong? Thermal throttling?

Hexus mentioned BIOS issues; remember the NDA is still in place and these are unofficial early reviews.

Apparently Hexus got a BIOS update today that changed some of the outliers by a huge margin, so I would expect those oddities to be sorted by release.
 
The problem is that Intel's 14nm is very mature, while what AMD is using isn't, so there is probably more room for GF/Samsung to improve things on their side, and that is something you have seen AMD do in the past.

Even ignoring all that, Threadripper is most likely going to be 12 or 16 cores, I suspect, so you could be running a 16-core Ryzen CPU at 3.0GHz–4.0GHz for similar power to the Core i9 7900X.

Good points.

And also, when you look to 2018/2019, Intel will go from their 14nm+ to 10nm, and AMD will go from GloFo/Samsung 14nm to GloFo/IBM 7nm (which is roughly comparable to Intel's 10nm+).

Point being, AMD will make a larger jump than Intel, since going from Intel's 14nm to Intel's 10nm is a smaller step than going from Samsung 14nm to (effectively) Intel 10nm.
 
And yet clearly that's not what's going on ^^^^

The 7700K is a highly clocked part that is thumping out 15.5 fps per core, compared to 10.8625 for the 1800X and 11.27 for the 7900X; if you doubled that up, the power draw would go through the roof.

Also, as I noted in my edit, a performance DECREASE in gaming? Broadwell-E at 4.4GHz is faster than Skylake-X at 4.6GHz? That's got to be wrong? Thermal throttling?

It's also not doing well against the 7700K core for core, clock for clock; maybe to keep the power in check.
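The per-core fps figures quoted above can be reconstructed as follows; the total fps values here are inferred from the stated per-core numbers and core counts, so treat them as assumptions:

```python
# Per-core fps = total benchmark fps / core count.
# Totals are inferred from the per-core figures quoted in the thread.
results = {
    "7700K": (62.0, 4),     # inferred total fps, core count
    "1800X": (86.9, 8),
    "7900X": (112.7, 10),
}
for name, (fps, cores) in results.items():
    print(f"{name}: {fps / cores:.4f} fps/core")
```

This makes the trade-off visible: the 7700K's per-core lead comes from clocks, while the bigger chips win on total throughput.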
 
The 7700K is a highly clocked part that is thumping out 15.5 fps per core, compared to 10.8625 for the 1800X and 11.27 for the 7900X; if you doubled that up, the power draw would go through the roof.

Yes, I know...

Yes, beyond a certain point MHz costs you a lot more power than actual compute threads.

And yes, the Intel chip at the same MHz and thread count may well use less power, but that's not what Intel are going for. They know, and clearly we now know, that Intel need the MHz because they don't have the advantage in IPC or threads.

BTW does this remind you of anything ?
 
Well done taking something they muse as a possible reason and 'making' it into a fact. It's tiring.

I didn't know they mused it as a possible reason; it just seems pretty obvious that the TIM is an issue if it's hitting 100 degrees. How much do you want to bet that people will be delidding these to significantly reduce temperatures?
 
Good points.

And also when you look to 2018/2019, Intel will go from their 14nm+ to 10nm, and AMD will go from GloFo/Samsung 14nm to GloFo/IBM 7nm (which is comparable to Intel's 10nm+ ish)

Point being, AMD will make a larger jump than Intel, since Intel's 14 to Intel's 10 is a smaller benefit than going from Samsung 14 to (effectively) Intel 10.

It's kind of a fail, since Intel spends so much on its own fabs and has been using 14nm for CPUs since 2014, while AMD is using a secondhand 14nm process that GF licensed from Samsung (and still screwed up), one made for the small chips used in mobile phones.

I mean, in servers this could be much more significant, as the CPUs tend to be clocked much lower, so as long as AMD can make sure the infrastructure is solid here, I do think they have a very competitive design.
 
I didn't know they mused it as a possible reason; it just seems pretty obvious that the TIM is an issue if it's hitting 100 degrees. How much do you want to bet that people will be delidding these to significantly reduce temperatures?

Oh, I just deleted my original post because it seemed rude and was not intended as such. :D As for how many will delid: a minority by far (I'd be too scared, and I imagine many others would be too).

Another review:

http://hexus.net/tech/reviews/cpu/107017-intel-core-i9-7900x-14nm-skylake-x/
 
It's kind of a fail, since Intel spends so much on its own fabs and has been using 14nm for CPUs since 2014, while AMD is using a secondhand 14nm process that GF licensed from Samsung (and still screwed up), one made for the small chips used in mobile phones.

I mean, in servers this could be much more significant, as the CPUs tend to be clocked much lower, so as long as AMD can make sure the infrastructure is solid here, I do think they have a very competitive design.

Yeah it'll be interesting if we get to see professional reviews of EPYC vs the server version of Skylake-X. I'm betting EPYC will have higher performance per watt. And it'll be huge if so, since as far as I know that's one of the most important metrics for datacentres. Much like fuel consumption for jumbo jets.

And then, mentioning it again, if AMD can beat Intel at perf/w with this not-so-great GloFo 14nm process, what's Starship going to be like with 48 cores on IBM's 7nm?

Fun times ahead I feel.
 
As for some of the things mentioned above I would like to remind us of the following:

1. Very early reviews;
2. Immature BIOS;
3. Low frequency RAM (2666) used (remember how SKL & KBL flourish with 3000+ RAM in games!);
4. Possibly the lower L3 cache and higher L2 cache aren't liked much by games?
 
As for some of the things mentioned above I would like to remind us of the following:

1. Very early reviews;
2. Immature BIOS;
3. Low frequency RAM (2666) used (remember how SKL & KBL flourish with 3000+ RAM in games!);
4. Possibly the lower L3 cache and higher L2 cache aren't liked much by games?

I'd be quite surprised if a BIOS update will fix this. It's not exactly a new architecture.
 
I'd be quite surprised if a BIOS update will fix this. It's not exactly a new architecture.

It will improve things; I'm not sure anything needs fixing. I'm still skeptical about these early reviews and prefer to wait until Monday.

Hexus said: "At the time of writing we can only put this down to a lack of software maturity. In the interests of full disclosure, readers should note that the Core i9-7900X initially scored just 4,015 in the VRMark test and the result climbed to 10,191 courtesy of a new motherboard BIOS. There's clearly still work being done to optimise performance."
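The VRMark numbers Hexus quotes imply a very large BIOS-driven gain, which can be checked with a one-liner:

```python
# Improvement factor from the Hexus-quoted VRMark scores before/after the
# motherboard BIOS update.
before, after = 4015, 10191
print(f"VRMark improvement from the BIOS update: {after / before:.2f}x")  # ~2.54x
```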
 