Intel 8 Core i7 7820X Unlocked CPU/Processor

Just having high physical IPC means squat. That is why nobody uses your definition in a performance context. No one is misusing the term; they are simply not using your irrelevant definition.

Your definition would mean processors have hardly progressed over the last 30 years.

For example, the original Core architecture could issue 4 instructions per clock. Yet I would like to see you argue with a straight face that that figure is somehow relevant when comparing the performance of an original Core chip and a current Kaby Lake.

http://www.anandtech.com/show/1998/4

Looking into it, my comments on software optimisation were not relevant in this context. However, IPC is instructions per clock. Just because IPC hasn't changed doesn't mean you can misuse the term, and misusing it leads to misunderstanding and mistakes. The number of instructions per clock isn't the only thing that determines a processor's performance: there are also things like latency between instruction executions, cache hit rates, data transfer speeds, clock speed, etc.
That is why some reviewers use the term "single-threaded performance" instead of IPC. A more appropriate term (though still not great) would be instructions per second.
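To make the distinction concrete, here is a minimal sketch in Python (all numbers are made up for illustration, not measurements from any real chip): the architectural figure is how many instructions a core can issue per clock, while the realised IPC people usually quote is instructions retired divided by cycles, and single-threaded performance is roughly that times clock speed.

def realised_ipc(instructions_retired: float, cycles: float) -> float:
    """Average instructions actually completed per clock cycle."""
    return instructions_retired / cycles

def single_thread_perf_gips(ipc: float, clock_ghz: float) -> float:
    """Rough throughput in billions of instructions per second."""
    return ipc * clock_ghz

# Two hypothetical 4-wide cores running the same workload: both can
# issue 4 instructions per clock, but stalls (cache misses, branch
# mispredictions, dependency chains) mean the realised IPC differs.
old_ipc = realised_ipc(instructions_retired=1.0e9, cycles=1.0e9)  # 1.0
new_ipc = realised_ipc(instructions_retired=1.8e9, cycles=1.0e9)  # 1.8

print(single_thread_perf_gips(old_ipc, clock_ghz=2.4))  # 2.4 GIPS
print(single_thread_perf_gips(new_ipc, clock_ghz=4.2))  # ~7.6 GIPS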
 
So, forgetting value for money and referring back to the OP: which chip is simply better for performance and features (regardless of whether some will use them or not)?
 
Taking value for money out of the equation (along with thermals and power consumption) benefits the 7820X greatly. Thanks to its clock speed and single-threaded performance advantage, it pulls ahead of the 1800X in most (if not all) applications.

Arguably the easiest way to look at it is as an 1800X clocked at 4.6-5GHz.
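Rough arithmetic behind that framing, as a sketch (Python; the clock speeds are assumptions for illustration, and it assumes the two chips end up with broadly similar realised IPC):

# If two chips have broadly similar realised IPC, relative
# single-threaded performance is roughly the ratio of sustained clocks.
r7_1800x_ghz = 4.0  # assumed typical 1800X all-core overclock
i7_7820x_ghz = 4.8  # mid-point of the 4.6-5GHz range quoted above

advantage = i7_7820x_ghz / r7_1800x_ghz - 1
print(f"~{advantage:.0%} advantage from clock speed alone")  # ~20%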

Right now, however, I would hold off on any purchase until we see what Threadripper brings to the table.

Either that, or pick up a 1700 for under £300, combine it with a decent mid-range motherboard and clock it to 3.7GHz+.
 
Totally ignoring costs, I would think that the 7820X - delidded and tested by 8pack, on a top motherboard, in a case with a top watercooling kit, running at 5.0GHz (I'm assuming that's even possible) - is the 'better' chip. All of that could easily top £1,500, however.
 
It's a tricky one - what most people are referring to is the tick-over rate of the CPU performing the same highly serialised sequence of operations (usually contained in a function) over and over, and a lot of things can affect that, including the fact that any given function can vary hugely from any other.
 
I haven't seen anyone else make a mistake yet.

You're the one who claimed Ryzen has higher IPC and just needs to be optimised for, refuting the point that SKL and KBL have higher IPC - a completely misleading statement given the posts you were responding to.

Even AMD don't use that redundant definition outside of their engineering team.

We also don't need the patronising comments about what affects CPU performance. We know what those factors generally are (what do you think my previous post demonstrated?), and hence we talk about IPC in terms of clock-for-clock performance in real applications. I've yet to see someone fail to understand and apply that.

Also, Anandtech, one of the most technical reviewers of processors, uses IPC in the context most people do.

Only when they dig into the exact architecture in deep dives do they use IPC in any other sense.
 
Are you really going to ignore the fact that in the quoted post I corrected myself on the topic of software optimisation?

I didn't think my comment was patronising, but apologies if it offended you. Admittedly, when I used the word 'mistake' I had another example in mind, but it was too relevant. My point was that saying 'instructions per clock' rather than 'single-threaded performance' by definition causes people not to consider the other factors listed.

Are you trying to argue that it is okay to use words in a technically incorrect manner simply because that's what people generally understand and use? If you are, then we will need to agree to disagree.
 
Where are the detailed 7820X reviews with temps, overclocking etc.? All the reviews I've read are mostly focused on the 7900X, with only a passing reference to the 7820X (aside from benchmarks).
 
i7 is £580
1700 is £320

Also watch this before making any decision...

[embedded video]

...and yes, I am an Intel fanboy

Good video, though I have no idea how credible the presenter is.

I do wonder why there's such a panic over core counts, as most home desktops don't need more than 8 or 10 cores at this time, in my opinion, no matter what the use. So winning the core-count battle seems a little pointless, although I suppose if it's done at little cost (retail price) it doesn't matter. What I mean is: if you do a lot of work at home with heavily threaded applications, 8 or 10 cores must already do it fast enough, so is doubling the core count really going to make a big difference? That's a question, not a statement of fact. As an example, getting something from 60 seconds down to 30 seconds is good, but halving that again by doubling the cores only knocks off another 15 seconds, not 30. It would of course make the machine much more future-proof though!

And if AMD were forced down a design path because of their lack of money compared to Intel, which now seems a huge positive, are there any downsides to their approach? I'm thinking there must be, but we'll see.

Not trying to drag AMD's achievements down, I have to add; it all looks very impressive to me :).
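The diminishing-returns arithmetic can be made concrete. A minimal sketch (Python; the 60-seconds-on-8-cores baseline is the made-up example from the post above, and it assumes perfect scaling, which is the best case):

def render_time(base_time: float, base_cores: int, cores: int) -> float:
    """Perfect scaling: wall time is inversely proportional to core count."""
    return base_time * base_cores / cores

prev = render_time(60.0, 8, 8)
for n in (16, 32, 64):
    t = render_time(60.0, 8, n)
    print(f"{n:>2} cores: {t:4.1f}s (saves {prev - t:4.1f}s over {n // 2} cores)")
    prev = t

Each doubling halves the remaining time, so the absolute saving shrinks (30s, then 15s, then 7.5s), and in reality any serial fraction of the work (Amdahl's law) makes the returns diminish even faster.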
 
The Ryzen chips are quite small - around 190mm² for an 8-core CPU including the chipset (Ryzen is an SoC, so realistically it can run without a chipset if required), whereas the chip in something like the Core i9 7900X is around 320mm², plus the chipset. This is the first time in a very long time Intel has had to use a much larger chip to compete with a smaller AMD one.

Edit!!

It's going to be even more significant when the mainstream APUs are released - I expect they will have less L3 cache (or maybe none), so the mainstream Ryzen CPUs might be even smaller. This is also on a process node designed more for mobile devices and cost, not absolute performance.

For AMD this is significant, since in the past they were using huge chips (around 250mm²~330mm²) to compete even in the sub-£100 area, and this should mean much bigger margins over time - I mean, a fully enabled Polaris 10 GPU on the same process is 232mm².

Second Edit!!

There is also some noise about 7nm actually being optimised more towards performance CPUs, so we might not have seen the best yet, IMHO OFC.
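To put those die sizes in perspective, here is a rough dies-per-wafer sketch (Python, using the standard textbook approximation for a 300mm wafer and ignoring yield; the 190mm² and 320mm² figures are the approximate areas quoted above):

import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: usable wafer area minus edge loss, yield ignored."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

ryzen = dies_per_wafer(300, 190)  # ~323 die candidates per wafer
skl_x = dies_per_wafer(300, 320)  # ~183 die candidates per wafer
print(ryzen, skl_x, f"{ryzen / skl_x:.2f}x")

All else being equal, the smaller die gets AMD roughly 1.8x as many candidates per wafer, which is where the margin advantage comes from.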
 
Supposedly 7nm is being geared up for 5GHz. People are saying IBM have had a big influence on this node.
 
I can't speak for all multi-threaded programs, but for 3D modelling, even just for a home/hobby user, your timescale is off. The difference can be measured in minutes and hours - for the more extreme home users, days.
There's also the other side, which is that the more processing power you have, the more detail you use. For example, you might render at a higher resolution, use more samples, or have more complex materials in your scene. So while I agree that certain tasks see diminishing returns, people will simply find other ways to use that processing power.
 
IBM developed it, and are in partnership with Samsung, SUNY and GloFo to produce it.

https://arstechnica.com/gadgets/2015/07/ibm-unveils-industrys-first-7nm-chip-moving-beyond-silicon/
Somewhat extraordinarily, due to incredibly tight stacking (30nm transistor pitch), IBM claims a surface area reduction of "close to 50 percent" over today's 10nm processes. All told, IBM and its partners are targeting "at least a 50 percent power/performance improvement for the next generation of systems"—that is, moving from 10nm down to 7nm. The difference over 14nm, which is the current state of the art for commercially shipping products, will be even more pronounced.
 
The problem is there's no real standard for these things, and what one firm calls <x>nm another calls something completely different. For instance, most industry observers say that IBM's 7nm and Intel's 10nm are very close as far as any tangible difference in the resulting product goes, while TSMC's and Samsung's 10nm are closer to Intel's 14nm than to what people would "generally" consider 10nm.
 
Aye, but what matters is the performance increase from those. IBM claims a 50% increase from IBM 10nm to IBM 7nm.

The increase from 14nm would be even larger; we already know that GloFo's use of Samsung's 14nm seems to cap Ryzen at around 4GHz for now. Even if GloFo only manage a 50% improvement going from 14nm to 7nm, rather than the 10nm-to-7nm jump IBM quotes, that's still a significant potential performance increase for Zen 2.
 
In the article it states power/performance - a 50% power/performance increase usually equates to around a 20-25% performance increase when biasing for performance, or a bigger power reduction when biasing for efficiency.
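As a back-of-envelope check on that 20-25% figure, here is a sketch assuming the classic dynamic-power model (power proportional to f·V², with voltage scaling roughly in step with frequency, so power roughly proportional to f³; real silicon is messier):

# If a node change removes a fraction of power at the same clock, and that
# headroom is reinvested in frequency at constant power, the clock gain is
# f_new / f_old = (1 / (1 - saving)) ** (1/3) under a cubic power model.
def freq_gain_at_iso_power(power_saving: float) -> float:
    return (1 / (1 - power_saving)) ** (1 / 3) - 1

print(f"{freq_gain_at_iso_power(0.50):.0%}")  # power halved -> ~26% more clock
print(f"{freq_gain_at_iso_power(0.33):.0%}")  # 1.5x perf/W  -> ~14% more clock

Those two readings bracket the 20-25% quoted above; the exact figure depends on how much of the node gain is spent on clocks versus power.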
 