
The 8 core showdown and analysis thread.

Soldato - Joined 23 Apr 2010 - 12,218 posts - West Sussex

OK so I have been smashing the data. This thread can serve multiple purposes really.

* To see if an 8 core CPU is a viable proposition for a gaming PC.

* To see core use in games over the past year, to see if things have changed.

* To give a purpose to the forthcoming 8 core Haswell E CPU in a desktop machine.

* To see if the rumours that Xeons are crap for a gaming rig are true.

I'm going to compare an 8 core* AMD FX 8320 CPU clocked to 4.9ghz (that cost £110) with an Intel Xeon 8 core 16 thread CPU (socket 2011) that I also paid £110 for. Then I am going to analyse which games actually make use of all of those threads and how well they load up the CPU.

NOTES.

First up, I'm fully aware the Xeon only boosts to 2GHz under load. I cannot overclock it, not even in tiny increments via the FSB, because even 101MHz puts the PC into a boot loop. So before the Intel boys dive in with accusations of comparing totally different clock speeds: there's nothing I can do about it. It's not my fault Intel decide to lock their CPUs at given speeds and then set a price structure for speed.

It's not always about speed and figures. At the end of the day a CPU can be perfectly suitable for a task, even if it does not appear as good as another one. You'd actually be amazed just how little CPU power you need for most of the time.

Heat and power are not a part of this analysis. Simply because I don't care, nor do I want to become embroiled in a stupid argument. This thread is strictly 8 cores only. I don't care about, nor want to know your results with your overclocked 4770k. Remember - 8 cores. I no longer care about clock speeds and IPC. I want to see more cores, being used, at lower prices. What I'd ideally like to see is a 6/8 core CPU by Intel that simply drops into a socket 1150 without the need to buy ridiculous motherboards or ram.

Hey, a guy can dream, right?

OK. So let's get it on then...

Here are the specs to concentrate on. The AMD rig is as follows.

AMD FX 8320 @ 4.9ghz
Asus Crosshair V Formula Z
8GB Mushkin Blackline running at 1533mhz (offsets with the FSB)
Corsair RM 750 PSU
Corsair H100
AMD Radeon 7990 ghz
OCZ Revodrive 120gb running RAID 0
Windows 8 Professional X64 (note, not 8.1 !)

Then onto the Intel rig. Note, this was a rebuild, so components stayed identical barring the board and CPU.

Intel Xeon V2 Ivybridge. 8 core, 16 thread, 2ghz
Gigabyte X79-UD3 motherboard.
8GB Mushkin Blackline running at 1600mhz XMP
Corsair RM 750 PSU
Corsair H100
AMD Radeon 7990 ghz
OCZ Revodrive 120gb running RAID 0
Windows 8 Professional X64 (note, not 8.1 !)

CPU validations.

AMD



Intel



I started with some benchmarks. First up was 3DMark 11.

AMD result



And the Intel



And already strange things happen. The Intel scored a higher physics score (which pertains to the CPU), yet even though the Intel also runs PCIe 3.0 (IB) it loses out overall. Very, very strange.

Then it was on to 3DMark (2013). AMD up.



And then it was the Intel's turn.



TBH that's bloody awfully close. It's actually within the margin of error, but I promised myself before I began that I would not obsess over one benchmark and become sidetracked running it over and over again.

OK, so round three, Asus Realbench 2.0.

Interlude.

Asus Realbench is *the* most accurate benchmark I have ever run in my entire life. Instead of making their own synthetic, unrealistic benchmark they simply took a bunch of programs and mashed them together. This way the results are actual real world results. As an example, test one is GIMP image editing. Then it uses Handbrake and other tools to gain a good idea of what a system is capable of.

This is also the toughest benchmark I have ever run. I can run Firestrike all day long, but Realbench absolutely tortures a rig to breaking point.

I ended up having to remove the side of the AMD rig and aim a floor standing fan at it to get it through.

So here is the AMD result.



And the Intel result.



Wow. Now this one truly knocked me sideways. I never expected the AMD rig to win on IPC alone (GIMP). Even an I7 920 runs the AMD close in GIMP, but the AMD absolutely trumped the Intel all the way through.

And this, lads and ladies, is why Asus make very high end boards for these chips. Simply because, as Bindi (an Asus employee) points out, the AMDs are actually very good CPUs.

Then it was on to Cinebench, and another surprise..

AMD



And then Intel.



The surprise? Not that AMD won. I was actually very impressed with the Intel's performance, given it is clearly running at less than half the speed it's actually capable of. I'd hazard a guess that this CPU could actually double that speed if unlocked and overclocked, which does make me a teeny bit excited about Haswell E.

OK so no set of benchmarks would be complete without at least one game. I decided to choose Metro : Last Light. You'll see why later when I get onto the part about core use, but here is the AMD's result.



And the Intel.



And it was finally victory to the Intel. Not by much, but Metro clearly absolutely loves the cores and wants as many as you can throw at it.

Due to this result I decided to keep the Intel. There are other reasons of course, this played a part.



Less than 14 watts idle, and I had real trouble making it use more than 90W under load. Temps are always under 40C no matter what, which means the rig is now very quiet.
 
OK so it's time for part two. The comparisons are now over, now it's time to crack on and see what those cores offer us in gaming.

With this part of the analysis I aim to debunk the myths. So, right now I am a mythbuster. Those myths include, but are not limited to -

"Pah. You don't need any more than four cores for gaming. Only a tiny handful of games use that many"

"No games support 8 cores, let alone more"

"Xeons are crap for gaming because they're somehow different inside"

OK, so those are the usual off the cuff comments doing the rounds. Here's my take: I would take a CPU clocked at 2GHz with 8 cores and 16 threads over a quad core CPU clocked to within an inch of its life. Games and applications that use the cores tend to spread the load around, meaning you get the same sort of performance without using masses of power or generating tons of heat. This is exactly why I decided to put a Westmere hex into my 670 SLI rig: in all of the latest games (and I mean all of them) core count produced very similar, or better, results than running a CPU to within an inch of its life, its thermal limits and its voltage tolerance.

This is also why I chose to keep the 8/16 Intel in the rig, rather than putting back the more brutal AMD. I don't usually care about power use tbh. My bills are more than affordable, even with the stuff I have. However, noise is always a bonus if you can eradicate as much of it as possible.

Now I know I'm probably completely alone in feeling this way, but tbh? it's what we should have been demanding for years. More cores, better threading, lower power consumption to get to the same place, etc etc.

Sadly, up until this exact moment Intel have not offered us a massively threaded CPU *with* the ability to overclock it. You could count socket 1366, but TBH this, IMO, was an oversight and a mistake by Intel. Had it not been so, they would have left the strap and BCLK separate from the PCIe and SATA clocks on socket 2011. Nope, Intel wanted to make sure you were stuck on their K series quad core chips. So it's always been "You can have the cores, but not the overclocks... Or the overclocks without the cores".

We are still going to be left to one side of course. Intel are already making and selling 12 core 24 thread CPUs but expect to see those in a year or two, unlocked and rebadged as "Extreme" edition CPUs.

Right, so with all of that said let's see what these cores can actually do, shall we?

Firstly I will explain how I performed this research. In Windows 8 (don't even think about using Windows 7; it does not correctly address any more than 4 cores, and anything more is a bodge and an afterthought that does not work properly) there is a very handy little view in your task manager that can be split up to show you how your cores are being utilised.

Here is how it looks once you split it to the full amount of threads; this is also an 8 core 16 thread system sitting idle.



OK. Now note that each core and thread (so physical and logical) has its own box. The bottom of that box is 0%, the top is 100%. As the graphs fill up they display core use.

Core use is recorded for 60 seconds, so basically the method I used was to load up a game, wait for it to get to an intense part of the action (when the rig makes the most noise, basically) and then press ALT and TAB to return to the graph. At which point I simply take a screen shot.

Note though - actual utilisation will not be accurate because I have now exited the game. The only way to monitor that accurately is to run an accessory screen and record it in realtime. Not something I will bother with (yet).

So the first game put to the test was Batman Arkham City.



And as we can see, there is plenty of activity across all 16 threads. What we are looking for though is for the spikes to look the same. This indicates even loading over all threads. Very few of these games do this, but, there is still plenty of activity on each thread indicating high usage.

Then it was onto Battlefield 3.



No surprise. I've tested BF3 on the AMD and it wanted at least six cores with residual load over anything more.

Battlefield 4



OK, so we know BF4 is an 8 core game. However, BF4 tends to lean a lot less hard on the CPU. It seems DICE have been doing some work to make sure the GPU gets the harder job. Good job, DICE!

The first genuine surprise of the day, COD : Ghosts.



OK, looks like it could well be a modern console port then. Then it was on to Crysis 2. Again, I already knew what was going to happen here...



Only really wants four cores, spends most of its time leaning on two. Not so good then.

Time for Crysis 3



Much, much better. However, I am also aware that different levels in Crysis 3 change the dynamics. Certain levels want CPU cores, certain levels leave them to one side and call on the GPU. This is why the AMD vs Intel results in Crysis 3 are all over the place. However, I still say you're better off with the cores tbh. Time for Far Cry 3



And we can see that it's kind of lame. Far Cry 3 may use the same engine as Crysis 3 but it's clearly nowhere near as demanding, or complex. I admit the results were taken right at the very beginning of the game and thus, are likely to change as the game goes on.

Time for Hitman : Absolution.



Hitman is one of the better games for demonstrating that it can thread very well. However, this is Hitman's built-in benchmark, after all. I wish more games contained benchmarks tbh. OK, so it was time to try something older.

RAGE by ID, using the TECH engine.



Wow. Now this game is what, three or four years old? Yet it still likes to thread itself well. God bless John Carmack, the guy clearly wants what I want.

Now let's see what Metro : Last Light is doing.



And again we see nice even core loads. It's pretty apparent that Metro loves having cores at its disposal. It allowed a CPU that won in absolutely nothing to beat another CPU simply by threading itself properly. Good show !

Time for some Tomb Raider benching.



Again, it wants the cores. However, it never truly loads them up beyond 50%. So in this instance again the Intel managed to produce better results than the AMD, simply because the core loads are low and the more cores you have, the more the load is spread.

And finally it was time for Wolfenstein : The New Order.



And once again we see a pretty even distribution over all 16 threads. This is because the game is running on ID TECH, which Bethesda and others have been using for a while. This means more support for the future.

Conclusion.

Phew. I'm absolutely bloody knackered now. But we can clearly see that the "four cores is enough" argument belongs where it should be left: in 2010. Things have changed, consoles have changed, the ports over to PC have changed.

This may well pave the way perfectly for Intel's 8 core chip. TBH? like many times in the past they've left AMD to do all of the hard work on something that they saw as a waste of time. Now though? they have no choice really. Games and apps are now becoming more highly threaded by the day, and users will demand processors that can make full use of this.

I could well have sat here and benchmarked even more games that use the cores. There are quite a few that I am aware of, but sadly I only have so much time in the day.

Hope you enjoyed reading :) may well be a bit of an eye opener :)
 
Your Xeon is only 1.6GHz on 8 cores? Firstly you said 2.3GHz, then you said 2GHz. That's not a bash, but it rapidly lowers its raw performance output; at 1.6GHz it should be slower than a stock 3770K all in.
Also, you're still mis-using IPC :p

Also, you're twisting it (unless one's stupid): no one thinks Xeons are inherently crap for gaming (I certainly don't, I've put a 4C/8T 1230V2 in my brother's rig). I just would never choose the one you've chosen.
 
Your Xeon is only 1.6GHz on 8 cores? Firstly you said 2.3GHz, then you said 2GHz. That's not a bash, but it rapidly lowers its raw performance output; at 1.6GHz it should be slower than a stock 3770K all in.
Also, you're still mis-using IPC :p

1.6ghz no turbo. Intel do at least give you 4 bins over 8 cores, so it's easy to lock it to 2ghz.

2.3ghz is one core only IIRC, though it may have been two....

Either way, blame Intel for the confusion.. Don't worry though, AMD pull the same dirty tricks to make their CPUs sound like they're clocked higher than they really are.



^ there you go.
 
I wish you'd used MSI Afterburner for Core Usage statistics (So we can see core usage at the same time in figures).

But you've obviously gone to some effort.
 
I wish you'd used MSI Afterburner for Core Usage statistics (So we can see core usage at the same time in figures).

But you've obviously gone to some effort.

Yeah it's not realtime. It's more of a "Here's what happened over the past sixty seconds".
 
I don't get it. Why am I looking at a comparison between one CPU at 2GHz and another at nearly 5GHz? If you can't compare like for like then I don't see how this is of any use.
 
Nice review, though I'm too lazy to read it all, and I feel bad contradicting anything you say (especially when I don't have any tests to prove it). However, you suggest more cores are king, but 1 core at 8GHz would be faster than 4x 2GHz, all other things being equal (unless I'm missing something).

I'd be interested if anyone could prove me wrong, or link some benchmarks where it's tested, so we can see the overhead caused by multi core (if any at all).
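The "1 core at 8GHz vs 4x 2GHz" intuition above is basically Amdahl's law, and you can put rough numbers on it. A quick sketch; the parallel fractions are made-up illustrations, not measurements from any rig in this thread:

```python
# Back-of-envelope for "1 core at 8GHz vs 4x 2GHz", via Amdahl's law.
# The parallel fractions below are illustrative guesses, not measurements.

def runtime(work, clock_ghz, cores, parallel_fraction):
    """Relative runtime: the serial part runs on one core, the rest is split."""
    serial = work * (1 - parallel_fraction) / clock_ghz
    parallel = work * parallel_fraction / (clock_ghz * cores)
    return serial + parallel

if __name__ == "__main__":
    work = 100.0
    for p in (0.5, 0.9, 1.0):
        single = runtime(work, clock_ghz=8.0, cores=1, parallel_fraction=p)
        quad = runtime(work, clock_ghz=2.0, cores=4, parallel_fraction=p)
        # The 4x 2GHz part only draws level when p reaches 1.0
        print(f"parallel fraction {p}: 1x8GHz = {single:.2f}, 4x2GHz = {quad:.2f}")
```

With the work fully parallel the two come out identical; any serial fraction at all and the single fast core wins, which is the poster's point. The counter-point is about value: at equal prices you rarely get four times the clock, so once games actually thread, the extra cores are the cheaper route to the same place.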
 
Intel at 2GHz vs AMD at 4.9GHz; all benches irrelevant.

Shows you didn't bother to read properly, or simply don't understand.

One CPU vs the other. One can be overclocked, one can't - that's not my fault.

Yeah, probably a pointless benchmark but hey, at least it shows the performance of two CPUs.

That's why I separated the core use statistics and kept that as its own entity, because that was the most important part.


Nice review, though I'm too lazy to read it all, and I feel bad contradicting anything you say (especially when I don't have any tests to prove it). However, you suggest more cores are king, but 1 core at 8GHz would be faster than 4x 2GHz, all other things being equal (unless I'm missing something).

I'd be interested if anyone could prove me wrong, or link some benchmarks where it's tested, so we can see the overhead caused by multi core (if any at all).

It's actually very very easy to disable cores on the Xeon. So, theoretically I could run a few benchmarks and cut cores as I go...

Watch this space ;)
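A software-only way to run that experiment without touching the BIOS is CPU affinity. Sketched here for Linux; the burn() workload is a toy stand-in of mine, not any benchmark from this thread:

```python
# Hedged sketch of the "cut cores as I go" experiment, done with Linux CPU
# affinity instead of disabling cores in the BIOS. burn() is a toy workload.
import os
import time
from concurrent.futures import ProcessPoolExecutor

ALL_CPUS = sorted(os.sched_getaffinity(0))  # logical CPUs we may use

def burn(n):
    """Toy CPU-bound task standing in for one slice of a benchmark."""
    s = 0
    for i in range(n):
        s += i * i
    return s

def bench(core_count, tasks=8, n=1_000_000):
    """Pin the process tree to `core_count` CPUs, then time `tasks` slices."""
    os.sched_setaffinity(0, set(ALL_CPUS[:core_count]))  # children inherit this
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=tasks) as pool:
        list(pool.map(burn, [n] * tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    for cores in (len(ALL_CPUS), 2, 1):
        print(f"{cores:2d} cores: {bench(cores):.2f}s")
```

Each call pins the whole process tree to the first N logical CPUs before timing, so a properly threaded workload should show the runtime climb as cores are taken away, and a poorly threaded one should barely move.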

I don't get it. Why am I looking at a comparison between one CPU at 2GHz and another at nearly 5GHz? If you can't compare like for like then I don't see how this is of any use.

They are being compared like for like. One comes unlocked, the other doesn't. Both are real products made by two different companies.

I could compare a Porsche to a Mini if I wanted. Possibly pointless, but still very relevant. If you don't understand it then just skip part 1. Part 2 is far more interesting.

Your Xeon is only 1.6GHz on 8 cores? Firstly you said 2.3GHz, then you said 2GHz. That's not a bash, but it rapidly lowers its raw performance output; at 1.6GHz it should be slower than a stock 3770K all in.
Also, you're still mis-using IPC :p

Also, you're twisting it (unless one's stupid): no one thinks Xeons are inherently crap for gaming (I certainly don't, I've put a 4C/8T 1230V2 in my brother's rig). I just would never choose the one you've chosen.

Re-quote. Sorry, was multi tasking and missed a couple of points raised.

1. GIMP only uses two cores, no matter how many you have. In this test it's all about IPC. That's how Asus coded it. Hence I was expecting the Ivy to win, because my 8320 can't even beat an I7 920 on IPC alone..

2. The Xeon crap for gaming bit.. That's for another forum mate. One with far, far less understanding than this one. I've been using Xeons myself for gaming for bloody years. 13 years, because at one point Xeons were massively better than any desktop product (mmm, yummy P3 Xeons with 2mb full clock cache on a Marlinspike).

OK so now I'm beginning to show my age :D
 
Why not wind down the 8 core AMD to match clock speed, or choose a like for like CPU?

A 2600K or 3770K could have been got for the same price.


"One CPU vs the other. One can be overclocked, one can't - that's not my fault."

Whose is it then? :p You chose CPUs that aren't balanced for a start, then wonder why people pull the benchies to bits.
 
Why not wind down the 8 core AMD to match clock speed, or choose a like for like CPU?

A 2600K or 3770K could have been got for the same price.


"One CPU vs the other. One can be overclocked, one can't - that's not my fault."

Whose is it then? :p You chose CPUs that aren't balanced for a start, then wonder why people pull the benchies to bits.

First up why not put the AMD under DICE? or LN2? clock the bugger to 6ghz and beyond?

Whose fault is it.. It's Intel's. Ever since Clarkdale they've gone around with their padlock making sure that everything is rationed out to make them money.

With 1366 they forgot to derp the FSB, we got boards that could run 6 core chips *and* overclock both. With 2011 they did a bit of both. Some chips could separate the PCIE bus and strap, others were locked.

Lock lock lock - that's not my fault !!!

All I could do here is make sure both chips were doing the absolute maximum they can do. The Xeon is 2GHz, that's it. Underneath? Heh, remove the padlocks and you could have some real fun, but as it stands it's completely down to Intel.

So the point I am making? Subliminally, AMD's are bloody fantastic CPUs for the money.

In a world that makes perfect sense, a £110 retail boxed CPU should absolutely not even be able to snap at the heels of a £700+ one. Absolutely no way. But you can buy a 5GHz FX for what, a hundred and fifty notes? And look what it can do.

Far too much for a £110 CPU, basically. According to the BS that flies around this chip should not even be able to get up from its ass, let alone haul ass and dance rings around a £700+ CPU.

And, through all of this, you can see why AMD have been absolutely loath to throw away a perfectly good CPU that performs incredibly for the money just to play games with Intel.

Don't expect them to replace the Vishera any time soon. What they did was do what Intel does, launch a server CPU into the desktop market. Only AMD have been patiently waiting for support. Once it comes, yeah, Intel will be faster for £900 but the fact will be you simply won't need a £900 Intel unless you really need the Epeen points, or, are running an absolutely stupid GPU arrangement.

Trust me, if the general population didn't get that, AMD wouldn't be wasting their time running a production line for CPUs that are apparently so crap no one actually buys or uses them.
 
Still, you're using IPC wrong.

The IPC doesn't change even when the clock does, the performance of that core does (I have no problem believing a 4.9GHZ core kills a 2GHZ Ivy core, in fact it'd be a fair chunk faster), but that doesn't make the AMD have higher IPC.
 
Just looking at your Cinebench 15 scores Andy, and your 2GHz Xeon is only 20cp shy of a 4.8GHz 4670.

I see the point, even if not many others do. Frequency is only part of the equation.

Overclocking used to be more valid 15 years ago, when you could make a cheap part perform like an expensive one for a little money. Now you need to have an expensive part (for Intel anyway) to be allowed to play. Excepting the recent Pentium, which was a bit underwhelming IMO.
 
Very nice Andy, thank you for the effort of doing all that, it must have taken a while.

Really does go to show that cores count for more than people think they do.

The whole question of just underclocking the AMD: yes, he could, but then I would expect this thread would have been trashed by now, as certain members of the AMD clique would not be able to see their beloved AMD humbled so much without lengthy argumentative posts. :D As it is, the Intel is only just behind with under half the clock speed.
 
Just looking at your Cinebench 15 scores Andy, and your 2GHz Xeon is only 20cp shy of a 4.8GHz 4670.

I see the point, even if not many others do. Frequency is only part of the equation.

Overclocking used to be more valid 15 years ago, when you could make a cheap part perform like an expensive one for a little money. Now you need to have an expensive part (for Intel anyway) to be allowed to play. Excepting the recent Pentium, which was a bit underwhelming IMO.

Yup, I'm beginning to come around to the idea of more cores, less voltage and lower clocks, but still similar performance.

I can run both of my H100s with the fans at 5v. TBH? I could probably remove them completely.

I think this is the sort of thing AMD have been waiting on. Sure, they're never, ever going to get back the single threaded crown, but the more the merrier.. I mean, these guys have 16 core server chips at their disposal.

Very nice Andy, thank you for the effort of doing all that, it must have taken a while.

Really does go to show that cores count for more than people think they do.

The whole question of just underclocking the AMD: yes, he could, but then I would expect this thread would have been trashed by now, as certain members of the AMD clique would not be able to see their beloved AMD humbled so much without lengthy argumentative posts. :D As it is, the Intel is only just behind with under half the clock speed.

I'm not underclocking anything mate. All that will do is basically tell us what we've known for years, Intel dominates the single thread.

Intel will just continue to dominate, well, themselves.

Derping the AMD wouldn't prove anything unless I was the Golden Child and had a CPU that was unlocked, when actually what goes on sale is locked.

Hence my point raised about why not throw the AMD under DICE and then run the tests again? why? because I can, because AMD have allowed me to.

There were glimmers of hope when Bulldozer launched.. It was good in, well, Winzip lol. But, according to legend, it shouldn't be good at anything.

AMD, barse ackwards, as per usual. Release a 64 bit CPU before there's even a proper 64 bit end user OS. Release an 8 core CPU when absolutely nothing uses it, then spend about five years getting it supported lol.
 
Nice work ALXAndy.

Would you mind running x264 bench? I'd like to see how they compare to my 4790K.

4790K @ 4.6GHz
 
Will try and fit it in later mate. Right now the rig is in the hands of its owner :D

Edit, just realised matey, that's actually a part of the Asus Realbench.. Not sure the scores would be comparable, but it runs H264 encoding as part of the benchmark.



There you go. Core use was a bit erratic and it would pause at times for a few seconds. Could be memory speed or something.
 
well done Andy, i'm tempted to send you my old rig so that you can fix it for me ....stupid ****** *** :D

but i guess i have to do it myself :cool:
 