6700K 4.5GHz vs 5820K 4.5GHz

Ideally you should be comparing both CPUs with the same graphics card. A 980 Ti isn't far off the performance of 970 SLI.

But when you're removing GPU performance from the equation, why does it matter what GPU is used? He is testing CPU bottlenecking; the GPU's speed is irrelevant.
 
Is it always GPU limited for the entire run? Do we know for sure? Are they on the same drivers? The same OS, even? Is one using an SSD whilst the other uses a mechanical drive?

They're all variables not controlled for.

The only way to compare CPUs is in the same system.
 
There's not a chance in hell the Fallout 4 1080p run is always CPU bottlenecked.

For the other 2 I'd say it's possible due to the super low resolution used.
 
At the end of the day the 5820K is as good at gaming as the Skylake chip, give or take a few frames.

The 5820k leaves the Skylake chip for dead in everything else for roughly the same price.

End thread;)
 
What motherboard would someone recommend to accompany the 5820k?

From looking at the reviews of the cheapest Gigabyte boards, it appears they have a lot of issues with the BIOS and USB devices. Realistically are you looking at £200+ for a decent X99 motherboard?
 
The 5820k leaves the Skylake chip for dead in everything else for roughly the same price.

Err, no. There are plenty of applications out there that are still poorly threaded, where Skylake clocked at 4.6-4.8GHz (a common overclock) will outperform Haswell-E.

It's only when you get to extremely well-threaded applications, such as encoding/rendering, SETI@Home, etc., that the 6/8 cores really shine. These are tasks that 99% of people don't even run.
 
What motherboard would someone recommend to accompany the 5820k?

From looking at the reviews of the cheapest Gigabyte boards, it appears they have a lot of issues with the BIOS and USB devices. Realistically are you looking at £200+ for a decent X99 motherboard?

Pretty much, yes. X99 motherboards are much more expensive than Z170 boards.

Remember you need to buy more expensive RAM kits for X99 also. X99 needs 4 DIMMs to populate its quad-channel memory controller, so a 4x4GB (16GB) kit for example, whereas Skylake/Z170 only needs 2 DIMMs, meaning a cheaper 2x8GB (16GB) kit will suffice.
 
Pretty much, yes. X99 motherboards are much more expensive than Z170 boards.

Remember you need to buy more expensive RAM kits for X99 also. X99 needs 4 DIMMs to populate its quad-channel memory controller, so a 4x4GB (16GB) kit for example, whereas Skylake/Z170 only needs 2 DIMMs, meaning a cheaper 2x8GB (16GB) kit will suffice.

You don't 'need' to buy a quad-channel kit; a dual-channel kit will work fine with no performance hit in most instances... You do, however, have the option of quad-channel memory with X99 for more bandwidth, which you don't with Z170...

http://www.pcworld.com/article/2982...e-shocking-truth-about-their-performance.html

This isn't the first time I've had to point this out...
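
If anyone wants to see where the 'more bandwidth' figure comes from, here's a rough back-of-the-envelope sketch. It assumes DDR4-2400 purely as an example speed, and it's theoretical peak only; as the PCWorld article above points out, the real-world difference is usually much smaller.

```python
# Rough theoretical peak bandwidth: channels x transfer rate (MT/s) x 8 bytes per transfer.
# DDR4-2400 is an assumed example speed; real-world gains from quad channel are
# usually far smaller than the theoretical peak suggests.
def peak_bandwidth_gbs(channels, mt_per_s, bytes_per_transfer=8):
    """Theoretical peak memory bandwidth in GB/s."""
    return channels * mt_per_s * bytes_per_transfer / 1000

for channels, platform in [(2, "Z170 dual channel"), (4, "X99 quad channel")]:
    print(f"{platform}: {peak_bandwidth_gbs(channels, 2400):.1f} GB/s peak (DDR4-2400)")
# Z170 dual channel: 38.4 GB/s peak (DDR4-2400)
# X99 quad channel: 76.8 GB/s peak (DDR4-2400)
```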
 
Pretty much, yes. X99 motherboards are much more expensive than Z170 boards.

Remember you need to buy more expensive RAM kits for X99 also. X99 needs 4 DIMMs to populate its quad-channel memory controller, so a 4x4GB (16GB) kit for example, whereas Skylake/Z170 only needs 2 DIMMs, meaning a cheaper 2x8GB (16GB) kit will suffice.

Depends...

There are £400 X99 boards and £400 Z170 boards, as well as £100-200 boards.

The price difference between the two is negligible. I can get both 4x4GB and 2x8GB kits for the same price shopping around. And a 5820K for £308.

And like caracus said, you don't 'need' to have quad-channel memory for X99.
 
You say 'lightly threaded' as if it's a bad thing? Hahaha...


Err... of course it's a bad thing.

Would you be happy if a game sat there only using one core?

We haven't seen significant core speed increases for 10+ years now. Looks like IPC improvements are not quite what they used to be...

So games and apps had better become more threaded if they want to progress...
 
See here...

http://www.techspot.com/review/1089-fallout-4-benchmarks/page2.html

for a review that shows that Fallout 4 at 1080p/Ultra is not as CPU bound as suggested above (with claims that it does not matter which GPU is used to test!).

Unless you think the difference between the 980 Ti (113/91 max/min) and the 970 (81/65 max/min) is negligible and shows that the game is totally CPU bound at 1080p!

The CPU used was the 6700K (@4GHz) in this case!

Changing from a 6700K (@4GHz) to a 5960X (@3GHz) only dropped the results from 113/91 to 105/84, with both running a 980 Ti on Ultra at 1080p.

The 4770K (@3.5GHz) was 106/85 for reference, if you want to compare quad core to quad core (not that it should matter, as the 5960X can't utilise its eight cores in this scenario).

http://www.techspot.com/review/1089-fallout-4-benchmarks/page5.html

Bottom line: Fallout 4 is quite reliant on per-core CPU power at 1080p Ultra, but you can't go comparing CPUs when they have different GPU load-outs, as GPU power is still the bigger factor in most scenarios!

A 500MHz clock difference plus the IPC improvement from Haswell to Skylake netted a whole 7/6 max/min frame rate difference using a 980 Ti at 1080p/Ultra...

Going from a 970 to a 980 Ti on a 4GHz 6700K netted a 32/26 max/min frame rate improvement!
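
Just to put those numbers side by side, here's the same arithmetic spelled out (nothing new, just the percentage gains from the TechSpot figures quoted above):

```python
# Percentage gains from the TechSpot max/min fps figures quoted above (1080p Ultra).
results = {
    "6700K @4GHz + 980 Ti": (113, 91),
    "5960X @3GHz + 980 Ti": (105, 84),
    "4770K @3.5GHz + 980 Ti": (106, 85),
    "6700K @4GHz + 970": (81, 65),
}

def pct_gain(new, old):
    return 100 * (new - old) / old

gtx970 = results["6700K @4GHz + 970"]
gtx980ti = results["6700K @4GHz + 980 Ti"]
haswell = results["4770K @3.5GHz + 980 Ti"]

print(f"GPU swap (970 -> 980 Ti, same 6700K): "
      f"+{pct_gain(gtx980ti[0], gtx970[0]):.0f}% max / +{pct_gain(gtx980ti[1], gtx970[1]):.0f}% min")
print(f"CPU swap (4770K -> 6700K, same 980 Ti): "
      f"+{pct_gain(gtx980ti[0], haswell[0]):.0f}% max / +{pct_gain(gtx980ti[1], haswell[1]):.0f}% min")
# GPU swap (970 -> 980 Ti, same 6700K): +40% max / +40% min
# CPU swap (4770K -> 6700K, same 980 Ti): +7% max / +7% min
```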


Ahaha... did you see that GPU usage? Omg, you people have no clue what a CPU bottleneck is.
GTX 970 - 89% GPU usage in Fallout 4 in the first scene
GTX 970 - 80% GPU usage in the second scene
GTX 980 Ti - 60% GPU usage in the first scene
GTX 980 Ti - 55% GPU usage in the second

In both cases the game is CPU bottlenecked. I'm sure people who know how a PC works appreciate this test. The rest, just don't post; first learn how a PC and CPU bottlenecks in games work. THX.

Except when you test properly, the CPU isn't the bottleneck you suggest it is!

I'm sure that people who 'know' how proper testing of hardware works have very little to learn about CPU bottlenecking in Fallout 4 from this thread...
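
For what it's worth, the rule of thumb in the quoted post above (GPU usage sitting well below ~95-100% with an uncapped frame rate usually points at a CPU or engine limit) can be written down as a simple check. This is just an illustration using the usage figures quoted, not a proper test methodology, and the 95% cut-off is an assumption:

```python
# Rule-of-thumb check from the quoted post: GPU utilisation well below ~95-100%
# with an uncapped frame rate usually means something else (often the CPU) is the limit.
# The threshold is an assumed cut-off and the samples are just the figures quoted above.
GPU_BOUND_THRESHOLD = 95  # percent utilisation

samples = {
    "GTX 970, scene 1": 89,
    "GTX 970, scene 2": 80,
    "GTX 980 Ti, scene 1": 60,
    "GTX 980 Ti, scene 2": 55,
}

for scene, usage in samples.items():
    verdict = "likely GPU bound" if usage >= GPU_BOUND_THRESHOLD else "likely CPU/engine limited"
    print(f"{scene}: {usage}% GPU usage -> {verdict}")
```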
 
Err... of course it's a bad thing.

Would you be happy if a game sat there only using one core?

We haven't seen significant core speed increases for 10+ years now. Looks like IPC improvements are not quite what they used to be...

So games and apps had better become more threaded if they want to progress...

You clearly know nothing of game design or profit margins.
 
I think if you have to convince people that one CPU is faster than another, it probably isn't worth worrying about. I'll wait until it's obviously faster. I may get the next-gen hex-core i7. It depends on other factors these days, given the tiny percentage gains per generation; it's more about motherboard features like USB-C (and one day 10Gb Ethernet).
 
I'd be more interested to see the comparison in a DX12 environment, or Mantle, than in old, poorly optimised engines like Fallout's.

The age-old dilemma: do you compare with the 99% of games which are 'optimised' for 4 cores and DX11, or with well-optimised games that use a proprietary API and can take advantage of extra cores / hyper-threading?

In that scenario, I'd expect the hex-core to edge it. A bit light on actual useful benchmarks, though.
 
Not sure why, but I do feel my 5820K @ 4.4GHz is smoother in GTA V than my 6700K @ 4.5GHz test machine, which I'm building for a mate; both have the same SSD and same 980 Ti, even the same game as it's under my Steam account.
 
Yes, the smoothness effect is definitely a thing. The fps can look about the same while the actual play of the game is very different.

It can be only a couple of fps different, but feel vastly different when playing.
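
That lines up with the usual explanation that perceived smoothness tracks frame-time consistency rather than the average fps counter. Here's a minimal sketch with made-up frame times (purely illustrative, not measured data) showing how two runs with nearly identical average fps can have very different worst-case frames:

```python
# Two made-up frame-time traces (in milliseconds) with roughly the same average fps
# but very different consistency - why 'about the same fps' can feel very different.
import statistics

steady = [16.7] * 60                      # hypothetical: every frame ~16.7 ms
spiky = [14.0] * 55 + [50.0] * 5          # hypothetical: similar average, occasional stutter

def avg_fps(frame_times_ms):
    return 1000 / statistics.mean(frame_times_ms)

def worst_1pct_fps(frame_times_ms):
    # fps equivalent of the slowest ~1% of frames
    count = max(1, len(frame_times_ms) // 100)
    worst = sorted(frame_times_ms)[-count:]
    return 1000 / statistics.mean(worst)

for name, trace in [("steady", steady), ("spiky", spiky)]:
    print(f"{name}: {avg_fps(trace):.0f} avg fps, {worst_1pct_fps(trace):.0f} fps over the worst 1% of frames")
# steady: 60 avg fps, 60 fps over the worst 1% of frames
# spiky: 59 avg fps, 20 fps over the worst 1% of frames
```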
 
But when you're removing GPU performance from the equation, why does it matter what GPU is used? He is testing CPU bottlenecking; the GPU's speed is irrelevant.

You're assuming the two are independent, and as we're talking gaming performance, this is not so. They differ in several respects in how they talk to the GPUs, so isolating the 'bottleneck' to one component is not testing the capabilities of each properly. Imagine someone came along with a CPU that could only talk to one graphics card, and only at set intervals with low bandwidth. Would it be any good for gaming? Would this methodology discover this?
 
You clearly know nothing of game design or profit margins.

I'm fully aware that games are currently coded to work on the most popular hardware out there, i.e. quad-core CPUs. I'm suggesting that it's a good idea for them to be coded, now and in the future, to make use of more cores, as this will likely be the best way to increase performance going forward.

You suggested that it was a good thing for games and apps to be lightly threaded; it's clearly not a good idea that they can't make use of more cores/threads as more CPUs come with more cores/threads...

I am also aware that it's not easy to simply code to use more threads... hopefully low-level APIs like DX12 and Vulkan will help here.
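
As a loose illustration of the 'more threads' point (nothing to do with any particular engine or API, just the generic idea of spreading a CPU-heavy workload across cores):

```python
# Generic illustration of splitting CPU-heavy work across cores.
# Not engine- or API-specific; just the basic idea of scaling with core count.
from concurrent.futures import ProcessPoolExecutor
import math

def heavy_task(n):
    # Stand-in for a chunk of per-frame work (physics, AI, pathfinding, etc.)
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8  # eight independent chunks of work

    # Single-threaded: one core churns through everything in sequence.
    serial = [heavy_task(c) for c in chunks]

    # Multi-core: the same chunks spread across worker processes.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(heavy_task, chunks))

    print(serial == parallel)  # same results, but the second run can use every core
```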
 