
Best CPU for gaming

Man of Honour
Joined
13 Oct 2006
Posts
91,259
If that’s a DX limit, that’s just for passing data to the GPU. The game will also have worker threads. Most games run fine on 6-core chips, but an increasing number are starting to use more; some also ignore SMT threads, since those have no compute hardware of their own and can reduce performance. This will increase as Intel chips have a lot more cores, especially at the low to mid-range. If AMD release more £250-300 6-cores, they are taking the **** and deserve to lose market share.

Helldivers 2 will happily spread across all 28 threads on my 14700K - though it is only about 18-20 CPU threads' worth of processing and doesn't really suffer on an 8/16 CPU or less.
 
Caporegime
Joined
17 Mar 2012
Posts
47,733
Location
ARC-L1, Stanton System
If that’s a DX limit, that’s just for passing data to the GPU. The game will also have worker threads. Most games run fine on 6-core chips, but an increasing number are starting to use more; some also ignore SMT threads, since those have no compute hardware of their own and can reduce performance. This will increase as Intel chips have a lot more cores, especially at the low to mid-range. If AMD release more £250-300 6-cores, they are taking the **** and deserve to lose market share.
Render threads, yes; beyond that you have other stuff done on the CPU, but it takes a lot to saturate 8 render threads and the other 8 for physics etc.

I honestly don't think cost-equivalent CPUs like the 14600K, with 6 P-Cores and 8 E-Cores, would be better than a 7700 in the long run - IMO worse, given that it only has 6 performance cores and would be forced to utilize the E-Cores, which as we know causes problems.

I would never buy a mixed core CPU for gaming, full stop.
 
Last edited:
Man of Honour
Joined
13 Oct 2006
Posts
91,259
Something I do like about the 14700K - you can use stuff like OBS to record while gaming and it will be quite happy in situations where an 8/16 CPU will start to show an impact.
 
Caporegime
Joined
17 Mar 2012
Posts
47,733
Location
ARC-L1, Stanton System
Take Star Citizen as an example - an extremely heavily threaded game; I have seen all 16 of my threads 100% loaded.

Intel's mixed-core CPUs have problems with that game. There are user-editable fixes but they are not perfect; basically, in one way or another you have to force the game to run only on the P-Cores.

The same is true for the 7950X3D, again because of the performance disparity between cores.

Intel have been to CIG's offices to try and fix the problems with that game on their mixed-core CPUs and they couldn't; it's an architectural problem with mixed-core CPUs.
It is IMO just another example of these companies cheaping out on compromised solutions.
 
Soldato
Joined
1 Feb 2006
Posts
3,402
Yeah... read the post above - just more of that compromised trend in hardware!
I think E-Cores make a lot of sense: most threads don't need P-Cores, and you can get 4+ E-Cores in the same area as one P-Core. Gaming is a small part of computing, so PC CPUs will not be designed just for games, and adding E-Cores helps drop power consumption, which is a good thing. Software can detect the CPU, core count and core type, so it's up to the devs to make use of what is available. Most will not do this as it adds a lot of time to testing. We are at a point of change but it will settle at some point.
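For what it's worth, the detection side is fairly trivial - a minimal Python sketch of it (core *type* detection isn't in the standard library, so that part is only noted in comments; the Windows and CPUID calls mentioned are just pointers, not used here):

```python
import os

# Logical CPU count visible to the OS (P-cores, E-cores and SMT
# siblings each count as one "logical CPU").
logical_cpus = os.cpu_count()
print(f"Logical CPUs: {logical_cpus}")

# On Linux, the set of CPUs this process is currently allowed to run on.
# (Windows would use GetProcessAffinityMask instead; telling P-cores from
# E-cores needs platform-specific calls such as CPUID leaf 0x1A or
# GetLogicalProcessorInformationEx - not attempted here.)
if hasattr(os, "sched_getaffinity"):
    print(f"Allowed CPUs: {sorted(os.sched_getaffinity(0))}")
```

Detecting the topology is the easy bit; deciding what each thread should do with that information is where the testing time goes.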
 
Man of Honour
Joined
13 Oct 2006
Posts
91,259
I think E-Cores make a lot of sense: most threads don't need P-Cores, and you can get 4+ E-Cores in the same area as one P-Core. Gaming is a small part of computing, so PC CPUs will not be designed just for games, and adding E-Cores helps drop power consumption, which is a good thing. Software can detect the CPU, core count and core type, so it's up to the devs to make use of what is available. Most will not do this as it adds a lot of time to testing. We are at a point of change but it will settle at some point.

It isn't an area of programming I have a lot of experience with, but I don't think at software level (as things stand with Windows) you can reliably control threads in that regard. With more advanced techniques you can kind of hint how you want your program to run, but ultimately the OS scheduler (and Intel's Thread Director, if present) and any hardware thread management can do something completely different if it wants to.

Even using functions like SetThreadAffinityMask and related calls, depending on OS version, you can't rely on what actually happens.
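To illustrate what a hard affinity actually is: a rough Python sketch using the Linux analogue, os.sched_setaffinity (the Windows equivalent of pinning via SetThreadAffinityMask - not a claim about how any particular game does it):

```python
import os

if hasattr(os, "sched_setaffinity"):  # Linux-only; Windows uses SetThreadAffinityMask
    allowed = sorted(os.sched_getaffinity(0))
    # Pin this process to its first allowed CPU - a "hard affinity".
    os.sched_setaffinity(0, {allowed[0]})
    print(sorted(os.sched_getaffinity(0)))  # the mask is now a single CPU
    # Restore the original mask: while pinned, the scheduler cannot move
    # these threads anywhere else, which is exactly why the guidance below
    # warns against hard affinities on hybrid parts.
    os.sched_setaffinity(0, set(allowed))
```

Even then, as said above, everything outside your own process is still scheduled however the OS likes.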
 
Soldato
Joined
1 Feb 2006
Posts
3,402
It isn't an area of programming I have a lot of experience with, but I don't think at software level (as things stand with Windows) you can reliably control threads in that regard. With more advanced techniques you can kind of hint how you want your program to run, but ultimately the OS scheduler (and Intel's Thread Director, if present) and any hardware thread management can do something completely different if it wants to.

Even using functions like SetThreadAffinityMask and related calls, depending on OS version, you can't rely on what actually happens.
I have not used any of these myself, but:

SetThreadIdealProcessor,
SetThreadSelectedCpuSets

I am currently working on a path tracer and it hammers the CPU so badly that even the sound stutters and crackles; it would be good to have a few E-Cores that can be filtered out to do OS background work.
 
Caporegime
Joined
17 Mar 2012
Posts
47,733
Location
ARC-L1, Stanton System
I think E-Cores make a lot of sense: most threads don't need P-Cores, and you can get 4+ E-Cores in the same area as one P-Core. Gaming is a small part of computing, so PC CPUs will not be designed just for games, and adding E-Cores helps drop power consumption, which is a good thing. Software can detect the CPU, core count and core type, so it's up to the devs to make use of what is available. Most will not do this as it adds a lot of time to testing. We are at a point of change but it will settle at some point.

It's still compromised. Look at all the smaller tech channels who say that when they try to do any sort of high-load multitasking on Intel's mixed-core CPUs, the thing gets choppy, grindy and unresponsive. The Gamers Nexus type channels don't report on it, given that these channels have Intel executives on every other week.

That I get; what I don't get is consumers making "compromise is OK" or "this is good" arguments for this stuff: 4-core high-end CPUs, 8GB £500 GPUs, 12GB £800 GPUs, CPUs that are stuffed primarily with cores that are useless for anything but Cinebench bar charts.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,259
I have not used any of these myself, but:

SetThreadIdealProcessor,
SetThreadSelectedCpuSets

I am currently working on a path tracer and it hammers the CPU so badly that even the sound stutters and crackles; it would be good to have a few E-Cores that can be filtered out to do OS background work.

Interesting comment there (more specific to SetThreadAffinityMask):

In general, the guidance is to avoid hard affinities, as these are a contract between the application and the OS. Using hard affinities prevents potential platform optimizations, and forces the OS to ignore any advice it receives from the Intel Thread Director. Hard affinities can be prone to unforeseen issues, and you should check to see if middleware is using hard thread affinities, as they can directly impact the application’s access to the underlying hardware. The issues with hard affinities are particularly relevant on systems with more Efficient-cores than Performance-cores, such as low-power devices, as hard affinities limit the OS’s ability to schedule optimally.

Even with hard affinities you can't be 100% sure what is actually going on outside your application, though.

IMO though, a lot of the problems come from developers who thread things for the sake of threading things, for ideological reasons, rather than working their entire design around what naturally parallelises - possibly partly a consequence of team projects where no one person is really working the whole project to that level.
 
Soldato
Joined
19 Sep 2009
Posts
2,755
Location
Riedquat system
HT has been around a lot longer than E-Cores and yet there are still games that have better CPU performance with HT off (or SMT off in the case of AMD). But right now I think the 14700K is a better choice than the 7800X3D (which I have), unless you have plans to upgrade on AM5, or maybe if all you want to do is power the system up and turn on EXPO. If you do not mind a bit of tweaking then the Intel chip will likely be capable of the same gaming performance and much better multi-threaded performance.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,259
I am currently working on a path tracer and it hammers the CPU so badly that even the sound stutters and crackles

Not really relevant to modern path tracers, but with older, more basic games I had a neat trick to make basic ray tracing feasible: simply rendering a simplified version of the world with entirely unique texel colours (with the 2D coordinates of the light map encoded into the colour), which were then used to build a dynamic light map - skipping the need to actually do any 3D tracing. Relatively crude feature-wise, but it would run at like 300 FPS compared to like 0.01 FPS doing proper ray tracing.
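The encoding part of that trick can be sketched in a few lines - a hypothetical packing of 2D light-map coordinates into a unique 24-bit colour (the 4096x4096 / 12-bits-per-axis limit is my assumption, not from the original implementation):

```python
# Pack 2D light-map coordinates into a unique 24-bit texel colour and
# unpack them again: render the simplified scene with these ID colours,
# read the frame buffer back, and each pixel's colour tells you which
# light-map texel it corresponds to - no 3D tracing required.
# Assumes a light map no larger than 4096x4096 (12 bits per axis).

def encode(u: int, v: int) -> tuple[int, int, int]:
    packed = (u << 12) | v  # 24 bits total
    return (packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF

def decode(r: int, g: int, b: int) -> tuple[int, int]:
    packed = (r << 16) | (g << 8) | b
    return packed >> 12, packed & 0xFFF

print(decode(*encode(1234, 567)))  # (1234, 567)
```

The key property is that the round trip is lossless, so the frame buffer doubles as a lookup table from screen pixels to light-map texels.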
 
Caporegime
Joined
17 Mar 2012
Posts
47,733
Location
ARC-L1, Stanton System
HT has been around a lot longer than E-Cores and yet there are still games that have better CPU performance with HT off (or SMT off in the case of AMD). But right now I think the 14700K is a better choice than the 7800X3D (which I have), unless you have plans to upgrade on AM5, or maybe if all you want to do is power the system up and turn on EXPO. If you do not mind a bit of tweaking then the Intel chip will likely be capable of the same gaming performance and much better multi-threaded performance.

Why give up the efficiency, have to tweak it to get around its compromised situation - and even then still not have a perfect solution - just to get it roughly on par for gaming with a CPU that is already all of those things out of the box, and pay more for it, and end up on a dead-end platform?
So you can have a bigger Cinebench bar? Any home productivity work you might like to do is better served by the GPU; it's why I don't care about CPUs with more than 8 cores.
 
Soldato
Joined
1 Feb 2006
Posts
3,402
Not really relevant to modern path tracers, but with older, more basic games I had a neat trick to make basic ray tracing feasible: simply rendering a simplified version of the world with entirely unique texel colours (with the 2D coordinates of the light map encoded into the colour), which were then used to build a dynamic light map - skipping the need to actually do any 3D tracing. Relatively crude feature-wise, but it would run at like 300 FPS compared to like 0.01 FPS doing proper ray tracing.
I just wanted to play around and learn; it's been fun and gives my 7950X something to chew on.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,259
HT has been around a lot longer than E-Cores and yet there are still games that have better CPU performance with HT off (or SMT off in the case of AMD). But right now I think the 14700K is a better choice than the 7800X3D (which I have), unless you have plans to upgrade on AM5, or maybe if all you want to do is power the system up and turn on EXPO. If you do not mind a bit of tweaking then the Intel chip will likely be capable of the same gaming performance and much better multi-threaded performance.

Depends a bit on how much you use the system just for gaming vs more general stuff. For me it is a bit of an odd situation, as the gains of the 7800X3D become most apparent when using a 4090 at lower resolutions, which is a kind of odd setup unless you're doing some ultra-competitive gaming maybe.

And frankly - though I think the motherboard implementation might play a part here, as Gigabyte seem to have put some effort into the Aorus Master - the 14700K system I've built for gaming so far has the best smoothness and responsiveness out of any recent system I've had time with and/or own, which was quite a big hang-up for me as my X79 system was extensively tuned to that end.

I do intend to fire up Star Citizen and 1-2 other titles at some point, though, where people say stuttering with E-Cores in the mix is still an issue, to see what the story is there.
 
Caporegime
Joined
17 Mar 2012
Posts
47,733
Location
ARC-L1, Stanton System
I keep meaning to do something RT-related in Unreal Engine to post here for people to play with, but I keep starting over as I'm not happy with where I'm going when I get to a certain point.

I know I should scale back what I'm doing and simplify it - something bite-size - but I can't help it ballooning out of control as I keep learning something new...
 
Associate
Joined
21 Jun 2011
Posts
1,031
Location
London
This is increasingly starting to sound like a console vs PC gaming conversation, as comedic as that may sound.

You also can't fix everything with extreme overclocking; it can lead to instability and balancing issues with things like frame pacing.

The world’s a complicated place.
 
Soldato
Joined
19 Sep 2009
Posts
2,755
Location
Riedquat system
Why give up the efficiency, have to tweak it to get around its compromised situation - and even then still not have a perfect solution - just to get it roughly on par for gaming with a CPU that is already all of those things out of the box, and pay more for it, and end up on a dead-end platform?
So you can have a bigger Cinebench bar? Any home productivity work you might like to do is better served by the GPU; it's why I don't care about CPUs with more than 8 cores.

I'm not sure why some on here are so quick to dismiss Intel. If you want to get most of the potential out of the box and run stock + EXPO for a gaming system, then get the 7800X3D. If you do not mind tweaking, or want better multi-threaded perf, then go for the Intel chip :) Either system should last long enough that the 'dead platform' shouldn't be that big of a deal, but if you do want an upgrade path then obviously AM5 is the right choice. The price difference is not that much overall, and efficiency can be tuned if required.
I am using the 7800X3D myself and it is a great gaming chip, but it has its limits. I am hoping the gains on Zen 5 (X3D) will be enough to justify an upgrade.
 