
Intel Core Family

I look at IPC much like I treat car MPG figures from manufacturers.
It's a nice figure to quote and claim to have, but in the real world... nope, there are a lot of things that can affect it.
 
The real advantage Intel has is that a number of games are better optimised for the uarch. It can't just be single-threaded scores, since the performance difference is not really massive now. You can see the effects of proper optimisation: when ARMA III apparently got a patch for it, the performance on Ryzen improved.
 

True, in fact... same reviewer before and after.

After the patch the 2600 @ 4.2GHz is getting the same sort of performance as the 7700K.

Before: [benchmark screenshot]

After: [benchmark screenshot]
 
If you got the stuff cheap, cheaper than the 7600 and the Z270, then fair enough, can't really argue with that. Although when looking at second hand, surely you could have picked up a 7600K for even less, same with the motherboard.
There are no benchmarks for iRacing that I can see, so I cannot confirm or deny your claims. Personally I doubt it wouldn't benefit from a proper i7, and for VR? Is iRacing the only thing you play? Because a 4 core / 4 thread i5 is one of the worst CPUs you can have for high frame rates: the frame rate fluctuates wildly, causing huge dips up and down, and that's the last thing you want.

There are no benchmarks for iRacing other than the in-game FPS counter. iRacing will use multiple cores; however, as I said above, it bottlenecks on the single-threaded graphics renderer. It's tied into the physics thread (again a single thread), which is the other big CPU user and has to complete in a set time window. Audio, networking, particle effects, etc. are threaded off and will utilise extra cores if available, but don't use a lot of resource. As it stands there's little benefit to an i7 apart from the extra stock MHz. Hyper-threading doesn't help; in fact some people turn it off to prevent stutters in VR. It's not typical of most games, which use an off the shelf graphics engine. So... it'll run just fine on a dual core, if you're not running triples or VR.

As for the 7600K, I'll take a 7740X for the same money as it'll overclock better, plus I've still got an upgrade path with just a CPU swap if needs be.
 
IPC is an average figure quoted by manufacturers, but it does not change with clock speed. Single core performance (on the same instruction architecture) is measured by clock speed multiplied by the IPC. The IPC is how many instructions the architecture can do per clock cycle - it's in the name.

This is why Intel and AMD quote the IPC for families of CPUs, not individual CPU models. IPC is not a benchmark in itself, but it certainly does affect real world performance.
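To make the arithmetic concrete, here's a toy sketch. The IPC values and clocks below are purely made up for illustration; they are not vendor figures for any real chip:

```python
# Relative single-core performance as IPC (instructions per clock)
# multiplied by clock speed. All numbers here are invented for illustration.
def single_core_perf(ipc, clock_ghz):
    """Relative single-core throughput (billions of instructions per second)."""
    return ipc * clock_ghz

# Chip A: lower IPC, higher clock. Chip B: higher IPC, lower clock.
perf_a = single_core_perf(ipc=1.0, clock_ghz=4.5)
perf_b = single_core_perf(ipc=1.2, clock_ghz=4.0)

print(round(perf_b / perf_a, 3))  # ~1.067: B wins despite the lower clock
```

Within one family the IPC term is (roughly) constant, which is why comparisons between models of the same generation reduce to clock speed.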
Firstly, there is no IPC figure quoted, because there is no such number. They do compare generations in relative terms, but they are always deliberately vague.

(They will draw graphs, but the axes have no scale except maybe percent change, and they are often wrong, which would not happen if it were a measurable property of an architecture. You say average... of what? The exact software being run and the instruction set used make enormous differences, so anything based on an unspecified set is not a metric, more a marketing thing.)

Secondly, I'm aware what the name means, hence I said it should stay the same at different clocks, but it doesn't. Have you never seen reviews where a 10% overclock gave a 5% performance gain?
 
@ArmitageShanks those applications are quite real.
I said realtime, not real. The bulk of the applications in your list process a static input. For example, a renderer would typically divide the scene view into smaller sections and dispatch each of them to an available core (you can see this when you run Cinebench). The output of each is assembled into the final target render. The same applies to file processing applications. These are easy to multithread as there's minimal interaction or shared memory between threads.
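That tile-splitting pattern can be sketched in a few lines. The per-tile "shading" function here is an invented stand-in, not any real renderer's code:

```python
# Sketch of the "static input" case: split a render into independent tiles,
# process each on a separate core, then reassemble. No shared state between
# tiles, so they can run on any core in any order.
from multiprocessing import Pool

WIDTH, HEIGHT, TILE_ROWS = 8, 8, 2

def render_tile(tile):
    """Compute a horizontal strip of rows; a stand-in for real shading work."""
    y0, y1 = tile
    return [[(x * y) % 256 for x in range(WIDTH)] for y in range(y0, y1)]

def render():
    tiles = [(y, y + TILE_ROWS) for y in range(0, HEIGHT, TILE_ROWS)]
    with Pool() as pool:
        parts = pool.map(render_tile, tiles)  # each tile on an available core
    # Reassemble the strips, in order, into the final image.
    return [row for part in parts for row in part]

if __name__ == "__main__":
    image = render()
    assert len(image) == HEIGHT and len(image[0]) == WIDTH
```

Because the tiles never touch each other's memory, throughput scales almost linearly with core count, which is exactly why Cinebench-style workloads love many-core CPUs.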

Realtime applications such as games, webservers, physics modelling etc where the inputs are asynchronous and changing rapidly are much harder to multithread properly, if at all. You have to synchronise reads and writes to any shared memory values, weed out race conditions between threads, avoid deadlocks, etc. Get it right, and the performance gain scales with the number of cores. Get it wrong and you're stuck in debugging hell for eternity. It's non-trivial, although certain language paradigms make it more bearable to code.
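A toy illustration of the shared-memory hazard (nothing game-specific, just two Python threads bumping one counter):

```python
# Two threads increment one shared counter. The increment is really three
# steps (read, add, write), so without the lock, concurrent increments can
# interleave and lose updates; the lock makes the whole sequence atomic.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # remove this and the final total can come up short
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock held around each increment
```

Every lock like this is also a spot where one thread can stall waiting on another, which is why naively sprinkling locks over a game loop tends to trade corruption for stutter.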

Game engines typically offload sound processing and network activities onto separate threads, but the main application loop still runs in a single thread/core, which is why most currently do not benefit from an increase in core count, but DO benefit from an IPC and clock increase where the CPU is bottlenecking the GPU (these days it's usually the GPU that's the bottleneck, which is preferable).
 

Your information is out of date; this has not been true for some time now. Why do you think lower-clocked 6 core CPUs like the 8400 / Ryzen 5 are performing better in an increasing number of games than a much higher clocked 4 core?

I have seen the same even in games built on Source Engine, which is some years old.

The 4 core or less argument has been defunct for years.
 
Read post #107 again. Resolutely not true for accurate simulations that require real-time physics calculations, e.g. iRacing, flight sims, city builders. As ArmitageShanks has said, it's almost impossible to multithread that type of workload. I'll refer you to my earlier points on iRacing; EA's earlier troubles with SimCity had a similar root cause.

Blindly saying it's untrue based on eye candy filled army games that sell by the bucketload is disingenuous.
 
Just to be clear, these tasks are very suitable candidates for multi-threading, just not in the way most game engines expect it to be done, where calcs are 'per frame', which remains primarily because it's what developers are used to.

Physics modelling, like webservers, is an example of where many cores can be used to great effect, so Armitage is somewhat incorrect. Although 'games' is a broad claim, it is often true, again mostly as a result of how graphics processing pipelines are set up and how draw calls are done.
 
I went digging on the iRacing forums for a few minutes. A couple of quotes from one of the devs:

In our case we create a few threads and try to run them so they use up 100% of a cores potential (well 95%), we do that because timing is so critical, we have to do things in 16.6 ms or less. Other applications try to make many small threads, each thread may only do a small burst of work, but with many more threads than cores they can be swapped around to keep all the cores busy. This is great if you are trying to maximize your CPU utilization, but there are no guarantees that any one thread will execute in a specified amount of time.

Some context - 50 cars, all connected via the internet. Big crash. You've got under 16.6ms for each physics cycle. Go! Skip to 47m30s.
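The constraint the dev describes can be sketched as a fixed-timestep loop with a hard deadline. `step_physics` below is an empty placeholder for the real simulation work, not actual iRacing code:

```python
# Fixed-rate physics loop: each tick must finish inside its 60 Hz budget.
# If the work overruns, the deadline is missed and the player sees a stutter;
# if it finishes early, the loop waits out the rest of the tick.
import time

BUDGET = 1.0 / 60.0  # 16.6 ms per physics tick

def step_physics():
    pass  # placeholder for car dynamics, collisions, net state, etc.

def run(ticks):
    overruns = 0
    for _ in range(ticks):
        start = time.perf_counter()
        step_physics()
        elapsed = time.perf_counter() - start
        if elapsed > BUDGET:
            overruns += 1                     # missed deadline
        else:
            time.sleep(BUDGET - elapsed)      # idle until the next tick
    return overruns

print(run(10))  # with an empty step, no deadlines should be missed
```

This is why the dev pins a few threads at near-100% of a core: handing the work to a pool of many small threads maximises utilisation but gives no guarantee any one of them completes within the 16.6 ms window.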

 
You keep having to go back to iRacing to make your point. It's one game, and one game does not account for every other game; it's completely insane to exclude every other game because it's not iRacing. A lot of people play first or third person open world games, far more than play iRacing.

I can and will pull up a bunch of screenshots and videos that show 4 and 8 thread CPUs pushed to their limits, or bottlenecking the GPU from a lack of threads. If you keep going on I will; I have posted them about 20 times already in this room.

If you only play iRacing then sure, you can justify your i5 on a higher end GPU. In reality, for the overwhelming number of others it's a really bad choice, because they will end up with a really bad experience.

I know, I had an i5 with the 1070, it's bad.
 
It does not have a Vega in it; that's Kaby Lake-G. What you have is actually very old Nvidia IP implemented badly.

I have a Kaby Lake-G. It's genuinely hotter than the sun! It throttles like you wouldn't believe in the chassis I have (must be the reason it got the codename Hades Canyon). It throttles so hard that my old laptop's 6700HQ + 970M 6GB lays waste to it and its Vega GPU. I am sure technically it's awesome, but in reality, and in most of the super thin chassis they were designed for, it's going to be utter junk, as the chassis just can't dissipate the heat. It's that bad that if I put a bit of Fallout 4 on, within 30 minutes the machine's chassis is literally too hot to touch. I reckon if I left it benchmarking overnight I might even be able to melt the chassis.

GPU-Z still has no clue:

 
You keep having to go back to iRacing to make your point. It's one game, and one game does not account for every other game; it's completely insane to exclude every other game because it's not iRacing. A lot of people play first or third person open world games, far more than play iRacing.

I can and will pull up a bunch of screenshots and videos that show 4 and 8 thread CPUs pushed to their limits, or bottlenecking the GPU from a lack of threads. If you keep going on I will; I have posted them about 20 times already in this room.

If you only play iRacing then sure, you can justify your i5 on a higher end GPU. In reality, for the overwhelming number of others it's a really bad choice, because they will end up with a really bad experience.

I know, I had an i5 with the 1070, it's bad.

i5? I'm running a Kaby Lake-X i7. :confused:

You consistently fail to accept that anything other than your "third person open world games" exists; in fact you dismissed it out of hand. I've been happy to respond and give specifics for a well respected simulation in opposition to your generalised arguments, with quotes from a developer. And I get called insane for my troubles.

I'm all ears if you can hit me with specifics as regards CPU utilisation for realtime physics that are multithreaded, because it's a tough problem to solve. Not everything is an army game.
 
Yup. TL;DR - I'm right, you're wrong. My computer science degree trumps your crazy AMD fanboi tendencies as it's a real world qualification. ;)

Is this the best time for me to mention that most of the comp science students I see go through here would have trouble tying their shoelaces?
:p

Also, just because I have to because it is bugging me... I also have a computer science degree and I think you know as well as I do that it means squat in context of this conversation ;)
 
I have a Kaby Lake-G. It's genuinely hotter than the sun! It throttles like you wouldn't believe in the chassis I have (must be the reason it got the codename Hades Canyon). It throttles so hard that my old laptop's 6700HQ + 970M 6GB lays waste to it and its Vega GPU. I am sure technically it's awesome, but in reality, and in most of the super thin chassis they were designed for, it's going to be utter junk, as the chassis just can't dissipate the heat. It's that bad that if I put a bit of Fallout 4 on, within 30 minutes the machine's chassis is literally too hot to touch. I reckon if I left it benchmarking overnight I might even be able to melt the chassis.

GPU-Z still has no clue:


It's a big chip, 1280 shaders; it's really too big to share a laptop CPU cooler. It's 2.5X the size of an RX 550.

i5? I'm running a Kaby Lake-X i7. :confused:

You consistently fail to accept that anything other than your "third person open world games" exists; in fact you dismissed it out of hand. I've been happy to respond and give specifics for a well respected simulation in opposition to your generalised arguments, with quotes from a developer. And I get called insane for my troubles.

I'm all ears if you can hit me with specifics as regards CPU utilisation for realtime physics that are multithreaded, because it's a tough problem to solve. Not everything is an army game.

It's a 7600K with a posh name :P
 