• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

My new Intel 11900K @5500MHz vs. old Intel 8700K @5300MHz CPU performance test

If the numbers are real and not photoshopped, then his gaming performance should be 10-15% higher than any 11900K that has been reviewed.

Which game are you referring to? Maybe I can test it for you and show the results specifically for CPU performance, excluding the graphics card.
 
Yes, but that is a very specific and indeed very odd set of testing criteria.
No one buys a chip, disables hyperthreading and then plays games.
Also, the results showed the new processor performing terribly.
I would prefer our poster shows what he has managed, as it seems spectacular.

Well, actually I do disable HT for gaming, because it's required for competitive FPS games. I need a constant, stable FPS. HT causes FPS fluctuation because of poor thread management in some games. Btw, I don't disable HT in the BIOS; I do it via the Windows affinity option with a script before launching the game. :D
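For anyone curious, the idea is roughly this (a simplified sketch, not my exact script): launch the game, then restrict it to one logical CPU per physical core with psutil. The game path is made up, and it assumes SMT siblings are numbered in adjacent pairs (0/1, 2/3, ...), which is the usual layout on Windows.

```python
# Rough sketch of per-game "HT off" via CPU affinity (no BIOS change needed).
# Assumptions: psutil is installed, SMT siblings are adjacent (0/1, 2/3, ...),
# and the game path below is purely hypothetical.
import subprocess
import psutil

GAME_EXE = r"C:\Games\SomeShooter\shooter.exe"  # hypothetical path

def one_thread_per_core():
    """Pick one logical CPU per physical core (the even-numbered ones here)."""
    return list(range(0, psutil.cpu_count(logical=True), 2))

def launch_pinned(exe_path):
    proc = subprocess.Popen([exe_path])                            # start the game
    psutil.Process(proc.pid).cpu_affinity(one_thread_per_core())   # pin it to physical cores
    return proc

if __name__ == "__main__":
    launch_pinned(GAME_EXE)
```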

Guys, if you need me to test any game, just let me know. However, it should be CPU-centric and should report the CPU and graphics card scores separately.
 
Any chance you could show us some of the improvements in games from the 8700K, not just benchmarks, since you bought it for games, not as a benchmarking system, right? Have you found any games where you get more than a 10% improvement across the 1% lows and the average frame rates?

I can't do that. I have already removed the old board and CPU, and installed my custom loop. However, I can share my 3DMark results for comparison below.

Time Spy test with 11900K@5500MHz vs. 8700K@5300MHz. You can compare the CPU scores.
https://www.3dmark.com/compare/spy/20006203/spy/14885742
 
I did nothing special. I ordered it from the competitor the day it was released.

The Asus APEX XIII BIOS shows me "83" for the silicon prediction (SP) rating. 83 is somewhere around golden-sample territory for an 11900K.

Yeah, that's pretty lucky. When you take this into account, plus the fact you run a custom loop, I don't see this as a CPU I would finally ditch my 4790K for, and I'm a gamer only.
 
Guys, if you need me to test any game, just let me know. However, it should be CPU-centric and should report the CPU and graphics card scores separately.

I'd like to see CS:GO results for that very high Gear 2 memory speed you are running, versus Gear 1 at 3600 with the same timings. It would be interesting to see how it stretches its legs.
 
Well, actually I do disable HT for gaming, because it's required for competitive FPS games. I need a constant, stable FPS. HT causes FPS fluctuation because of poor thread management in some games. Btw, I don't disable HT in the BIOS; I do it via the Windows affinity option with a script before launching the game. :D

Guys, if you need me to test any game, just let me know. However, it should be CPU-centric and should report the CPU and graphics card scores separately.

Can you run SOTTR with the same settings as in the review I linked?

https://kingfaris.co.uk/blog/10900k-vs-11900k/13
 
I can't do that. I have already removed the old board and CPU, and installed my custom loop. However, I can share my 3DMark results for comparison below.

Time Spy test with 11900K@5500MHz vs. 8700K@5300MHz. You can compare the CPU scores.
https://www.3dmark.com/compare/spy/20006203/spy/14885742

Sorry, I was asking for actual game data, not a subsection of a benchmark. I am sure you must have recorded some data prior to dismantling the system if you bought it to play games? After all, you created an entire thread to talk about the performance increase, and if you don't have any game frame-rate data, how do you know it is any faster at all?
 
Basically all games. Latency directly affects frame times for any game whose core data set is larger than the CPU's cache hierarchy can hold: any object that needs to be checked for render calls, and any object affected by player actions, physics or AI, needs to be loaded from RAM with every single frame drawn, and every one of these accesses has the CPU waiting on the RAM for the duration.
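As a rough back-of-envelope (the miss count and latency below are picked purely for illustration, not measured from any game):

```python
# Back-of-envelope only: how much per-frame RAM latency *could* cost.
# Both inputs are assumptions for illustration, and misses are treated as
# fully serialized, which overstates the cost (out-of-order CPUs overlap them).
misses_per_frame = 50_000        # assumed accesses that miss every cache level
ram_latency_ns = 60              # assumed effective DRAM latency per miss
frame_budget_ms = 1000 / 144     # frame-time budget at 144 FPS

stall_ms = misses_per_frame * ram_latency_ns / 1e6
print(f"~{stall_ms:.1f} ms of stalls vs a {frame_budget_ms:.1f} ms budget")
# ~3.0 ms of stalls against ~6.9 ms -- significant if the working set really
# misses cache, negligible once it fits, which is the crux of the next reply.
```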

While lower latency helps, most of those memory accesses will be served by the cache, and memory bandwidth likewise has less effect than you'd think.

Regarding bandwidth, I have some figures from a medium-sized FPGA design I compiled, first with DDR4-2400 and then with XMP 3200 (HyperX Fury memory; the processor was an AMD 5950X at stock speeds):

DDR4-2400:
Stage             Time       Avg. CPUs   Peak memory
Synthesis         00:07:23   4.2         13067 MB
Fitter            01:22:37   1.6         22600 MB
Timing Analyzer   00:03:12   4.1         18414 MB
Assembler         00:00:44   3.6         11984 MB
Total             01:33:56   --          --

DDR4-3200 (XMP):
Stage             Time       Avg. CPUs   Peak memory
Synthesis         00:07:07   4.0         13089 MB
Fitter            01:17:19   1.6         22558 MB
Timing Analyzer   00:03:05   4.1         18395 MB
Assembler         00:00:42   3.6         11991 MB
Total             01:28:13   --          --

I think what I learned from this was that faster RAM appears to be deep into diminishing returns on cost vs. speed: increasing memory bandwidth by 33% got me roughly a 6% reduction in total compile time, although if that's all you have left to tune then go for it.
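For reference, the arithmetic on those two totals (a quick script rather than anything clever):

```python
# Compare the two compile-time totals quoted above.
def to_seconds(hms: str) -> int:
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

slow = to_seconds("01:33:56")   # DDR4-2400 total
fast = to_seconds("01:28:13")   # DDR4-3200 total

print(f"Wall time saved:  {(slow - fast) / slow:.1%}")   # ~6.1%
print(f"Throughput gain:  {slow / fast - 1:.1%}")        # ~6.5%
# Either way, a ~33% bandwidth bump buys a single-digit improvement on this workload.
```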

It looks like you have a damn fast computer there. Mine's great, but I'm waiting around for a modern GPU for decent gaming; still stuck with an RX 580 that's barely okay...
 
I'd like to see CS:GO results for that very high Gear 2 memory speed you are running, versus Gear 1 at 3600 with the same timings. It would be interesting to see how it stretches its legs.

I don't have a fancy DDR4 kit that can reach 4000+ MHz, so I did the test only in sync mode: Gear 1 at 3733 MHz. Core clocks: 4 cores at 5500 MHz, 2 cores at 5400 MHz, 2 cores at 5300 MHz.

I am not a CS:GO player, so I assume 491.10 FPS is pretty good.

[screenshot]
 

Sorry @torwak, I've got mixed up, very mixed up. I went back and see you are running at 3733 in Gear 1.
The bit you posted above is what confused me; what is the source for that AIDA screenshot?
I thought you had your own RAM doing 4800 in Gear 2.
The comparison I asked for isn't valid if you don't have that RAM. Sorry, I got mixed up.
 
Sorry @torwak, I've got mixed up, very mixed up. I went back and see you are running at 3733 in Gear 1.
The bit you posted above is what confused me; what is the source for that AIDA screenshot?
I thought you had your own RAM doing 4800 in Gear 2.
The comparison I asked for isn't valid if you don't have that RAM. Sorry, I got mixed up.

No worries. The screenshot I shared above belongs to another user; the 41 ns is not my score. That person is using a Patriot Viper 4800 MHz 2x16GB DDR4 kit in Gear 2 (not synced).

My score is 43.1 ns with a G.Skill 2x16GB DDR4-3866 18-18-18-36 kit in Gear 1 mode. I set it to 15-15-15-32 at 3733 MHz to achieve the 43.1 ns score.
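To put a number on why the tighter timings at the lower clock win (this is only one piece of the full AIDA64 figure, but it shows the direction): first-word CAS latency in nanoseconds is CL cycles at the memory clock, which is half the data rate.

```python
# First-word CAS latency: CL cycles at the memory clock (half the DDR data rate).
def cas_ns(data_rate_mts: float, cl: int) -> float:
    return cl * 2000.0 / data_rate_mts

print(f"3866 CL18: {cas_ns(3866, 18):.2f} ns")   # ~9.31 ns (the kit's rated profile)
print(f"3733 CL15: {cas_ns(3733, 15):.2f} ns")   # ~8.04 ns (the tuned setting above)
# The tuned 3733 CL15 profile has lower absolute CAS latency than the rated 3866 CL18,
# consistent with the lower AIDA64 result quoted above (the 43.1 ns overall figure
# also folds in tRCD, tRP, command rate, controller overhead, etc.).
```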
 
3DMark Time Spy CPU comparison, 11900K@5.5GHz vs. 8700K@5.3GHz -> ~58% faster.
[screenshot]

CPU-Z - 11900K@5.5GHz single/multi-thread:
[screenshot]

CPU-Z - 8700K@5.3GHz single/multi-thread:
[screenshot]

AIDA64 memory latency test (Gear 1 / 1:1 / 3733MHz with budget RAM sticks):
[screenshot]

Cinebench R20 test:
[screenshot]

Geekbench 5 test:
[screenshot]

Core Temp:
[screenshot]



What voltage was the CPU using?
 
You've gotta do single- and multi-core R23 runs.
Get Intel back on the board.

11900K owners seem to be scared of R23? I do know R23 added more AVX instructions to make CPUs run harder, so chances are R23 settles at a lower stable clock than R20, meaning either they have to add higher AVX offsets or reduce clock speed.
 
Awesome results, glad you're enjoying it :)

Also having fun with my 11900K. I'm building a custom loop, though my CPU (SP90, quite a good chip) is happy at 5.4GHz on 3 cores on an NH-D15S!

Went Intel this gen due to getting an incredible deal on the price, as well as the games I play favouring Intel.

Another benefit is that I've had no crashes or issues of any kind. It literally just works and is very stable, which is surprising considering it's a brand-new architecture.
 