Rocket Lake leaks

https://videocardz.com/newz/another...7-11700k-rocket-lake-s-posted-ahead-of-launch

Much better scores here.

Though there's news that Intel has seemingly artificially gimped the i7, lowering its cache frequency and forcing the IMC into 1:2 mode at DDR4-3200, which massively increases latency.

I guess Intel wanted clear segmentation between the 8-core i7 and the 8-core i9, though it achieved it through artificial gimping. Bad news for those wanting to pick up an i7 and clock the nuts off it; it doesn't affect those going after the best gaming CPU, the 11900K.
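
To put rough numbers on why the 1:2 IMC mode hurts, here's a quick Python sketch. The DRAM maths is standard; the added round-trip cost for Gear 2 is just an assumed figure for illustration, not a measured one.

    # Rough sketch of first-word DRAM latency (illustrative numbers).
    def first_word_latency_ns(transfer_rate_mts: float, cas_latency: int) -> float:
        memory_clock_mhz = transfer_rate_mts / 2   # DDR: two transfers per clock
        return cas_latency / memory_clock_mhz * 1000

    base = first_word_latency_ns(3200, 16)   # DDR4-3200 CL16 -> 10.0 ns
    # Assumed extra round-trip cost from running the IMC at half the memory
    # clock (1:2 / "Gear 2"); the real penalty varies by platform and settings.
    GEAR2_PENALTY_NS = 4.0
    print(f"1:1-ish: {base:.1f} ns, 1:2-ish: {base + GEAR2_PENALTY_NS:.1f} ns")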
No mate, you've got it totally wrong: the best gaming CPU is anything from the AMD 5800X and up. Intel aren't even in the same ballpark.
 
It looks completely GPU-bound, how can anyone not see that? Look at the slide below.

From the 11700K and Ryzen 5600X up to the fastest CPU at the top there is a difference of 3%.

Is this how we measure CPU gaming performance now? With everything hard up against the GPU limit? If they'd done that back in the Bulldozer days there would have been outrage.

All the slides are similar to this.

[benchmark slide]
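
To sketch the point: if every CPU lands within run-to-run noise of the fastest, the chart is measuring the GPU, not the CPUs. The ~3% margin and the fps figures below are invented for illustration.

    # Flag a benchmark as GPU-bound when every CPU's average fps sits within
    # an assumed run-to-run noise margin of the fastest result.
    NOISE_MARGIN = 0.03  # assumed ~3% variance, not a measured figure

    def looks_gpu_bound(avg_fps: dict[str, float], margin: float = NOISE_MARGIN) -> bool:
        fastest = max(avg_fps.values())
        return all(fps >= fastest * (1 - margin) for fps in avg_fps.values())

    # Hypothetical numbers shaped like the slide: everything within ~3%.
    results = {"11700K": 197.0, "5600X": 197.5, "10700K": 199.0, "5950X": 202.0}
    print(looks_gpu_bound(results))  # True -> the GPU is setting the ceiling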
 
Watch what happens: every reviewer other than Anand will benchmark all these CPUs right up against the GPU limits, they will all land within a few % of each other, and they will conclude "they are all about the same, yay".

Because if you can't win, at least deny your competition their win.
 
Well, with those German sites and their recalculating graphs I always look at minimum frames, where there is at least some difference.
Only benching at 1080p is a bit poor though.
At least Ian benches at multiple resolutions, sometimes down to 480p, which is a lot of extra work, for all the criticism levelled at his cooler and mobo choices.
 
In fairness I really only care about performance metrics for GPUs and CPUs at 1080p, 1440p and 4K; otherwise I don't care what it does at 480p or 720p, because I want to know real-world performance.

I don't game at any of those other resolutions, so they are not of interest. With that I can also actually compare what is needed for price-to-performance in gaming. If I need to know the performance metrics of the next gen of GPUs, I can just look at the GPU reviews with the correct CPU, and so on.
 
Completely agree, I'm more interested in minimums and frametime graphs. Average fps is a poor, antiquated metric. It has its place, but only when presented in combination with the other data.
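
For anyone unfamiliar, this is roughly how average fps and 1% lows fall out of a frametime trace; the trace here is made up, real ones come from tools like CapFrameX or PresentMon.

    # Average fps vs 1% low from a frametime trace (times in ms).
    def fps_metrics(frametimes_ms: list[float]) -> tuple[float, float]:
        n = len(frametimes_ms)
        avg_fps = 1000 * n / sum(frametimes_ms)
        worst = sorted(frametimes_ms)[int(n * 0.99):]   # slowest 1% of frames
        low_1pct_fps = 1000 * len(worst) / sum(worst)
        return avg_fps, low_1pct_fps

    trace = [7.0] * 990 + [25.0] * 10   # mostly ~143 fps, with ten 25 ms stutters
    avg, low = fps_metrics(trace)
    print(f"avg {avg:.0f} fps, 1% low {low:.0f} fps")  # healthy average, ugly lows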
 
Looking at that graph and your logic, I wonder if our mate Dave is going to buy an RDNA 2 graphics card. I mean, if Rocket Lake is the best gaming CPU by being 6-20 fps slower than the competition, does that make the 6900 XT the best gaming GPU by being 15 fps slower than the 3090?
 
[embedded video]
Here, while there was a difference in some games between these two Ryzen vs Comet Lake laptops, the repeating theme from Tim throughout was "GPU limitation", "GPU limitation", "GPU limitation", "GPU limitation", "GPU limitation", "GPU limitation".

So turn the resolution down, WTF is wrong with you? You spend hours making a video, and you were lucky enough that the 10870H couldn't drive the RTX 3070 in these laptops all the time (which is noteworthy in itself: here we have a CPU that can't drive the GPU it's paired with), so you had a couple of useful slides to show your audience; the rest of it was a complete waste of the viewers' time, as it told us precisely nothing.

Did you not even watch that video back and think, "Hmmm... that GPU limitation I keep repeating over and over again is annoyingly repetitive, and it makes me look like I'm not even trying to find a difference between these products. Maybe I should work harder or change some things, because it comes across a bit like someone's made a meme, and I don't want to be a meme."

 
I wouldn't worry about stock issues :D

There aren't any stock problems, I can confirm that; distis have them in and available. Prices are horrendous for the performance and core count, and I can't see them lasting long at £550+. They'll need to bring them down to £350 to shift any real quantity once the initial flurry from the die-hard fans goes away. :)
 
Why should you have to find a difference in the products, though? If it is showing that, at the moment, for those laptops the CPU doesn't matter between the two and it is GPU-bound, I can select whichever is cheapest and be fine. In a laptop you won't be upgrading the GPU anyway, so the performance is what it is in real tasks. You don't go through everything and say, "Oh well, there aren't any differences, I need to swap the useful data set for un-useful data that shows differences, because otherwise people will think I'm not doing it right." What would dropping that laptop to 480p or 720p actually show? It isn't like it will magically improve in three years once the GPU is no longer a bottleneck, because you won't be changing it.

And with that, I would even say on desktop you will see little difference, as has been shown time and time again: CPUs that showed better performance at 480p but similar performance at 1080p and 1440p in the past didn't magically get that uplift three years later with a new, faster GPU. Both got the same uplift and were still GPU-bound, because games' graphics and rendering also needed more grunt. It is indeed different if you are only playing something like CS:GO, but for most people that data isn't useful.

Showing, say, the 10900K and 5800X at 1080p, 1440p, 21:9 1440p and 4K across 20-30 games, with their lows, averages, highs and frametimes, is what I most want to see, because that is how I can judge the performance of that CPU in actual gaming scenarios. Now, it is a massive amount of work, and basically what a GPU test is, but you are maybe comparing just the flagships of the two brands, so a 6900 XT and a 3090 for instance, just so you can also see whether there is any reason one CPU is worse on a particular GPU.

Then when the next generation of GPUs drops, you slot in the 7900 XT and 4090, or whatever they will be, and do those tests again. If the performance delta between the two doesn't change, then there is no difference on the CPU side, and the 480p and 720p difference shown is irrelevant, since it never actually translated; it is a completely wasteful data point with zero real-world meaning. I couldn't care less as a consumer which underlying architecture is best. I want raw data on the performance I actually use.

I certainly found that to be the case with the 4790K compared to, say, the FX-9590. The delta between them at 480p back then was like 50%, though more like 35% at the actual gaming resolution of 1080p. That was with a 780. Swapping in a 1080 Ti, the gap between the Intel and AMD CPUs didn't suddenly increase to 50% in games because a more powerful GPU was used; it remained at 35%. So what did testing at 480p show back then? Nothing really, because there was no additional uplift from escaping the GPU bound beyond the 35% delta. Yeah, it is daft to compare them since the performance was so different, and selecting a 9590 for any reason was silly, but the analytical claim that being "GPU-bound at 1080p but not at 480p with said processor" will tell you anything years down the line is false as far as I am concerned.
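
Putting invented fps numbers to that arithmetic (not measured data, just the shape of the anecdote):

    # Invented fps figures mirroring the 4790K vs FX-9590 anecdote.
    fx_480p, i7_480p = 100.0, 150.0     # ~50% gap where the CPUs are the limit
    fx_1080p, i7_1080p = 74.0, 100.0    # ~35% gap at 1080p in the GTX 780 era
    print(f"480p gap:  {i7_480p / fx_480p - 1:.0%}")    # 50%
    print(f"1080p gap: {i7_1080p / fx_1080p - 1:.0%}")  # 35%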
 
That's not the experience I had with it: it was fine with an HD 7870 in Far Cry 3 but unplayable in Far Cry 4 with an R9 290; performance was worse in 4 with the 290 than it was in 3 with the 7870.

The FX 8350 was a particularly crap CPU, and I'm not for a second suggesting a 9900K will be bad in Far Cry 6 or 7. It won't be, it's a powerful CPU, but I think a 10900K will stretch its lead over it in later revisions of the franchise with newer, faster GPUs.

There is an assumption that later games only put more demand on the GPU and make little if any difference to the CPU. Even Steve from Hardware Unboxed seems to think this, despite revisiting his 7600K vs Ryzen 1600 reviews: in the early reviews the 7600K was clearly faster, and yet that has been turned on its head with later games on a newer GPU.

Games evolve and with that make higher demands of the CPU as well as the GPU; there is a limit to how much a CPU can give.

I always advised people to get the 9900K if what they were looking for was the best gaming CPU, because while with a 2080 Ti the difference between a 9900K and a Ryzen 3700X may only have been "6%" on the day, I doubt it's 6% now with an RTX 3090, even in newer titles. I bet it's more like 15%, with the 3700X less able to drive the 3090.

I don't have a problem with people testing at 1080p or even 1440p, though why Steve bothers with 4K is beyond me. I think 720p testing is useful, even if only from an academic point of view, because it does a better job of separating the CPUs; it tells us which CPU is actually faster. They did it with the FX 8350 and I defended it then; people still did it right up to and including Zen 2, and I defended that too. It's useful information, a lot more useful than 4K testing. "GPU limitation" is a waste of everyone's time.
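
A toy sketch of that logic, with invented ceilings: treat delivered fps as min(CPU ceiling, GPU ceiling at the given resolution). Low-res testing reads the CPU ceiling off directly, and a faster GPU later exposes more of the real gap.

    # Toy model: delivered fps = min(cpu_ceiling, gpu_ceiling). Numbers invented.
    def delivered(cpu_cap: float, gpu_cap: float) -> float:
        return min(cpu_cap, gpu_cap)

    CPU_CAPS = {"9900K": 160.0, "3700X": 140.0}   # ~14% apart at the CPU limit

    for gpu, gpu_cap in [("2080 Ti @ 1440p", 150.0), ("3090 @ 1440p", 220.0)]:
        a = delivered(CPU_CAPS["9900K"], gpu_cap)
        b = delivered(CPU_CAPS["3700X"], gpu_cap)
        print(f"{gpu}: 9900K leads by {a / b - 1:.0%}")
    # 2080 Ti: ~7% (GPU caps the 9900K); 3090: ~14% (the full CPU gap shows)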
 
Another thing to note on the HardwareLuxx review vs Anandtech's is that the RAM speed is different.

HardwareLuxx ran the RAM at 3600MHz (which is essentially overclocked), whilst Anandtech ran the tests at 3200MHz, which is stock for both AMD and Intel this gen. That may also account for some of the differences between the two tests.
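
On paper that gap is easy to quantify; a minimal sketch, assuming dual-channel DDR4 with an 8-byte bus per channel (real-world gains are usually smaller than the raw figure):

    # Theoretical bandwidth, dual-channel DDR4, 8 bytes per channel per transfer.
    def bandwidth_gbs(mts: float, channels: int = 2, bus_bytes: int = 8) -> float:
        return mts * 1e6 * bus_bytes * channels / 1e9

    b3200, b3600 = bandwidth_gbs(3200), bandwidth_gbs(3600)
    print(f"{b3200:.1f} vs {b3600:.1f} GB/s -> {b3600 / b3200 - 1:.1%} more on paper")
    # 51.2 vs 57.6 GB/s -> 12.5% more raw bandwidth at 3600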
 
Fair enough, but why bother with even a single 720p test on a laptop, which for all purposes is like a console: you'll get a whole new machine rather than upgrade the CPU or GPU when they are dated. That data is really pointless.

And fair enough. At least in my real-world experience, with things that have been GPU-bound you are generally about three generations in before the GPU stops being the bottleneck.

I would also say that on the games side, the load has for the most part stayed consistently GPU-bound, because until the last few years games didn't make huge changes in how they use cores, and the IPC increases we've had on the Intel side have been minimal as well.

It might be more of a factor now as things are changing, but I'm still really not interested in the 720p data. If I am still getting decent frame rates from the next two generations of GPUs whilst using the same CPU, the gains are not worth the outlay. Of course this depends on your resolution, graphics settings and the Hz you're targeting.

I can't judge whether I need to upgrade my CPU from 480p and 720p data, though. Only from 1440p and 4K data in my case, because my resolution sits between them and I need to know whether the CPU changes how likely I am to hold a solid 144fps with my GPU.
 
If they both gave each CPU the same RAM speed I don't see how that makes a difference, though I think Zen still responds a little better to higher-clocked RAM than Intel does?

Look at this. In the first image, at 1080p, you would think they are all roughly the same performance, with the 10700K being slightly faster. This is where you can say "Intel can't even beat their previous gen", fail, and all that...
The truth is this is a GPU bottleneck and the small differences are margins of error. The 10700K is 2% faster than the 11700K. Or is it?

The second image is at 720p; now the 11700K is 11% faster than the 10700K. This is the true performance difference.

[1080p benchmark slide]

[720p benchmark slide]
 