
Raptor Lake Leaks + Intel 4 developments

I feel like X3D will give about a 10% performance increase in games, and they will still be behind in productivity. It's all just manipulation, same as all marketing.
 
Based on?
Going off the last gen. Granted, it could be up to 20% in a couple of games, but that's about it. Productivity-wise, you just need to use the last X3D as a comparison. Not sure the X3D will even matter at 4K.
Also, yes, I used a crystal ball: my head. :p


Edit: here is a comparison of the 5800X vs the 5800X3D: https://www.techpowerup.com/review/rtx-4090-53-games-ryzen-7-5800x-vs-ryzen-7-5800x3d/2.html I would read the comments section (it breaks out into a forum thread), as there seems to be some controversy over the DDR speeds used in the tests. Either way, I don't think the performance increase will be significant.
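To put numbers on how an "about 10%" average can coexist with 20% outliers, here's a quick back-of-the-envelope sketch. The FPS figures are hypothetical, purely for illustration, not from the review above:

# Hypothetical FPS for a plain part vs its X3D sibling (illustrative only).
baseline = {"Game A": 120.0, "Game B": 95.0, "Game C": 140.0, "Game D": 60.0}
x3d = {"Game A": 126.0, "Game B": 114.0, "Game C": 147.0, "Game D": 66.0}

uplift = {g: x3d[g] / baseline[g] for g in baseline}
for game, r in uplift.items():
    print(f"{game}: {100 * (r - 1):+.1f}%")  # one cache-bound outlier at +20%

# Reviews typically report a geometric mean across all games, which pulls
# the outliers back toward ~10%.
geo = 1.0
for r in uplift.values():
    geo *= r
geo **= 1 / len(uplift)
print(f"Geomean uplift: {100 * (geo - 1):+.1f}%")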
 
One of the things I felt the reviewers didn't focus on was the potential of the lower-end chips. Pretty much any 13600K or 13700K can meet or exceed the 13900K's stock settings.

My 13600K (SP77, so about average) will daily 5.6 GHz P-cores, 4.4 GHz E-cores, and 5.0 GHz cache from here on. These are all-core boosts. The best part is that with an AFII 280mm AIO, my peak temps during these punishing stress tests are about 80C. In gaming, I'm hard pressed to even see the 70s with RTX-based games. I wanted to do a budget build for myself this time, so a 13600K, an Asus Z690 Strix-A, and B-die were plenty. It's nice to be back on a platform that's fun to tune. My IMC isn't the strongest, so it looks like 4200 memory is my max, while good IMCs generally get 4300.

So yeah, if you're looking at 13600K benchmarks from mainstream reviewers, take into account the chip's overall potential, not just what it does out of the box.
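If anyone wants to verify what their own all-core boost and temps are doing under load, a quick sampler like this works. A rough sketch assuming Linux and the psutil package; HWiNFO gives you the same readings on Windows:

import time
import psutil  # pip install psutil

# Sample per-core frequency and package temperature once per second while a
# stress test runs in another terminal; watch for clocks sagging under heat.
for _ in range(10):
    freqs = psutil.cpu_freq(percpu=True)
    temps = psutil.sensors_temperatures().get("coretemp", [])
    max_mhz = max(f.current for f in freqs)
    pkg = next((t.current for t in temps if "Package" in t.label), None)
    print(f"max core: {max_mhz:.0f} MHz, package: {pkg} C")
    time.sleep(1)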

 
I feel like X3D will give about a 10% performance increase in games, and they will still be behind in productivity. It's all just manipulation, same as all marketing.
People will buy them anyway, because many are mainly interested in playing games, even if the chips are a bit expensive. They will do this even if they can already comfortably hit 60 FPS in every game they play.

There's no real mystery to it. There's not much benefit from just increasing the core count.

Intel will release more CPUs to counter that and somehow squeeze more life out of the Golden Cove architecture.

The trouble is that just increasing the clock rate has limits in games, and those limits have already been reached with Zen 4 and Intel's 13th gen. There's still room to gain performance with extra cache, as we saw with the 5800X3D. Who knows, maybe Intel will try to increase the cache a bit further for the 14th gen?
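The cache math is easy to sketch. With illustrative (assumed, not measured) hit rates and latencies, the average memory access time shows why more L3 moves the needle where more clocks can't:

# AMAT = L3 hit time + L3 miss rate * DRAM penalty.
# All numbers here are illustrative assumptions, not measurements.
l3_hit_ns = 10.0  # rough L3 latency
dram_ns = 70.0    # rough extra penalty for going out to DRAM

for label, miss_rate in [("32 MB L3", 0.30), ("96 MB L3, X3D-style", 0.15)]:
    amat = l3_hit_ns + miss_rate * dram_ns
    print(f"{label}: AMAT = {amat:.1f} ns")

# 31.0 ns vs 20.5 ns: a ~34% cut in effective memory latency for games whose
# working set now fits in cache, and near zero change for everything else.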
 
People will buy them anyway, because many are mainly interested in playing games, even if the chips are a bit expensive. They will do this even if they can already comfortably hit 60 FPS in every game they play.

There's no real mystery to it. There's not much benefit from just increasing the core count.

Intel will release more CPUs to counter that and somehow squeeze more life out of the Golden Cove architecture.

The trouble is that just increasing the clock rate has limits in games, and those limits have already been reached with Zen 4 and Intel's 13th gen. There's still room to gain performance with extra cache, as we saw with the 5800X3D. Who knows, maybe Intel will try to increase the cache a bit further for the 14th gen?
14th gen, a rebadge. Lol-worthy.
 
One of the things I felt the reviewers didn't focus on was the potential of the lower-end chips. Pretty much any 13600K or 13700K can meet or exceed the 13900K's stock settings.

My 13600K (SP77, so about average) will daily 5.6 GHz P-cores, 4.4 GHz E-cores, and 5.0 GHz cache from here on. These are all-core boosts. The best part is that with an AFII 280mm AIO, my peak temps during these punishing stress tests are about 80C. In gaming, I'm hard pressed to even see the 70s with RTX-based games. I wanted to do a budget build for myself this time, so a 13600K, an Asus Z690 Strix-A, and B-die were plenty. It's nice to be back on a platform that's fun to tune. My IMC isn't the strongest, so it looks like 4200 memory is my max, while good IMCs generally get 4300.

So yeah, if you're looking at 13600K benchmarks from mainstream reviewers, take into account the chip's overall potential, not just what it does out of the box.

Exactly! You can get a lot more out of these Intel CPUs with very little difficulty. That’s what I like about Intel. If you want it pushed all out to the limit, if you want efficiency, if you want better temps: it’s all possible with some tweaking. Without that, where is the fun?
 

The benefits of tuning and having a good memory platform. Let’s see how much ground the X3D variants can make up.
From my testing, a stock 13900K + 7600 C34 RAM is around 15% faster than a 12900K clocked to 5.4 GHz / 4.2 GHz cache with 6000 C30 RAM. AIDA latency was pretty similar between the two: 52 ns for the 13900K and 54 ns for the 12900K.

My only issue with the 13900K is high power consumption in some specific games. In Cyberpunk, for example, I see a constant 150 W power draw. Not a problem in most games, since it's usually at 60 to 90 watts, but there are these few edge cases.
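If you want to log that per-game package draw yourself, the energy counter is easy to sample. A sketch assuming Linux with Intel RAPL exposed under /sys/class/powercap (often needs root); HWiNFO logging does the same job on Windows:

import time

# RAPL exposes a cumulative package energy counter in microjoules.
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

# Delta energy over delta time = average watts. Sample while the game runs.
prev = read_uj()
for _ in range(10):
    time.sleep(1)
    cur = read_uj()
    print(f"package power: {(cur - prev) / 1e6:.1f} W")
    prev = cur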
 
From my testing, a stock 13900K + 7600 C34 RAM is around 15% faster than a 12900K clocked to 5.4 GHz / 4.2 GHz cache with 6000 C30 RAM. AIDA latency was pretty similar between the two: 52 ns for the 13900K and 54 ns for the 12900K.

My only issue with the 13900K is high power consumption in some specific games. In Cyberpunk, for example, I see a constant 150 W power draw. Not a problem in most games, since it's usually at 60 to 90 watts, but there are these few edge cases.

Games that do a lot of asset streaming and/or are RT-heavy will hammer the CPU. In those games, RPL really distances itself in performance as well, so you certainly are getting performance for that power draw.
 
Games that do a lot of asset streaming and/or are RT-heavy will hammer the CPU. In those games, RPL really distances itself in performance as well, so you certainly are getting performance for that power draw.
I'm not entirely in agreement with that. I mean, a 15% uplift vs a maxed-out 12900K is pretty good, but the power draw issues in those specific games persist even when GPU-bottlenecked.

Also, my 13900K has no room for UV. I think the E-cores crash if I try, while the 12900K had huge room for undervolting; I was running it with a 0.16 V undervolt at stock settings.
 
I don’t undervolt on Intel. Finding the right LLC and Vcore for the frequency you want and then flipping to adaptive would be my recommendation. I have no idea where this UV idea on Intel came from, tbh.
 
From my testing, a stock 13900K + 7600 C34 RAM is around 15% faster than a 12900K clocked to 5.4 GHz / 4.2 GHz cache with 6000 C30 RAM. AIDA latency was pretty similar between the two: 52 ns for the 13900K and 54 ns for the 12900K.

My only issue with the 13900K is high power consumption in some specific games. In Cyberpunk, for example, I see a constant 150 W power draw. Not a problem in most games, since it's usually at 60 to 90 watts, but there are these few edge cases.
Cyberpunk is unique in that sense. It’s succeeded in becoming a benchmark rather than the great game it was supposed to be. It hammers your CPU.
 
I don’t undervolt on Intel. Finding the right LLC and Vcore for the frequency you want and then flipping to adaptive would be my recommendation. I have no idea where this UV idea on Intel came from, tbh.
Well, technically, tuning your LLC is undervolting in a sense. Indeed, I was using adaptive with a negative offset and was able to drop around 70 watts with a slight OC at full workloads. That isn't possible at all on my 13900K; even the slightest offset causes instability in y-cruncher.
 

Not sure why TG jumped the gun here, but CKD is going to be a relatively big change in DDR5. One of the issues with higher frequency and stability in DDR5 is signal integrity, the preservation of which is a main factor holding back DDR5 speeds.

CKD also requires support from the CPU. From my information, the current gen does not have it, but RPL-Refresh will. So for those who are into memory tuning, RPL-R with CKD DIMMs will be quite fun around Q3 when it's released.
 