
*** The Official Alder Lake owners thread ***

Stock 12900K vs 5.7GHz all-core overclocked 7700X. Ouch

12900k

cyberpunk.png


7700x

7700x-5000-vs-5700.jpg
 
My gasket repair kit came today, still waiting for the LGA 1700 frame kit. Might wait till the Raptor Lake CPUs come out to decide if it's worth upgrading to or not before messing about with the watercooler.
Mine arrived last week, now I just need a reason to take the AIO out... Come on 14700K :p

12900K playing Spider-Man at the Hardware Unboxed settings (1080p High, 6 RT)

I have decided to disregard Spider-Man entirely. It has a weird-ass engine that spikes the CPU for absolutely no reason, yet gameplay stays absolutely smooth with zero stutter or fps dips at any point at 3440x1440; it is always well above 60fps with RTX on, DLSS Quality and everything else maxed. The spikes and high average CPU usage make no sense. It is also the only game on PC using the Insomniac engine, so it does not represent a wider gaming experience. Besides all that, I found the game rather boring, hence the refund on Steam.
 
There is a new BIOS version (F21) for the Gigabyte Z690; the changelog has one item that stuck out:

  1. Checksum : F395
  2. Improve the linkage between Resizable and 4G above
  3. Add "Instant 6GHz" profile in CPU upgrade option (supports i9-13900K/KF and i7-13700K/KF)
I'm going to guess that this simply refers to the Resizable BAR option being unlocked in the UI when Above 4G Decoding is toggled to Enabled (it is hidden until that point), rather than an actual performance improvement.

Still not updated the BIOS since I rolled back ages ago...

Also, for anyone considering the new "Instant 6GHz" BIOS feature on the 13700K/13900K: keep in mind this pumps a whole load of vcore into the CPU and load line. Expect more heat and more power, all for a ~3% uplift (6.0GHz against the 13900K's stock 5.8GHz boost). Doesn't seem worth it at all.
 
Ah, I remember you updating previously, Mark. No wonder you are not bothering, lol.
I'm kinda weak, and updated mine to that revision.
Then again, I'm running with a per-core overclock.
All seems well with the 12700K.
The 13700K seems a very decent chip.
 
Hmm, can you check the BIOS for the 4G Decoding option layout and see if they have just moved the options around to be clearer for the majority, etc.?

I am skipping 13th gen and going to upgrade to a 14700K, as the new 'Intel 4' process is supposed to be more efficient, so it should not have the heat and power draw issues 13th gen does. Whilst the 13700K is even better than the 12900K, the extra cost/heat/power draw etc. doesn't seem worth the upgrade when everything is running nice and fast as it is.
 
I'll be back home soon, I'll check and let you know.

IIRC the 4G option is now set apart and stands out, and by default is on the favourites menu.
I'll check tho.



Point taken in regard to 13th gen, especially in comparison to 12th. Diminishing returns and all of that.

I do remember testing out a 3080 Ti, and for the uplift over the 3070 at 1440p it wasn't worth the typical 100W increase in power it used. Well, not worth it to me.
 
That's fair; the 3080 Ti undervolted, however, delivers similar performance (or better, due to running cooler and staying at boost longer) with up to 100W less power draw.

Sounds like the 4G Decoding options etc. were just relocated and placed in Favourites for easy finding, so no BAR performance changes. I shall stick with this same BIOS for longer then :p
 
I never got to test out much of the undervolting potential. The FE model I bought didn't overly impress me, again compared to the 3070 at 1440p. Perhaps I could have improved on the little undervolting I tried, but stability seemed not to be as great as I would have thought; then again, I wasn't stressing the speculative approach much, knowing it was being refunded.

I remember now...

bar.jpg


By default the 4G and BAR are set to "Auto", but even though I have a 3070, the BAR was showing as disabled when checked through GPU-Z. I then changed both to "Enabled" and all has been well. So using the "Auto" setting didn't result in BAR being enabled.
 
Yup, looks like they un-grouped BAR and 4G, as previously you only saw BAR if 4G was enabled (Disabled by default). Definitely will keep the BIOS as is then, thanks :p
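
Side note for anyone wanting to sanity-check the same thing on Linux without GPU-Z: a minimal sketch, assuming pciutils is installed and an Nvidia card (PCI vendor ID 10de). With ReBAR active, the card's prefetchable BAR reports multiple GB instead of the classic 256M window:

    import re
    import subprocess

    # Verbose PCI info for Nvidia devices (10de = Nvidia's PCI vendor ID).
    out = subprocess.run(["lspci", "-vv", "-d", "10de:"],
                         capture_output=True, text=True).stdout

    # A ReBAR-enabled card reports a large prefetchable BAR (e.g. [size=8G])
    # instead of the traditional 256M aperture.
    for size in re.findall(r"\[size=(\d+[KMG])\]", out):
        print("BAR size:", size)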
 
As video encoding with QuickSync was a discussion previously: after all this time I finally got round to checking out the video encoding capabilities of the KF. Since it has no iGPU there's no QuickSync, so it's relying on raw core power or GPU acceleration. I compared this against NVENC (the RTX 30 series has gen 7 NVENC hardware, which is much better than the Pascal-gen NVENC, so basically RTX 2060 and up) and saw what I expected: with CPU encoding alone, NVENC is obviously faster.

This was for a 10-bit 4K sample video to compare encoding time/speed against.

The motivation for this test was a video (albeit an old one) that popped up on YouTube comparing QuickSync vs NVENC vs CPU; the results for that were:

yiGyf2W.png


And my findings, in order: H.264 CPU / H.264 NVENC / H.265 CPU / H.265 NVENC:

h264_CPU.jpg


h264_NVENC.jpg


h265_CPU.jpg


h265_NVENC.jpg


I found most interesting the difference in file sizes between the CPU encodes and NVENC, even though the HandBrake parameters remained the same and only the encoder was switched from CPU to NVENC. Of course the bitrates varied, but even still.

Sizes.jpg

(Original source video on the left)
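
For context on why the bitrate differences translate into such size gaps: file size is essentially bitrate times duration divided by 8, so every extra Mbps adds roughly 450MB per hour of video. A quick back-of-envelope sketch, with made-up bitrates rather than the actual encodes above:

    # Rough estimate: size_bytes = bitrate_bits_per_sec * duration_sec / 8.
    # Bitrates here are made up for illustration, not the actual encodes above.
    def size_gb(bitrate_mbps: float, minutes: float) -> float:
        return bitrate_mbps * 1e6 * minutes * 60 / 8 / 1e9

    print(size_gb(20, 10))  # 20 Mbps over 10 min -> 1.5 GB
    print(size_gb(28, 10))  # 28 Mbps over 10 min -> 2.1 GB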

The output picture/motion etc. between them all appears identical.

I think with this in mind I'll just stick to a KF in future too, if the price difference between the K and KF remains the price of a game or beyond. I won't be using QSV anyway with the above in mind, since NVENC is just faster.
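
For anyone wanting to run the same A/B at home, here's a minimal sketch using HandBrakeCLI. The filenames are hypothetical, the encoder names are the ones HandBrake exposes (x265_10bit and nvenc_h265), and the same quality number doesn't mean quite the same thing to x265 and NVENC, so treat it as a speed/size probe rather than a quality shoot-out:

    import os
    import subprocess
    import time

    SRC = "sample_4k_10bit.mkv"  # hypothetical source clip

    # HandBrake's encoder names: x265_10bit (CPU software) vs nvenc_h265 (GPU).
    for encoder, dest in [("x265_10bit", "cpu_h265.mkv"),
                          ("nvenc_h265", "nvenc_h265.mkv")]:
        start = time.time()
        subprocess.run(["HandBrakeCLI", "-i", SRC, "-o", dest,
                        "--encoder", encoder, "--quality", "22"],
                       check=True)
        mb = os.path.getsize(dest) / 1e6
        print(f"{encoder}: {time.time() - start:.0f}s, {mb:.0f} MB")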

Now, I also watched a more recent comparison of 12th gen QSV vs the other H.265 and H.264 encoders from both AMD and Nvidia. 12th gen QSV is praised highly, so there is that, but the whole field has changed now with hardware AV1 encoding:


Intel's GPU AV1 encoder beats everything as tested in the review above. At that time the RTX 40 series was not out, and RTX 40 has AV1 encode/decode, so it will be interesting to see that added. Intel's gen 1 AV1 encoder is currently the top, but I do plan on getting an RTX 40 series card at some point, or waiting until the RTX 50 series, so sticking with Nvidia hardware encoding seems the best option for me, as AV1 is the future: better quality, lower bitrates, and lower file sizes than any other codec. My main use case for this is recording game footage, so this fits nicely I think.
 
@Vimes you will be happy to know that I have finally updated the BIOS to F22 (was on F6 as you know); I could not stand being five versions out of date for so long, so just went for it. Now XMP works with the same RAM, although with everything left on Auto it boots up in Gear 2, not Gear 1, with XMP enabled. With XMP disabled it boots into Gear 1 fine. I have manually changed Gear to 1 and set the RAM back to 3200MHz for the time being, whilst I slowly increase RAM frequency to see if it still fails to boot at higher frequencies like before.

Looks like Gigabyte adjusted the voltages and things with the new BIOS, because my previous manual VCCSA and DRAM voltages didn't allow the system to boot; on Auto, however, it appears fine (so far).

Did notice some settings moved around and some new bits too in this BIOS.
 
@mrk It looks like you have had a good result overall with that BIOS update, well done. Interesting that your previous VCCSA and RAM voltages did not work.
Anything over 3200MHz seems to default to Gear 2 with XMP, as you note.
Post back how you get on with anything other than your previously stable memory settings. IIRC the reduction to 3200MHz made no real difference for you, except for stability.
 
I'm just going to leave it on 3200MHz tbh and ride DDR4 out until I upgrade to DDR5 next year with a 14700K. Yeah, 3200 vs 3600 saw no measurable difference in performance, hence why I just left it at that for so long too.

So Gear 1, 3200MHz; at least everything remains 1:1 with the CPU memory controller :p
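
For anyone wondering why 3200MHz in Gear 1 holds up against faster XMP speeds in Gear 2: Gear 1 runs the memory controller 1:1 with the DRAM clock, while Gear 2 halves the controller clock, adding latency. A rough first-word-latency sketch, illustrative numbers only:

    # First-word latency in ns = 2000 * CL / MT/s (DDR moves data twice per clock).
    # Illustrative only; Gear 2 adds memory-controller latency on top of this.
    def first_word_ns(cl: int, mts: int) -> float:
        return 2000 * cl / mts

    print(first_word_ns(18, 3600))  # CL18-3600 at XMP -> 10.0 ns
    print(first_word_ns(18, 3200))  # same sticks at 3200 -> 11.25 ns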
 
I think this is one of the reasons I tend to avoid Gigabyte motherboards now: they can make some great hardware which is let down by poor BIOS implementation, especially when it comes to RAM.

Things you might try, just in case you haven't already: VCCSA ~1.30V, VDDQ ~1.40V. Take down all the XMP timings for your RAM and then enter them all manually. Run tCWL at tCL minus 1, so if your CAS Latency is set to 15 then run tCWL at 14; bear in mind that sometimes tCWL doesn't like odd numbers!
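
A trivial sketch of that tCWL rule of thumb, purely illustrative (the even_only flag is hypothetical and just stands in for boards that reject odd values):

    # Rule of thumb from the post above: tCWL = tCL - 1, stepping down to the
    # next even value for boards that refuse odd tCWL numbers.
    def suggest_tcwl(tcl: int, even_only: bool = False) -> int:
        tcwl = tcl - 1
        if even_only and tcwl % 2:
            tcwl -= 1
        return tcwl

    print(suggest_tcwl(15))                  # -> 14
    print(suggest_tcwl(16, even_only=True))  # 15 is odd -> 14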
 
The VCCSA etc. was what I was on prior to this latest BIOS and it was fine, as I had all timings, important voltages etc. set manually. Now those same manual settings don't seem to pass POST, whilst the Auto settings work fine (Auto was unstable before), so I'm leaving them on Auto, lol. My RAM is only CL18 3600, but I suspected all along that because I have 2 sticks of 32GB, and they're not B-die, I can't really time them any tighter than stock without instability, so I've just used them at stock timings.

My benchmarks and gaming performance seem to match or exceed those of others with similar specs, so I just called it a day at that, I guess.

For ref:

e3GX8U5.png
 