*** The Official Alder Lake owners thread ***

Not got my kit yet, but the gasket only starts to erode after a couple of years, so I'm in no major rush. I'll almost certainly wait until I've upgraded to a 13700K or 14700K (most likely) and gone the DDR5 route with the next upgrade.
 
My gasket repair kit came today; still waiting for the LGA 1700 frame kit. Might wait till the Raptor Lake CPUs come out to decide whether it's worth upgrading to or not before messing about with the water cooler.
Mine arrived last week, now just need a reason to take the AIO out.... Come on 14700K:p

12900K playing Spider-Man at Hardware Unboxed settings (1080p, High, 6 RT)

I have decided to disregard Spider-Man entirely. It has a weird-ass engine that spikes the CPU for absolutely no reason yet still delivers absolutely smooth gameplay with zero stutter or fps dips at any point at 3440x1440; it's always well above 60fps with RTX on, DLSS Quality and everything else maxed, which makes no sense given the spikes and high average CPU usage. It's also the only game on PC using the Insomniac engine, so it doesn't represent the wider gaming experience. Besides all that, I found the game rather boring, hence the refund on Steam.
 
There is a new BIOS version (F21) for the Gigabyte Z690; the changelog has one item that stuck out:

  1. Checksum : F395
  2. Improve the linkage between Resizable and 4G above
  3. Add "Instant 6GHz" profile in CPU upgrade option (supports i9-13900K/KF and i7-13700K/KF)
I'm going to guess that this simply refers to the Resizable BAR option being unlocked in the UI when Above 4G Decoding is enabled (it's hidden until that point), rather than any actual performance improvement.

Still not updated the BIOS since I rolled back ages ago...

Also, for anyone considering the new "Instant 6GHz" BIOS feature on the 13700K/13900K, keep in mind this pumps a whole load of vcore into the CPU and load line. Expect more heat and more power... all for a ~3% uplift. Doesn't seem worth it at all.
 
Hmm, can you check the 4G Decoding option layout in the BIOS and see if they have simply moved the options around to be clearer for most people?

I am skipping 13th gen and going to upgrade to a 14700K, as the new 'Intel 4' process is supposed to be more efficient and so shouldn't have the heat and power draw 13th gen does. Whilst the 13700K is even better than the 12900K, the extra cost/heat/power draw etc. doesn't seem worth the upgrade when everything is running nice and fast as it is.
 
That's fair; the 3080 Ti undervolted, however, gives similar performance (or better, due to running cooler and staying at boost longer) with up to 100W less power draw.

Sounds like the 4G Decoding options etc were just relocated and placed in Favs for easy finding - So no BAR performance changes. I shall stick with this same BIOS for longer then :p
 
Yup looks like they un-grouped BAR and 4G as you only saw BAR if 4G was enabled (Disabled by default). Definitely will keep BIOS as is then thanks :p
 
As video encoding with QuickSync was discussed previously, after all this time I finally got round to checking out the video encoding capabilities of the KF. Since I have no iGPU there's no QuickSync, so it's down to raw core power or GPU acceleration. I compared this against NVENC (the RTX 30 series is on gen 7 of the NVENC hardware, which is much better than the older Pascal-gen NVENC - so basically RTX 2060 and up) and saw what I expected: compared with CPU encoding alone, NVENC is obviously faster.

This was for a 10-bit 4K sample video to compare encoding time/speed against.

The motivation for this test was an (albeit old) video that popped up on YouTube comparing QuickSync vs NVENC vs CPU; the results from that were:

[Image: QuickSync vs NVENC vs CPU results from the YouTube comparison]


And my findings, in order, H.264 CPU / H.264 NVENC / H.265 CPU / H.265 NVENC:

[Image: H.264 CPU encode result]

[Image: H.264 NVENC encode result]

[Image: H.265 CPU encode result]

[Image: H.265 NVENC encode result]


What I found most interesting was the difference in file sizes between the CPU encode and NVENC, even though the HandBrake parameters remained the same and I only switched the encoder from CPU to NVENC. Of course the bitrates varied, but even so.

[Image: output file size comparison]

(Original source video on the left)

The output picture/motion etc between them all appear identical.
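
For anyone wanting to script the same comparison, here's a rough sketch of how it could be automated - the source filename and quality value are just placeholders, and the HandBrakeCLI encoder names can differ between builds, so treat it as illustrative rather than my exact workflow:

Python:
# Rough sketch: time a CPU (x265) encode against an NVENC (nvenc_h265) encode
# of the same source with otherwise identical settings, then compare sizes.
# Assumes HandBrakeCLI is installed and on the PATH.
import subprocess
import time
from pathlib import Path

SOURCE = Path("sample_4k_10bit.mkv")  # placeholder name for the 10-bit 4K test clip

ENCODERS = {
    "h265_cpu": "x265",          # software encode on the CPU cores
    "h265_nvenc": "nvenc_h265",  # hardware encode on the NVENC block
}

for name, encoder in ENCODERS.items():
    out = SOURCE.with_name(f"{SOURCE.stem}_{name}.mkv")
    start = time.perf_counter()
    subprocess.run(
        ["HandBrakeCLI", "-i", str(SOURCE), "-o", str(out),
         "-e", encoder,  # only the encoder changes between the two runs
         "-q", "22"],    # same constant-quality target for both
        check=True,
    )
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.1f}s, {out.stat().st_size / 1_000_000:.0f} MB")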

I think with this in mind I'll just stick to a KF in future too, if the price difference between K and KF remains the price of a game or more. I won't be using QSV anyway, since NVENC is just faster.

Now, I also watched a more recent comparison of 12th gen QSV vs the H.265 and H.264 encoders from both AMD and Nvidia. 12th gen QSV is praised highly, so there is that, but the whole field has changed now with hardware AV1 encoding:


Intel's GPU AV1 encoder beats everything as tested in the review above. At that time the RTX 40 series was not out, and RTX 40 has AV1 encode/decode, so it will be interesting to see that added. Intel's gen 1 AV1 encoder is currently top, but I do plan on getting an RTX 40 series card at some point, or waiting for the RTX 50 series, so switching to Nvidia hardware encoding still seems the best option for me. AV1 is the future: better quality, lower bitrates and smaller file sizes than any other codec. My main use case for this is recording game footage, so this fits nicely I think.
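
Once I'm on an RTX 40 card, re-encoding captures to AV1 should look something like the sketch below - assuming an ffmpeg build that includes the av1_nvenc hardware encoder; the flags and filenames here are just an example, not a tested recipe:

Python:
# Minimal sketch: hardware AV1 re-encode of captured game footage on an
# RTX 40 class GPU, assuming ffmpeg was built with the av1_nvenc encoder.
import subprocess

def reencode_to_av1(source: str, output: str, quality: int = 30) -> None:
    """Constant-quality AV1 encode; lower `quality` = better quality, bigger file."""
    subprocess.run(
        ["ffmpeg", "-i", source,
         "-c:v", "av1_nvenc",  # NVIDIA hardware AV1 encoder
         "-cq", str(quality),  # constant-quality target
         "-b:v", "0",          # let the quality target drive the bitrate
         "-c:a", "copy",       # keep the audio stream untouched
         output],
        check=True,
    )

reencode_to_av1("gameplay_capture.mkv", "gameplay_av1.mkv")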
 
@Vimes you will be happy to know that I have finally updated the BIOS to F22 (was on F6 as you know); I couldn't stand being five versions out of date for so long, so I just went for it. Now XMP mode works with the same RAM, although with everything left on Auto and XMP enabled it boots up in Gear 2, not Gear 1. With XMP disabled it boots into Gear 1 fine. I have manually changed Gear to 1 and set the RAM back to 3200MHz for the time being, whilst I slowly increase the RAM frequency to see if it still fails to boot at higher frequencies like before.

Looks like Gigabyte adjusted the voltages and things with the new BIOS, because my previous manual VCCSA and DRAM voltages didn't allow the system to boot; on Auto, however, it appears fine (so far).

Did notice some settings moved around and some new bits too in this BIOS.
 
I'm just going to leave it on 3200MHz tbh and ride DDR4 out until I upgrade to DDR5 next year with a 14700K - yeah, 3200 vs 3600 saw no measurable difference in performance, hence why I just left it at that for so long too.

So gear 1, 3200MHz, at least everything remains 1:1 with the CPU memory controller :p
 
The VCCSA etc. was what I was running prior to this latest BIOS and it was fine, as I had all the timings and important voltages set manually. Now those same manual settings don't seem to pass POST, whilst the Auto settings work fine (Auto was unstable before) - so I'm leaving them on Auto lol. My RAM is only CL18 3600, but I suspected all along that because I have two 32GB sticks, and they're not B-die, I can't really time them any tighter than stock without instability anyway, so I've just used them at stock timings.

My benchmarks and gaming performance seem to match or exceed those of others with similar specs, so I just called it a day at that I guess.

For ref:

[Image: benchmark results for reference]
 
Problem is, finding values that balance out takes quite a bit of trial and error, and that sort of faff just feels a bit meh at times given everything I've trawled through already (Gigabyte BIOS!). Maybe I'll just save the BIOS settings to a profile and also save them to USB so I have two backups, then drop the timings to CL15 or 16 and see if that boots with the rest left as is. If not, let it fail POST three times so the BIOS auto-reverts to defaults, restore the saved settings and either start again or just leave it be :p

What sort of difference did he see in gaming and at what res? If it's, say, 5fps, then for me that's not a number worth the extra trial-and-error faff! I'd need to see a 10fps average gain to call it worthwhile, personally. I'm at 3440x1440 for reference, and unless a game is unoptimised I tend to leave everything on max with ray tracing enabled, and just control the locked framerate via RTSS, letting G-Sync take care of the rest.
 
11% is huge and by far the largest margin I've heard of. I'd read reviews comparing gaming performance between stock 3600 C18 vs B-die C14 and the like, and whilst there was an improvement, it was only in the small-fps range; YouTube channels also have side-by-side frame stats of each. It was after seeing all that that I decided tweaking wouldn't benefit me for the kind of use I'm giving it, where outright GPU power is more important since the CPU and memory aren't the bottleneck at the resolution or settings I play at.

For productivity a similar thing applies: all my apps use GPU acceleration for editing, encoding/decoding and exporting.

I think the only time I'll see a noticeable improvement is going DDR5 with high bandwidth memory, so that's next year with 14th gen hopefully.
 
Haha, you're watching the wrong YouTube videos. ;) The problem with many of those videos is that they just show XMP 3600C14 B-die compared to XMP 3600C18, so they're not comparing properly tuned RAM with stock XMP. Also, if it is tuned you don't know to what degree, as some just change the primary timings and that's it. When done 'properly' there can be some nice improvements, even when gaming at 1440p, as demonstrated here.

[Image: tuned RAM vs stock XMP gaming benchmarks at 1440p]

Thanks, those % gains are decent, although it does highlight that the gains are completely game-engine sensitive. Also, looking at the text in the image, those figures are based solely on the 5% percentile, which is understandable because the CPU and RAM subsystem affect the % lows more than the overall average; so if someone is CPU-limited in a game, you would expect to see a bigger gain in that metric from tuning things like memory. I'd prefer to see the overall average rather than a smaller percentile tbh, as in all the games I play the % lows are stable and at least 60fps anyway :p
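
For anyone wondering how those %-low numbers are actually derived, here's a rough illustration of one common definition (capture tools differ slightly in how they calculate it), using made-up frame times:

Python:
# Rough illustration: average fps and "5% low" fps from a frame-time log.
# One common definition: average the slowest 5% of frame times, convert to fps.
def fps_metrics(frame_times_ms: list[float]) -> tuple[float, float]:
    avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
    slowest = sorted(frame_times_ms, reverse=True)
    worst = slowest[: max(1, len(slowest) // 20)]  # the slowest 5% of frames
    low_5pct_fps = 1000 / (sum(worst) / len(worst))
    return avg_fps, low_5pct_fps

# Made-up example: mostly ~10 ms frames (~100 fps) with a handful of 25 ms spikes.
sample = [10.0] * 95 + [25.0] * 5
avg, low = fps_metrics(sample)
print(f"average: {avg:.0f} fps, 5% low: {low:.0f} fps")  # ~93 fps average, 40 fps 5% low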

This is largely the reason I haven't really been too bothered about tweaking too far. Back in the days when I didn't have a capable CPU/RAM I did overclock and tweak to gain every last frame I could manage, but these days it's not really a concern I guess!
 
Maybe it's worth a revisit; as you say, it is set and forget after all once the balance has been found.

Realistically though, how far can the RAM I've got go? They're 32GB modules and not B-die, bear in mind!

I already know that the previous manual DRAM voltage of 1.46V doesn't boot on this new BIOS version, nor does VCCSA at 1.3V - I've not touched anything else though!
 
That sounds easy enough: so turn off XMP, set the timings manually to the XMP values, set tRRD_S and tRRD_L to 4 and then tFAW to 16?

For stability testing, if it's unstable, which voltage would you say is the one to increase or set? Keeping in mind all voltages are currently on Auto, since that's what appears to work fine now, versus the old BIOS that needed manual values.
 
I'm fine with keeping the RAM at 3200MHz btw, as I saw no benchmarking difference between 3600 and 3200 back when I was trying to get 3600 stable and found 3200 better for it. So in effect the RAM has more slack currently; the changes to those timings will just offset against the downclocked RAM I guess.
 
I will have a play around, cheers!

I also did some further watching; this guy, for example, calculates that the % lows gain over a 20% fps uplift simply from going from 3200MHz to 4000MHz RAM, keeping the timings exactly the same. 20% is a significant number:


I ran AIDA64 and checked my current results:

[Image: AIDA64 cache and memory benchmark results]
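
As a back-of-the-envelope sanity check on the frequency side of that claim, theoretical peak bandwidth scales directly with the data rate (the figures below assume dual-channel DDR4 with 64-bit channels; real AIDA64 reads come in a bit under these):

Python:
# Back-of-the-envelope: theoretical peak bandwidth for dual-channel DDR4.
# Each channel is 64 bits (8 bytes) wide and transfers at the data rate.
def peak_bandwidth_gbs(data_rate_mts: int, channels: int = 2) -> float:
    return data_rate_mts * 8 * channels / 1000  # GB/s

for rate in (3200, 3600, 4000):
    print(f"DDR4-{rate}: {peak_bandwidth_gbs(rate):.1f} GB/s")
# DDR4-3200: 51.2 GB/s, DDR4-3600: 57.6 GB/s, DDR4-4000: 64.0 GB/s
# (4000 offers 25% more theoretical bandwidth than 3200)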
 
Yeah, looking at Techspot's chart, even 3600 CL16 offers 57ns, which is a significant latency drop. I guess it's time to tweak my latency, since that's what matters more for my workloads than raw frequency! Even 3200 at CL16 is 60ns.

[Image: Techspot DDR4 memory latency chart]
 
So I've set tRRD_S and tRRD_L as per the above, as well as tFAW to 16. Booted up all fine; I left the CL at the default 18 and manually set the DRAM voltage to 1.35V.

AIDA64 now shows:

[Image: AIDA64 results after the timing changes]

Any clue as to why the MT/s has slightly dropped, resulting in slightly higher latency? The L2 and L3 cache results have increased, however.

I'm thinking with this result that maybe just setting CL to 16 is the better option? I can't remember fully but I am sure I tried CL16 at 3200MHz once before and could not get it to boot so left it at CL18. Would CL17 be a viable option?
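
For what it's worth, the first-word CAS latency in nanoseconds is easy to estimate (this is only one component of the full round-trip latency AIDA64 reports, so treat it as a rough guide only):

Python:
# Rough guide: first-word CAS latency in ns for DDR4.
# DDR transfers twice per clock, so clock period (ns) = 2000 / data rate (MT/s),
# and CAS latency (ns) = CL cycles * clock period.
def cas_latency_ns(cl: int, data_rate_mts: int) -> float:
    return cl * 2000 / data_rate_mts

for cl, rate in [(18, 3200), (17, 3200), (16, 3200), (18, 3600)]:
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(cl, rate):.2f} ns")
# DDR4-3200 CL18: 11.25 ns, CL17: 10.63 ns, CL16: 10.00 ns; DDR4-3600 CL18: 10.00 ns

So on paper, CL17 at 3200 sits roughly halfway between CL18 and CL16.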
 
Does the fact that they are 32GB modules factor into that though? If I switched to 4x16GB modules for example could that make a difference?
 