
*** The Official Alder Lake owners thread ***

Hmm, yes, with those sticks you're probably not going to go higher in MHz, but you should still be able to tighten the timings a bit, even with 64GB.

I've recently done some for a mate. He had the cheapest DDR4-3600 going, the Corsair Vengeance 32GB (2x16GB) kit at 18-22-22-42 2T, and I've got them running stably at 3600 15-17-17-32 1T, with the secondaries and tertiaries tightened and tRFC down from 700 to 270 (rough cycles-to-ns sums at the end of this post). The latency dropped by over 10ns (from 63.5ns to 52.5ns), and this was only with a 12400 CPU, though it wasn't a Gigabyte board! ;) (MSI Z690-A Pro)

It's the SA and VDDQ that affect timings much more than DRAM voltage.

He's noticed a nice little improvement in games and apps that are latency sensitive.
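For anyone wanting to sanity-check those timing changes, here's a rough sketch (Python, purely illustrative) converting the cycle counts above into nanoseconds. It assumes DDR4-3600 clocks the memory at 1800MHz; the full AIDA64-style latency figure includes far more than the CAS portion, so this only shows the direction of travel, not the whole 10ns drop.

```python
# Back-of-the-envelope conversion of DDR4 timings from clock cycles to
# nanoseconds. DDR4-3600 transfers at 3600 MT/s but runs the memory
# clock at 1800MHz, so one cycle is ~0.556ns.

def cycles_to_ns(cycles: int, transfer_rate_mts: int) -> float:
    clock_mhz = transfer_rate_mts / 2       # 3600 MT/s -> 1800MHz
    return cycles * 1000.0 / clock_mhz      # cycles x ns-per-cycle

rate = 3600
for label, cl, trfc in [("XMP 18-22-22-42, tRFC 700", 18, 700),
                        ("Tuned 15-17-17-32, tRFC 270", 15, 270)]:
    print(f"{label}: CAS = {cycles_to_ns(cl, rate):.2f}ns, "
          f"tRFC = {cycles_to_ns(trfc, rate):.0f}ns")
# -> CAS drops from 10.00ns to 8.33ns, tRFC from ~389ns to 150ns
```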
 
Problem is, finding values that balance up takes quite a bit of trial and error, and that sort of faff feels a bit meh at times given everything I've already trawled through (Gigabyte BIOS!). Maybe I'll save the BIOS settings to a profile and then to USB so I have two backups, drop the timings to CL15 or 16 and see if that boots with the rest left as is. If not, I'll let it fail POST 3 times so the BIOS auto-reverts to defaults, restore the saved settings, and either start again or just leave it be :p

What sort of difference did he see in gaming, and at what res? If it's, say, 5fps, then for me that's not a number worth the extra trial-and-error faff! Personally I'd need to see a 10fps average gain to call it worthwhile. I'm at 3440x1440 for reference, and unless a game is unoptimised I tend to leave everything on max with ray tracing enabled too, and just control the locked framerate via RTSS, letting G-Sync take care of the rest.
 
I understand your quandary and I'd probably say that messing around with memory timings is for tweakers. (not the crystal meth kind! ;))

You always have to use percentages, not fps improvements. :) My mate says he measured around an 11% improvement, so in some things he got double the improvement we used to get from Intel CPUs year on year!
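To put the percentages point into concrete numbers, a quick sketch (the fps figures here are made up purely for illustration, not from anyone's benchmarks):

```python
# The same ~11% uplift means very different absolute fps depending on
# the starting framerate, which is why percentages travel better
# between systems and resolutions than raw fps deltas.

def pct_gain(before_fps: float, after_fps: float) -> float:
    return (after_fps - before_fps) / before_fps * 100.0

print(f"{pct_gain(60, 66.6):.1f}% (+6.6 fps at 60 fps)")      # ~11%
print(f"{pct_gain(144, 159.8):.1f}% (+15.8 fps at 144 fps)")  # ~11%
```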
 
11% is huge and by far the largest margin I've heard of. I'd read reviews comparing gaming performance between stock 3600 C18 and B-die C14 and the like, and whilst there was an improvement, it was only in the small-fps range; YouTube channels also have side-by-side frame stats of each. It was after seeing all that that I decided tweaking wouldn't benefit me for the kind of use mine gets, where outright GPU power is more important, since the CPU and memory aren't the bottleneck at the resolution and settings I play at.

For productivity a similar thing applies: all my apps use GPU acceleration for editing, encoding/decoding and exporting.

I think the only time I'll see a noticeable improvement is moving to DDR5 and its higher bandwidth, so that's next year with 14th gen, hopefully.
 
Haha, you're watching the wrong YouTube videos. ;) The problem with many of those videos is that they just show XMP 3600C14 B-die compared to XMP 3600C18, so they're not comparing properly tuned RAM with stock XMP. Also, if it is tuned, you don't know to what degree, as some just change the primary timings and that's it. When done 'properly' there can be some nice improvements, even when gaming at 1440p, as demonstrated here.

[Image 52692093906_15a5496659_c.jpg: 1440p gaming benchmarks, tuned RAM vs stock XMP, gains shown for the 5% lows]
 

I've seen similar improvements in certain titles, just going from a 16-19-19 3600MHz single-rank kit to a 14-14-14 3600MHz dual-rank kit (sub-50ns latency). With B-die, it's also the improvements you can make to the secondary and tertiary timings. Bringing tRFC right down alone has a decent impact in certain games.

It's only worth doing if you enjoy tweaking though, as it can take a while to get things fully stable on certain setups.
 

Thanks, those % gains are decent, although it does highlight that the gains are completely game-engine dependent. Also, looking at the text in the image, those figures are based solely on the 5% lows, which is understandable because the CPU and RAM subsystem affect the % lows more than the overall average, so if someone is CPU limited in a game you'd expect to see a bigger gain in that metric from tuning things like memory (see the quick sketch below for how those metrics are typically derived). I'd prefer to see the overall average rather than a smaller percentile tbh, as in all the games I play the % lows are stable and at least 60fps anyway :p

This is largely the reason I haven't really been too bothered about tweaking too far. Back in the days when I didn't have a capable CPU/RAM I did OC and tweak to gain every last frame I could manage, but these days it's not really a concern, I guess!
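Since that chart leans on the % lows, here's a hedged sketch of how those metrics can be derived from a frame-time log. Conventions differ between capture tools (some average the slowest N% of frames instead of taking a percentile), and the trace below is fabricated purely for illustration.

```python
# Sketch: average fps and "5% low" from per-frame render times, taking
# the 5th percentile of the per-frame fps values.

def fps_metrics(frame_times_ms):
    fps_sorted = sorted(1000.0 / ft for ft in frame_times_ms)
    avg = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)  # frames / seconds
    low_5 = fps_sorted[int(0.05 * (len(fps_sorted) - 1))]       # 5th percentile
    return avg, low_5

# Fabricated trace: mostly ~120fps frames with a few 50fps stutters.
trace = [8.3] * 95 + [20.0] * 5
avg, low5 = fps_metrics(trace)
print(f"avg ~ {avg:.0f} fps, 5% low ~ {low5:.0f} fps")   # ~113 fps / ~50 fps
```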
 
I also used to be a RAM-tuning agnostic, as my view was "it makes no difference in Cinebench so why bother". It wasn't until I was using DxO PhotoLab that I noticed Mr Robert896r1 here was beating my result, even though his CPU was 100MHz slower than mine. I noticed that his RAM was tuned and mine was at XMP, so after spending some time tuning my RAM and re-running it I managed to get the same score as him. My eureka moment. :)

I also watched a Gamers Nexus video a while back where they showed the difference simple RAM tuning makes to frame times. That's what we perceive as smoothness in a game. I'll have to see if I can find it again.
 
Maybe it's worth a revisit; as you say, it is set and forget after all once the balance has been found.

Realistically though, how far can the RAM I've got go? They're 32GB modules and not B-die, bear in mind!

I already know that the previous manual DRAM voltage of 1.46v doesn't boot on this new BIOS version, nor does VCCSA at 1.3v. I've not touched anything else though!
 

The easy hack is to set tRRD_S and tRRD_L to 4 and then tFAW to 16. That'll be most of your gains. It'll put more stress on your CPU, memory, etc., so do stability tests please.
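For what it's worth, a small consistency sketch of that suggestion, based on the usual rule of thumb that tFAW caps four row activations in a rolling window, so a tFAW at or below 4 x tRRD_S adds no delay beyond what tRRD_S already enforces. It says nothing about whether a given kit is actually stable at those values.

```python
# Check the suggested activate timings against the rule of thumb
# tFAW <= 4 * tRRD_S (at which point tFAW imposes no extra penalty,
# since tRRD_S already spaces four activations past the tFAW window).
# Stability still has to be proven with a proper memory test.

def tfaw_is_redundant(trrd_s: int, tfaw: int) -> bool:
    return tfaw <= 4 * trrd_s

trrd_s, trrd_l, tfaw = 4, 4, 16   # the suggested 'easy hack' values
assert trrd_l >= trrd_s, "tRRD_L is normally >= tRRD_S"
print(f"tRRD_S={trrd_s}, tRRD_L={trrd_l}, tFAW={tfaw}: "
      f"tFAW adds no extra delay = {tfaw_is_redundant(trrd_s, tfaw)}")
```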
 
That sounds easy enough, so: turn off XMP, set the timings manually to the XMP values, then set tRRD_S and tRRD_L to 4 and tFAW to 16?

For stability testing, if it's unstable, which voltage would you say is the one to increase or set? Keeping in mind all voltages are currently on auto, since that appears to work fine now, vs the old BIOS that needed them set manually.
 

Yeah, set the timings manually where possible. The voltages will be VDDQ, SA and DRAM for the memory, and vcore for the CPU. Keep in mind that when you OC RAM and make it run faster, it stresses the CPU more as well.
 
I'm fine with keeping the RAM at 3200MHz btw, as I saw no benchmarking difference between 3600 and 3200 back when I was trying to get 3600 stable, and found 3200 better for it. So in effect the RAM has more slack currently; it's just the changes in those timings that will offset against the downclocked RAM, I guess.
 
I've had examples where tRRD_L 4 is not possible and it has to be bumped up a little. One of mine had to be set to tRRD_L 7 / tRRD_S 4.
 
I will have a play around, cheers!

I also did some further watching; this guy, for example, measures over a 20% fps uplift in the % lows simply from going from 3200MHz to 4000MHz RAM, keeping the timings exactly the same. 20% is a significant number (rough bandwidth sums at the end of this post):


I ran AIDA64 and checked my current results:

[Image u1w1xCH.png: AIDA64 cache and memory benchmark results at current settings]
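On that 3200 vs 4000 jump, a rough sketch of the theoretical dual-channel bandwidth ceiling (assuming standard 64-bit channels). Real gains obviously depend on how CPU- and memory-bound a given game is, so this only shows how far the ceiling moves:

```python
# Theoretical peak bandwidth for dual-channel DDR4 (8 bytes per
# transfer per 64-bit channel). Illustrative only; actual in-game
# gains depend on how memory-bound the workload is.

def peak_bandwidth_gbs(transfer_rate_mts: int, channels: int = 2) -> float:
    return transfer_rate_mts * 8 * channels / 1000.0   # MB/s -> GB/s

for rate in (3200, 4000):
    print(f"DDR4-{rate}: ~{peak_bandwidth_gbs(rate):.1f} GB/s")

print(f"uplift: {(4000 - 3200) / 3200 * 100:.0f}%")   # 25% more raw bandwidth
```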
 
Yeah, looking at Techspot's chart, even 3600 CL16 offers 57ns, which is a significant latency drop. I guess it's time to tweak my latency, since that's what matters more for my workloads than raw frequency! Even 3200 at CL16 is 60ns.

[Image HNPAt51.png: Techspot memory latency chart by speed and CAS latency]
 

Also, another thing to remember: they could be running clean systems, so there's hardly anything running in the background.
 
You will be able to reduce that latency by quite a bit.

It's very time-consuming tuning RAM, because of the testing. You really do need to do it for your particular system though. You can't just copy and paste someone else's settings unfortunately, because what works for them might not work for you.
 