What are your thoughts on CAMM2?

Personally think it's an absolutely backwards step.

Yes it may be "necessary" for high-speed memory, but at what cost? In terms of what it realistically does for allowing "upgrades", you may as well just solder the RAM onto the motherboard.



Yes it takes up marginally less board area than 4 DIMM slots - but for what reason? If board area savings were so important, then dropping to 1 or 2 DIMM slots saves space too. Similarly, if it were all about board area, SO-DIMM modules have been a thing for almost as long as DIMMs have :)

It also saves height - but outside of laptops or small form factor machines, why does that matter? An ATX-based PC has plenty of other components that sit taller (backplate, CPU cooler, PCI-E cards), so why does the RAM also need to be flat?


Want to add some more RAM?
Oh wait - rather than just buying another DIMM or two, you now have to throw away that perfectly good CAMM module and replace it with a bigger one.
No problem - I'll just resell the old CAMM module... except no one wants it, because everyone wants a higher-capacity one. Whereas in most cases older, lower-capacity DIMMs could be reused in older machines to make up 4-DIMM sets and add more memory.
I'd wager it's actually an anti-consumer practice dreamt up by the RAM cartels to grab a slice of the "Apple" non-upgradable business model, whilst still purporting to give you the upgrade option (however impractical that now becomes).


And that's before even considering that the so-called "standard" already has so many different variations (CAMM2 vs LPCAMM2) that they aren't even necessarily going to be interchangeable between, say, your laptop and a possible desktop PC.

Don't think any of these points are much of a concern at all on an enthusiast-grade motherboard. Perhaps it's because you're speaking from the perspective of someone who doesn't upgrade that often or isn't interested in the performance benefits.

1. In terms of losing physical DIMM slots, there is no "cost" in terms of drawbacks. Speed and latency are the spearheads that incite change in this space, and if something offers improvements then adoption makes sense. Space saving and airflow are also welcome benefits. For enthusiasts, monoblocks that cover both the memory array and the CPU become a far more practical proposition.

2. "Adding some more RAM" ultimately isn't an issue, either. Why? Because high-speed binned CAMM modules aren't to be mixed in the same way that conventional memory kits aren't. They're binned by the vendors in the density they're sold. So another non-issue unless wanting to run close to stock, in which case perhaps these products aren't for you.
 

Which is fine, but then why not go the whole hog and just integrate the fastest memory you can onto the motherboard?

Overall it would reduce cost and improve signal integrity (and therefore likely speed), by removing the connectors and separate PCB.
 

Cost, for one. Also, you can't overlook that some of these kits are heavily overclocked. Baking a 9000MT/s+ memory kit onto the board when some CPU samples can't unconditionally achieve those speeds isn't ideal. Some users have a difficult enough time comprehending that frequencies in the upper echelons are harder to achieve, even when they've proactively gone out and bought said kit separately.
 

As opposed to buying CPUs, motherboards and CAMM modules until you stumble on a selection that plays nice together? Pretty sure people would be begging to have the memory baked into the motherboard at that point. You do understand that what Armageus is describing would be faster than anything that could ever be achieved with CAMM, and actually cheaper?

The big question: where do I connect my RGB controller?
 
I'm looking forward to CAMM2-based boards and would prioritise that feature when it comes to replacing my current setup.

I don't think it has been mentioned, but airflow should be better in a case - especially so with a big air cooler and a traditional front-to-back airflow design.
Also no worries about DIMM height compatibility with air coolers.

Looks clean, allows higher speeds. It's a win for me.
 
Which is fine, but then why not go the whole hog and just integrate the fastest memory you can onto the motherboard?

Overall it would reduce cost and improve signal integrity (and therefore likely speed), by removing the connectors and separate PCB.
When it eventually becomes necessary, I suspect we will skip that step and go straight to on-package RAM. There are already CPUs available that are packaged with HBM, so we know it can work well. As much as I appreciate upgradability, going from GB/s of bandwidth to TB/s is an enticing prospect. Too expensive for now, though.
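To put rough numbers on that jump (back-of-envelope only - the 1024-bit bus per HBM stack and HBM2e-class transfer rate below are my assumptions, not anything quoted in this thread):

```python
# Rough peak-bandwidth arithmetic for on-package HBM vs DIMM-based DDR5.
# Assumes a 1024-bit bus per HBM stack and an HBM2e-class 3.2 GT/s transfer
# rate -- illustrative figures, not a spec sheet.

def hbm_stack_gbs(gt_per_s: float, bus_bits: int = 1024) -> float:
    """Peak GB/s for a single HBM stack."""
    return bus_bits / 8 * gt_per_s

stacks = 4
per_stack = hbm_stack_gbs(3.2)  # ~410 GB/s per stack
print(f"{stacks} stacks: ~{stacks * per_stack / 1000:.1f} TB/s aggregate")
print("Dual-channel DDR5-6000 for comparison: ~0.096 TB/s")
```

Four stacks at those speeds lands around 1.6TB/s, which is where the GB/s-to-TB/s comparison comes from.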
 
Personally I would prefer them to go quad-channel rather than chase high-speed memory to increase bandwidth.
You have always had HEDT / Workstation options for that as the desktop class has always been a dual-channel platform.

However, an interesting bit of information: X299 was quad-channel and used to push around 110GB/s to 120GB/s of bandwidth @ DDR4-4000, whereas dual-channel DDR5-6000 offers around 96GB/s, DDR5-8000 up to around 128GB/s, and DDR5-10000 up to around 160GB/s. The other side effect of running memory quicker is that it reduces latency, and several timings become less critical.
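For anyone wanting to sanity-check those figures, they all fall out of the same formula - channels × 8 bytes (64-bit bus per channel) × transfer rate. A quick sketch:

```python
# Theoretical peak bandwidth: channels x 8 bytes (64-bit bus per channel) x MT/s.
# Real-world throughput lands below these ceilings, hence ~110-120GB/s measured
# on X299 against its 128GB/s theoretical peak.

def peak_bandwidth_gbs(channels: int, mt_per_s: int) -> float:
    return channels * 8 * mt_per_s / 1000

for name, channels, mt in [
    ("Quad-channel DDR4-4000 (X299)", 4, 4000),
    ("Dual-channel DDR5-6000", 2, 6000),
    ("Dual-channel DDR5-8000", 2, 8000),
    ("Dual-channel DDR5-10000", 2, 10000),
]:
    print(f"{name}: {peak_bandwidth_gbs(channels, mt):.0f} GB/s peak")
```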
 
Power and heat also increase with higher frequencies. HEDT doesn't exist any more - you have to spend 2x+ on the CPU + mobo + RAM to get quad-channel now, and it's registered RAM at that.
 

Increased power via the PMIC will increase heat, not so much frequency. However, if you have set up DDR5 correctly it will remain below 40°C even at 1.6V or more. As an example, you can achieve 6000C28 with fairly tight timings on SK Hynix A-die at around 1.55 to 1.62V depending on IC quality, or you can go the other way with 8800C38 and fairly tight timings (24GB SK Hynix M-die), which is also around the 1.55 to 1.60V mark.

HEDT has become the Workstation segment now, but that's semantics. Yes, a workstation naturally costs a lot more money, but depending on your application it will outperform its desktop counterpart.
 
Power scales with frequency: higher frequency = more power, and more power = more heat. This is physics.
I don't want to mess with RAM settings - I just want to plug it in and have it work at the advertised speed, so anything they can do to make that happen is welcome. X99 was the best HEDT platform; it's sad that HEDT is gone.
 

I would disagree, as per my example above.

If you go with the plug-and-play method you describe above, which would run hotter?

DDR5-6000 CL28-36-36-96 1.40V

DDR5-8000 CL40-48-48-128 1.35V
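Worth noting as an aside that these two kits are nearly identical on first-word latency, since latency in nanoseconds is just CL divided by the actual clock (which runs at half the transfer rate):

```python
# First-word latency: CL cycles x clock period. The DDR clock runs at half
# the transfer rate, so t_ns = 2000 * CL / (MT/s).

def first_word_latency_ns(cl: int, mt_per_s: int) -> float:
    return 2000 * cl / mt_per_s

print(f"DDR5-6000 CL28: {first_word_latency_ns(28, 6000):.2f} ns")  # ~9.33 ns
print(f"DDR5-8000 CL40: {first_word_latency_ns(40, 8000):.2f} ns")  # ~10.00 ns
```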
 
The 8000 kit will be hotter, but that's not the full story: going above 6400 changes the Infinity Fabric ratio, and DDR5 has a lot of other settings that also affect the result. Just comparing DDR5-6000 CL28-36-36-96 1.40V vs DDR5-8000 CL40-48-48-128 1.35V doesn't show all the other factors. Users on overclock.net are using RAM coolers to hit 8000+, and not all CPUs can maintain 8000 speeds at any voltage.
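As a very rough sanity check on the 8000 kit running hotter, assume simple CMOS-style dynamic power scaling, P ∝ V² × f. That's a big simplification - it ignores the PMIC, termination and idle draw - but it shows the frequency increase outweighing the small voltage difference:

```python
# Crude relative dynamic power estimate: P ~ V^2 * f (CMOS switching
# approximation). Ignores PMIC efficiency, termination and idle power --
# an illustration only, not a thermal model.

def relative_power(voltage: float, mt_per_s: int) -> float:
    return voltage ** 2 * mt_per_s

kit_6000 = relative_power(1.40, 6000)  # DDR5-6000 CL28 @ 1.40V
kit_8000 = relative_power(1.35, 8000)  # DDR5-8000 CL40 @ 1.35V

print(f"DDR5-8000 @ 1.35V: ~{kit_8000 / kit_6000:.2f}x the dynamic power "
      f"of DDR5-6000 @ 1.40V")  # ~1.24x
```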
 
Power scales with frequency: higher frequency = more power, and more power = more heat. This is physics.
I don't want to mess with RAM settings - I just want to plug it in and have it work at the advertised speed, so anything they can do to make that happen is welcome. X99 was the best HEDT platform; it's sad that HEDT is gone.

Bit of a strawman when speaking about DRAM. Yes, frequency and current are intrinsically related, but there isn't much current to speak of here, so that argument is weak.
 
The 8000 kit will be hotter, but that's not the full story: going above 6400 changes the Infinity Fabric ratio, and DDR5 has a lot of other settings that also affect the result. Just comparing DDR5-6000 CL28-36-36-96 1.40V vs DDR5-8000 CL40-48-48-128 1.35V doesn't show all the other factors. Users on overclock.net are using RAM coolers to hit 8000+, and not all CPUs can maintain 8000 speeds at any voltage.

Changing the goalposts a bit here, but I'll address these:

The main area of heat on DDR5 is the PMIC, as that's doing all the voltage regulation - more voltage means more heat, which is why I posted those two examples. Yes, I am aware both of those run different ICs, but it was more to highlight a point. Furthermore, once my X870 and Z890 systems arrive, I can post some stock-value examples using (24GB) SK Hynix M-die. DDR5 also has a handful more voltages than DDR4: on both platforms you have VDD and VDDQ, which need balancing at the upper end, and then each platform has its own extra voltages which form part of the IMC.

Yes, I am aware the Infinity Fabric changes from a 1:1 ratio to 2:1 above 6400. However, if you are running custom timings, or if you have copied Buildzoid's timings, you may find that your CPU will not run 6400 at a 1:1 ratio while tweaking FCLK speeds. Above 8000 on AMD, which I have run on my X670E Hero, isn't as stable as on an X670E Gene, but that's the usual 2-DPC board vs 1-DPC board difference. When I say it isn't as stable, I mean I hit the limits of that board at around 8000CL40, which worked fine, but below CL40 it was showing errors, whereas a Gene will just motor along. So I would say motherboard choice is important if you want to do that - go with a 1-DPC board. The CPU IMC will have some variation, but the three 7950X3Ds here seem to manage it without too many issues.
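For anyone following along, here is roughly how those clock domains relate on AM5 - a sketch based on my understanding of Zen 4, so treat the exact relationships as an assumption rather than vendor documentation:

```python
# Sketch of AM5 (Zen 4) memory clock domains. MEMCLK is half the DDR transfer
# rate; UCLK matches MEMCLK in 1:1 mode or halves in 2:1 mode; FCLK runs
# asynchronously (commonly ~2000-2200MHz). Assumed behaviour, not a spec.

def am5_clocks(ddr_mt: int, uclk_ratio: str = "1:1", fclk_mhz: int = 2000) -> dict:
    memclk = ddr_mt / 2                                  # DDR: 2 transfers per clock
    uclk = memclk if uclk_ratio == "1:1" else memclk / 2
    return {"MEMCLK_MHz": memclk, "UCLK_MHz": uclk, "FCLK_MHz": fclk_mhz}

print(am5_clocks(6000, "1:1"))  # MEMCLK 3000, UCLK 3000, FCLK 2000
print(am5_clocks(8000, "2:1"))  # MEMCLK 4000, UCLK 2000, FCLK 2000
```

The halved UCLK in 2:1 mode is part of why going past 6400 isn't automatically a win despite the extra bandwidth.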

Outside of changing the ratio from 1:1 to 2:1, the primary values (CLXX-XX-XX-XX) will change, but that's all normal stuff. Secondary and tertiary timings, unless set manually, will be auto-set by the board's algorithms - the same as using XMP 1 on an Asus board, which auto-sets the secondary and tertiary values. I don't believe I have missed anything else, other than SoC voltage or FCLK changes.

On the Intel side, you are not going to hit 8000 or above on a 2-DPC board, so you could argue the motherboard is the more important choice there. Yes, you also have IMC variation from chip to chip, but for the most part 8000 is fairly straightforward to get running on a 1-DPC board (Apex / Dark), unless you are Buildzoid. If you are pushing for the top end of memory on the Intel side, you are going to need to start binning CPUs, motherboards and RAM, which can get expensive. By top end, I mean running at 8800 or higher.

Going back to my original point about setting up DDR5 correctly: my example involves removing the stock G.Skill heatsinks - which are rubbish, by the way, as they have no pad on the PMIC - and replacing them with a set of copper Bitspower heatsinks. On one of my Apex boards, on an Intel platform, that runs VDD 1.6V at ~40°C even under heavy load.

 

Gigabyte's Z890 Aorus Tachyon ICE motherboard is top end - it will probably cost over £700 if you want one with a CAMM2 memory slot.
 

Yeah, that's going to put it out of reach for most people. No point if it's just for the ultra high end.
Maybe it's an early adopter tax and it will become more widespread in the future.
 