
Poll: Ryzen 7950X3D, 7900X3D, 7800X3D

Will you be purchasing the 7800X3D on the 6th?


  • Total voters: 191
  • Poll closed.
The 7900X3D looks sweet for sure. Saw a leak that said it would be a 25% performance uplift at 1080p vs standard Zen 4. Not sure how relevant this will be at 1440p and 2160p, but it should be a nice increase for gaming. The only things putting me off are the ridiculously high motherboard prices and DDR5 not reaching the same maximum frequencies on the AMD platform as on Intel.
Are the AM5 memory frequencies typically limited by the CPU [generation] or the motherboard? Or, to ask the question another way: will later CPUs support faster memory frequencies on the same motherboard? Thanks.
 
The reason they are called L1 to L3 is their proximity to the cores.

L1 is the closest and L3 the furthest. Taking L1 and L2 and putting them above the core (like the stacked L3) would defeat the purpose of super-fast L1 and L2.

Think more in 3D, though: the paths can start and end in more optimised places, and apart from the vertical distance, things can occupy the same 2D space. Looking at a die shot, roughly half the area is cache. Moving all the L3 above would mean you could easily add 50% more cores and still have more L3 cache than a standard chip...
 
Are the AM5 memory frequencies typically limited by the CPU [generation] or the motherboard? Or, to ask the question another way: will later CPUs support faster memory frequencies on the same motherboard? Thanks.
Since there is only one data point it is hard to tell, but I'm leaning towards it being limited by the CPU.
1. Current-gen Intel Z790 motherboards can apparently push memory beyond 8000 MT/s, so motherboards in general should be fine.
2. First-gen Zen and Zen+ were famous for poor memory support. Correct me if I'm wrong, but later CPUs with improved memory controllers can run higher memory speeds even on old motherboards?
 
The IMC in both cases (AMD and Intel) is on the CPU, so the maximum memory frequency is CPU dependent.

That's part of the equation. The other, very important, part is the traces on the motherboard between the CPU socket and the RAM slots. These have to be optimised to support high frequencies.
 
It's nothing to do with fitting all the game data in cache.

The difference is how the game is accessing data. If it's written such that it's making random reads (usually due to lots of indirection), then it doesn't matter how big your cache is... the CPU is going to sit waiting for data to be fetched from RAM all the time.

If the game is written in a decent data-oriented fashion, organising logic such that it's doing lots of sequential reads, then the prefetch engine can work effectively and keep the cache stuffed with data *before* the CPU needs it, so there are minimal wasted cycles.

Simulations (like MSFS) lend themselves to data-oriented designs, as you typically have lots of data that needs unconditional logic run over it, with no branches, like the weather and fluid simulations.

Wait till Intel puts HBM on the CPU, then everything sits close to the CPU.
 
2. First-gen Zen and Zen+ were famous for poor memory support. Correct me if I'm wrong, but later CPUs with improved memory controllers can run higher memory speeds even on old motherboards?

Yes, it's true, memory controllers usually improve over time. Like you said, though, it's hard to tell what the limit is without more data points. Watching buildzoid's recent DDR5 videos, he seems pretty sure what motherboards are capable of just by assessing their topology and PCB, so my guess is that early AM5 and LGA1700 CPUs have poor memory controllers. Perhaps a Ryzen fan can correct me if I'm wrong, but I think the early B350/B450 and X370/X470 motherboards were initially thought to be a limiting factor for the CPUs, but now that they support Zen 2/3 it's become clear that (at least with BIOS updates) they actually weren't.
 
Wait till Intel puts HBM on the CPU, then everything sits close to the CPU.
Looking forward to the first CPUs to include HBM; it will be interesting to find out how bandwidth-starved modern CPUs are. It would be great if they added 32 GB of HBM and made installing DDR RAM optional, as most people would not need to buy extra RAM; it could also reduce the number of slots on the motherboard to two. Probably not going to happen for a long time, but Intel is doing it for servers, so who knows?
 
Wait till Intel puts HBM on the CPU, then everything sits close to the CPU.
On-die HBM will give masses of bandwidth which is useful for compute applications where you are crunching massive amounts of data.

In games, you're not dealing with particularly large amounts of data (not compared to compute). Performance bottlenecks in games are largely around cache usage and latency, which is why the X3D chips see such impressive results.
 
On-die HBM will give masses of bandwidth which is useful for compute applications where you are crunching massive amounts of data.

In games, you're not dealing with particularly large amounts of data (not compared to compute). Performance bottlenecks in games are largely around cache usage and latency, which is why the X3D chips see such impressive results.
Should help iGPU performance.
 
Hopefully not too long to wait for OCUK to have these after CES. I've got a 7900 XTX preordered and need a rig to put it in!

The 7900 XTX will probably have had a healthy price cut by the time the Zen 4 3D V-Cache chips are available (likely around March, according to rumours). The announcement is expected in January at CES.
 
The 7900 XTX will probably have had a healthy price cut by the time the Zen 4 3D V-Cache chips are available (likely around March, according to rumours). The announcement is expected in January at CES.
I think, considering how well it has sold, it may not come down much; we may just see some AIB cards cut to MSRP level.
 
It's nothing to do with fitting all the game data in cache.

The difference is how the game is accessing data. If it's written such that it's making random reads (usually due to lots of indirection), then it doesn't matter how big your cache is... the CPU is going to sit waiting for data to be fetched from RAM all the time.

If the game is written in a decent data-oriented fashion, organising logic such that it's doing lots of sequential reads, then the prefetch engine can work effectively and keep the cache stuffed with data *before* the CPU needs it, so there are minimal wasted cycles.

Simulations (like MSFS) lend themselves to data-oriented designs, as you typically have lots of data that needs unconditional logic run over it, with no branches, like the weather and fluid simulations.

From my own experience (which may not be as extensive as yours, feel free to correct me!) I've found that larger L3 caches have more impact on your first use case than your second. That doesn't mean the second isn't still the quickest, but the extra scope and range for branch prediction softens the blow of cache misses?

I've used Intel PCM on Xeon processors to figure out where our bottlenecks are, and noted that we get more of an uplift from larger L3 cache on the more algorithmic/logical threads. The data-processing threads tend to fit nicely within L2 anyway (once optimised), but larger or more varied datasets do see an increase. We do encoding of large data streams at times; those algorithms are too heavy to fit fully in L2, our hit rates decrease, and the L3 size often helps then.

As a side note, when we have not seen any performance difference between hardware generations with more L3/better features, it's normally been data-synchronisation issues between threads (constantly stalling the CPU) or, worse, heavy context switching on data processing, which gets very low hit ratios across all the caches.
 