
Zen 3D V-Cache Ryzen CPUs May be Available in December 2021

Slightly off topic for this thread but also a continuation of recent conversations...

The announced recommended specs for Far Cry 6 are quite heavy on the CPU requirements.

For 1440p with RT they recommend a 5600X.

For 4K with RT they recommend a 5800X.

Now I have literally just received my new 5600X, but I'm thinking of sending it back and getting a 5800X, just for a bit of safety when it comes to future requirements.

Then again I am at 1440p.

Specs: https://twitter.com/FarCrygame/status/1433520107263832064/photo/1
 
They were clearly written by someone with a very loose idea of what they're doing, who just thought they should put in something that's 'one louder' for the higher resolution. Increasing resolution lowers your CPU usage because your GPU isn't rendering as many frames. If a 5600X is fine at 1440p, it is inherently fine at 4K too.
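
As a rough back-of-envelope illustration (all numbers invented, assuming the CPU's per-frame cost stays roughly the same regardless of resolution), the CPU's total load is just its per-frame prep time multiplied by however many frames the GPU actually gets through:

Code:
# Toy model, not measured data: assume the CPU spends ~4 ms preparing each
# frame (game logic, draw-call submission) regardless of resolution.
cpu_ms_per_frame = 4.0

# Hypothetical GPU throughput for the same scene at two resolutions.
gpu_fps = {"1440p": 100, "4K": 50}

for res, fps in gpu_fps.items():
    busy_ms = cpu_ms_per_frame * fps          # CPU work per second of gameplay
    print(f"{res}: CPU busy ~{busy_ms:.0f} ms per second ({busy_ms / 10:.0f}% load)")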
 

A Gigabyte X570 Aorus Elite should handle a 5800X nicely. I'm not sure about the cooler, since they say the 5800X can get hot. But then again, the 3D cache in the incoming series may improve gaming, though it's not certain which socket it will land on. No easy answer to this situation.
 
That is the weird anti-logic that has held until today and should be reversed, and maybe now is the moment in time.
It doesn't make sense that the lower resolution loads the CPU more than 4K loads the same CPU.

DirectX is broken.
 
Yes, possibly.

Looking at benchmarks generally (excluding the rare outliers), the 5600X and 5800X are around 5% apart.

And given the increase in resolution, it's weird for them to suggest a 5800X over a 5600X at 4K.

I think the difference here is RT. Perhaps at higher resolutions it does need more CPU power to compute, even though, yes, it is a GPU feature.

All I can think of.
 
You seem rather confused and have inadvertently answered your own query. The CPU isn't handling the work of drawing frames to the monitor - that's on the GPU, so the GPU is what's being given more work to do when the resolution increases and it becomes more time consuming to draw each frame. What the CPU does is feed information about what it (or the game via it) wants drawn by the GPU.

If you're getting 100fps in a game at 1080p, that's 100 frames per second that the GPU needs to know how to process. If you up the resolution to 4K and you're now only getting 25fps, the GPU only needs to be fed with enough information to process 25 frames every second. Obviously that means the CPU is doing less work. That's also why CPU bottlenecks ease at higher resolutions, because the burden of processing is shifted from the CPU to the GPU.

The CPU isn't doing less work per frame, but it's not having to provide as many of them because the GPU can't process them as quickly any more. If you compared two different graphics cards where one could achieve 100fps at 1080p and the other could achieve 100fps at 4K, the CPU load would be roughly equal and any bottleneck on the CPU side would return. It's more accurate to say that CPU bottlenecks are simply masked at higher resolutions.

It's perfectly logical and entirely makes sense. It has nothing to do with DirectX either. All graphics APIs work this way, be it DirectX, OpenGL, Vulkan or otherwise. There are certainly differences in how each (and how effectively each) utilises the CPU, but the basic concepts remain the same. The CPU is always feeding work to the GPU.
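
To put that two-GPU comparison into a toy model (timings made up purely for illustration): the delivered frame rate is capped by whichever of the CPU's frame-prep time or the GPU's render time is longer, so the CPU bottleneck never goes away at 4K, it's just hidden until the GPU is fast enough to expose it again.

Code:
# Toy model: frame rate is limited by the slower of CPU frame-prep and GPU render time.
def delivered_fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 8.0  # hypothetical CPU cost per frame -> caps the game at 125 fps

scenarios = {
    "GPU A at 1080p": 10.0,  # GPU is the limit  -> ~100 fps, CPU mostly busy
    "GPU A at 4K":    40.0,  # GPU far slower    -> 25 fps, CPU mostly idle
    "GPU B at 4K":     6.0,  # fast enough at 4K -> the CPU bottleneck returns
}

for name, gpu_ms in scenarios.items():
    fps = delivered_fps(cpu_ms, gpu_ms)
    cpu_load = min(1.0, cpu_ms * fps / 1000.0)
    print(f"{name}: {fps:.0f} fps, CPU ~{cpu_load:.0%} busy")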
 

Makes sense. But what about RT?

Maybe that's why the CPU recommendations from Ubisoft differ from 1440p to 4K?

I mean RT is compute based, not raster based, so maybe upping the resolution does increase CPU requirements.

Or maybe not. Once the picture is drawn (with the RT computed), the resolution it is rendered at doesn't necessarily increase the RT compute requirements on either the GPU or the CPU.

I have no idea.
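
For what it's worth, one simplified way to think about it (a made-up cost model, not how any particular engine works): the per-pixel ray work scales with resolution but runs on the GPU, while the CPU-side RT overhead usually cited, keeping the acceleration structure updated and submitting that extra work each frame, scales with scene complexity and frame count rather than pixel count.

Code:
# Simplified, hypothetical cost model: which RT-related costs grow with resolution?
def per_frame_costs(width, height, scene_objects):
    primary_rays = width * height   # GPU side: at least one ray per pixel
    bvh_updates  = scene_objects    # CPU-assisted: acceleration-structure refits,
                                    # driven by the scene, not by the resolution
    return primary_rays, bvh_updates

for label, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    rays, updates = per_frame_costs(w, h, scene_objects=50_000)
    print(f"{label}: {rays / 1e6:.1f}M primary rays (GPU), {updates} object updates (CPU)")

So per frame, the CPU-side portion wouldn't obviously grow with resolution, and since the frame rate drops at 4K there would be fewer of those per-frame updates each second, not more.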
 
With the latest Alder Lake leaks pointing to November and AMD's improved Ryzen coming in December, it seems like the competition is going to be fierce. AMD may pull an ace with the 3D cache.
 
My understanding is that simply enabling ray tracing does add some additional CPU overhead, but I've never seen any suggestion that there's any sort of scaling beyond that with resolution, or that it's something which adding more cores would alleviate. If you look at Digital Foundry's CPU testing of Cyberpunk 2077 with RT enabled, both the Intel and AMD chips perform essentially identically at every resolution. There's no scaling towards the higher core count parts when you go from 1080p to 1440p to 4K. They remain within margin of error relative to each other, bar the 1080p result for the 10600K, which seems like a testing error to me, as it's not replicated on the AMD side or with the 11600K. And Cyberpunk 2077 is an extremely CPU-heavy game even without RT enabled, so any sort of resolution or thread scaling from ray tracing ought to show up even more clearly.

https://www.eurogamer.net/articles/digitalfoundry-2021-intel-core-i9-11900k-i5-11600k-review?page=4
 

Very good find.

Hard to know whether the new consoles change this equation, though.

Probably not, as Cyberpunk 2077 is a 'next-gen' game and still follows the same logic.
 

No and no.
This is the wrong approach.

You have to load all your resources: if you have free CPU threads, give them work to process.
There is plenty to do, be it sound, physics, ray tracing, whatever.

Just load your sleeping CPUs!

:D
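
To be fair, that is roughly what engine job systems try to do: independent per-frame tasks (physics, audio mixing, animation, RT structure refits) get farmed out to a pool of worker threads. A minimal sketch of the idea (illustrative only; real engines use their own C++ job schedulers, and the task names here are invented):

Code:
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-frame jobs that don't depend on each other.
def simulate_physics():    return "physics done"
def mix_audio():           return "audio done"
def update_animations():   return "animation done"
def refit_rt_structures(): return "RT refit done"

jobs = [simulate_physics, mix_audio, update_animations, refit_rt_structures]

# Fan the independent jobs out across worker threads each frame,
# then wait for all of them before handing the frame to the GPU.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda job: job(), jobs))

print(results)

The catch the posts above are pointing at is that once that per-frame work is finished, the CPU still has to wait for the GPU before the next frame is worth preparing, so 'sleeping' threads at 4K are often just the CPU waiting on a slower GPU rather than untapped capacity.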
 
Do you mean that if it is on AM4 it will be for 500-series motherboards only?

I've no clue what the level of support would be. AMD usually leave a lot of flexibility to the motherboard makers. Intel needs Alder Lake to compete with Zen.
 

Yes, competition indeed; I'll look forward to an extra 10-25% performance with an additional 10-25% increase in prices. ;)

Get ready for a potential 6600X six-core that may be priced close to £300, together with whatever the equivalent Alder Lake ends up being.
 

25% may be a bit optimistic from a cache increase alone, though it will be good if AMD manages to achieve it.
 
We had around an 18-23% performance boost going from the 3600 to the 5600X, yet the price increased by 50%, from £200 to £300, and people still lapped it up, so these companies know that DIY PC builders are happy to pay more for less performance per pound spent.
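
Putting rough numbers on that value-for-money point (using roughly the figures quoted above, with ~20% taken as the performance uplift):

Code:
# Rough performance-per-pound comparison using the figures quoted above.
old_perf, old_price = 1.00, 200.0   # Ryzen 3600 baseline at ~£200
new_perf, new_price = 1.20, 300.0   # 5600X at roughly +20% performance, ~£300

change = (new_perf / new_price) / (old_perf / old_price) - 1
print(f"Change in performance per pound: {change:+.0%}")   # about -20%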
 
Agreed, but I am speculating about cache here. With the 5000 series I think the improvement came mostly from the IPC boost.
Yes, the prices went up, and by the looks of it they are heading for new peaks.
I think DIY PC builders are not that numerous, and in my opinion companies mostly cater to the general public.
 