Radeon Resizable Bar Benchmark, AMD & Intel Platform Performance

I can already answer that: you just need a Ryzen 5000 series CPU and an RDNA2 GPU for the best SAM performance across multiple games. As to why some games show big gains and some show none, that's above my pay grade. What I do know is that there is no logical reason to disable SAM with the configuration above, which is why it's automatically enabled during driver installation on compatible systems. It's not enabled globally because there's a long list of games where performance suffers. That's why the excuses used to justify keeping it off don't wash with me.

If HUB's testing had been done on older CPUs, sure, I'd go along with keeping it off. They don't test on older CPUs though, not for their hardware/game benchmark tests for the most part. Typically you don't want to artificially limit system performance when testing graphics cards, as you want to show the best possible performance each card can offer.

If you want to show limited performance on older CPUs where bottlenecks may be present, fair enough. That's a different video though.

Again, I'm not disputing any of those bits...... :cry:

I just want to know why some games see no benefit or, "arguably" in some cases, a decline. I.e. despite what you and a couple of others are trying to have us believe..... it doesn't come across as just being a simple on/off switch if results are all over the place depending on game and hardware....

Surely it is in amd's best interest to ensure that game developers know how to get the best from SAM/rebar? Hence is it not worth bringing it to the attention of whoever it is at amd? It would be a good PR piece at the very least, as you like to make clear many times now how much superior SAM is to resize bar, even though they are technically the exact same or, to put it better, achieve the same objective, i.e. this:

[attached image: nSMnp6s.png]


If he's not going to believe actual users with the issue and reputable sites such as pcgh and computerbase, he's not going to believe that bloke. :p

And there we go again with this point... Where have I or others stated we don't believe the likes of tommy, gerard and those 2 tech sites?

Again, see this point:

Except that's the thing... I and others have "noted" it and offered our inputs as to "potential" reasons why certain people are getting that issue; no one is saying it's not happening for "certain" systems/users...

It's a certain band of people who don't want to look at the bigger picture or "acknowledge" some other "facts", i.e.:

Why are some 3090 users experiencing fps drops?
Why does it seem to be a widespread problem across a whole range of nvidia gpus and not amd?
Why did the likes of tommy experience this issue even at 1440P, using FSR and with NO HD texture pack? (Although he recently stated that no longer happens, he didn't answer my question as to whether that was from upgrading his cpu or/and adding 16GB more RAM.....)
Why is my 3080 not showing the issue (unless I enable rebar and don't use FSR)?
Why have the developers stated they are looking into the issues if it's purely just vram amount and nothing else (even when people are experiencing the issues without the HD texture pack)?
Why do HUB etc. not note the same experience with the 3080 in their recent 3080 vs 6800xt video? Perhaps because they didn't enable rebar in that testing?
Why does the FPS remain at single digits and not return to higher FPS like it does in every other game when there is a legit shortage of vram, i.e. as per my cp2077 video? Maybe a vram management issue? Hence why the developers might be looking into it......

These are the questions that no one can or will come back on after all this time..... So it's not the clear cut "vram and nothing else" that a small minority are making it out to be when you look at the whole picture, hence the:

Rather than just taking a valid observation and knowing this is indeed a thing to consider

Being very true but it's one side who aren't doing that.....

Time for a popcorn.gif trying to watch Wrinkly explain something he does not understand. :cry:

Pretty sure wrinkly works in development, in which case, he will be a lot more clued up than the majority here :)

I 100% said multiple times that happened ONLY while trying to recover the game by reducing settings after the arse falls out of it!

ONLY happened AFTER the game ran out of vram on 4K HD texture pack/RT/max settings and then trying in-game to recover it.

(Remember, trying to recover a slideshow here.) It would not recover no matter what settings I reduced, all the way down to 1440p with FSR.

If it runs out, forget recovering WITHOUT a restart.

If you reload the game with reduced settings, of course it runs at 1440p with FSR!



I gave up replying to you about it because you don't listen!

The TPU FC6 performance slides state: Max Details, RT off. I don't see any mention of running the HD pack either:

https://www.techpowerup.com/review/far-cry-6-benchmark-test-performance/4.html

Well maybe you should have made that more clear the previous times I asked you to explain :cry: Details people, they're important ;)

Ok, but you do realise that from that "same" article and your very own quote, he stated in the summary right here:

What's really interesting is VRAM usage. I measured well over 10 GB with the 4K, HD Texture pack, and ray tracing combo, which does stutter sometimes on the 10 GB GeForce RTX 3080, but runs perfectly fine with cards offering 12 GB VRAM or higher. I guess I was wrong when I said that the RTX 3080's 10 GB will suffice for the foreseeable future.

I'm amazed at your patience @Nexus18.

Bottom line: Microsoft, Nvidia and Sony all plumped for 10GB this generation, a 25% increase over the previous generation for PC. Two highly unoptimised AMD-sponsored titles require more on PC, therefore Microsoft, Nvidia and Sony all got it wrong according to our resident knitting circle. Elden Ring is a good example of how to stream data; at 1440p it uses 4-5GB of VRAM, with very little CPU/GPU overhead.

I suspect Nvidia just isn't putting the effort into RBAR at the moment as they have no incentive to do so while developing both Lovelace and Hopper. But as you say, it would be good to get some low-level input.

Someone has to educate these folks ;) :p :D :cry: Thankfully it is only a small class to teach :D
 
Your System RAM is magnitudes slower than your VRAM; any slowdown in texture streaming is a slowdown in performance.

The system will not do it unless it really needs to, but if your texture files are too large they end up shunted to System RAM. What you get then is a bottleneck in streaming, and you end up with an effective memory bandwidth of an integrated GPU.

Tell me, how do you think a 50GB texture pack fits within a 12GB video card?
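To illustrate the spill-over point above, here is purely a made-up sketch (not any driver's or engine's actual logic, and the 10GB budget and texture sizes are arbitrary): once the VRAM budget is used up, whatever else the game wants resident ends up in System RAM and has to be streamed over PCIe at a fraction of the bandwidth.

[CODE]
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Made-up illustration of spill-over: place textures into VRAM until the
// budget is exhausted, then fall back to System RAM for the rest.
enum class Pool { Vram, SystemRam };

struct Texture {
    std::string   name;
    std::uint64_t bytes;
    Pool          residency = Pool::Vram;
};

constexpr std::uint64_t GiB = 1ull << 30;

void placeTextures(std::vector<Texture>& textures, std::uint64_t vramBudget) {
    std::uint64_t used = 0;
    for (auto& t : textures) {
        if (used + t.bytes <= vramBudget) {
            t.residency = Pool::Vram;      // fits in the fast pool
            used += t.bytes;
        } else {
            t.residency = Pool::SystemRam; // spilled: streamed over PCIe from here on
        }
    }
}

int main() {
    std::vector<Texture> scene{{"base_assets", 8 * GiB}, {"hd_pack_region", 4 * GiB}};
    placeTextures(scene, 10 * GiB);        // arbitrary 10GB card for the example
    for (const auto& t : scene)
        std::printf("%s -> %s\n", t.name.c_str(),
                    t.residency == Pool::Vram ? "VRAM" : "System RAM");
}
[/CODE]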
 
I'm a self-taught developer who started with the 1K ZX81 and a copy of Programming the Z80 by Rodney Zaks. I also did some 6502 and 68000 before landing on x86 at 16, where I started professionally developing bespoke systems. At 21 I started my own little company which still, at 53, affords me a somewhat relaxed lifestyle. I mention my early experience with software as it taught me to be efficient with both code and data.

Then how is it that you do not understand the difference in texture streaming from VRAM vs System Memory?
 
Tell me, how do you think a 50GB texture pack fits within a 12GB video card?

The 50GB texture pack is for the whole game; the whole thing doesn't sit in VRAM. You don't play the whole game at once, hence "Texture Streaming": it loads in the bits as and when needed.
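That working-set idea looks roughly like the following. This is a toy sketch of my own with made-up names and sizes, not any engine's real streaming code: only the textures the current scene references stay resident, and the least recently used ones get evicted to make room, which is how a 50GB pack can run on a 12GB card.

[CODE]
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Toy LRU-style residency set: only textures referenced by the current scene
// stay in VRAM; the least recently used ones are evicted to make room.
class TextureWorkingSet {
public:
    explicit TextureWorkingSet(std::uint64_t budgetBytes) : budget_(budgetBytes) {}

    // Called for every texture the visible geometry references this frame.
    void request(const std::string& id, std::uint64_t bytes) {
        if (auto it = index_.find(id); it != index_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second);   // already resident: bump to front
            return;
        }
        while (used_ + bytes > budget_ && !lru_.empty()) { // evict until the newcomer fits
            used_ -= lru_.back().second;
            index_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(id, bytes);                     // "stream in" the newly needed texture
        index_[id] = lru_.begin();
        used_ += bytes;
    }

private:
    using Entry = std::pair<std::string, std::uint64_t>;
    std::uint64_t budget_;
    std::uint64_t used_ = 0;
    std::list<Entry> lru_;
    std::unordered_map<std::string, std::list<Entry>::iterator> index_;
};

int main() {
    TextureWorkingSet vram(4ull << 30);     // pretend 4GB budget for the demo
    vram.request("district_a", 2ull << 30);
    vram.request("district_b", 2ull << 30);
    vram.request("district_c", 2ull << 30); // evicts district_a, the least recently used
}
[/CODE]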
 
Oh here comes the denial playbus!

[attached image: Playdays_Logo.jpg]


If he's not going to believe actual users with the issue and reputable sites such as pcgh and computerbase, he's not going to believe that bloke. :p

I like his channel though and do watch his content sometimes. :)

:cry: true, not much point I guess.

I 100% said multiple times that happened ONLY while trying to recover the game by reducing settings after the arse falls out of it!

I gave up replying to you about it because you don't listen!

Ahh yes, consistent and annoying, don't waste your breath. He's been on ignore for weeks. Doesn't get it. :cry:

You're comparing Game Consoles to high end Graphics Cards.

Prepare for wronkly, another guy that will argue black is white.

Time for a popcorn.gif trying to watch Wrinkly explain something he does not understand. :cry:

This guy seems to pop his head from around a rock sporadically and also resorts to dad joke level insults after exhausting his technical competency.

Then how is it that you do not understand the difference in texture streaming from VRAM vs System Memory?

Clueless. Maybe offer 'phone a friend'?
 
Loads in bits from where?
I mean, if you want him to say hard drive then that's all well and good, but how has a game like doom (the new one not the classic) managed to run on gpus for the last 5 years?

It loads what it needs to into the vram. Not all of it. The transfer rate can be whatever you like, but you still can't get a quart into a pint pot.

But I'm not a developer. So I don't know really.
 
I mean, if you want him to say hard drive then that's all well and good, but how has a game like doom (the new one not the classic) managed to run on gpus for the last 5 years?

It loads what it needs to into the vram. Not all of it. The transfer rate can be whatever you like, but you still can't get a quart into a pint pot.

But I'm not a developer. So I don't know really.

Well he claimed that you can't stream in data as and when needed as it would cause too much of a bottleneck, and then went on to tell me that data is streamed in as and when needed :eek:
 
Loads in bits from where?

Pre-loading. From the drive

The GPU has levels of "cache"

In descending order by priority and speed.

Level 1
Level 2
If it's an RDNA2 GPU you also have a Level 3
VRAM
System RAM
Main drive

Streaming starts with L1, what doesn't fit there is moved to L2, then L3 if you have an RDNA2 GPU, and then VRAM. If you have 9GB of critical files in your 10GB VRAM and it needs to load a 4GB texture pack, it will load it to System RAM and stream it from there.

You are never "Streaming" from the drive unless your System RAM is also full. Don't confuse pre-loading and streaming.

Loading is fetching the necessary data, Streaming is pushing it to the render pipe.
 
I'm a self-taught developer who started with the 1K ZX81 and a copy of Programming the Z80 by Rodney Zaks. I also did some 6502 and 68000 before landing on x86 at 16, where I started professionally developing bespoke systems. At 21 I started my own little company which still, at 53, affords me a somewhat relaxed lifestyle. I mention my early experience with software as it taught me to be efficient with both code and data.

Very nice :cool:

Posted this in one of the many vram threads but found it rather amusing given this whole fiasco :D

Funnily enough, it's rather ironic: over the last 1-2 weeks on my project (completely different scenario, but it will be much the same for "game" development too), we have been dealing with a performance issue for an app in production, essentially end users complaining about a sluggish/non-responsive experience at times. RAM usage was pretty much maxed out and, as per usual, everyone's initial thought was "oh, just add more RAM".... despite the fact that we had already done this 3 times before, were far exceeding the requirements and, not to mention, the number of users was limited. With some proper investigation, i.e. monitoring, checking log files and some testing, it turned out to be an issue with the "application" not managing nor clearing allocated RAM after processing said files; needless to say, we got this fixed the "proper" way by having the developers fix it on the "app" side (a rough sketch of that kind of leak is below). Given that the game initially released with a 16GB VRAM requirement and, after said patch, dropped to 12GB VRAM, it sounds like a similar story here, at least with either the game or/and people's configurations, since it's not happening to "everyone", hence the lack of reports.

Obviously I'm not a game developer, but when you work in the industry with various development teams and products, it provides a good glimpse into how things work, especially in a proper development environment/project where you "need" to try and adopt/implement good coding practices, something which you don't quite understand/get from just doing side projects for fun in your free time..... I imagine if the supposed "game developers" on here were so good at it, they would be working for an actual games development company, or even just at a regular development company....
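The kind of bug described above tends to look something like this. Purely a hypothetical C++ sketch, nothing to do with the actual app or game in question: results get cached per processed file and never released, so memory only ever grows and "add more RAM" just delays the symptom, whereas the proper fix releases the allocation once the work is done.

[CODE]
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical illustration only: a per-file cache that is never cleared,
// so memory use grows with every file ever processed.
class FileProcessor {
public:
    void processLeaky(const std::string& path) {
        std::vector<char>& buffer = cache_[path]; // one entry per file, kept forever
        buffer.assign(1 << 20, 0);                // stand-in for the real parsed data
        // ... do the actual work with buffer ...
    }                                             // buffer stays allocated after we're done

    // The "proper" fix: scope the allocation to the work, so RAM use stays flat.
    void processFixed(const std::string& /*path*/) {
        std::vector<char> buffer(1 << 20, 0);
        // ... do the actual work with buffer ...
    }                                             // released here

private:
    std::unordered_map<std::string, std::vector<char>> cache_;
};

int main() {
    FileProcessor p;
    for (int i = 0; i < 100; ++i)
        p.processLeaky("report_" + std::to_string(i) + ".csv"); // RAM climbs and never comes back down
}
[/CODE]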

Oh here comes the denial playbus!

[attached image: Playdays_Logo.jpg]




:cry: true, not much point I guess.



Ahh yes, consistent and annoying, don't waste your breath. He's been on ignore for weeks. Doesn't get it. :cry:



Prepare for wronkly, another guy that will argue black is white.



This guy seems to pop his head from around a rock sporadically and also resorts to dad joke level insults after exhausting his technical competency.



Clueless.

:cry:

Generally people use ignore because:

1. they can't come back to any arguments
2. don't like to read/hear the truth
3. are clueless hence the 2 points above

Given you can't differentiate between the 3070 8gb and 3080 10gb in "steves comment", well that just proves them points :D
 
Pre-loading. From the drive

The GPU has levels of "cache"

In descending order by priority and speed.

Level 1
Level 2
If it's an RDNA2 GPU you also have a Level 3
VRAM
System RAM
Main drive

Streaming starts with L1, what doesn't fit there is moved to L2, then L3 if you have an RDNA2 GPU, and then VRAM. If you have 9GB of critical files in your 10GB VRAM and it needs to load a 4GB texture pack, it will load it to System RAM and stream it from there.

You are never "Streaming" from the drive unless your System RAM is also full. Don't confuse pre-loading and streaming.

Didn't know any of this. Thank you.
 
Ok, since you seem to know a lot on this topic @humbug, you should be able to answer a question I have been asking for a long time now; no one else has attempted to answer it, but you seem to be up to the job :)

So, let's get to it:

1. Can you explain why, in the case of tommy's 3080 in FC6, when his FPS drops to single digits, it does not return to normal if he wanders off to a new area or looks elsewhere, or at the very least go up when he reduces the settings, as would happen in 99% of games where vram symptoms manifest?

Yet, as per my videos, you can see the vram exceeding 10GB and it doesn't stutter or drop to single digits. Why is that?

Whereas in my showcase of CP 2077, when using several 4-8K texture packs, you can clearly see frame latency issues and fps drops as it gets closer to 10GB; however, once leaving the area, the fps returns to normal figures and frame latency goes back to normal:


Can you explain?

Genuinely would love to know about this, as I said, I'm not a "game" developer so educate me please
 
Pre-loading. From the drive

The GPU has levels of "cache"

In descending order by priority and speed.

Level 1
Level 2
If it's an RDNA2 GPU you also have a Level 3
VRAM
System RAM
Main drive

Streaming starts with L1, what doesn't fit there is moved to L2, then L3 if you have an RDNA2 GPU, and then VRAM. If you have 9GB of critical files in your 10GB VRAM and it needs to load a 4GB texture pack, it will load it to System RAM and stream it from there.

You are never "Streaming" from the drive unless your System RAM is also full. Don't confuse pre-loading and streaming.

Loading is fetching the necessary data, Streaming is pushing it to the render pipe.

At the start you asked where I thought data, destined for the GPU, was streamed from. I answered system ram and then drive storage. You could also add in cloud if you wish. Yes there are various forms of cache, both hardware and software at each stage. Where did I appear confused?
 
@Wrinkly you can answer your own question if you would just ask yourself the right question.

When moving texture files to a render pipeline, which is faster?

DDR4 @ 60 GB/s
Or
GDDR6 @ 600 GB/s
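To put rough numbers on that, back-of-the-envelope only, using the two bandwidth figures above and an assumed 2GB burst of texture data:

[CODE]
#include <cstdio>

int main() {
    const double textureGB  = 2.0;    // assumed burst of texture data to move
    const double ddr4_GBps  = 60.0;   // system RAM figure quoted above
    const double gddr6_GBps = 600.0;  // VRAM figure quoted above

    std::printf("From VRAM:           %.1f ms\n", textureGB / gddr6_GBps * 1000.0); // ~3.3 ms
    std::printf("From system RAM:     %.1f ms\n", textureGB / ddr4_GBps  * 1000.0); // ~33.3 ms
    std::printf("60 fps frame budget: %.1f ms\n", 1000.0 / 60.0);                   // ~16.7 ms
}
[/CODE]

I.e. the same data that fits comfortably inside a frame when it comes from VRAM blows straight past a 60fps frame budget when it has to come from system RAM.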

Ok, since you seem to know a lot on this topic @humbug, you should be able to answer a question I have been asking for a long time now; no one else has attempted to answer it, but you seem to be up to the job :)

So, let's get to it:

1. Can you explain why, in the case of tommy's 3080 in FC6, when his FPS drops to single digits, it does not return to normal if he wanders off to a new area or looks elsewhere, or at the very least go up when he reduces the settings, as would happen in 99% of games where vram symptoms manifest?

Yet, as per my videos, you can see the vram exceeding 10GB and it doesn't stutter or drop to single digits. Why is that?

Whereas in my showcase of CP 2077, when using several 4-8K texture packs, you can clearly see frame latency issues and fps drops as it gets closer to 10GB; however, once leaving the area, the fps returns to normal figures and frame latency goes back to normal:


Can you explain?

Genuinely would love to know about this, as I said, I'm not a "game" developer so educate me please

I'll look at that later; if I can give you an answer, or not, I will get back to you.
 
Edited my post for @Wrinkly, could have just answered it here instead of writing this, time for me to get off this computer and have my dinner...
 
@Nexus18
Your Cyberpunk video above is the result of a pipeline bottleneck.

Basically, that much AI and level of detail on screen at once is reducing the speed at which the CPU can deliver and receive the command queue from the GPU.

I am no expert and couldn't tell you in fine detail what the main issue is; I just know the basics of what to look for.

Before RDNA, AMD GPUs could experience this issue in DX11 titles when GCN wasn't properly optimised for.

Edit - this is well worth a read
https://developer.nvidia.com/sites/all/modules/custom/gpugems/books/GPUGems/gpugems_ch28.html
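For what it's worth, one basic way to tell the two cases apart (in the spirit of that link rather than taken from it, and with the two timing functions stubbed as placeholders, not real profiler calls) is to compare how long the CPU spends building a frame against how long the GPU spends rendering it:

[CODE]
#include <cstdio>

// Placeholder numbers standing in for a real profiler / GPU timestamp queries.
double cpuFrameMs() { return 22.0; } // game logic, AI, draw-call submission
double gpuFrameMs() { return 9.0; }  // time the GPU spent rendering the same frame

int main() {
    const double cpu = cpuFrameMs();
    const double gpu = gpuFrameMs();
    if (cpu > gpu)
        std::printf("CPU-bound: GPU idles ~%.1f ms per frame waiting for work\n", cpu - gpu);
    else
        std::printf("GPU-bound: CPU idles ~%.1f ms per frame waiting on the GPU\n", gpu - cpu);
}
[/CODE]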
 
@Nexus18
Your Cyberpunk video above is the result of a pipeline bottleneck.

Basically, that much AI and level of detail on screen at once is reducing the speed at which the CPU can deliver and receive the command queue from the GPU.

I am no expert and couldn't tell you in fine detail what the main issue is; I just know the basics of what to look for.

Before RDNA, AMD GPUs could experience this issue in DX11 titles when GCN wasn't properly optimised for.


https://developer.nvidia.com/sites/all/modules/custom/gpugems/books/GPUGems/gpugems_ch28.html

Yeah, that's what I was thinking at first too, as I noticed the same kind of issue with my ryzen 2600 when getting into crowded areas, even with medium crowd density. However, that doesn't happen with the stock game on my ryzen 5600, i.e. when the 4-8k texture packs are removed, the problem goes away. Perhaps it is just a combination of everything going on in addition to the 4k-8k texture packs, i.e. overloading the cpu so it can't feed the rest of the system quickly enough? But then if you look at the stats, cpu usage is much the same overall throughout; it's mostly just the vram usage changing.

Good read that link too.
 