
NVIDIA 4000 Series

For some reason not everyone is affected by stuttering, so you might be one of the lucky ones. Luckily the guy with the 3090 who helped me test was affected, which is what allowed the testing, but there are people with 10 gigs of VRAM or less who play FF7R with seemingly no stuttering (remember also that a 3090 wouldn't automatically solve it, as the game doesn't tune itself to fully utilise the VRAM out of the box). I tried to start a diagnosis thread on Reddit asking people for their system configurations to see if there was a pattern among those who don't get stuttering, but no one was interested and I even encountered hostility towards the idea. Those without stuttering would, I assume, still be affected by the texture quality issues though.

Also, I'm not sure what's going on with DirectStorage. It wouldn't surprise me if games shipped with the libraries themselves as some kind of DX12 module; people seem to be assuming it will arrive as a Windows feature update, but that might not be required.
 
Sadly I think TPU's analysis of VRAM usage is pretty bad; the only reviewer who handles this stuff properly, in my opinion, is Digital Foundry. For example, DF pointed out that FF7 Remake has a widespread stuttering problem, which has been proven to be caused by textures and shaders swapping in and out of VRAM. DF noted that every time the game stutters, commonly used performance metric tools don't pick it up, so someone analysing GPU-Z or Afterburner logs would see no performance issues. DF's own external tools (which they had to develop because they cover consoles a lot, and consoles don't expose their own internal tools) did pick up the stutters. They had the balls to tweet that the review industry got FF7R wrong, but sadly not the balls to put it in their video.

Another issue the review industry has with VRAM analysis is that they don't analyse texture streaming and texture quality. For example, an 8 gig VRAM card might get the same framerate on ultra textures as a 12 gig card, but if you examine the textures they might be lower quality. UE4 games don't have static texture quality; they have "variable" quality, which dynamically controls texture LOD based on VRAM and other resource availability (the texture quality setting in a UE4 game sets the maximum, it is not guaranteed). I remember tinkering on the UE4 console in FF7R trying to turn this behaviour off, but it seems difficult to do. Then of course you also have texture pop-in, which again increases when VRAM is under stress. (On the 10 gig 3080 thread someone kindly made screenshots for me from a game where he said he couldn't tell the difference between texture quality levels, and I could immediately see the difference, so different gamers have different sensitivity to it.) For me texture quality is one of the most important quality metrics in games; it's very jarring to see a high-res character standing next to a low-res blurry wall.
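If anyone wants to see this behaviour for themselves, stock UE4 exposes it on the console (you normally need a console unlocker in shipping builds, and Square's customised build may rename or strip some of this, so treat these as the standard UE4 names rather than anything FF7R-specific):

    stat streaming           shows the texture streaming pool size and how much of it is in use
    r.Streaming.PoolSize     typed with no value, prints the current streaming budget in MB
    r.TextureStreaming 0     disables streaming entirely (will hammer VRAM and likely crash an 8GB card)

If the pool is constantly over budget, the engine quietly drops texture mips rather than the framerate, which is exactly why the problem never shows up in an FPS graph.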

I compared FF7R on the PS5 to the PC, with the PC set to high detail textures, the maximum setting. In chapter 8, as an example, the PC was loading the same very low-res, PS3-quality textures the PS4 version had, while the PS5 uses a much higher LOD; in other areas the PC matches the PS5. Chapter 8 also has a lot of pop-in issues on the PC, which suggests it's demanding VRAM-wise. In areas like Sector 7 (chapter 3), and especially in the Yuffie DLC where they expanded the explorable area, there are various stutters that can happen due to the sheer number of objects being streamed in and out.

I found someone with a 3090 who was prepared to help me with testing, and we did it live on Discord. We discovered together that if you manually increase the VRAM budget on the UE4 console in FF7R, most of the issues disappear: chapter 8 textures look like the PS5's, stuttering doesn't vanish but it's 90% better, and pop-in goes away. However, by default the game doesn't increase the VRAM budget by much on a 3090 by itself, so if you were just swapping hardware like the TPU testers seem to do, you're not really testing 24 gigs of VRAM, because the game isn't being allowed to use it; it has to be manually configured on the UE engine. On my 3080 it sets the VRAM budget to 5500MB; on the 3090 it was set to 6500. When we bumped it manually to 20000 it was way better.

I should also note that if I left mine at 5500, my game crashed with a VRAM OOM; I have to reduce it to 5000 or lower (or drop shadows or textures to low quality), as the game also uses VRAM for RAM-type data, something which is going to get more common in newer console ports due to the shared memory architecture. FF7R has extremely low RAM usage because most RAM-type data is loaded into VRAM instead; e.g. it needs over 3 gigs of VRAM just to get to the title menu. The new Far Cry game was originally coded the same way, and the patch that mitigated its VRAM issues moved that data from VRAM to RAM to free up VRAM for textures.
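For anyone who wants to try replicating this: I can't be 100% sure what Square calls it internally, but if it's the stock UE4 setup, the budget we were raising corresponds to the texture streaming pool, which can be set on the console or under [SystemSettings] in the game's Engine.ini, e.g.:

    [SystemSettings]
    r.Streaming.PoolSize=20000

20000 is what we used on the 3090; on my 10 gig 3080 anything much above 5000 brings back the OOM crash, so treat the number as card-specific. And treat the cvar name itself as an assumption on my part rather than gospel, it's just the standard UE4 variable for the texture streaming budget.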

FF15 is another game with VRAM issues, and the reason I originally upgraded my 1070 to a 1080 Ti. It had amazing visuals when you turned on all the hidden options alongside the 4K texture pack, but the VRAM usage was crazy high. In addition, if Nvidia grass was enabled it had a nasty memory leak: the game overflowed VRAM usage into system RAM, so if you ran out of VRAM the game didn't crash, you just took a large performance hit. But it carried on leaking, and the entire Windows OS would hit a critical memory condition if you didn't restart the game before all virtual memory was consumed. FF15 is also what triggered me to upgrade from 16 gigs of RAM to 32. That bug never got patched, and likewise the FF7R issues haven't been patched.

I hope this post explains how easy it is for reviewers to not analyse these problems properly (I have read TPU's analysis of VRAM's effect on games and it's just embarrassingly lazy). When I think about every game I have played that has stuttering issues (usually JRPGs), the problems are always linked to texture asset swapping. Texture asset swapping is basically a hack to get around under-specced memory resources, initially developed for the Xbox 360 and PS3, which had an extremely low amount of memory, just a few hundred megs. Ironically the industry has kind of done a 180, and it's now PC GPUs that may be under-specced for VRAM, with consoles having more generous amounts. The UE4 engine, with the development of things like DirectStorage, seems to finally have its issues resolved, but of course PCs don't yet have that API, so the problems remain.
Good post, but don't worry, just chuck another £700 at a graphics card with more VRAM, call everyone else stupid, and accept it's just the way games are made these days. Nothing to do with the drivel AAA devs are chucking out for MSRP when it's little more than an alpha version. I've seen actual alpha versions, e.g. Squad, which played better than the cack they peddle as AAA today.

People keep paying for dribble, so they'll keep being supplied dribble. The same people then chuck more at a graphics card to mitigate some of it, in an attempt at polishing the turds that are today's game releases. It's so bad that some games get held back from consoles for timescales measured in years.
 

It's not that people making points like this want you to keep buying new cards every five minutes to keep up with demands; that's Nvidia's job.

The complaint is that these cards don't have enough VRAM for what they are capable of and what we pay for them. It's challenging Nvidia to do better: we all know a 3070 Ti is significantly faster than a 2080 Ti, yet it costs nearly twice as much as an 8GB midrange card from just a few years ago while still shipping with 8GB.

When people rage against that perfectly reasonable position, it just makes them look like they sleep with a blow-up Jensen doll. It's strange, insane even, to those of us who feel we're being drip-fed planned obsolescence, to be rebutted like that for our stance on it. Good job, Nvidia.
 
Question: with RDNA 2's pathetic RT performance (which has been an issue since day 1 in far more titles, where RT settings either have to be reduced or turned off entirely), would we also class that as planned obsolescence?
 

No, because all Nvidia have to do is use 2GB memory ICs instead of the 1GB ones they currently use. Don't quote me on it, but I remember someone from AMD commenting that these days the 2GB ICs don't actually cost meaningfully more than the 1GB ones.

AMD's lacking RT performance is architectural. If I had the equipment, I could turn my 8GB 2070S into a 16GB one by literally swapping the 1GB ICs for 2GB ones. I can't do anything about AMD's RT performance...
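To put some rough numbers on that (back-of-envelope only, and ignoring clamshell boards like the 3090 that hang two ICs off one channel): each GDDR6/GDDR6X IC sits on a 32-bit channel, so the IC count is fixed by the bus width and the capacity is just ICs × density. A quick Python sketch of the arithmetic:

    # One GDDR6/GDDR6X IC per 32-bit channel; capacity = IC count x density.
    # Back-of-envelope only: ignores clamshell boards (e.g. 3090) that double up ICs per channel.
    def vram_gb(bus_width_bits, gb_per_ic):
        ics = bus_width_bits // 32      # number of memory ICs on the board
        return ics * gb_per_ic          # total VRAM in GB

    print(vram_gb(256, 1))   # 2070 SUPER as sold (256-bit, 1GB ICs)  -> 8
    print(vram_gb(256, 2))   # same board with 2GB ICs swapped in     -> 16
    print(vram_gb(192, 2))   # RTX 3060 (192-bit, 2GB ICs)            -> 12
    print(vram_gb(320, 1))   # RTX 3080 (320-bit, 1GB ICs)            -> 10

Which is also why, on a fixed bus, the only options are the shipped capacity or double it (or clamshell), not some arbitrary number in between.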
 
It's not that people making points like this want you to keep buying new cards every five minutes to keep up with demands; that's Nvidia's job.

The complaint is that these cards don't have enough VRAM for what they are capable of and what we pay for them. It's challenging Nvidia to do better: we all know a 3070 Ti is significantly faster than a 2080 Ti, yet it costs nearly twice as much as an 8GB midrange card from just a few years ago while still shipping with 8GB.

When people rage against that perfectly reasonable position, it just makes them look like they sleep with a blow-up Jensen doll. It's strange, insane even, to those of us who feel we're being drip-fed planned obsolescence, to be rebutted like that for our stance on it. Good job, Nvidia.

Few get this.
 
Nvidia used 2GB ICs for the RTX 3060, because somehow 6GB is not enough for a card like that, but 8GB for a card that blows the 2080 Ti into the weeds apparently is.

We all know the real reason is that people didn't like the RTX 2060 only having 6GB. But 8GB is enough for anything, so why would we even want more than that? Just like 4 cores was always enough and any more than that gave you chickenpox!

For the love of...
 
No, because all Nvidia have to do is use 2GB memory ICs instead of the 1GB ones they currently use. Don't quote me on it, but I remember someone from AMD commenting that these days the 2GB ICs don't actually cost meaningfully more than the 1GB ones.

AMD's lacking RT performance is architectural. If I had the equipment, I could turn my 8GB 2070S into a 16GB one by literally swapping the 1GB ICs for 2GB ones.
It would be good to see the source for that claim, especially on the price front, and also on what was possible at the time of "launch": I'm pretty sure we had people from the industry saying 12GB of GDDR6X was not possible, and that the options were 10GB GDDR6X, 24GB GDDR6X, or 16GB GDDR6 (which would have been slower and suffered on the raw performance front).

Then see this point I made a couple of pages back.

Then the other consideration is price: how much extra is more VRAM worth? Would you say the extra £600+ for 2GB more over the 3080 is worth it? Bearing in mind you'll probably be able to pick up a 4070, which will match/beat a 3090, for £500-600...

How much are you willing to pay just to get more VRAM?

This is what a few people don't seem to get... Who in their right mind can see the logic in paying an "additional" £600+ just to get more VRAM when said flagship GPUs (read 3090/Ti) are already struggling, and not because of a lack of VRAM but because of a lack of grunt... Who do you think is going to be better off in the end, all things considered (in terms of overall gaming experience and money spent)?

Option 1:

Someone who bought a 3070/3080 for their MSRP and is going to upgrade to a 4070/4080 for their MSRP (assuming MSRP is between £500-700)

Option 2:

Someone who bought a 3090 for their MSRP and isn't going to upgrade to a 4070/4080

Option 3:

Someone who bought a 3090 for their MSRP and is going to upgrade to a 4070/4080 for their MSRP (assuming MSRP is between £500-700)




Out of interest, does anyone remember what the price difference was between the 4GB and 8GB versions of the 290/390?



It might be architectural, but it's not like AMD weren't aware of what RT brought to the table for game developers, or of the direction it would be going, i.e. becoming the norm as time/hardware progresses, with rasterization being phased out.

Also, technically it's not even "planned obsolescence" for either AMD or Nvidia, as no one is forcing end users to use said settings, i.e. the settings are optional and can be tuned to work within the specs of the GPU; essentially, these GPUs don't suddenly stop working or stop supporting said games.

A good explanation of what "planned obsolescence" really is:


Nvidia used 2GB ICs for the RTX 3060, because somehow 6GB is not enough for a card like that, but 8GB for a card that blows the 2080 Ti into the weeds apparently is.

We all know the real reason is that people didn't like the RTX 2060 only having 6GB. But 8GB is enough for anything, so why would we even want more than that? Just like 4 cores was always enough and any more than that gave you chickenpox!

For the love of...

At one time 4 cores was more than enough and even overkill, the same way 2 cores was at one point more than enough and overkill too. Technology and its requirements advance as time goes on, shock horror...
 
Same old faces with the same old chip on their shoulder. Although they do have the money to spend on a new one every gen...
 
Same old faces with the same old useless posts that add nothing of their own thoughts to the thread :cry:

"Industry experts say" the 3090 has 2GB GDDR6X IC's.

I'm tired of the religion of experts, use your own ####### grey matter for once.

As per usual, nothing of substance to come back with, I see, much like how this thread went down:


:cry:
 
At one time 4 cores was more than enough and even overkill, the same way 2 cores was at one point more than enough and overkill too. Technology and its requirements advance as time goes on, shock horror...
This is a fair point, but some 4 core processors still function pretty well. At one point, 10GB of RAM was more than enough and even overkill. Technology and requirements advance as time goes on.

VRAM will have to increase in future. Nvidia are clearly able to add more, but won't. There's no need to right now as far as I'm concerned either, but a 10GB card will probably hit issues in AAA games a lot sooner than a similarly specced 16GB card. At that point, ray tracing on current gen cards will be moot, as that technology will have improved exponentially; neither a 3080 nor a 6800 will be able to keep up then.

However, technology moves on, and at that point there will be plenty of 16GB, 24GB and bigger cards available. I can see the AMD offerings lasting longer, but the advent of ray tracing is probably going to shorten the lifespan of both sides' current generation cards a lot sooner than we have seen in the past.
 
Same old faces with the same old useless posts that add nothing of their own thoughts to the thread :cry:



As per usual, nothing of substance to come back with, I see, much like how this thread went down:


:cry:

About how I expected it to. It's not the first time I've made a post like that, and yet I don't see anyone complaining about the decade of quad core CPUs having ended.

We are a strange, fickle ###### bunch, a lot of us; that is all I can disparagingly say.
 
This is a fair point, but some 4 core processors still function pretty well. At one point, 10GB of RAM was more than enough and even overkill. Technology and requirements advance as time goes on.

VRAM will have to increase in future. Nvidia are clearly able to add more, but won't. There's no need to right now as far as I'm concerned either, but a 10GB card will probably hit issues in AAA games a lot sooner than a similarly specced 16GB card. At that point, ray tracing on current gen cards will be moot, as that technology will have improved exponentially; neither a 3080 nor a 6800 will be able to keep up then.

However, technology moves on, and at that point there will be plenty of 16GB, 24GB and bigger cards available. I can see the AMD offerings lasting longer, but the advent of ray tracing is probably going to shorten the lifespan of both sides' current generation cards a lot sooner than we have seen in the past.
Sure, hmm, at one point it was even being said that 6 cores isn't enough for gaming these days. HU made a video showing what was what, hence the thread above ("zOMG they clearly don't watch YT or use DC whilst gaming"), then Steve uploaded a new video to show what impact this had and, shock horror, 6 cores is still pretty much matching current gen 8 core CPUs and beating previous gen 8 core CPUs :p

It'll be interesting to see what Nvidia do with the 40xx series and VRAM. Given current gen consoles are already sacrificing settings/resolution to achieve a locked 60 fps, I can't see many games pushing the VRAM front much further outside of AMD sponsored titles on the PC.

DirectStorage will be a game changer for VRAM utilisation though, IMO:


And that is true on the RT front: it is only going to get more demanding, especially if the refresh consoles get a nice RT boost. But there will be quite the difference between Ampere and RDNA 2, i.e. one will be able to use some RT effects unlike the other, and most likely in far more titles than we will see with VRAM issues (given how many more titles RT is affecting on RDNA 2 than there are VRAM issues affecting Ampere as of right now).

I suppose if I had spent £1k+ on a GPU, I would want more VRAM in order to make it last longer and get my value for money from it, but alas, this point still stands:

Who do you think is going to be better off in the end, all things considered (in terms of overall gaming experience and money spent)?

Option 1:

Someone who bought a 3070/3080 for their MSRP and is going to upgrade to a 4070/4080 for their MSRP (assuming MSRP is between £500-700)

Option 2:

Someone who bought a 3090 for their MSRP and isn't going to upgrade to a 4070/4080

Option 3:

Someone who bought a 3090 for their MSRP and is going to upgrade to a 4070/4080 for their MSRP (assuming MSRP is between £500-700)
 
And that is true on the RT front: it is only going to get more demanding, especially if the refresh consoles get a nice RT boost. But there will be quite the difference between Ampere and RDNA 2, i.e. one will be able to use some RT effects unlike the other, and most likely in far more titles than we will see with VRAM issues (given how many more titles RT is affecting on RDNA 2 than there are VRAM issues affecting Ampere as of right now).
I'm not as convinced of this. Ray tracing in its current state is, in a lot of cases, only feasible with upscaling. No problem with that as far as I'm concerned, but Ampere at high resolutions with full ray tracing is not foolproof, and with the rate of improvements expected in ray tracing, I don't think Ampere will age that well.

Point taken on RDNA 2, but if both are struggling to run RT in a few years, the VRAM may be of more benefit for longer. It'll depend on how the long-promised DirectStorage and upscaling tech comes through.
 