
Why - "Is 'x' amount of VRAM enough" seems like a small problem in the long term for most gamers

Again, that's why I had the 680 4GB, but I got slagged because supposedly the card never had the grunt to use 4GB. IMO it could use more than 2GB (3GB or so), so....

One of those black/white EVGA FTW custom blowers + Backplate models, looked the dogs. :D
 

It was easy to use 4GB on those cards if you used them in mGPU at high resolution.

Another interesting thing from those days is that you can still just about get away with using a 3GB HD 7970 for gaming, with settings turned down.
 
With future-proofing and vRAM, these days it's less of a problem than it used to be, mainly because vRAM used to be spent predominantly on texture budgets in older games, whereas today much more of it goes on other effects...

Going forward this trend is only going to get more pronounced as we move towards much faster real-time streaming of assets from super fast M.2 drives with DirectStorage; that's going to really keep a lid on how much asset data we need to store in vRAM at any one moment. The faster the I/O to vRAM is, the more we can push vRAM towards holding the effects being drawn on screen in the current frame, rather than acting as a dumb cache of textures, many of which aren't even in use. That's kinda old school now; in fact the PC, for once, is actually lagging in this regard, way behind the consoles. Their adoption of this will pull us in that direction very rapidly once developers catch on and the engines start making full use of it.
Texture memory demands have no doubt been kept stagnant for years by old consoles lagging behind PC hardware.
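
To put some rough numbers on that point, here's a back-of-envelope sketch. The drive speeds and the 2GB "zone" size are illustrative assumptions, not measurements: the faster the storage, the more asset data can arrive per frame, and the less has to sit in vRAM "just in case".

```cpp
// Back-of-envelope sketch: asset data each storage tier can deliver per
// 60 fps frame, and how long it takes to fetch a hypothetical 2 GB "zone"
// of assets. All figures are illustrative assumptions.
#include <cstdio>

int main() {
    struct Tier { const char* name; double gbPerSec; };
    const Tier tiers[] = {
        {"7200rpm HDD",   0.15},  // ~150 MB/s sequential
        {"SATA3 SSD",     0.5},
        {"PCIe 3.0 NVMe", 3.5},
        {"PCIe 4.0 NVMe", 7.0},
    };
    const double frameTime  = 1.0 / 60.0;  // seconds per frame at 60 fps
    const double zoneSizeGB = 2.0;         // assumed size of one streamed zone

    for (const Tier& t : tiers) {
        double mbPerFrame  = t.gbPerSec * 1024.0 * frameTime;
        double zoneSeconds = zoneSizeGB / t.gbPerSec;
        std::printf("%-14s %6.1f MB/frame, %5.1f s per %.0f GB zone\n",
                    t.name, mbPerFrame, zoneSeconds, zoneSizeGB);
    }
    return 0;
}
```

On an HDD you have to preload pretty much everything; on a fast NVMe drive a whole zone's worth of assets can be refreshed in well under a second, so the resident set can stay small.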


DirectStorage changes nothing about the need for VRAM when pushing boundaries.
We've seen the same marketing before, and it's just an even bigger pile of the same old ****.

During the golden age of PC gaming, when the PCI bus was the standard for expansion cards, AGP (Accelerated Graphics Port) was introduced for graphics cards.
One of the selling lines was that, because it was faster than PCI, you didn't need to cache assets locally on the graphics card and could skimp on the amount of onboard memory.
But that ad line flew about as well as an airplane made of lead, because system RAM and the connection to it were too slow and laggy to be a replacement for onboard VRAM.

Which applies just as much now:
for every increase in the speed of system RAM and interconnects, GPU processing power and the size of the working data have gone up by just as much.
And any flash memory is always an order of magnitude laggier than system RAM!


The reason for the hyping of asset streaming from SSDs is the small generational increase in memory in the new consoles.
In the past a new console generation brought a big increase in the amount of memory:
PS2: 32+4 MB > PS3: 512 MB (256 MB RAM / 256 MB VRAM) > PS4: 8 GB
Xbox: 64 MB > Xbox 360: 512 MB > Xbox One: 8 GB

Now it has only doubled if you look at it in the most favourable way (8 GB to 16 GB), and compared to the Xbox One X's 12 GB there's only a third more.
That's very meh in comparison and not enough to push things forward in the long run.
So Microsoft and Sony put their BS departments to work.
And suddenly we have this religion of a puny 5 GB/s of bandwidth and unholy awful latency being enough to replace local 500 GB/s memory...

At least system RAM can give data out at the full rate of a PCIe x16 slot, whose 16 GB/s (or 32 GB/s for PCIe 4.0) is still a far cry from onboard VRAM.
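
To make that gap concrete, some napkin maths using the figures above and an assumed 8GB working set (latency ignored, which only flatters the slower links):

```cpp
// Napkin maths for the bandwidth ladder quoted above: milliseconds to move
// an assumed 8 GB working set over each link. Latency is ignored here,
// which only flatters the slower tiers.
#include <cstdio>

int main() {
    const double workingSetGB = 8.0;  // assumed size of the data the GPU might touch
    const double nvme = 5.0, pcie3 = 16.0, pcie4 = 32.0, vram = 500.0;  // GB/s

    std::printf("NVMe SSD    (~5 GB/s): %6.0f ms\n", workingSetGB / nvme  * 1000.0);
    std::printf("PCIe 3 x16  (16 GB/s): %6.0f ms\n", workingSetGB / pcie3 * 1000.0);
    std::printf("PCIe 4 x16  (32 GB/s): %6.0f ms\n", workingSetGB / pcie4 * 1000.0);
    std::printf("VRAM      (~500 GB/s): %6.0f ms\n", workingSetGB / vram  * 1000.0);
    return 0;
}
```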
 

Texture demands have actually gone up, just in the variety of textures rather than the quality of any one individual texture. So we've seen absolutely insane leaps in game install size on disk, in part due to very large texture libraries being used. And we know developers can push that even further. When id first did the megatexture stuff in the original Rage they wanted to release a much bigger texture library for the game, but they admitted you'd simply not be able to ship something of that size. The source art library the devs created was something insane, around 1 TB of textures if I remember correctly. They really innovated texture streaming by pushing that technology much further than anyone had in the past.

I think the reason individual texture quality has stagnated is actually that screen resolutions have stagnated. I was looking back through old R6 Siege videos and commentary on the UHD texture pack, and it was considered by most to be simply not worth it if you game at 1080p because you just can't tell the difference; it was only worth it at 4K. Very few people actually game at 4K; the majority of gamers are still at 1080p and have been for a very long time.

The point of DirectStorage is not to make the SSD a substitute for vRAM; while it will certainly be fast, it won't be fast enough for that. The point is that modern game engines don't need huge caches for textures, they only store what they need to render the area around the player's location. As soon as the player avatar moves to a new zone/area in the world, the engine can ditch a lot of the old assets it no longer needs and load new stuff. Having even faster drive access just means you can push that same technique harder: you can be more aggressive with texture swapping because you might only need one or two seconds to fetch new assets instead of 30 seconds or so. If you read the actual goals of DirectStorage from Microsoft's developers, they state their aim is to eliminate the long elevator rides and similar tricks used in games to mask loading of new assets; they want that as close to seamless as possible.
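
As a minimal sketch of that zone-based swapping (all names and types here are invented for illustration; this is not the DirectStorage API or any real engine's code): when the player crosses into a new zone, evict the assets only the old zone referenced and queue loads for the new one, so resident memory stays around one zone's working set rather than the whole level's.

```cpp
// Minimal sketch of zone-based asset streaming: keep resident only the assets
// referenced by the zone the player is in (a real engine would also include
// neighbouring zones). All names and types are hypothetical.
#include <cstdio>
#include <set>
#include <string>

using AssetId = std::string;

struct Zone {
    std::set<AssetId> assets;  // assets this zone needs resident
};

class Streamer {
    std::set<AssetId> resident_;  // stand-in for "currently in vRAM"
public:
    void OnPlayerEnteredZone(const Zone& oldZone, const Zone& newZone) {
        // Evict assets the old zone used but the new one doesn't.
        for (const AssetId& a : oldZone.assets)
            if (!newZone.assets.count(a)) {
                resident_.erase(a);
                std::printf("evict  %s\n", a.c_str());
            }
        // Queue loads for assets the new zone needs that aren't resident yet.
        for (const AssetId& a : newZone.assets)
            if (resident_.insert(a).second)
                std::printf("stream %s\n", a.c_str());  // real engine: async I/O
    }
};

int main() {
    Zone plaza{{"plaza_walls", "shared_props", "npc_set_a"}};
    Zone sewers{{"sewer_walls", "shared_props", "npc_set_b"}};
    Streamer s;
    s.OnPlayerEnteredZone({}, plaza);      // initial load
    s.OnPlayerEnteredZone(plaza, sewers);  // zone change: swap, keep shared_props
    return 0;
}
```

The faster the drive, the smaller the zones (and the resident set) can be, because the refill after a zone change completes before the player notices.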

As for consoles, I think today vRAM is being used a lot more for graphical effects and less for textures in terms of relative usage. The amount of memory the consoles have is about right for what they can meaningfully make use of. The APUs they have are kinda mid-range compared to the PC dGPU market, and if you start piling on the effects they simply run out of horsepower before you can fill up the memory. Roughly 10GB is reserved on both consoles for rendering and the other 6GB for the game engine and system, and if I had to place a bet I'd say they don't even come close to using all 10GB of that. A 3080 right now, when pushed hard by modern games, runs out of GPU grunt before running out of vRAM, and a 3080 is a lot faster than the consoles. My bet would be that if you had system tools to measure the games' memory usage on the consoles for graphics (what we'd call vRAM on PC), it'd be somewhere in the 6-7GB range. Memory on the consoles isn't really the limiting factor; the APU is.
 
4K texture packs like the one WD: Legion has can cause a lot of problems (gameplay freezing for 20-30 seconds) combined with a 4K display res on a graphics card with 8GB or less of VRAM. The textures are a little sharper with the texture res on ultra, but the settings below that look the same, and it uses a lot more VRAM with the pack enabled (8-9GB with it enabled on high texture res, about 5GB with it disabled). Personally, I don't mind turning it off; it only makes a small visual difference in gameplay (depending on how much the built-in sharpening filter is applied).

The game itself incorrectly estimates how much VRAM is used either way.

So it's probably a good rule of thumb not to use ultra/4K texture packs with 8GB of VRAM.

In 2022/2023, Unreal Engine 5 (and likely other engines too) will be using photogrammetry to capture detail from photographs of real objects. I think this will result in much higher texture detail and correspondingly higher VRAM usage on higher settings. The new consoles will need that extra VRAM.
 
That WD: Legion texture pack is a classic example of what I mean when I say fixing VRAM issues is usually easy. Very few games in the short and medium term are going to be developed that genuinely require more than 8GB of VRAM, because developers don't want to cut off such a huge proportion of gamers. They'll keep offering settings like these to keep it in check.
 

Yeah, but again, Epic's own UE5 tech demos/videos literally cover how worlds are zoned into regions, and as you pass in and out of those regions the engine calculates what you can see and adjusts what is in memory. The trick here is that very high quality models and textures are only needed right up close to your point of view; as objects get more distant you simply cannot resolve all that detail, so you can swap in lower quality variants of the textures. Mixed together with LOD systems, the zoning of worlds into sections, and some clever predictive code, you only hold in memory the highest quality assets for when they can be appreciated, which is right up close to your viewport.
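
The core of that trick is tiny in isolation: derive the texture mip (or model LOD) to request from how large the object appears on screen, so distant objects never hold full-resolution assets in memory. A toy version, with an assumed 4K source texture and invented numbers:

```cpp
// Toy distance/size-based detail selection: the smaller an object appears on
// screen, the lower the texture mip that gets requested, so only near-camera
// objects keep their full-resolution assets resident. Numbers are invented.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Approximate on-screen height in pixels of an object 'worldSize' metres tall
// at 'distance' metres, for a given vertical FOV and screen height.
double ProjectedPixels(double worldSize, double distance,
                       double vfovRadians = 1.0, int screenHeight = 2160) {
    double viewHeightAtDist = 2.0 * distance * std::tan(vfovRadians / 2.0);
    return worldSize / viewHeightAtDist * screenHeight;
}

// Each mip halves resolution, so each doubling of distance drops roughly one mip.
int MipForObject(double worldSize, double distance, int maxMip = 10) {
    double texelsNeeded = std::max(ProjectedPixels(worldSize, distance), 1.0);
    int drop = (int)std::floor(std::log2(4096.0 / texelsNeeded));  // assume 4096px source
    return std::clamp(drop, 0, maxMip);
}

int main() {
    const double distances[] = {2.0, 10.0, 50.0, 250.0};
    for (double d : distances)
        std::printf("2 m object at %5.0f m -> request mip %d\n", d, MipForObject(2.0, d));
    return 0;
}
```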

vRAM usage will of course still go up, but it's not really a linear relationship with the quality and amount of objects anymore. It used to be, way back in the past, when you couldn't stream assets in-game and you needed to cram everything into vRAM during the "loading" of the level; twice as much vRAM meant very close to twice the amount of detail. Today it's just not linear like that. We've got all these old hang-ups and expectations, like vRAM doubling every year, from back when it mattered.

Just like @Blackjack Davy said, you'd be surprised at just how much vRAM is actually used for texture data these days. I think my testing in Doom Eternal showed something like 8GB of vRAM actually being used in total, with the texture pool size set to high and all other settings at ultra nightmare in 4K; that results in a 2GB texture pool. There was no visual difference with the texture pool size set higher than this, which suggests the game can swap what it needs in and out of that pool without needing to drop quality. So textures are only 25% of the vRAM usage; the rest is data the GPU is using for other effects.
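
That fixed "texture pool" behaviour (a hard byte budget, with textures streamed in on demand and the least recently used ones dropped when it fills) is easy to picture. A rough sketch of the concept only, with made-up names and sizes, not id Tech's actual implementation:

```cpp
// Rough sketch of a fixed-budget texture pool with least-recently-used
// eviction, like the 2 GB pool mentioned above. Names and sizes are made up.
#include <cstdint>
#include <cstdio>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

class TexturePool {
    uint64_t budget_, used_ = 0;
    std::list<std::pair<std::string, uint64_t>> lru_;  // front = most recently used
    std::unordered_map<std::string, decltype(lru_)::iterator> index_;
public:
    explicit TexturePool(uint64_t budgetBytes) : budget_(budgetBytes) {}

    // Called whenever the renderer needs a texture this frame.
    void Touch(const std::string& name, uint64_t sizeBytes) {
        auto it = index_.find(name);
        if (it != index_.end()) {                    // already resident: refresh
            lru_.splice(lru_.begin(), lru_, it->second);
            return;
        }
        while (used_ + sizeBytes > budget_ && !lru_.empty()) {  // evict LRU
            auto& victim = lru_.back();
            std::printf("evict %s (%llu MB)\n", victim.first.c_str(),
                        (unsigned long long)(victim.second >> 20));
            used_ -= victim.second;
            index_.erase(victim.first);
            lru_.pop_back();
        }
        std::printf("load  %s (%llu MB)\n", name.c_str(),
                    (unsigned long long)(sizeBytes >> 20));
        lru_.emplace_front(name, sizeBytes);
        index_[name] = lru_.begin();
        used_ += sizeBytes;
    }
};

int main() {
    TexturePool pool(2ull << 30);                // 2 GB budget
    pool.Touch("rock_albedo", 512ull << 20);
    pool.Touch("rock_normal", 512ull << 20);
    pool.Touch("demon_skin",  768ull << 20);
    pool.Touch("rock_albedo", 512ull << 20);     // reuse: no load, just refreshed
    pool.Touch("hell_portal", 768ull << 20);     // over budget: evicts rock_normal
    return 0;
}
```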

This is why my prediction is that, as we move further into the future, vRAM amounts will become a function of GPU speed, because much more of that memory will be put towards graphical effects. @EsaT even pointed out this change in paradigm with the consoles and how their memory growth has changed.
 

I think that's all quite theoretical and depends on game developers knowing how to get the most out of the UE5 engine (e.g. for levels of detail) and on the type of game they are making, e.g. strategy, FPS/third person, open world sandbox etc. I think for other game engines (proprietary or free to use), many developers will want to use similar photogrammetry and 3D imaging techniques, but I doubt they will be well optimized for texture streaming, which often seems to suffer in open world games like GTA V and the Assassin's Creed titles.

Of course, there are some science fiction games like Starcraft II, where photogrammetry isn't particularly relevant, except perhaps for creating more realistic terrain.

Another poorly optimized game (with slow texture streaming and low quality textures at long distances) was Kingdom Come: Deliverance, which used CryEngine. I think the developers picked that engine because they liked the realistic foliage and because they were familiar with it; unfortunately, they want to use the same engine again for the sequel, despite having had a lot of issues working with it and no doubt having had to make some compromises. There was a strong focus on realism and on accurately modelling real locations in the game, which makes me wonder if the team will be using photogrammetry in the sequel.
 

Well, I think the streaming system is built into the engine from the ground up, but it does take some developer tweaking, for example to set the size of the grid cells the world is zoned into. That has been the case for most modern games, especially since support for open world games took off: you need a certain amount of manual optimization, and the kind that works well differs between game types (RTS/FPS/whatever). Texture streaming is available in all the major AAA engines; implementations vary in quality and sophistication, but they are commonplace.

I liked KCD. I can't remember purely from memory what the texture streaming was like in that game; I know performance was not great, but I suspect that's because they were a smaller studio with fewer resources to make the game. I think it was originally funded through crowdfunding. The studio was acquired and is way bigger now, so I expect if they're working on a sequel they'll be able to hire a lot of technical engineers to get the most out of the engine. It's a very nice looking game though; foliage and natural stuff like that is hard to get looking good without being computationally expensive.
 
This is kind of interesting: in WD: Legion, the average framerate of the RTX 3070 Ti (8GB GDDR6X) is only about 1-2 FPS higher than the RTX 3070's (8GB GDDR6) at 4K resolution. I suppose you need the extra compute units and ray tracing hardware (that the RTX 3080/3090 have) to significantly improve FPS in games like this. Link here:
https://tpucdn.com/review/nvidia-ge...dition/images/watch-dogs-legion-3840-2160.png
 

In KCD you kinda needed an SSD to avoid texture pop-in and stutters (big city, monastery). It improved a lot moving from an HDD to a modest SATA3 SSD, and then a little bit more again with an M.2 SATA3 drive.

I wouldn't look too much at current gen games to judge the vRAM needs of the next ones. There are plenty of low quality assets around.
 
Having had several GPUs in the past that became bothersome to use due to VRAM, I think it's been a problem for some time myself. Obviously your exact use case will always dictate exactly how much you need, but there are certainly gaming situations which call for more. Before I got my 3090 I was using a 980 Ti, and while that was a bit long in the tooth, it still provided a reasonable gaming experience at 1440p, until it ran out of VRAM and became a slideshow.

We also have a 3060 Ti in another machine here and that's the same. In most games it's fine, but it's not all that hard to run it out of VRAM either. Load up a VR game and, while it has the horsepower to be fine, the games quickly turn into a stuttery mess because the card is constantly out of VRAM. I tried No Man's Sky in VR on a Quest 2 at native resolution and ultra settings and saw VRAM usage peak at nearly 17GB on my 3090. A 3080 would have the grunt to run that reasonably well, but how quickly would it be in VRAM hell?

I think for standard 2D gaming at 4K, 10-12GB is mostly going to be fine for now, but how fine it will be in two or three years (when a 3080 would otherwise still have acceptable performance) remains to be seen. It's certainly not impossible to run any of those cards into VRAM problems today though.
 

I was going to say I don't really recall texture pop-in being a problem for me. But then I have a more exotic RAID 0 array of 2x M.2 NVMe Samsung 960 Pros, which maxes out the full PCIe x4 bandwidth of 4GB/sec. Other games that had pop-in and/or stutter as you load between zones are sorted as well; Subnautica was terrible for that when loading in new biomes, and a fast disk sorted it right out. This is why I'm excited for DirectStorage, and to upgrade to a PCIe 4.0 motherboard to raise that cap to 8GB/sec.
 

I remember the gray models of some buildings and the textures loading onto them on fast travel and such (on a 7200rpm HDD), plus a looong wait when changing some graphical settings (on the HDD).
It got better as the storage got better.

On the other hand, there are plenty of games that handled the streaming part much better.
 
Related to this vRAM issue and people's worry that high res texture packs driven by the next gen consoles will push requirements up: it was interesting to learn from Digital Foundry's technical comparison of Watch Dogs: Legion that the PC has a high res texture pack that none of the consoles use. I dunno what to make of that precisely, but I'm sure that'll liven the debate back up :p
 