I'm more confident about this after some testing.
The argument about Far Cry 5 in the parody "is 16Gb enough for the 6900XT" thread sparked my interest in this, because it was used as another example of a game that requires more than 10GB of vRAM, which of course I was quick to test. On my GTX 1080 it was reporting about 7.5GB allocated, but real-world usage fluctuated between 4GB and about 6GB at absolute peak, this at 4K Ultra with the HD texture pack. Another user, LtMatt, pointed out that this wasn't "optimal" because in his tests he got stuttering while assets loaded, which only went away when he moved from an 8GB to a 16GB Radeon card. I clearly did not have this problem, and I wanted to know why.
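If you want to watch the "in use" number move for yourself, here's a minimal sketch of a logger using NVML (device-wide usage, so it includes the desktop and other apps, not the per-process allocated figure the overlays report; assumes the nvidia-ml-py package is installed):

```python
import time
import pynvml  # pip install nvidia-ml-py (assumed available)

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        # Note: this is total device-wide usage, not the game's allocation on its own.
        print(f"{mem.used / 1024**3:.2f} GB used / {mem.total / 1024**3:.2f} GB total")
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```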
So I did a few things. I recorded video while graphing frame times to look for micro-hitching too small to be easily visible while playing, and honestly the game is extraordinarily stable in terms of frame times: you can jump into a helicopter, take off, spam missiles and explosions, fly across the entire map between different zones, and land in the middle of the more built-up areas where there's a lot of detail, and while the frame rate is borderline bad simply because I maxed the settings for stress testing, the consistency was very high. Then I watched my own gameplay videos back, and one thing I noted is that vRAM usage bounces wildly between 4GB and 6GB used; the game will fill about 2GB of vRAM in just a few seconds as you approach a compound. And that is about what you'd expect from a drive that is basically at the ~4GB/sec limit of a PCIe 3.0 x4 link paired with a fast CPU: filling a mere 2GB of vRAM is trivial, roughly half a second of sequential reads, and the load on the SSD itself is even lower, because textures make up most of the assets by size and are stored compressed, so 2GB in vRAM is less than 2GB on disk.
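For anyone who wants to check their own captures, here's a rough sketch of the hitch check I was doing by eye. It assumes a CSV export with a per-frame time column in milliseconds (the column name "MsBetweenPresents" is just an example) and flags frames well above the median frame time:

```python
import csv
import statistics

def find_hitches(csv_path, column="MsBetweenPresents", factor=2.5):
    """Flag frames whose frame time is far above the median (potential micro-hitches)."""
    with open(csv_path, newline="") as f:
        frame_times = [float(row[column]) for row in csv.DictReader(f)]

    median = statistics.median(frame_times)
    p99 = sorted(frame_times)[int(0.99 * len(frame_times))]
    hitches = [(i, ft) for i, ft in enumerate(frame_times) if ft > factor * median]

    print(f"{len(frame_times)} frames, median {median:.2f} ms, 99th percentile {p99:.2f} ms")
    for i, ft in hitches:
        print(f"frame {i}: {ft:.2f} ms ({ft / median:.1f}x median)")
    return hitches

# Example: find_hitches("farcry5_capture.csv")
```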
Games like Far Cry 5, and in fact most modern game engines, have long overcome vRAM limitations by streaming textures into a vRAM pool on the card, and doing so predictively so that the textures arrive just in time to avoid a cache miss. It's how a game like Far Cry 5 can be a 60GB install on disk, a vast amount of that being textures, yet be an open-world game where you can visit any part of the world with no real loading to speak of. And if you ignore the useless allocated vRAM metric, which tells you nothing, and focus on what is really in use, you can see it fluctuate rapidly as you fly around the map, with old unused textures forced out of vRAM as new assets streamed in.
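Conceptually (and this is just my illustrative sketch, not Ubisoft's actual streaming code), the pool behaves like an LRU cache with a fixed vRAM budget: touch a texture and it's marked recently used, run out of budget and the least-recently-used textures get evicted to make room:

```python
from collections import OrderedDict

class TextureStreamingPool:
    """Toy model of a streaming texture pool with a fixed vRAM budget.

    Illustrative only: real engines also prefetch predictively based on
    camera position/velocity and stream individual mip levels.
    """

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.resident = OrderedDict()  # texture_id -> size, ordered by last use

    def request(self, texture_id, size_bytes):
        if texture_id in self.resident:
            self.resident.move_to_end(texture_id)  # cache hit: mark recently used
            return "hit"
        # Evict least-recently-used textures until the new one fits.
        while self.used + size_bytes > self.budget and self.resident:
            _, evicted_size = self.resident.popitem(last=False)
            self.used -= evicted_size
        self.resident[texture_id] = size_bytes
        self.used += size_bytes
        return "streamed in"

pool = TextureStreamingPool(budget_bytes=6 * 1024**3)  # ~6GB pool
print(pool.request("compound_rock_albedo", 64 * 1024**2))  # streamed in
print(pool.request("compound_rock_albedo", 64 * 1024**2))  # hit
```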
I have a fairly unusual setup: 2x Samsung 960 Pros in RAID 0, i.e. two fast SSDs, each with sequential reads around 3500 MB/sec, set up to be read and written in parallel, which in theory doubles the speed (in practice not quite, because the PCIe x4 bandwidth is a bottleneck). On top of that there's an overclocked [email protected], which is no slouch for gaming and, more specifically, for decompressing texture data on the fly. I suspect this has something to do with my game being hitch-free. These are ugly bottlenecks to have to brute-force with expensive components though, and I think DirectStorage, RTX IO for Nvidia, and the AMD equivalent (does this have a name?) will make this a thing of the past.
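Back-of-the-envelope, the streaming time looks something like this (rough, assumed bandwidth figures, ignoring decompression and API overhead):

```python
# Rough transfer-time estimates for streaming ~2GB of texture data into vRAM.
# Bandwidth figures are approximate/assumed, not measured.
STREAM_BYTES = 2 * 1024**3  # ~2GB filled as you approach a compound

bandwidths_gb_s = {
    "single 960 Pro (~3.5 GB/s)": 3.5,
    "RAID 0 pair, PCIe x4 limited (~3.9 GB/s)": 3.9,
}

for label, gb_per_s in bandwidths_gb_s.items():
    seconds = STREAM_BYTES / (gb_per_s * 1024**3)
    print(f"{label}: ~{seconds:.2f}s to stream 2GB")
    # The on-disk read is smaller still, since most of that data is
    # compressed textures that the CPU unpacks on the fly.
```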
This is why I have said in the past that it's a new paradigm: vRAM requirements don't track game install size anymore, because vRAM is no longer being used as a dumb cache you try to cram the entire game into. What matters now is how much vRAM you need to hold only the assets for your immediate surroundings.