Are you claiming it's a linear relationship (always)? That there is a linear relationship between VRAM usage and required GPU performance to render a frame?
Such that any increase in VRAM requirement must result in a proportional (linear) increase in GPU stress?
I'm curious.
The fact that there are sometimes multiple VRAM configurations of the same card would seem to call this into question, e.g. the 480 4GB and 480 8GB. In such cases would you claim that the 8GB variant is of no value? That the 4GB card must be equally viable?
I'm not putting words into your mouth here; I'm just asking questions.
I've not stated that the relationship is strictly linear, because I'm honestly not sure. However, the relationship wouldn't need to be linear for you to make predictions about the future demands games will place on both VRAM and the GPU. All that matters is that it's not random: if there's a predictable relationship, even a non-linear one (maybe it's exponential or logarithmic), you can still use it to predict.
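To be concrete about what I mean by "predictable": here's a minimal sketch of fitting a relationship in log-log space (which captures linear, power-law and similar shapes with one simple model) and then extrapolating from it. The numbers and the `predict_vram` helper are entirely made up for illustration, not real benchmarks.

```python
# Minimal sketch: a relationship doesn't need to be linear to be usable
# for prediction, it just needs to be predictable. All numbers below
# are hypothetical, purely for illustration.
import numpy as np

gpu_tflops = np.array([2.0, 4.0, 8.0, 16.0])   # hypothetical GPU throughput
vram_gb    = np.array([2.1, 3.9, 8.2, 15.8])   # hypothetical matching VRAM demand

# Fit in log-log space so linear, power-law and similar monotone shapes
# are all handled by the same one-parameter-slope model.
slope, intercept = np.polyfit(np.log(gpu_tflops), np.log(vram_gb), 1)

def predict_vram(tflops: float) -> float:
    """Extrapolate expected VRAM demand for a hypothetical future GPU."""
    return float(np.exp(intercept) * tflops ** slope)

print(predict_vram(32.0))  # prediction for a made-up next-gen card
```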
But we do know there is a relationship between these two things, and that must necessarily be the case, because the reason we put a model or a texture or any other asset into VRAM is for the GPU to calculate the next frame (or an upcoming frame) using that asset. Any additional unique asset you add to the scene in-game necessarily increases both the load on the GPU and the amount of VRAM used.
And if you take a step back and look at the broader picture of gaming over, say, the last 20 years, and compare GPU speeds in terms of rough performance (say, FP32 throughput in TFLOPS) against the amount of memory cards ship with, it looks like a linear relationship. I've just been faffing with some numbers and added a trend line, and a linear trend line seems to fit best. And I've been buying GPUs since the Voodoo days in like 1998, so that's more than 20 years, and it's an obvious relationship: GPUs get faster, VRAM increases.
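I can't paste the chart here, but this is roughly the kind of faffing I mean. The figures below are ballpark flagship numbers from memory, so treat the exact values with suspicion; the point is the shape of the fit, not the decimals.

```python
# Rough trend-line check: approximate Nvidia flagship FP32 throughput
# vs shipped VRAM over the years. Figures are ballpark, from memory.
import numpy as np

#                  8800GTX GTX580 GTX980 GTX1080 2080Ti 3090
tflops  = np.array([0.5,   1.6,   5.0,   8.9,    13.4,  35.6])
vram_gb = np.array([0.75,  1.5,   4.0,   8.0,    11.0,  24.0])

slope, intercept = np.polyfit(tflops, vram_gb, 1)
print(f"VRAM ~ {slope:.2f} GB per TFLOP + {intercept:.2f} GB")

# R^2 as a quick sanity check on how well a straight line fits
pred   = slope * tflops + intercept
ss_res = np.sum((vram_gb - pred) ** 2)
ss_tot = np.sum((vram_gb - vram_gb.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.3f}")
```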
I mean, I used to do modding a long time ago, back at uni and before, when we were on GeForce 4-series Nvidia cards; I had a Ti 4600 at the time. I remember mapping in Unreal Engine back then, watching performance and watching VRAM usage. You can just keep throwing more unique static meshes and textures into the level you're making, and as you start to get near the ceiling of your VRAM you notice that you're getting into unplayable frame rate territory as well. You can't just worry about one limit and ignore the other; you have to consider both.
With different VRAM configs, I kinda touched on this before. Not all cards are used strictly for gaming; there are other applications that use them as well, like non-real-time rendering and CAD. You see this a lot with the Quadro cards, which are marketed at those people as "workstation" cards. It's why the 3090 is going to have 24GB: it's a halo product, and there's no way in hell you'd ever be able to load that card up with 24GB of game assets and have a playable frame rate. 16GB would be way more than sufficient for gaming.
One thing to keep in mind is that multiple VRAM configs of the same card are rare, and my suspicion is this has something to do with architecture limitations. The way the architectures are built means that VRAM configs are only possible in certain multiples; in the 1060's case you could get 3GB, and then double that at 6GB. So what happens if the expected useful VRAM is, say, 4GB? You can either under-provision or over-provision. If you under-provision, maybe you only lose a small amount of quality or frame rate, but the card is cheaper with less VRAM. The 6GB has more than enough VRAM but a jacked-up price. If there's no obvious best config, it's probably better to make both and let the consumer decide whether that trade-off is worth it. I'm reading an article which states the base price of these cards was £190 for the 3GB and £240 for the 6GB. You seemed cynical before about the cost of VRAM and whether it led to savings, so I think this is a clear example where it does. Source
https://www.eurogamer.net/articles/digitalfoundry-2016-nvidia-geforce-gtx-1060-3gb-vs-6gb-review_14
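For what it's worth, my understanding of the "certain multiples" bit is that capacity is tied to the memory bus: roughly one chip per 32-bit channel, and GDDR5 chips came in fixed densities. A back-of-the-envelope sketch (the helper function is just my own illustration):

```python
# Why VRAM configs come in fixed multiples: capacity is (roughly)
# one memory chip per 32-bit channel, and GDDR5 chips came in fixed
# densities. Using the GTX 1060's 192-bit bus as the example.
def vram_capacity_gb(bus_width_bits: int, chip_capacity_gb: float) -> float:
    channels = bus_width_bits // 32   # one chip per 32-bit channel
    return channels * chip_capacity_gb

print(vram_capacity_gb(192, 0.5))  # 6 x 512MB chips -> 3.0 GB
print(vram_capacity_gb(192, 1.0))  # 6 x 1GB chips   -> 6.0 GB
# A 4GB config isn't reachable on a 192-bit bus without mixing chip
# densities or cutting the bus down, hence under- or over-provisioning.
```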