This is just repeating what's already been said in other threads. If 10gb isn't enough, then neither is 12gb; it's not a big enough increase to make any meaningful difference. When the 3080 was released, 10gb was fine, and it's still just about fine now, but we're starting to see more and more vram being used, for reasons I don't fully understand, and FC6 in particular seems to take a dump on the 10gb 3080. So it comes down to use case: either you don't care about the fringe cases, or you do. And those fringe cases will slowly become the norm. I was hoping DirectStorage would help with this, but given it was supposed to ship early last year, I don't think we'll get it any time soon, and even then it'll probably take years before most new releases support it.
Anyway, I get the impression AMD are more efficient with their memory bandwidth, hence why they can get away with a 256bit bus while Nvidia are pushing 320/384bit on their top-end cards; presumably RDNA2's big on-die Infinity Cache is soaking up a lot of the traffic that would otherwise hit vram. I can see that being why Nvidia can't match AMD's vram capacity on the 3080...
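For anyone who wants the back-of-envelope numbers, here's a rough sketch of the raw bandwidth gap. Peak bandwidth is just bus width times per-pin data rate; the figures below are the published specs, but the little helper function is mine:

```python
# Back-of-envelope peak memory bandwidth:
# bus width (bits) / 8 * per-pin data rate (Gbps) = GB/s.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 16.0))  # RX 6800 XT, 16 Gbps GDDR6    -> 512.0 GB/s
print(bandwidth_gb_s(320, 19.0))  # RTX 3080, 19 Gbps GDDR6X     -> 760.0 GB/s
print(bandwidth_gb_s(384, 19.5))  # RTX 3090, 19.5 Gbps GDDR6X   -> 936.0 GB/s
```

So AMD are giving up a big chunk of raw bandwidth on that 256bit bus, which is exactly the gap the Infinity Cache is there to paper over.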
On the 3080, Nvidia use 1gb chips, which are by far the most common; each one sits on its own 32bit channel, so ten of them make up the 320bit bus and the 10gb vram buffer. To match AMD on capacity, they could (there's a quick sketch of the arithmetic after the list):
1) keep the 320bit bus and run the modules in clamshell ('x8') mode, doubling the module count to give a 20gb buffer. This is how the 3090 gets 24gb without needing a 768bit bus.
2) drop to a 256bit bus and use 2gb modules to give them a 16gb buffer.
3) widen the bus further to 448bit/14gb or even 512bit/16gb.
4) use the '1.5gb' modules that are apparently in the gddr6 spec to give us a 320bit/15gb 3080. No idea on their availability or suitability for Nvidia's memory controllers though.
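As promised, here's the arithmetic behind those options as a quick Python sketch. Nothing official about it, just each 32bit channel carrying one module, or two in clamshell mode:

```python
# Total VRAM = number of modules * module density.
# Each 32-bit channel carries one module, or two when run in clamshell (x8) mode.
def vram_gb(bus_width_bits: int, module_gb: float, clamshell: bool = False) -> float:
    """Total VRAM in GB for a given bus width, module density and mode."""
    channels = bus_width_bits // 32
    modules = channels * (2 if clamshell else 1)
    return modules * module_gb

print(vram_gb(320, 1.0))                  # stock 3080            -> 10 gb
print(vram_gb(320, 1.0, clamshell=True))  # option 1              -> 20 gb
print(vram_gb(256, 2.0))                  # option 2              -> 16 gb
print(vram_gb(448, 1.0))                  # option 3              -> 14 gb
print(vram_gb(512, 1.0))                  # option 3              -> 16 gb
print(vram_gb(320, 1.5))                  # option 4              -> 15 gb
print(vram_gb(384, 1.0, clamshell=True))  # 3090, for comparison  -> 24 gb
```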
I don't think dropping to 256bit would give Nvidia the bandwidth they want, and going wider adds even more cost and complexity. The cynic in me says they wouldn't do that anyway, not when leather jacket man can see just how much more they can wring out of their legions by releasing a 12gb card instead (and jacking up the RRP in the process *cough*).