The architecture is normally designed to cope with multiple cards at multiple performance levels, with multiple variants of the GPU and different memory capacities. It's all a big trade-off. You can see that AMD targeted 16GB, for example, which is more than 10GB, but they've also probably over-provisioned their cards, certainly the lower-end models but probably the 6900XT as well: that thing will never use 16GB while still holding playable frame rates. They could have got away with 12GB or maybe even 10GB, but odds are their architecture would only allow 8GB as the next lowest vRAM config, which would not be enough for the 6800XT or 6900XT for sure. So you're paying more for memory you can't use. Does that mean it's an idiotic design decision? No, it's a trade-off of the architecture, and these kinds of trade-offs are made with every architecture in both camps.
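To make the "only 8GB or 16GB" point concrete, here's a rough sketch (my illustration, not from the post): the memory bus is split into 32-bit channels, each channel feeds one GDDR6 chip, and GDDR6 chips come in 1GB or 2GB densities. So a 256-bit card like the 6800XT/6900XT naturally lands on 8GB or 16GB, while a 10GB or 12GB option would mean cutting the bus itself down.

```python
# Sketch, assuming one GDDR6 chip per 32-bit channel and 1 GB / 2 GB chip densities.
def vram_options(bus_width_bits: int, chip_densities_gb=(1, 2)) -> list[int]:
    """Return the VRAM capacities (GB) reachable for a given memory bus width."""
    channels = bus_width_bits // 32  # each 32-bit channel hosts one chip
    return sorted(channels * d for d in chip_densities_gb)

if __name__ == "__main__":
    for bus in (320, 256, 192, 128):
        print(f"{bus}-bit bus -> {vram_options(bus)} GB")
    # 320-bit -> [10, 20] GB (a 3080-class bus)
    # 256-bit -> [8, 16] GB (a 6800XT/6900XT-class bus)
```

So the step from 8GB to 16GB isn't a marketing choice in isolation; it falls out of the bus width the silicon was designed around.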
Games like CoD Cold War have high-resolution texture packs which, when installed, push the install to 130GB on disk, and it still runs in 10GB of vRAM just fine even with ray tracing and all the effects on Ultra. In fact it's a good example of a modern game that is GPU/compute bottlenecked: you can't run that game maxed out at 4K on a 6800XT, for example, you get about 20fps, and even on the 3080 with its better RT performance you still only get about 40fps. Yet we're not exceeding 10GB of vRAM. This follows the trend in most of the very newest AAA titles: the GPU bottlenecks before the vRAM does.