Irrelevant to my post. I never said anything about console cost.
It's very relevant if you want to compare hardware, because it has a big impact on what you're saying. There are clear benefits and costs involved in that deal for cheap mid-tier hardware, and you can't just accept the benefits and pretend the costs associated with them don't exist or aren't relevant.
Talks about console prices and costs, then speculates that MS and Sony don't know how to build consoles and decided to pay extra to put too much VRAM in them. Okay.
Maybe you should apply to work there, so you can show them how it's done, Mrs Remotely Technical.
I never said they don't know how to build consoles. Sony and MS face the same constraints Nvidia and AMD do when deciding things like memory size. You can't just arbitrarily pick any number you like; your architecture choices limit you to specific multiples, and sometimes the next multiple down isn't enough while the one above it is too much. Just like the 16GB AMD cards, which get nowhere near full usage and in modern games run out of GPU grunt long before the memory comes close to being full, and older examples like the 1080Ti with 11GB, which for most of its lifespan never used more than about 7-8GB.
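To put rough numbers on the "multiples" point, here's a quick sketch. The chip densities and bus widths below are just the typical GDDR6/GDDR6X options, not anything vendor-specific: each 32-bit channel on the memory bus gets its own chip, so total capacity is locked to the number of channels times whatever chip sizes actually exist.

```python
# Illustrative sketch (typical GDDR6/GDDR6X configurations, not vendor specs):
# a GPU memory bus is built from 32-bit channels, one chip per channel, so total
# capacity is forced to (bus_width / 32) * chip_capacity for the chip sizes that exist.

def possible_vram_sizes(bus_width_bits, chip_capacities_gb=(1, 2)):
    """Return the VRAM sizes (GB) a given bus width allows with common chip densities."""
    chips = bus_width_bits // 32          # one 32-bit chip per channel
    return [chips * cap for cap in chip_capacities_gb]

for bus in (192, 256, 320, 384):
    print(f"{bus}-bit bus -> {possible_vram_sizes(bus)} GB")
# 320-bit bus -> [10, 20] GB: nothing in between, which is effectively the 3080's situation
```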
Again, if you look at WD:L on the consoles, they don't use the high-res texture pack the PC does, yet with that pack in use and absolutely maxed-out settings you can squeeze WD:L inside 10GB on the 3080. So memory isn't the issue on the consoles; it's other factors that mean they can't realistically use 10GB of vRAM.
Also, people are still bringing up the 6GB of "slow" memory like it's literally unusable for graphics. Yeah, there's no way GDDR6 running at 336GB/s could possibly be used for storing graphics assets.
Oh wait, what's that? The 6700XT runs at 384GB/s. Literally unusable; what was AMD thinking?
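For anyone who wants to sanity-check those bandwidth figures, the arithmetic is just bus width times per-pin data rate. The data rates and effective widths below are the commonly published ones, so treat them as assumptions rather than gospel:

```python
# Rough bandwidth arithmetic behind the figures quoted above.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s = (bus width in bytes) * per-pin data rate in Gbps."""
    return (bus_width_bits / 8) * data_rate_gbps

print(bandwidth_gbs(192, 14))  # Series X "slow" 6GB pool: ~192 bits effective @ 14Gbps -> 336.0
print(bandwidth_gbs(192, 16))  # 6700XT: 192-bit bus @ 16Gbps GDDR6 -> 384.0
```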
AMD deliberately made a trade-off in their GPU design and created "Infinity Cache", which trades away transistors that could be used for doing calculations on the GPU itself and instead spends them on a much larger amount of local high-speed cache. The upshot is that more data is kept on die and the bandwidth requirement to the vRAM is lower. This allows them to target cheaper GDDR6, but at the expense of die space that could otherwise be spent on, say, RT cores or something else.
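A back-of-envelope way to see why the big cache relaxes the vRAM bandwidth requirement (the hit rate and bandwidth numbers here are purely illustrative assumptions, not AMD figures):

```python
# Sketch of how a large on-die cache raises the bandwidth "seen" by the shader cores.
# All numbers below are illustrative assumptions only.

def effective_bandwidth(cache_hit_rate, cache_bw_gbs, vram_bw_gbs):
    """Blend of on-die cache bandwidth and off-chip vRAM bandwidth, weighted by hit rate."""
    return cache_hit_rate * cache_bw_gbs + (1 - cache_hit_rate) * vram_bw_gbs

# e.g. assume a ~50% hit rate, ~1600GB/s of on-die cache, and 512GB/s of GDDR6:
print(effective_bandwidth(0.5, 1600, 512))  # -> 1056.0 GB/s effective
```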
The Xbox made that speed trade-off for probably the same reason: slower memory is cheaper memory, and if a large chunk of the pool is being used for more system-RAM-like purposes, that chunk doesn't need to be as fast. The only reason I mention this is that it re-confirms the calculations most of us had done about what percentage of the memory is realistically going to be used by which parts of the system, and how much you can realistically consider the equivalent of vRAM.
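For reference, this is the rough split most of us arrived at, using the publicly stated Series X memory layout. The OS reservation and game-data figures are estimates on my part, not measured values:

```python
# Rough Series X memory split. Pool sizes/speeds are the publicly stated ones;
# the OS reservation and game system-data estimate are assumptions.

total_gb       = 16.0
fast_pool_gb   = 10.0   # 560GB/s, GPU-optimal
slow_pool_gb   = 6.0    # 336GB/s, intended for OS and game system data
os_reserved_gb = 2.5    # commonly cited OS reservation (assumption)
game_data_gb   = slow_pool_gb - os_reserved_gb   # CPU-side game data in the slow pool

vram_equivalent_gb = fast_pool_gb   # what's realistically left acting like vRAM
print(f"~{vram_equivalent_gb}GB acts like vRAM, ~{game_data_gb}GB like system RAM")
```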
It really depends on a few more things and should be evaluated on a game-by-game (arguably even model-by-model) basis. You can't make any blanket statements about it. At least you didn't say 4K textures should only be used when gaming at 4K.
Sure, you can get a more detailed comparison going game by game, but all 3D rendering suffers from the same general limitation: there's no use increasing texture detail beyond what can actually be resolved at the resolution you're rendering at. At some point higher-res textures become pointless. Because the consoles can't really run next-gen games like WD:L at high resolutions, and are less and less likely to in future with AMD's FSR, they're going to be more limited in the texture detail they can benefit from, especially compared to something like native 4K on the PC.
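A crude way to illustrate the texel-vs-pixel point (ignoring mip selection, anisotropic filtering and so on, so very much a simplification):

```python
# If an object only covers a fraction of the screen, there's an upper bound on how
# many texels can ever be resolved for it at that output resolution.

def max_visible_texels(screen_w, screen_h, screen_coverage):
    """Pixels available to display an object covering `screen_coverage` of the frame."""
    return int(screen_w * screen_h * screen_coverage)

print(max_visible_texels(3840, 2160, 0.25))  # object filling 1/4 of a 4K frame: ~2.07M pixels
print(4096 * 4096)                           # a single 4K texture: ~16.8M texels
# At 1440p or lower the gap is even bigger, so ultra-res textures buy even less.
```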
You sprinkled in a bit of truth but interlaced it with falsehood. The hypothesis was that VRAM requirements for games would grow due to the consoles having more VRAM. It was mentioned that at some point the 3080 would not be able to run better textures than the consoles because it doesn't have the VRAM to store them.
We are a year in, with video games still catering to the previous generation of consoles, and the last few games seem to be growing in requirements. Unless you have proof that these new consoles have peaked and won't be getting any better.
Yes, it will grow (from the previous generation), but most people here who have discussed the consoles seem to have converged on the same expectation: that they won't have access to more than about 10GB of vRAM. And again, the MS architecture and the speeds of the memory fit nicely with this estimate.
Whether I'm right in my additional hypothesis that the consoles likely can't make good use of a full 10GB for vRAM purposes remains to be seen; ideally we'd need tools to measure memory usage on the consoles. But take a modern game targeted at the next-gen systems (one making use of RT features and pushing the best-in-class PC dGPUs to their limits), a game like WD:L, and so far I've been vindicated. A 3080 can run it more or less maxed out at 4K with the high-res texture pack. The consoles have massively cut-down visual effects (as outlined by DF), a lower screen resolution, and lower-quality textures, despite all of that memory. That's something we don't really have an explanation for if the consoles can handle it, and the assets already exist. What's the alternative theory that would explain this?