Of course this is simply not true. The up-front cost is a direct limitation on what people can afford to buy. If you subsidise hardware so the up-front cost is lower but spread over a longer period, that has implications for your ability to buy hardware. With consoles, for example, it locks you into a 6+ year cycle on the same hardware, during which time PC users may upgrade two or three times as new generations of hardware are released. So it matters where you are in the console life cycle: right at the start it's in your favour; four-plus years in, the story looks completely different. You know that, I know that and everyone here knows that, so denying it matters is just silly.
This is just you pivoting the argument. If I had been talking about price-to-performance, or if I had questioned why it was faster than the majority of gaming PCs, you would have a leg to stand on. But I was very clear about what I was referring to, and that was performance only.
The consoles at this moment in time are fast bits of kit. You were wrong when you said "it's no secret to anyone even remotely technical that console generations do not have fast APUs, they're sharing the same die for CPU and GPU functions and all in all they aren't that fast". If you have an actual argument for why they are not fast then lay it out; otherwise stop bringing up price.
You are limited in the degree to which you can engineer around it, because several important things exist in a tight relationship with one another. The amount of VRAM, the memory bus width, the memory speed/frequency and the total memory bandwidth are all related. You need enough memory bandwidth to feed the GPU so that it isn't a bottleneck, which means you need a certain memory bus width multiplied by a certain frequency, which in turn means you need multiple memory modules to write to in parallel, which limits you to certain memory configs.
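That relationship is easy to sketch with back-of-the-envelope arithmetic. The figures below are illustrative GDDR6 numbers of my choosing, not any specific console's spec; the 32-bit-per-module interface width is the standard GDDR6 arrangement.

```python
def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Total memory bandwidth: per-pin data rate times bus width,
    divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

def capacity_options_gb(bus_width_bits, module_sizes_gb=(1, 2)):
    """Each GDDR6 module exposes a 32-bit interface, so the bus width
    fixes the module count, which in turn fixes the capacities you can
    build without mixing module sizes."""
    modules = bus_width_bits // 32
    return [modules * size for size in module_sizes_gb]

# An illustrative 256-bit bus with 14 Gbps GDDR6:
print(bandwidth_gb_s(14, 256))   # 448.0 GB/s
print(capacity_options_gb(256))  # [8, 16] GB
```

Pick a bandwidth target and the bus width follows; pick a bus width and you've picked your module count, and with it the handful of capacities you can actually build.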
You know that 1GB modules exist, right? My assumption is that they are cheaper; they could have used them to balance cost against VRAM needs. As far as I am aware you can mix 1GB and 2GB modules. Are there penalties to doing this in a closed ecosystem? But that's beside the point: there are many options available to balance these requirements, so assuming there is simply too much VRAM in the consoles is stupid. How do you know it's not the other way around? For all we know the guys over at Sony and MS wanted more VRAM but found it was too costly, or that it would give them excess bandwidth. That could equally be true.
Let's be honest: if there is "too much" VRAM, it isn't by a significant amount. Take your 10GB figure; it's probably 2GB tops if I was being generous. And as has been mentioned and ignored, game devs can always cram in better textures and models for a cheap performance penalty relative to everything else, but let's ignore that final bit.
The fact that you think you can just engineer this problem away, or that the consoles could (when they rely on basically the same architecture, which is fundamentally PC GPU orientated), shows you lack understanding of what is happening here. There are technical reasons for the limitations, which come from the constraints of memory manufacturers and the size, speed and bus width of the modules they produce.
I just didn't see the need to fully explain my line of thinking, because I prefer to keep my posts lean (you should try it). When I talk about engineering around the problem I am talking about more than just finding a way around bandwidth bottlenecks. There are ways to balance an engineering problem; it's not just VRAM and bandwidth forming this equation. There are variables on the other side, like the GPU cores themselves, that they can balance to reduce as much waste as possible, and I will confidently say that that is what both Sony and MS have done, within the scope of their design brief. This is custom silicon, not an off-the-shelf part.
No one has escaped this problem. Nvidia both uncomfortably undershot and overshot with the 3GB/6GB versions of the 780, and overshot with 11GB on the 1080 Ti; AMD likely overshot with 16GB across the 6000 range. It happens all the time. Sorry to burst the bubble of the console fanboys, but the GPUs aren't all that. Direct comparisons have been done to PC hardware and we know that hardware cannot make use of 10GB of VRAM; we know from benchmarks that any time you try to load up an older GPU with 10GB you get completely unplayable frame rates. There's a maximum amount of VRAM that any specific GPU can realistically make use of.
There are no console fanboys here *******. Oh look, stating your opinion as fact again, while ignoring anything that runs contrary to it (read above). Standard.
You didn't fix anything. What I said is true. The AMD cards have lower memory bandwidth in large part because they use VRAM which is clocked slower, and memory bandwidth is the frequency of the memory multiplied by the bus width. That was a trade-off they could make because they need less memory bandwidth thanks to Infinity Cache. I notice that you never actually dispute any of these claims, because they're fundamentally true. Anyone who knows anything about this architecture knows what I've said is trivially true, and that it's not inherently good or bad but a trade-off of negative and positive outcomes. On the one hand they spend more die space on cache, which means less die space for processing. On the other hand, being able to use GDDR6 is an advantage because it's cheaper and less power hungry. But with regards to power, so what? First, it doesn't negate what I said, and second, the fact that GDDR6X uses more power is not a problem for Nvidia. So your point is what?
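To put rough numbers on that trade-off, here is a quick sketch. The configs are assumptions of mine, chosen to resemble that generation's retail parts (a 320-bit bus with 19 Gbps GDDR6X versus a 256-bit bus with 16 Gbps GDDR6 backed by a large on-die cache); treat the exact figures as illustrative.

```python
def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    # Bandwidth = per-pin data rate * bus width / 8 bits per byte
    return data_rate_gbps * bus_width_bits / 8

# Assumed configs: a GDDR6X card on a 320-bit bus at 19 Gbps,
# versus a GDDR6 card on a 256-bit bus at 16 Gbps plus a big cache.
gddr6x_card = bandwidth_gb_s(19, 320)  # 760.0 GB/s
gddr6_card = bandwidth_gb_s(16, 256)   # 512.0 GB/s

# The raw-bandwidth gap the on-die cache has to paper over:
print(gddr6x_card - gddr6_card)        # 248.0 GB/s
```

Under these assumed specs the slower-clocked, narrower-bus card gives up roughly a third of the raw bandwidth, which is exactly the deficit the cache hit rate has to recover.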
It reduces power draw and heat output by not needing highly clocked memory, as well as reducing the amount of data that needs to be shuffled across the memory interface. Princess Fiona over here declares it a waste of die space. If you had any integrity, or an actual appreciation of engineering, you wouldn't refer to it as a waste of die space. Don't worry, you'll sing a different tune as soon as Nvidia announces they're doing the same thing.
They can use it, but it incurs performance penalties: if you try to use it for real-time rendering and it turns out to be too slow, you run into situations where the GPU/APU has to sit partially idle waiting for the memory to serve the data it needs.
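A toy model of that stall: the frame can't finish faster than the memory can stream the working set, so frame time is the longer of compute time and transfer time. The function name and all the numbers below are mine, purely for illustration.

```python
def frame_time_ms(compute_ms, working_set_gb, bandwidth_gb_s):
    # A frame takes whichever is longer: the GPU's compute work,
    # or the time for memory to stream the data the frame touches.
    transfer_ms = working_set_gb / bandwidth_gb_s * 1000
    return max(compute_ms, transfer_ms)

# Fast pool: 8 GB touched per frame at 560 GB/s -> ~14.3 ms of
# transfer, so a 16 ms frame stays compute-bound.
print(frame_time_ms(16.0, 8, 560))  # 16.0
# Slow pool: the same workload at 336 GB/s -> ~23.8 ms, and the
# GPU now idles waiting on memory.
print(frame_time_ms(16.0, 8, 336))  # ~23.8
```

Same GPU, same workload; once transfer time overtakes compute time, every extra millisecond of waiting comes straight off your frame rate.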
Point me to the exact part where I said the contrary, or are you just writing for the sake of it?
If there were no downside to using slower memory, no one would ever use fast (expensive) memory. They likely never will actually use all of that, because first of all the OS and all the associated console processes have to run inside that 6GB, including the core game engine code (all the compiled engine DLLs), so anyone making any kind of complex AAA game is almost certainly going to fill most of that slower 6GB with the OS and engine code alone.
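For context on where that fast/slow split comes from: going by the publicly quoted Series X memory layout (ten 32-bit GDDR6 modules at 14 Gbps, six 2GB and four 1GB), the arithmetic below is my own sketch of how the two pools fall out.

```python
def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    # Bandwidth = per-pin data rate * bus width / 8 bits per byte
    return data_rate_gbps * bus_width_bits / 8

# All ten 32-bit modules back the first 10GB -> full 320-bit access.
fast_region = bandwidth_gb_s(14, 10 * 32)  # 560.0 GB/s over 10GB
# Only the six 2GB modules hold the remaining 6GB -> 192-bit access.
slow_region = bandwidth_gb_s(14, 6 * 32)   # 336.0 GB/s over 6GB

print(fast_region, slow_region)  # 560.0 336.0
```

Mixing module sizes is what buys 16GB on a 320-bit bus, and the slower 6GB region is the price of that mix, which is why it makes sense to park the OS and engine code there.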
Citation needed on the bolded section. Give me actual stats from an XSX game.
This is why all this "mask slipping" snideness is just not warranted.
Is this you?
Spot on, it's no secret to anyone even remotely technical that console generations do not have fast APUs, they're sharing the same die for CPU and GPU functions and all in all they aren't that fast.
Maybe you should check your behaviour first. Elsa.
I've always acknowledged the upsides and downsides of various decisions, and pointed out where people appeal to only one of those things to give a biased view of the situation.
Was this before or after calling infinity cache a waste of space? Just for reference.
There is no best way to do something; in engineering there are only trade-offs. You make decisions that improve the end result in some way, but that comes at some cost.
Preaching to the choir, I see. And you're not even completely right, but I'll give it to you.
You can post videos laughing all you like, but that's just a pivot away from the details of the discussion.
I post laughing images and memes as a reminder not to take this **** seriously, and because it's quick and easy. If I'm posting memes then you should re-evaluate what you wrote; for example, using WD:L as the benchmark for what the consoles can achieve.
I won't even waste my time entertaining such a ridiculous assumption. Not when games like the Demon's Souls remake and Ratchet and Clank exist, and not when we are A YEAR into the console life cycle. If you were as smart as you try to appear, you would shut the **** up and watch it play out, not assert your hastily cobbled-together opinion as fact.
I've noticed you do this. You make offhand comments that don't actually explicitly deny what the other person is saying; you just pivot to something else irrelevant, like trying to move to a discussion of power usage.
The projection of that statement, though. Let's discuss it.
The reason for your walls of text is that you use them as a way to slowly move the goalposts. It makes it hard to spot because people get lost in the waffle. If people follow the chain of posts, you often end up arguing either something you didn't start with (console price) or a vague statement (like your final statement about textures). It is a great technique for getting someone to argue something they were not originally arguing, but I simply do not take the bait, which is why I will ignore vast swathes of your post. Moving a conversation along has its uses, but you just take the ****. You also seem to think too highly of yourself and feel the need to "teach people", to the point of making assumptions and viewing people as below you. Like trying to tell a mechanical engineer that engineering is about trade-offs.
I truly wanted to thank you for wasting my time.
@PrincessFrosty.