
10GB VRAM enough for the 3080? Discuss...

Status
Not open for further replies.
Man of Honour
Joined
13 Oct 2006
Posts
90,818
Metro Exodus is a better example than most games of what ray tracing can bring to the equation, if people can look past the more immediate limitations, but it isn't close to an offline renderer. Even though Metro is better optimised and more feature-complete than Quake 2's path tracer, Q2 RTX is still the closest thing to an offline renderer in a game right now in terms of what the actual renderer can do with the right environment and assets.
 
Associate
Joined
1 Oct 2009
Posts
1,033
Location
Norwich, UK
The cost of the consoles is irrelevant to whether or not they are fast and whether or not they are faster than a majority of gaming PCs.

Of course this is simply not true. The up-front cost is a direct limitation on what people can afford to buy. If you subsidize hardware so the up-front cost is lower but spread over a longer period of time, that has implications for your ability to buy hardware. For example, a console locks you into a 6+ year cycle on the same hardware, during which time PC users may upgrade two or three times as new generations are released. That means it matters where in the console life cycle you are: if you're right at the start it's in your favour; if we're 4+ years in, the story looks completely different. You know that, I know that and everyone here knows that, so denying it matters is just silly.

What you did say is that they put too much VRAM. There is no way that Sony and MS are going to arbitrarily put too much VRAM on their consoles, which by your admission are price-sensitive. They would engineer a way around it. Oh wait, they did: that fancy SSD on the PS5 is looking mighty fine. But that's a different conversation.

You are limited in the degree to which you can engineer around it, because several important things exist in a tight relationship with one another. The amount of VRAM, the memory bus width, the memory speed/frequency and the total memory bandwidth are all related. You need enough memory bandwidth to feed the GPU so that it isn't bottlenecked, which means you need a certain memory bus width multiplied by a certain frequency, which in turn means you need multiple memory modules to read and write in parallel, which limits you to certain memory configurations.

You can tweak memory size all you like, but if that leaves you with low memory bandwidth then your GPU will crawl to a halt because it can't be fed with data fast enough. The fact that you think you can just engineer this problem away, or that the consoles could (when they actually rely on basically the same architecture, which is fundamentally PC-GPU orientated), shows you lack an understanding of what is happening here. There are technical reasons for the limitations, which come from the constraints of memory manufacturers and the size, speed and bus width of the modules they produce.
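To put rough numbers on that relationship, here's a simplified sketch (illustrative figures only, not any vendor's spec sheet) of how the module count ties bus width, capacity and bandwidth together:

```python
def gpu_memory_config(num_modules, module_gb, data_rate_gbps):
    """Return (capacity GB, bus width bits, bandwidth GB/s) for a simple layout."""
    bus_width_bits = num_modules * 32              # one 32-bit channel per GDDR6-class module
    capacity_gb = num_modules * module_gb          # capacity moves in module-sized steps
    bandwidth_gbs = bus_width_bits * data_rate_gbps / 8   # Gbps per pin -> GB/s
    return capacity_gb, bus_width_bits, bandwidth_gbs

# Ten 1GB modules at 19 Gbps (roughly a 3080-like layout):
print(gpu_memory_config(10, 1, 19))    # (10, 320, 760.0)
# Drop two modules to save cost and the bandwidth falls with them:
print(gpu_memory_config(8, 1, 19))     # (8, 256, 608.0)
```

You can't pick capacity, bus width and bandwidth independently; change one and the others move with it.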

No one has escaped this problem: Nvidia both uncomfortably undershot and overshot with the 3GB/6GB versions of the 780, overshot with 11GB on the 1080 Ti, and AMD likely overshot with 16GB across the 6000 range; it happens all the time. Sorry to burst the bubble of the console fanboys, but the GPUs aren't all that. Direct comparisons have been done against PC hardware and we know that hardware of that class cannot make use of 10GB of VRAM; we know from benchmarks that any time you try to load up an older GPU with 10GB of data you get completely unplayable frame rates. There's a maximum amount of VRAM that any specific GPU can realistically make use of.

I think they know what they are doing.

And I never said they didn't; I said the problem is inherent to the technology, and that the result you get is typically a trade-off between the different options you have at the time.

FTFY

P.S. you've let the mask slip. Twice now. ;)

You didn't fix anything. What I said is true. The AMD cards have lower memory bandwidth in large part because they use VRAM that is clocked slower, and memory bandwidth is the frequency of the memory multiplied by the bus width. That was a trade-off they could make because they need less memory bandwidth thanks to Infinity Cache. I notice that you never actually dispute any of these claims, because they're fundamentally true. Anyone who knows anything about this architecture knows that what I've said is trivially true, and that it's not inherently good or bad but a trade-off of negative and positive outcomes. On the one hand they waste more die space on cache, which means less die space for processing. On the other hand, being able to use GDDR6 is an advantage because it's cheaper and less power hungry. But with regards to power, so what? First, it doesn't negate what I said, and second, the fact that GDDR6X uses more power is not a problem for Nvidia. So your point is what?
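For a concrete example of that trade-off, using the publicly quoted specs for the two cards (the arithmetic is just the same bandwidth formula as above):

```python
# Bandwidth = per-pin data rate * bus width / 8, using the publicly quoted specs.
# AMD accepts the lower raw figure because the 128MB Infinity Cache absorbs a
# chunk of the traffic that would otherwise have to hit VRAM.
cards = {
    "RTX 3080 (GDDR6X)":  {"data_rate_gbps": 19, "bus_width_bits": 320},
    "RX 6800 XT (GDDR6)": {"data_rate_gbps": 16, "bus_width_bits": 256},
}
for name, c in cards.items():
    bandwidth = c["data_rate_gbps"] * c["bus_width_bits"] / 8
    print(f"{name}: {bandwidth:.0f} GB/s raw VRAM bandwidth")   # 760 vs 512 GB/s
```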

Maybe you didn't notice, but I was mocking the silly notion that game developers can't use the slower memory for graphics data. Game developers can use it, and if they need to or want to, they damn well will use it. It doesn't matter what our remotely technical forum experts think.

They can use it, but it incurs performance penalties: if you try to use it for real-time rendering and it turns out to be too slow, you run into situations where the GPU/APU has to sit partially idle waiting for the memory to serve the data it needs. If there was no downside to using slower memory, no one would ever use fast (expensive) memory. They likely never will actually use that slower pool for graphics, because first of all the OS and all the other associated console processes have to run inside that 6GB, including the game's core engine code (all the compiled engine DLLs), so anyone making any kind of complex AAA game is almost certainly going to fill most of that slower 6GB with the OS and engine. That's part of the design of the architecture; MS knew this, which is why they did it: they saved money on slower, cheaper memory in a way that has no real impact on performance. That's analogous to PC gamers using system RAM, which is fast enough to supply the CPU but not crazy fast like the GDDR6(X) you'd find on a GPU.
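To put the slower pool into perspective, here's a quick sketch using Microsoft's published Series X bandwidth figures; the per-frame traffic number is a made-up illustration, not a measurement from any game:

```python
# Microsoft's published Series X figures: 10GB "GPU optimal" at 560 GB/s and
# 6GB "standard" at 336 GB/s. The per-frame traffic below is a made-up number
# purely to illustrate why render data wants to live in the fast pool.
FAST_POOL_GBS = 560.0
SLOW_POOL_GBS = 336.0
FRAME_BUDGET_MS = 1000 / 60          # 60 fps target

def ms_to_move(gigabytes, bandwidth_gbs):
    return gigabytes / bandwidth_gbs * 1000

traffic_gb = 4.0                     # hypothetical per-frame memory traffic
for label, bw in (("fast pool", FAST_POOL_GBS), ("slow pool", SLOW_POOL_GBS)):
    cost = ms_to_move(traffic_gb, bw)
    print(f"{label}: {cost:.1f} ms of a {FRAME_BUDGET_MS:.1f} ms frame "
          f"({cost / FRAME_BUDGET_MS:.0%})")
# fast pool: ~7.1 ms (43%), slow pool: ~11.9 ms (71%) - the difference is time
# the GPU spends waiting on memory rather than doing work.
```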

You can post videos laughing all you like, but that's just a pivot away from the details of the discussion. I've noticed you do this. You make some offhand comments that don't actually explicitly deny what the other person is saying; you just pivot to something else irrelevant, like trying to move to a discussion of power usage.

This is a good contemporary, simplified article showing how memory modules are used in parallel to get a memory bus wide enough to deliver the memory bandwidth you need, and why this relationship exists in the architecture. The more educated people are on this, the less anyone can get away with claiming you can just engineer around it: https://cyberindeed.com/how-gpu-bus-affects-video-memory-2020/

This is why all this "mask slipping" snideness is just not warranted. There is no best way to do something; in engineering there are only trade-offs. You make decisions that improve the end result in some way, but that comes at some cost. I've always acknowledged the upsides and downsides of the various decisions and pointed out where people are appealing to only one side of them to give a biased view of the situation.
 
Soldato
Joined
12 May 2014
Posts
5,225
Of course this is simply not true. The up-front cost is a direct limitation on what people can afford to buy. If you subsidize hardware so the up-front cost is lower but spread over a longer period of time, that has implications for your ability to buy hardware. For example, a console locks you into a 6+ year cycle on the same hardware, during which time PC users may upgrade two or three times as new generations are released. That means it matters where in the console life cycle you are: if you're right at the start it's in your favour; if we're 4+ years in, the story looks completely different. You know that, I know that and everyone here knows that, so denying it matters is just silly.
This is just you pivoting the argument. If I were talking about price to performance, or if I had questioned why it was "faster than a majority of gaming PCs", you would have a leg to stand on. But I was very clear about what I was referring to, and that was performance only.

The consoles at this moment in time are fast bits of kit. You were wrong when you said "it's no secret to anyone even remotely technical that console generations do not have fast APUs, they're sharing the same die for CPU and GPU functions and all in all they aren't that fast". If you have an actual argument for why they are not fast, then lay it out; otherwise stop bringing up price.

You are limited in the degree to which you can engineer around it, because several important things exist in a tight relationship with one another. The amount of VRAM, the memory bus width, the memory speed/frequency and the total memory bandwidth are all related. You need enough memory bandwidth to feed the GPU so that it isn't bottlenecked, which means you need a certain memory bus width multiplied by a certain frequency, which in turn means you need multiple memory modules to read and write in parallel, which limits you to certain memory configurations.
You know that 1GB modules exist, right? My assumption is that they are cheaper. They could have used those to balance cost against the VRAM they needed. As far as I am aware you can mix 1GB and 2GB modules; are there penalties to doing this on a closed ecosystem? But that's not important: there are many options available to balance these requirements, so assuming that there is simply too much VRAM in the consoles is stupid. How do you know it's not the other way around? For all we know, the guys over at Sony and MS wanted more VRAM but found it was too costly, or that it would give them excessive bandwidth; that could equally be true.
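For what it's worth, the Series X already does exactly this: six 2GB chips plus four 1GB chips on a 320-bit bus. A rough sketch of what mixing module sizes buys you, and the split-pool consequence that comes with it (same published figures as the earlier sketch):

```python
# The Series X layout: six 2GB + four 1GB GDDR6 chips on a 320-bit bus.
# Only the first 1GB of every chip can be striped across the full bus; the
# extra capacity on the 2GB chips forms a narrower (and so slower) region.

def mixed_config(chips_2gb, chips_1gb, data_rate_gbps=14):
    chips = chips_2gb + chips_1gb
    capacity_gb = chips_2gb * 2 + chips_1gb * 1
    fast_gb = chips * 1                                   # 1GB per chip, striped across all chips
    slow_gb = capacity_gb - fast_gb                       # remainder sits only on the 2GB chips
    fast_bw = chips * 32 * data_rate_gbps / 8
    slow_bw = chips_2gb * 32 * data_rate_gbps / 8
    return capacity_gb, (fast_gb, fast_bw), (slow_gb, slow_bw)

print(mixed_config(6, 4))   # (16, (10, 560.0), (6, 336.0))  <- Series X
print(mixed_config(0, 10))  # (10, (10, 560.0), (0, 0.0))    <- uniform 1GB chips
```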

Let's be honest, if there is "too much" VRAM it isn't by a significant amount. Take your 10GB figure: it's probably 2GB too much at the most, and that's me being generous. But as has been mentioned and ignored, game devs can always cram in better textures and models for a cheap performance penalty relative to everything else, but let's ignore that final bit.

The fact that you think you can just engineer this problem away, or that the consoles could (when they actually rely on basically the same architecture, which is fundamentally PC-GPU orientated), shows you lack an understanding of what is happening here. There are technical reasons for the limitations, which come from the constraints of memory manufacturers and the size, speed and bus width of the modules they produce.
I just didn't see the need to fully explain my line of thinking, because I prefer to keep my posts lean (you should try it). When I talk about engineering around the problem, I'm talking about more than just finding a way around bandwidth bottlenecks. There are ways to balance an engineering problem; it's not just VRAM and bandwidth forming this equation. There are variables on the other side, like the GPU cores themselves, that they can balance to reduce as much waste as possible, and I will confidently say that is what both Sony and MS have done, within the scope of their design brief. This is custom silicon, not an off-the-shelf part.


No one has escaped this problem: Nvidia both uncomfortably undershot and overshot with the 3GB/6GB versions of the 780, overshot with 11GB on the 1080 Ti, and AMD likely overshot with 16GB across the 6000 range; it happens all the time. Sorry to burst the bubble of the console fanboys, but the GPUs aren't all that. Direct comparisons have been done against PC hardware and we know that hardware of that class cannot make use of 10GB of VRAM; we know from benchmarks that any time you try to load up an older GPU with 10GB of data you get completely unplayable frame rates. There's a maximum amount of VRAM that any specific GPU can realistically make use of.
There are no console fanboys here, *******. Oh look, stating your opinion as fact again, while ignoring anything that runs contrary to it (read above). Standard.

You didn't fix anything. What I said is true. The AMD cards have lower memory bandwidth in large part because they use VRAM that is clocked slower, and memory bandwidth is the frequency of the memory multiplied by the bus width. That was a trade-off they could make because they need less memory bandwidth thanks to Infinity Cache. I notice that you never actually dispute any of these claims, because they're fundamentally true. Anyone who knows anything about this architecture knows that what I've said is trivially true, and that it's not inherently good or bad but a trade-off of negative and positive outcomes. On the one hand they waste more die space on cache, which means less die space for processing. On the other hand, being able to use GDDR6 is an advantage because it's cheaper and less power hungry. But with regards to power, so what? First, it doesn't negate what I said, and second, the fact that GDDR6X uses more power is not a problem for Nvidia. So your point is what?
It reduces power and heat output by not needing highly clocked memory, as well as reducing the amount of data that needs to be shuffled across the memory interface. Princess Fiona over here declares it a waste of die space. If you had any integrity or an actual appreciation of engineering you wouldn't refer to it as a waste of die space. Don't worry, you'll sing a different tune as soon as Nvidia announces they're doing the same thing.

They can use it, but it incurs performance penalties: if you try to use it for real-time rendering and it turns out to be too slow, you run into situations where the GPU/APU has to sit partially idle waiting for the memory to serve the data it needs.
Point me to the exact part where I said otherwise, or are you just writing for the sake of it?

If there was no downside to using slower memory, no one would ever use fast (expensive) memory. They likely never will actually use that slower pool for graphics, because first of all the OS and all the other associated console processes have to run inside that 6GB, including the game's core engine code (all the compiled engine DLLs), so anyone making any kind of complex AAA game is almost certainly going to fill most of that slower 6GB with the OS and engine.
Citation needed on the bolded section. Give me actual stats from an XSX game.

This is why all this "mask slipping" snideness is just not warranted.
Is this you?
Spot on, it's no secret to anyone even remotely technical that console generations do not have fast APUs, they're sharing the same die for CPU and GPU functions and all in all they aren't that fast.
Maybe you should check your behaviour first. Elsa.

I've always acknowledged the upsides and downsides of the various decisions and pointed out where people are appealing to only one side of them to give a biased view of the situation.
Was this before or after calling Infinity Cache a waste of space? Just for reference.

There is no best way to do something; in engineering there are only trade-offs. You make decisions that improve the end result in some way, but that comes at some cost.
Preaching to the choir, I see. And you're not even completely right, but I'll give it to you.


You can post videos laughing all you like, but that's just a pivot away from the details of the discussion.
I post laughing images and memes as a reminder not to take this **** seriously, and because it is quick and easy. If I'm posting memes then you should re-evaluate what you wrote. For example, using WD:L as the benchmark for what the consoles can achieve.
I won't even waste my time entertaining such a ridiculous assumption. Not when games like the Demon's Souls remake and Ratchet and Clank exist, and not when we are A YEAR into the console life cycle. If you were as smart as you try to appear you would shut the **** up and watch it play out, not assert your hastily cobbled-together opinion as fact.

I've noticed you do this. You make some offhand comments that don't actually explicitly deny what the other person is saying; you just pivot to something else irrelevant, like trying to move to a discussion of power usage.
The projection in that statement, though. Let's discuss it.


The reason for your walls of text is that you use them as a way to slowly move the goalposts. It's hard to spot because people get lost in the waffle. If you follow the chain of posts, you often end up arguing either something you didn't start with (console price) or a vague statement (like your final statement about textures). It's a great technique for getting someone to argue something they weren't originally arguing, but I simply don't take the bait, which is why I ignore vast swathes of your posts. Moving a conversation along has its uses, but you just take the ****. You also seem to think too highly of yourself and feel the need to "teach people", to the point of making assumptions and looking down on people. Like trying to tell a mechanical engineer that engineering is about trade-offs.


I truly wanted to thank you for wasting my time, @PrincessFrosty.
 
Associate
Joined
24 Jun 2021
Posts
216
Location
U.K.
Ray Tracing cannot compete with Path Tracing

I'm going to stay with a low-spec 6700 XT until they release hardware that can output path tracing in real time; even 35+ fps would be acceptable at 1080p.
I think AMD/NVIDIA already have the hardware, it's just too expensive to pass on to us; let's see within the next 2 or 3 years.

Path tracing is just a way of using ray tracing.

Quake 2 RTX's

Yep, spot on. We used to use low-detail geometry and textures back in 1998 in Direct3D and OpenGL; I used to play Q2 quite a lot back in the day.

I would suggest Metro Exodus.

It looks pretty with the RT GI, but we programmers can make GI look very good using just the plain old raster engine.

You still need more rays per pixel to compare it to the photo-like quality of 3ds Max, and that means beefy 3D hardware for real time.
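As a rough illustration of why rays per pixel matter so much (a toy Monte Carlo sketch, nothing to do with any particular engine): a path tracer averages random light samples per pixel, so noise only falls with the square root of the sample count.

```python
# Toy Monte Carlo sketch: a path tracer averages random light samples per pixel,
# so the noise (standard error) falls roughly as 1/sqrt(samples per pixel).
# Quadrupling the rays only halves the noise, hence the demand for beefy hardware.
import random
import statistics

def pixel_estimate(spp, true_value=0.5, variance=0.25):
    """Average `spp` noisy samples around a notional true pixel value."""
    return sum(random.gauss(true_value, variance ** 0.5) for _ in range(spp)) / spp

if __name__ == "__main__":
    random.seed(0)
    for spp in (1, 4, 16, 64, 256):
        # Spread of the estimate over many repeated renders of the same pixel.
        spread = statistics.stdev(pixel_estimate(spp) for _ in range(2000))
        print(f"{spp:>3} spp -> noise ~ {spread:.3f}")
```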
 
Soldato
Joined
12 May 2014
Posts
5,225
Please fully explain your line of thinking with this, it would help me understand your points (No sarcasm intended)
A leaner post is easier to read and digest. My main goal is to get my point across in a succinct manner, and to waste as little of my time and yours as possible.

If there is anything that needs to be fully elaborated on, I expect people to ask for further clarification (as per your post), and just for the record there is nothing wrong with that.

I acknowledge the limitations of this posting method. Posters will not always ask for further information, and I do at times snip a post at the wrong point, which doesn't help, but I think the trade-off is worth it.

Edit: for the record, I have nothing against long posts. They have a time and a place.
 
Associate
Joined
1 Oct 2020
Posts
1,145
Cool. Just seems a bit odd saying that the cost increase of adding a 1GB module (If that does work in the architecture, I have no idea on this) would be minimal, when cards cost an insane amount at the moment. There are obviously a lot of factors which are driving that cost, but saying more should be added seems really strange.

Also, pricing-wise, £649 for a high-end GPU with 10GB (at MSRP - not realistic atm, but it was the design goal of the card) seems reasonable progress when compared to a 2080 Ti with 11GB of slower memory, which cost a damn sight more at launch (was it £1,100?)

Price-wise, expensive but progress. Would 1GB extra make that much of a difference? I've yet to see our 3080 struggle at 4K with anything mainstream except Cyberpunk, as far as I'm aware.
 
Caporegime
Joined
21 Jun 2006
Posts
38,372
Cool. Just seems a bit odd saying that the cost increase of adding a 1GB module (If that does work in the architecture, I have no idea on this) would be minimal, when cards cost an insane amount at the moment. There are obviously a lot of factors which are driving that cost, but saying more should be added seems really strange.

Also, pricing-wise, £649 for a high-end GPU with 10GB (at MSRP - not realistic atm, but it was the design goal of the card) seems reasonable progress when compared to a 2080 Ti with 11GB of slower memory, which cost a damn sight more at launch (was it £1,100?)

Price-wise, expensive but progress. Would 1GB extra make that much of a difference? I've yet to see our 3080 struggle at 4K with anything mainstream except Cyberpunk, as far as I'm aware.

It's limited by the memory bus.

They could have had 10GB or 20GB, nothing in between, and 20GB was far too much.
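That's the bus maths in a nutshell: ten 32-bit GDDR6X chips on a 320-bit bus, with 1GB or 2GB densities per chip (a quick sketch; I'm assuming Nvidia won't mix chip densities on a consumer card):

```python
# GA102's 320-bit bus means ten 32-bit GDDR6X chips. With only 1GB (8Gb) or
# 2GB (16Gb) densities to pick from - and assuming no mixing of densities on a
# consumer card - the capacity options really are just 10GB or 20GB.
BUS_WIDTH_BITS = 320
CHIPS = BUS_WIDTH_BITS // 32
for density_gb in (1, 2):
    print(f"{CHIPS} x {density_gb}GB chips = {CHIPS * density_gb}GB total")
```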
 
Soldato
Joined
12 May 2014
Posts
5,225
Cool. Just seems a bit odd saying that the cost increase of adding a 1GB module (If that does work in the architecture, I have no idea on this) would be minimal, when cards cost an insane amount at the moment. There are obviously a lot of factors which are driving that cost, but saying more should be added seems really strange.
The 1GB memory modules were in reference to the consoles only (I seem to have a vague recollection of consoles using mixed-size chips before). It was also about them swapping unneeded 2GB modules for 1GB modules, if MS and Sony felt they had too much VRAM, which would most likely reduce costs. It is based on the assumption that you can run mixed sizes.

As for GPUs, I don't know if they can run mixed sizes.
 
Associate
Joined
1 Oct 2020
Posts
1,145
I'd have thought that consoles would need to be over-specced in the first instance, as they are designed to last a lot longer than a GPU and need to deliver on the promise of later games for much longer. That would be a contributing factor to them being loss leaders, so whether they felt it was too much RAM at the moment is not the point, I'd guess. They need to look longer term.

A GPU is not designed to stay cutting edge for the same length of time. 20GB of RAM is overkill that would be more expensive than is currently needed. If it's a choice between 10GB and 20GB, from a purely bang-for-buck view, 10GB is about right.
 