10GB VRAM enough for the 3080? Discuss..

The difference in-game is not that great. You keep the textures you see the most at 4K and downsample the rest to 2K, or you can compress the textures. I believe you can use the tensor cores to compress the contents of VRAM, and I believe the tensor method is lossless.
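To put rough numbers behind that, here is a minimal back-of-the-envelope sketch of per-texture VRAM cost, using the standard figures of 4 bytes per texel for uncompressed RGBA8, 1 byte per texel for BC7, 0.5 bytes for BC1, and roughly a third extra for a full mip chain. The figures are illustrative, not taken from any particular game.

```python
# Rough per-texture VRAM footprint. Illustrative figures only:
# RGBA8 is 4 bytes/texel, BC7 is 1 byte/texel, BC1 is 0.5 bytes/texel,
# and a full mip chain adds roughly a third on top of the base level.

BYTES_PER_TEXEL = {"RGBA8 (uncompressed)": 4.0, "BC7": 1.0, "BC1": 0.5}

def texture_mib(side_px, bytes_per_texel, with_mips=True):
    size = side_px * side_px * bytes_per_texel
    if with_mips:
        size *= 4 / 3          # full mip chain ~= +33%
    return size / (1024 ** 2)  # bytes -> MiB

for side_px in (2048, 4096):
    for fmt, bpt in BYTES_PER_TEXEL.items():
        print(f"{side_px}x{side_px} {fmt}: {texture_mib(side_px, bpt):6.1f} MiB")
```

Going from 2K to 4K quadruples the footprint of every texture you upgrade, which is why keeping only the most visible materials at 4K and compressing everything else matters so much on a 10GB budget.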


The difference between 4K and 2K depends on the size (and, to an extent, the shape) of the object, the amount of screen real estate it occupies and the resolution it is being viewed at. It doesn't matter whether it is an animation, a video game or a still image. Edit: such a blanket statement is wrong. I'm also pretty certain that some games already use 4K textures for some models.

This is the part where someone chimes in that they don't walk up to walls to inspect the textures, they just play, blah blah blah.

Do you not interact with items in video games? Walk up to doors to open them, hide behind objects, get close to an enemy to melee them, etc.?
Or do you just sit afar and admire the scenery?

Regarding the second part of your post, are you implying that the devs for this game don't know about downsampling textures, using mipmaps or compressing textures? Or did you just feel the urge to explain that to me?
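For what it's worth, the screen-real-estate point can be put in rough numbers. A quick sketch, under the simplifying assumption of a roughly uniform texel-to-pixel mapping (no perspective distortion): once a texture supplies far more texels than the pixels the object covers, the renderer samples a lower mip and the extra resolution is never actually seen.

```python
# How many texels per screen pixel a texture supplies for an object
# covering a given fraction of a 3840x2160 display. Assumes a roughly
# uniform mapping, which is a big simplification. When the ratio is
# well above 1, the renderer drops to a lower mip level anyway.
import math

SCREEN_PIXELS = 3840 * 2160

def texels_per_pixel(texture_side, coverage_fraction):
    return texture_side ** 2 / (SCREEN_PIXELS * coverage_fraction)

for coverage in (0.05, 0.25, 1.0):   # 5% of the screen, a quarter, full screen
    for side in (2048, 4096):
        ratio = texels_per_pixel(side, coverage)
        mips_unused = max(0.0, math.log2(ratio) / 2)  # each mip quarters the texel count
        print(f"{side}x{side} texture at {coverage:4.0%} coverage: "
              f"{ratio:5.1f} texels/pixel (~{mips_unused:.1f} mip levels unused)")
```

In other words, 4K textures only pay off when a surface fills a big chunk of a high-resolution screen, which is exactly the walk-up-to-a-door, melee-range situation described above.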
 

The people who say there is no discernible difference are probably the same people who said there was no difference between 1K and 2K textures, or that 1080p and 4K videos look no different, etc. In the end, it is that person's perception, and it is usually wrong.

As resolutions and graphical fidelity in games increase, textures need to increase in resolution too. Those kinds of improvements in graphical quality are not always immediately noticeable, but they are there, and over time they become normalized and add up to the next level in image quality.

To pretend otherwise is pointless, because it's absolutely nothing new; the exact same thing has been happening to various degrees since 3D games were invented.
 

1080p and 4K video can look the same; there are factors like viewing distance and screen size. In a moving image it is hard for the brain to pick out fine detail, and movement causes ghosting or blurring on most modern flat screens, which is why in games it is hard to see the difference. If you are close to a 4K monitor and look at stills carefully, then you can see the difference. Also, AA like TAA can soften the image, removing the fine detail so it looks very much like 2K textures.
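The viewing-distance part is easy to sanity-check with the usual rule of thumb of roughly 1 arcminute of acuity (about 60 pixels per degree) for 20/20 vision; treat the exact figures as ballpark, since eyes and panel geometry vary.

```python
# Distance beyond which individual pixels on a 16:9 panel subtend less
# than ~1 arcminute and stop being resolvable to a 20/20 eye. Past that
# point extra resolution is invisible on acuity grounds alone (motion
# blur and TAA soften things further, as noted above).
import math

ARCMIN = math.radians(1 / 60)   # 1 arcminute in radians

def max_useful_distance_m(diagonal_inch, horizontal_pixels, aspect=(16, 9)):
    w, h = aspect
    width_m = diagonal_inch * w / math.hypot(w, h) * 0.0254
    pixel_pitch_m = width_m / horizontal_pixels
    return pixel_pitch_m / math.tan(ARCMIN)

for name, px in (("1080p", 1920), ("4K", 3840)):
    print(f'27" {name}: pixels resolvable out to ~{max_useful_distance_m(27, px):.2f} m')
```

On a 27-inch panel that comes out to roughly half a metre for 4K versus about a metre for 1080p, so sit much past a metre away and the two genuinely start to converge, which is roughly the point being made above.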
 

Are you trying to convince yourself or me? Because I am confident in my views here, having seen the progression of gaming from 2D to early 3D to what we have now, and from CRT TVs through to every resolution we have today. There is literally nothing you are writing that is new, or that doesn't sound like some anally retentive attempt to engage in circular arguments, and I really have more constructive uses for my forum time than to engage in that kind of thing. Let's just agree to disagree. :)
 

Make a real point and I will answer. Given you have a 1660 Super, you don't game at 4K.
 
I still don't fully understand why Nvidia would go with 10GB of expensive, faster GDDR6X if they were trying to maximise profits at the cost of VRAM. Surely they could have included 10+GB of slower GDDR6 at a similar price point.
 
Random 3080 Owner: I want more than 10GB of VRAM, as it's a problem for me in games. The latest Watch Dogs: Legion...
"He's not a real gamer, ignore him"
"Is his PC even legit?"
"It's a random though, take what he says with a grain of salt"

ROFL
 
Well, the only person calling him a random is you. And did you watch his previous video, or are you just being a tool?
 
I still don't fully understand why Nvidia would go with 10GB of expensive, faster GDDR6X if they were trying to maximise profits at the cost of VRAM. Surely they could have included 10+GB of slower GDDR6 at a similar price point.

Because to Nvidia this wasn't expensive; they will get an excellent price due to the volume they can buy. Check out the latest stories: they all point to stockpiling, as Jensen was countering any AMD trump card with his own power play. They have so much GDDR6X that they can release the Tis no problem.

With the speed of this RAM, Nvidia feel it can deliver enough bandwidth without having to step the capacity up. I think they are safer with the 10GB in the 3080 than they are with the 8GB of the 3070. The truth will out, I'm sure, within a couple of months, as to how much VRAM is really needed to run virtually all 4K games, leaving aside the obscure mod packs that go beyond the normal release cycle.
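For reference, the bandwidth side of that argument is simple arithmetic: bandwidth = (bus width in bits / 8) x per-pin data rate. The first two rows below are the published specs; the 320-bit GDDR6 rows are hypothetical, just to show what a non-X 3080 might have looked like.

```python
# Memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
# The two "hypothetical" rows are not real products, only comparisons.

def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

configs = [
    ("RTX 3080, 320-bit GDDR6X @ 19 Gbps", 320, 19),
    ("RTX 3070, 256-bit GDDR6 @ 14 Gbps", 256, 14),
    ("Hypothetical 320-bit GDDR6 @ 14 Gbps", 320, 14),
    ("Hypothetical 320-bit GDDR6 @ 16 Gbps", 320, 16),
]
for name, bus, rate in configs:
    print(f"{name}: {bandwidth_gb_s(bus, rate):.0f} GB/s")
```

That 760 GB/s versus 560-640 GB/s gap is presumably what Nvidia bought with GDDR6X; whether it matters more than a couple of extra gigabytes of capacity is the whole argument of this thread.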
 
I still don't fully understand why Nvidia would go with 10GB of expensive, faster GDDR6X if they were trying to maximise profits at the cost of VRAM. Surely they could have included 10+GB of slower GDDR6 at a similar price point.

Personally I think they knew AMD would be pushing them this generation and opted for GDDR6X to maintain some sort of performance advantage.

But I think it might have backfired now, for various reasons.
 
Because to Nvidia this wasn't expensive; they will get an excellent price due to the volume they can buy. Check out the latest stories: they all point to stockpiling, as Jensen was countering any AMD trump card with his own power play. They have so much GDDR6X that they can release the Tis no problem.

With the speed of this RAM, Nvidia feel it can deliver enough bandwidth without having to step the capacity up. I think they are safer with the 10GB in the 3080 than they are with the 8GB of the 3070. The truth will out, I'm sure, within a couple of months, as to how much VRAM is really needed to run virtually all 4K games, leaving aside the obscure mod packs that go beyond the normal release cycle.

Interesting points, makes sense. So do you believe the increased bandwidth of GDDR6X can compensate for the deficit in VRAM capacity?
 

Yes, as I think the engineers know far more about the product than I ever will. We also have enthusiasts on here who read into it and understand it better than I do, and they have evidence that the VRAM actually in use is far lower than the 'banked' (allocated) VRAM reported by the tools regular users run.
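On that allocation-versus-use distinction: the figure most monitoring tools show is the kind of counter NVML (the library behind nvidia-smi) exposes, and its "used" number counts memory that has been allocated, not memory the game actively touches every frame. A minimal sketch of polling it, assuming the nvidia-ml-py package is installed:

```python
# Poll total / used / free VRAM via NVML. "used" here means allocated
# (reserved) memory, not the working set the game touches each frame,
# which is exactly the distinction being made above.
# Requires the nvidia-ml-py package (import name: pynvml).

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)       # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"total:            {mem.total / 2**30:.1f} GiB")
print(f"used (allocated): {mem.used / 2**30:.1f} GiB")
print(f"free:             {mem.free / 2**30:.1f} GiB")
pynvml.nvmlShutdown()
```

So a game "using" 9GB in an overlay may well be comfortable in far less, which is the enthusiasts' point.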
 

True, I did read an Nvidia employee stating they were consulting with game developers over VRAM needs going forward. Hopefully the increased bandwidth will provide the required performance. Apparently Godfall at 4K Ultra is using 12GB of VRAM, so 3080 benchmarks in this game can hopefully provide answers, though I imagine it's 12GB allocated, not utilised.
 
Yes, but some game optimisation could stave off the issue; if Godfall worked with Nvidia, that figure would tumble down. The real question, before more people get hold of the 3080, is whether the 10GB of VRAM starves performance. It's unlikely, but we need confirmation, because if it's true they will (in technical eyes) never live it down.
 
I still don't fully understand why Nvidia would go with 10GB of expensive, faster GDDR6X if they were trying to maximise profits at the cost of VRAM. Surely they could have included 10+GB of slower GDDR6 at a similar price point.

This is a weird pervasive myth people spread because they're all hopped up on "ngreedia" being horrible capitalists...or something.

They don't pocket any savings by putting less VRAM on the card; the profit margin stays pretty fixed per product, and the savings are passed on to the consumer through cheaper prices. If you wanted more than 10GB of GDDR6X, the card would cost more. GDDR6 is slower and would leave the GPU memory-bandwidth starved if they used it. The only reason AMD can use GDDR6 is that they need less memory bandwidth, thanks to Infinity Cache, a large on-chip cache that reduces how often the GPU has to access VRAM.

Memory configurations are also limited by the architecture of the card; you can't just have any amount of VRAM you want. It has to be a multiple of some value determined by things like the memory bus width.
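To make that "multiple of some value" concrete: each GDDR6/GDDR6X chip hangs off a 32-bit slice of the bus, and chips come in 1GB and 2GB densities (clamshell mode doubles up chips per channel). A small sketch of the capacities that fall out of that:

```python
# Capacity steps implied by the memory bus: one GDDR6/6X chip per 32-bit
# channel, with 1 GB or 2 GB per chip. Clamshell mode doubles the chip
# count, and therefore the capacity, again.

def possible_capacities_gb(bus_width_bits, densities_gb=(1, 2)):
    chips = bus_width_bits // 32
    return [chips * d for d in densities_gb]

for bus in (256, 320, 384):
    caps = possible_capacities_gb(bus)
    print(f"{bus}-bit bus ({bus // 32} chips): {caps} GB, or double with clamshell")
```

Which is why a 320-bit 3080 realistically comes out at 10GB or 20GB rather than, say, 12GB, which would need a wider bus.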
 
Interesting points, makes sense. So do you believe the increased bandwidth of GDDR6X can compensate for the deficit in VRAM capacity?

Yes, as I think the engineers know far more about the product than I ever will. We also have enthusiasts on here who read into it and understand it better than I do, and they have evidence that the VRAM actually in use is far lower than the 'banked' (allocated) VRAM reported by the tools regular users run.

Th0nt, are you saying that you think having 10GB of faster GDDR6X VRAM can completely compensate for a theoretical situation (let's for a moment assume the Godfall devs did actually implement that) where 12GB or more of VRAM (whether GDDR6 or GDDR6X) is required to run some super-duper fancy Ultra mode? Your reasoning being... that because Nvidia have engineers who 'know their stuff', all of their decisions must be great, despite it being the management bean counters who actually have the final say on how generous the specs can be? That's a lot of blind faith to have.

Nvidia already said, in a public blog, that while more VRAM is always nice, they knowingly compromised on the VRAM amount of the 3080 due to cost factors. Then they tried to justify it by saying they had spoken to devs and come up with 10GB as a good average number that they felt was fine for this generation. https://www.nvidia.com/en-us/geforce/news/rtx-30-series-community-qa/

A few examples - if you look at Shadow of the Tomb Raider, Assassin’s Creed Odyssey, Metro Exodus, Wolfenstein Youngblood, Gears of War 5, Borderlands 3 and Red Dead Redemption 2 running on a 3080 at 4k with Max settings (including any applicable high res texture packs) and RTX On, when the game supports it, you get in the range of 60-100fps and use anywhere from 4GB to 6GB of memory. Extra memory is always nice to have but it would increase the price of the graphics card, so we need to find the right balance.

Did they speak to all devs? Can they predict all the visual settings that some devs may want to implement next year and beyond? No, of course they can't. They also used current and even previous-generation games as examples of VRAM usage; they said nothing about what they thought would happen in 2021/2022. It's nice that you have blind trust in corporations, but maybe now and then it's nice to engage a little of your independence and think critically.

At this point the people who are saying 10GB of VRAM was a great decision on a next-gen flagship card appear to be those who are engaging in some disingenuous rationalizing, either because they bought a 3080 10GB and are thus defending the purchase, or because they are part of that select and rather 'special' group of people who engage in white knighting for corporations.

The only reason AMD can use GDDR6 is that they need less memory bandwidth, thanks to Infinity Cache, a large on-chip cache that reduces how often the GPU has to access VRAM.

Memory configurations are also limited by the architecture of the card; you can't just have any amount of VRAM you want. It has to be a multiple of some value determined by things like the memory bus width.

"The only reason AMD can use GDDR6 is because they use less memory bandwidth because they have infinity cache" is a rather dismissive and blasé way of saying that AMD implemented some very smart engineering in their cards this generation. That is, objectively speaking, a real accomplishment that they could engineer a card with GDDR6 and have it perform as good or better as Nvidias cards with GDDR6X. It makes Nvidias decision to go with newer and more expensive GDDR6X ultimately look like a bad call.
 