10GB vram enough for the 3080? Discuss..

One thing that has me slightly concerned is why AMD thought it was a good idea to put 16GB on all their cards.

I think the decision for cards is more constrained than people generally realise. Because of the architecture of the chip and the memory, as well as the bus width and the desired memory bandwidth, you often end up limited to vRAM configurations that are multiples of some value; you can't just put an arbitrary amount onto the card. And in those circumstances I tend to think that, faced with either undershooting or overshooting an ideal target, they'd rather overshoot and avoid the bottleneck.
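To make that concrete, here's a rough sketch (my own illustration, nothing from this thread) of why the capacity options come in fixed steps once the bus width is chosen, assuming GDDR6/GDDR6X-style chips with a 32-bit interface each:

```python
# Each GDDR6/GDDR6X package presents a 32-bit interface, so the bus width fixes
# the chip count, and the available chip densities then fix which total
# capacities are even possible.
def possible_vram_gb(bus_width_bits, chip_densities_gb=(1, 2)):
    chips = bus_width_bits // 32
    return [chips * density for density in chip_densities_gb]

print(possible_vram_gb(320))  # 320-bit bus (3080-style) -> [10, 20] GB
print(possible_vram_gb(256))  # 256-bit bus              -> [8, 16] GB
print(possible_vram_gb(384))  # 384-bit bus (3090-style) -> [12, 24] GB
```

(Clamshell mode, i.e. two chips per 32-bit channel as on the 3090, doubles those capacities again without changing the bus width.)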
 
One thing that has me slightly concerned is why AMD thought it was a good idea to put 16GB on all their cards.

Being the console hardware partner makes me think they know something we don't.

Unfortunately I have a G-Sync monitor, so I either get a 3080 or I wait for a card with more vRAM.

It's good that it will be a few more months yet until I get my card, as it gives me time to wait and see.

On the flip side, AMD's RT performance won't be as strong as Nvidia's.

Unless I treat the RTX 3080 as a two-year investment and then upgrade again in 2023.

Developers will always design games to run within the 10GB limit. Almost every GPU is at 8GB or lower. The Radeon VII with 16GB of vRAM did not make 16GB a standard. Current 4K games do not hit 8GB of vRAM.

More RAM can also increase the bus bit width and bandwidth. More vRAM is always better; people will just have to decide for themselves if they want 16GB, 24GB or 10GB. It's a choice: if you believe that 16GB will become a standard for 4K, then go for it. I believe it's about the bandwidth, because the 6800, 6800 XT and 6900 XT all use the same amount of vRAM. AMD clearly can't match the bandwidth of Nvidia's GDDR6X.

RT is of course worthless too if you play no games that use it, like Doom Eternal.

The main issue is the SAM feature; I am seeing statements that it works only on Ryzen 5000 series CPUs. So to get the most out of a 6800 XT or 6900 XT, people may have to buy a new CPU, or even upgrade their CPU, RAM or motherboard. This makes the 6800 XT/6900 XT much more expensive if you are on Intel.
 
More RAM can also increase the bus bit width and bandwidth. More vRAM is always better

Not really, they're pretty much independent of each other, with the possible exception that typically picking a target bus width or target bandwidth means sticking to certain multiples of memory modules.

The latest Radeon cards all have 16GB of vRAM, but their bus width is smaller than Nvidia's: 256-bit vs Nvidia's 320-bit. They also have less memory bandwidth, as the GDDR6 is slower: 512 GB/s vs the Nvidia counterpart's 760 GB/s. The benefit of the memory capacity itself maxes out eventually; once you have more than games need, it has no impact on frame rate. You can clearly see that in the benchmarks of modern games from both camps.
 
Not really, they're pretty much independent of each other, with the possible exception that typically picking a target bus width or target bandwidth means sticking to certain multiples of memory modules.

The latest Radeon cards all have 16GB of vRAM, but their bus width is smaller than Nvidia's: 256-bit vs Nvidia's 320-bit. They also have less memory bandwidth, as the GDDR6 is slower: 512 GB/s vs the Nvidia counterpart's 760 GB/s. The benefit of the memory capacity itself maxes out eventually; once you have more than games need, it has no impact on frame rate. You can clearly see that in the benchmarks of modern games from both camps.

Bandwidth has four factors that I know of:
frequency, latency, pump rate (SDR, DDR, etc.) and bus width.

(memory clock in Hz × bus width ÷ 8) × memory clock type multiplier = Bandwidth https://www.gamersnexus.net/dictionary/5-memory-bandwidth-gpu
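As a sanity check, here's the same formula in a few lines of Python (my own sketch; the "effective" Gbps-per-pin figure already folds the memory clock and the DDR/PAM4 multiplier together):

```python
# GamersNexus formula, rearranged: bandwidth (GB/s) = effective rate per pin (Gbps) x pins / 8.
def bandwidth_gb_s(effective_gbps_per_pin, bus_width_bits):
    return effective_gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(19.0, 320))  # RTX 3080:   760.0 GB/s
print(bandwidth_gb_s(19.5, 384))  # RTX 3090:   936.0 GB/s
print(bandwidth_gb_s(16.0, 256))  # RX 6900 XT: 512.0 GB/s
```

The worked examples further down use the same arithmetic, just written out with the clock and the x2 DDR multiplier kept separate.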

GDDR6 has a burst length of 16 bytes (BL16), meaning that each of its two 16-bit channels can deliver 32 bytes per operation. GDDR6X has a burst length of 8 bytes (BL8), but because of PAM4 signaling, each of its 16-bit channels will also deliver 32 bytes per operation. To that end, GDDR6X is not faster than GDDR6 at the same clock. - https://www.tomshardware.com/uk/new...ls-the-future-of-memory-or-a-proprietary-dram
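Just to reproduce the arithmetic in that quote (nothing more than the numbers above):

```python
# Per-burst payload on one 16-bit channel: GDDR6 is BL16 with 1 bit per symbol
# (NRZ), GDDR6X is BL8 with 2 bits per symbol (PAM4). Both land on 32 bytes.
for name, burst_length, bits_per_symbol in (("GDDR6", 16, 1), ("GDDR6X", 8, 2)):
    payload_bytes = burst_length * 16 * bits_per_symbol // 8
    print(name, payload_bytes, "bytes per channel per burst")
```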

The way a memory controller works on a GPU, fewer RAM chips mean a lower bus bit width. Each chip is two 16-bit channels, or 32 bits (× 10 = 320 bits on the RTX 3080). If the 3080 had 12 chips it would have a 384-bit bus width like the 3090. This is why more RAM chips are better. You can pick your bus width by removing chips from the PCB, or increase it by adding more; up to a limit, anyway. Right now, Micron's GDDR6X memory chips are only available in 8Gb densities, translating to 1GB of capacity. The 3090 has to use two chips per 32-bit channel to get the increased capacity of 24GB, but this won't increase the bus width.

With maximum speeds of 21Gbps, GDDR6X memory delivers a bandwidth increase of over 30% when compared to 16Gbps GDDR6 memory modules. This lead extends to 50% after you consider that most GDDR6-based consumer GPUs only use 14 Gbps GDDR6 memory modules.

The 6900 XT most likely has 8 × 2GB chips https://www.extremetech.com/computing/258901-samsung-introduces-new-16gbps-gddr6-2gb-capacities. 8 chips × 32 bits is 256-bit. The memory controller matters as well and sets the maximum bus width.

So, working out the 3080's bandwidth: the frequency is 9500MHz, the bus width is 320-bit and the RAM is double data rate.

Note that this is 1188MHz RAM, 19Gbps effective. 19Gbps / 2 = 9500
9500 × 320 / 8 × 2 = 760,000 MB/s, or 760GB/s https://www.techpowerup.com/gpu-specs/geforce-rtx-3080.c3621

Now for the 3090: 1219MHz RAM, 19.5Gbps effective. 19.5Gbps / 2 = 9750
9750 × 384 / 8 × 2 = 936,000 MB/s, or 936GB/s https://www.techpowerup.com/gpu-specs/geforce-rtx-3090.c3622

For the Radeon 6900 XT: 2000MHz RAM, 16Gbps effective. 16Gbps / 2 = 8000
8000 × 256 / 8 × 2 = 512,000 MB/s, or 512GB/s https://www.techpowerup.com/gpu-specs/radeon-rx-6900-xt.c3481

Say, for interest's sake, we lowered the bus bit width to 256 for GDDR6X.
For the 3090: 19.5Gbps effective. 19.5Gbps / 2 = 9750
9750 × 256 / 8 × 2 = 624,000 MB/s, or 624GB/s
For the 3080: 19Gbps effective. 19Gbps / 2 = 9500
9500 × 256 / 8 × 2 = 608,000 MB/s, or 608GB/s

So the GDDR6X RAM is faster even at the same bus bit width. Thus, a higher effective I/O data rate is a factor.

The GDDR6 RAM on the 6900 XT has a higher clock speed than the GDDR6X RAM on both the 3080 and 3090; it's faster clock-wise. The difference is both the bus width and a higher effective I/O data rate. Even at the same bus width, GDDR6X has a higher effective I/O data rate, and is thus faster.
Sources https://www.overclock3d.net/news/so...ls_its_gddr6x_memory_and_their_future_plans/1

GDDR6X on the 3080 and 3090 is running at a slower effective I/O data rate than intended. GDDR6X was designed to run at a 21Gbps effective I/O data rate. If this had happened:

For the 3090: 21Gbps effective. 21Gbps / 2 = 10500
10500 × 384 / 8 × 2 = 1,008,000 MB/s, or 1008GB/s

For the 3080: 21Gbps effective. 21Gbps / 2 = 10500
10500 × 320 / 8 × 2 = 840,000 MB/s, or 840GB/s

In 2021, Micron plans to offer 16Gb (2GB) GDDR6X memory chips, which will allow for the creation of Ampere graphics cards with larger memory densities. So a 20GB RTX 3080 is unlikely in 2020 but not impossible.
 
Unless they go like the 3090 and stick half the memory modules on the back on a revised PCB.
The rumor was that could happen, and then the rumor was it won't. https://www.techspot.com/news/87247-nvidia-has-reportedly-canceled-20gb-rtx-3080-16gb.html God love the rumor mill. Now it's a new 3080 Ti back from the dead. https://www.overclockers.co.uk/foru...nching-this-year-999-with-20gb-vram.18897433/ https://www.techpowerup.com/gpu-specs/geforce-rtx-3080-ti.c3581

05:58 - Nvidia cancels RTX 3080 20GB, 3070 16GB
Gamers Nexus

Known leaker @kopite7kimi has managed to discover a new GPU named GA102-250-A1 with 9,984 CUDA cores and a 384-bit memory bus with GDDR6X support. While no other details are available at the moment, the 384-bit bus is an indication that the card may sport either 12 GB or 24 GB of GDDR6X vRAM. https://www.notebookcheck.net/nvidi...rface-allegedly-in-the-pipeline.499475.0.html

If the RTX 3080 Ti were to exist, would it not be the most likely to get 20GB of vRAM? Making an almost-RTX 3090 makes no sense, but that's just my opinion.

A 384-bit memory bus implies 12GB or 24 GB of vRAM.
 
Guys, it's finally done! We finally have real benchmarks of actual vRAM usage. I asked a benchmarking YouTuber to install the new MSI beta and do his next benchmarks with the program. Here they are: 8GB should not be a *HUGE* issue at 1440p for at least a few years even if vRAM requirements increase, and if we also get optimized games, at 1440p 8GB could last 4 years if you can turn down textures in some games.
Watch Dogs Legion without the HD texture pack uses only 5GB at 1440p lol
I think the 3080 10GB is definitely not ideal, but it's *not* "not enough" either. Even going into next gen we should be good :p
But games like Doom Eternal might push the 10GB to its limits; Doom Eternal uses like 7GB+ at 1440p and 7.5GB at 4K.
If you need more vRAM, just buy the 6800 XT, or wait for the 3080 Ti if you want to go Nvidia.

Anyways, here it is!
https://www.youtube.com/watch?v=rVMbkjtY9ko&lc=z23it1gifpylhvnwi04t1aokgxlcmstpmka1w12i3zjerk0h00410
 
Guys, it's finally done! We finally have real benchmarks of actual vRAM usage. I asked a benchmarking YouTuber to install the new MSI beta and do his next benchmarks with the program. Here they are: 8GB should not be a *HUGE* issue at 1440p for at least a few years even if vRAM requirements increase, and if we also get optimized games, at 1440p 8GB could last 4 years if you can turn down textures in some games.
Watch Dogs Legion without the HD texture pack uses only 5GB at 1440p lol
I think the 3080 10GB is definitely not ideal, but it's *not* "not enough" either. Even going into next gen we should be good :p
If you need more vRAM, just buy the 6800 XT, or wait for the 3080 Ti if you want to go Nvidia.

Anyways, here it is!
https://www.youtube.com/watch?v=rVMbkjtY9ko&lc=z23it1gifpylhvnwi04t1aokgxlcmstpmka1w12i3zjerk0h00410
Is that with the texture pack or stock textures?
 
Is that with the texture pack or stock textures?
For Watch Dogs Legion it's with the stock textures; with the HD texture pack it's like 8.5GB at 4K and 8GB at 1440p.

I think if you like to use HD texture packs, the 8GB on the 3070 is definitely not enough for the more vRAM-demanding games...

Like, it's *JUST ENOUGH* for the HD texture pack in Watch Dogs Legion at 1440p, and barely enough for 4K; it will likely use system RAM to make up for the extra ~500MB of vRAM needed.

Just vanilla though, 8GB shouldn't be a huge issue, I'm hoping
 
For Watch Dogs Legion it's with the stock textures; with the HD texture pack it's like 8.5GB at 4K and 8GB at 1440p.

I think if you like to use HD texture packs, the 8GB on the 3070 is definitely not enough for the more vRAM-demanding games...

Like, it's *JUST ENOUGH* for the HD texture pack in Watch Dogs Legion at 1440p, and barely enough for 4K; it will likely use system RAM to make up for the extra ~500MB of vRAM needed.

Just vanilla though, 8GB shouldn't be a huge issue, I'm hoping
Why would you not want to use the HD textures though?
 
For Watch Dogs Legion it's with the stock textures; with the HD texture pack it's like 8.5GB at 4K and 8GB at 1440p.

I think if you like to use HD texture packs, the 8GB on the 3070 is definitely not enough for the more vRAM-demanding games...

Like, it's *JUST ENOUGH* for the HD texture pack in Watch Dogs Legion at 1440p, and barely enough for 4K; it will likely use system RAM to make up for the extra ~500MB of vRAM needed.

Just vanilla though, 8GB shouldn't be a huge issue, I'm hoping
I wish people would stop using ridiculously poorly optimised games as examples.
 
Yeah, I think the thread's anything but productive. Why? Because the same tired arguments are being used on page 92 as on page 1, by the same people who don't understand how vRAM works or its allocation, or the difference between the two memory types and architectures, and mostly because of the amount of misinformation being spouted as fact.

Ah yes, you've mentioned this a number of times.

Fundamentally, people just don't seem to understand how vRAM works on the Ampere cards with 6X memory; they compare it to prior-gen tech and then spend far too much time and energy proving they're right, when they've already shown they don't even understand how data is stored, accessed and allocated in 6X memory on Ampere... It's really tiresome, but you guys keep chasing your tails...

I fail to see how that has any bearing on this discussion, however. I know enough about 6X; I know enough about PAM4 to get by. What am I missing when you imply that 6X allocates differently, and why does that have any bearing on the size of the buffer?
 
lol how does this thread continue unlocked while so many good ones got deleted and locked?

This is the 3rd or 4th time you've thread-crapped in the 8/10GB threads. People are having a civilized, on-topic discussion about something they're interested in and care about. Some of us want to investigate the question of how much vRAM is sufficient for GPUs, because it affects purchasing decisions and because some of us find technical discussions and the opportunity to learn interesting and stimulating. It's an opportunity for us to test games using new methods to measure vRAM properly, gather and share information, debunk false information, etc.
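For what it's worth, here's a hedged sketch of that allocation-vs-usage distinction using NVML's Python bindings (assuming pynvml is installed; this is just one way to see the gap, not the tool anyone in the thread used):

```python
# Sketch only: the card-wide "memory used" figure is not the same thing as what
# any single game/process is actually consuming.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"Card-wide vRAM in use: {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")

# Per-process figures, closer in spirit to the per-game counters discussed above.
for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    used = proc.usedGpuMemory  # may be None if the driver won't report it
    print(f"PID {proc.pid}: " + (f"{used / 2**30:.1f} GiB" if used else "n/a"))

pynvml.nvmlShutdown()
```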

Thread crapping, presumably to try and get the thread closed, isn't going to work; it's going to get you suspended.
 
Some will be cache, but after 3-5 hours of Dawn going across the large map, GPU memory usage has hit 10GB and system memory 15GB.
Seems Dawn is using 8GB of system memory alone :D
 
Why would you not want to use the HD textures though?
I mean, if in some hypothetical future game coming out in 3 years the HD texture pack goes above the 8GB vRAM buffer, you can just not use it and you will be right back at 8GB. If the texture pack pushes the game beyond your GPU's memory buffer, then you might not want to use it, for better performance.

I wouldn't mind too much if I am not able to use it. I just want the vanilla games to run nicely for at least 4 years at high/ultra.

I am planning on buying the 3080 10GB for 1440p gaming.
 