10GB VRAM enough for the 3080? Discuss..

Well, they might be cheaper, but they're using older/slower memory and a smaller memory bus, giving them substantially less memory bandwidth. That keeps prices under control, but it also means they need to find a way to reduce memory bandwidth usage, otherwise they're going to starve the GPU of data and hit some pretty severe bottlenecks. Whether they've successfully done that remains to be seen.

That's a big gamble: adding more memory that has no real benefit just to sell cards to people who think more GBs = more speed, then introducing bottlenecks as a result and potentially struggling to keep up in the benchmarks that actually matter, is where it could end in tears. Especially if their solution to the lower memory bandwidth (Infinity Cache) turns out to require a lot of per-game optimization to work well, on top of their reputation for not having the best drivers. We'll have to see; it's all too much speculation at this point.



Yep. GDDR6 is slower than GDDR6X: it runs at about 16Gbps, whereas 6X runs at about 19-20Gbps. With very fast high-end GPUs you need to be able to feed them data quickly enough to keep them busy, otherwise they get bottlenecked by the memory. That total throughput is the memory bandwidth, and it's the product of two things: the memory speed (16Gbps vs 19Gbps) and the memory bus (the width of the data transfer from vRAM to the GPU), which is 256-bit vs 320-bit. Multiply the two together and, with AMD opting for both slower memory and a narrower bus, their overall memory bandwidth comes out a lot smaller: about 512GB/sec compared to 760GB/sec on the 3080.
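For anyone who wants to check that arithmetic, here's a minimal sketch of the calculation using the figures quoted above (rounded, not official spec-sheet numbers):

```python
# Back-of-envelope peak memory bandwidth from the figures quoted above.
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth = per-pin data rate * bus width, converted to bytes."""
    return data_rate_gbps * bus_width_bits / 8  # Gbit/s * bits / 8 = GB/s

print(bandwidth_gb_s(16, 256))  # ~512 GB/s (16Gbps GDDR6 on a 256-bit bus)
print(bandwidth_gb_s(19, 320))  # ~760 GB/s (19Gbps GDDR6X on a 320-bit bus, 3080)
```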



I agree. It's the benchmarks that count.
If AMD are matching or exceeding Nvidia with a higher VRAM limit, then surely they're the no-brainer.
 
Depends; Nvidia still has many plus points. A lot of people want Nvidia for G-Sync to go with their LG OLED TVs.
Also, ray tracing is going to be a lot better on Nvidia, and the new DLSS 2.0 technology is great.

Even on my FreeSync Samsung QLED I would much rather pay the extra for an Nvidia card. The compatibility is just so much better. For example, to use FreeSync all I had to do was turn on FreeSync Premium in the TV menu and the 3080 did the rest. I only ever had to do this once and it's worked ever since: no screen tearing, instant response at 4K.

The 5700 XT took much more effort to get FreeSync working; I had to keep cycling it on and off in the TV menu every time I booted up a game or it didn't kick in.

Same with HDR: on the 3080 it works flawlessly, no messing about, and last time I checked AMD was still having issues.

Also, you know with Nvidia that any issues will be dealt with in a reasonable amount of time. I got the 5700 XT nine months after release and was still getting the same crashes, black screens and other issues people had been complaining about since launch.
 
The LG OLED CX has FreeSync Premium too, but great point. I will say G-Sync works great for me on the CX 95% of the time; it does sometimes keep cycling on and off, requiring a TV reset.
HDR on the 3000 series is massive because NVIDIA supports RGB 10-bit at 120Hz. This is MASSIVE for CX and Samsung owners, and if AMD don't support this at launch they can go to hell lol.
I find this funny, as I've had a bad experience with NVIDIA too: GTA V crashed on NVIDIA drivers for about 6 months.

I'm excited to see whether AMD do match or surpass NVIDIA, but I'm also of your opinion, in that I feel safer with NVIDIA because I know what they do and don't offer, and they tick the boxes for most of what I want.

It's becoming increasingly likely in my mind that I'll hold onto my 2080 and just wait to see what the 3xxx refresh provides.



Great points. Even if AMD do better than NVIDIA on price/performance/VRAM, NVIDIA have a few unique selling points, and it will be interesting to see whether AMD have overcome them.

For me, if HDR works easily out of the box and FreeSync works, I'm on board. If it doesn't... man, why AMD... why...
 
Yeah, I agree. Their software team is much better at fixing issues swiftly; they fixed the crashing-over-2GHz issue very fast.

Also, Nvidia does have better ray tracing, which will be in almost all the games coming out next gen. DLSS 2.0 is a major selling point which AMD probably won't have. DLSS 2.1 released a few weeks ago and it looks really good. If DLSS 3.0 becomes very easy to implement then 10GB is more than enough for years to come, as DLSS reduces vRAM usage by a gigabyte or two depending on how much the game is using and which DLSS setting you are on.
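To put rough numbers on where that vRAM saving comes from: DLSS renders internally at a lower resolution and upscales, so the render targets shrink. A quick sketch using the commonly cited scale factors (treat these as approximations, not official figures):

```python
# Approximate internal render resolutions for DLSS modes at a 3840x2160 output.
# Scale factors below are the commonly cited ones (Quality ~66.7%, Balanced ~58%,
# Performance 50%, Ultra Performance ~33%) - approximations, not official numbers.
modes = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}

out_w, out_h = 3840, 2160
for mode, scale in modes.items():
    print(f"{mode}: {round(out_w * scale)}x{round(out_h * scale)}")
# Quality -> ~2560x1440, Performance -> 1920x1080, etc. Fewer rendered pixels
# means smaller render targets, which is where the vRAM saving comes from.
```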

People who need CUDA can't go AMD, and Nvidia also has a better encoder for people who want to record their gameplay.

I just don't trust AMD after how they messed up the Navi launch. I'm likely going to buy the 3080; as I'll be at 1440p, I don't think I'll run into issues in 90% of games over the 3-5 years I keep the card, as long as I drop settings wherever necessary.
 
Well...yes and no, sort of.

As games evolve they'll provide nicer-looking graphics options but demand more from the hardware, and thus run at lower frame rates. They'll demand more vRAM and more GPU processing speed, and those two demands increase roughly in proportion to each other. The real question being posed in this thread is: will the 10GB of vRAM be the bottleneck first, or will the GPU?

Having said that, the 3080 is a very fast card; it handles 4K with no problems, so it deals very well with 1440p, which is far fewer pixels. At 1440p many of today's games run at such high frame rates that you can afford to take an FPS hit in future games and still have a playable frame rate. There's less headroom for that at, say, 4K.

Eventually, no matter what card you have, you'll need to drop some settings to keep future games playable, but what we're really discussing is: are you dropping settings because you've run out of vRAM, or because you've run out of GPU processing power? My contention is that demand on both of these in modern game engines is proportional, so as future games get prettier and too demanding on the GPU, lowering the settings to maintain a playable frame rate also lowers the demand on vRAM, and you stay inside your vRAM budget.

Oh okay. I'm personally coming from integrated graphics, so this is going to be a huge upgrade for me; something like a 5000% increase in GPU performance lol.

Anyway, after reading that it looks like cross-gen titles and everything that's out right now will easily run at 1440p at 60+fps on ultra, maybe even the next-gen games coming out in two years. Then either the 10GB or the shader processors will become a bottleneck and I'll have to drop settings, and looking at how demanding next-gen games could be, it looks like the GPU grunt will become the bottleneck first if I want 1440p at 60fps lol. I guess I'll just drop settings to high and that should do the trick.

Right now 99% of games use an average of around 5GB at 1440p, so it looks like I have a lot of headroom (not allocation, talking about real usage).
 
The G-Sync thing I get, though anyone buying a new monitor gets a compatible one at this point; I mean, why not have both in one?

The whole DLSS and ray tracing thing? It's just still not there, and personally I don't see it getting there this generation of cards either.
I have a 2070, and do you know how many of my games use either DLSS or ray tracing? None! It just doesn't exist for the majority of titles, and I doubt it will for the next 3-4 years.
DLSS and ray tracing are just marketing tools; they advertise a list of games that will get them, and most still don't have them two years later.
Even with ray tracing on, the 3080 will struggle with the games coming out in the next couple of years. Cyberpunk 2077? I'd rather have 144Hz, thanks, than 60 with a slightly prettier puddle...
 
However, people are making a speculative argument that IN FUTURE games will need more vRAM, and therefore, to future-proof the card so it can run games at max settings, we need X amount of vRAM. To have that discussion you do actually need to talk about what's going on under the hood. First you need to measure vRAM accurately to see how much is really being used; then we can make some predictions about how games' demand for vRAM will grow and extrapolate to take a guess at what they might need.
I agree with the methodology, but I'm not convinced about the need for this level of digging. It's fun to talk about but people are starting to make financial choices based on this information, and when money's involved you start to get all sorts of twisted, biased, and incorrect information being posted. That's my concern.

Part of the rebuttal to whether you need 10GB or not has been that games today use more than 10GB of vRAM, and that's wrong. It's wrong because the tools we've used to measure them aren't reporting what the games use, but rather what they allocate. To actually test that claim, we need accurate measurements that look under the hood.
This is what annoys me a bit; memory allocation behaviour is a great place to start when determining why a process is stuttering. There's a fantastic article by Louis Bavoil, an Nvidia engineer, in which he uses Microsoft's GPUView to explain how memory over-commitment (asking for too much) can cause stuttering. It's straightforward, and everyone here should be able to understand it. I'd take that expert's opinion over literally everything else written in this thread, simply because it includes a root-cause analysis.
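On the allocation-versus-usage point above: most overlays and monitoring tools ultimately read the same driver counters, which report what has been reserved rather than what a game actually touches each frame. A minimal sketch of such a query, assuming the pynvml Python bindings and an NVIDIA GPU (note that the numbers it returns are reservations, not a true working set):

```python
# Minimal sketch: query vRAM via NVML (pip install pynvml, NVIDIA GPU assumed).
# These figures are allocations/reservations, not the memory a game actually
# reads each frame - which is exactly the distinction being argued above.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"Device: {mem.used / 2**30:.1f} GiB used of {mem.total / 2**30:.1f} GiB")

# Per-process figures are still just what each process has reserved.
for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    used = proc.usedGpuMemory
    print(proc.pid, f"{used / 2**30:.1f} GiB" if used else "n/a")

pynvml.nvmlShutdown()
```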

Sorry, and more to the point of the post you were referring to: what I was saying is that we'll only know for sure whether 16-20GB of vRAM was worth it by waiting 2-3 years and then measuring the vRAM usage of future games (at playable settings). If those games are only using 10GB and not 20GB, then you know the extra vRAM was a waste of money. You can only do that if you can measure vRAM usage accurately.
For the most part yes, that's quite true. I doubt we'll know with any certainty until late 2021/2022, though I'd posit that console hardware specs are just as good an indicator as this future data. It's important that we don't draw incorrect conclusions though - video memory usage may not be the culprit when it comes to future games performing badly. More memory certainly won't hinder performance as local memory still beats system memory and persistent storage, so future-proofing is a valid excuse to go for a 16GB card over another, for example.

Well written post by the way.
 
For me the primary concern is how well Infinity Cache makes up for the lack of memory bandwidth. If it's a straight win that legitimately lowers memory bandwidth demand and doesn't require any per-application optimization, then AMD could be onto a real winner: not needing memory that fast will save them money and let them stay competitive. Of course, if Infinity Cache is kind of wonky and doesn't save much bandwidth unless you carefully optimize what you hold in the cache, then it becomes very driver-dependent, and that for me would be a huge cause for concern. It also means I'm going to be skeptical of cherry-picked launch benchmarks and will wait for a wide array of games to be tested to see whether some suffer from it, although honestly I'm always wary of that with both teams, because they often just pick games that run well on their architecture.

With all that said, it's not always about beating Nvidia; it's about offering the best possible experience to your customers. If they can make a 16GB card that's as fast as a 3080 and cheaper, that's a winner (assuming you don't care about any other brand-specific features), but if you also made a 10GB card that beats the 3080 you could do it even cheaper. If 10GB really is going to be enough, there's no win in adding more; it's just more cost to build the card, which is always passed on to the consumer at the end of the day.
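Purely as a toy model of why the hit rate matters so much here: if you assume requests served from the cache come back at cache speed and misses fall through to GDDR6, the effective bandwidth swings enormously with the hit rate. The cache bandwidth and hit rates below are illustrative assumptions, not AMD figures:

```python
# Toy model only: effective bandwidth with an on-die cache in front of DRAM.
# Cache bandwidth and hit rates are illustrative assumptions, not AMD figures.
def effective_bandwidth(dram_gb_s: float, cache_gb_s: float, hit_rate: float) -> float:
    # Hits are served at cache speed, misses fall through to DRAM.
    return hit_rate * cache_gb_s + (1 - hit_rate) * dram_gb_s

print(effective_bandwidth(512, 1600, 0.0))   # no cache help: 512 GB/s
print(effective_bandwidth(512, 1600, 0.5))   # 50% hit rate: ~1056 GB/s
print(effective_bandwidth(512, 1600, 0.75))  # 75% hit rate: ~1328 GB/s
```

If the hit rate depends heavily on per-game tuning of what stays resident in the cache, the spread between those numbers is exactly the driver-dependence concern described above.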
 
The 5700 XT took much more effort to get FreeSync working; I had to keep cycling it on and off in the TV menu every time I booted up a game or it didn't kick in.

That's because AMD doesn't do screen validation, whereas Nvidia runs each screen through a list of 200 tests before it gets G-Sync certification.
With FreeSync it's up to the screen maker to step up and fix any issues, whereas Nvidia forces the screen maker to fix them or it won't allow G-Sync.

The benefit is that when you go with the G-Sync option, you know you're getting better compatibility.

As a PC gamer I'm used to all the issues that come with it, but as I get older I have a lot less time, and I certainly don't have time to fluff around. I want stuff to work first time, every time, so I can get in an hour of solid gaming before I have to do something else. Often, if I run into time-wasting issues, I'll just uninstall a game and move on. I must admit I quite often think about ditching the PC for console gaming.
 
Sorry, but please don't post if you have no idea how things work.

The LG CX OLEDs have fully supported AMD FreeSync Premium for months. I really hate it when people, intentionally or not, spread complete nonsense when a simple Google search would tell them otherwise.
 
Flight sim?

The 11GB 2080 Ti still gets beaten by the 3080 with "only" 10GB of vRAM.

I don't think any amount of vRAM will get the 3080 to a decent performance level in it.

https://www.guru3d.com/articles-pages/geforce-rtx-3090-founder-review,21.html

This is not a vRAM limitation.

How well does the 8GB 2080 do vs the 11GB 1080 Ti at 4K in Doom Eternal, buddy? Does it show its usual big lead over the 1080 Ti, or does the 11GB 1080 Ti practically match its performance because the 8GB 2080 runs out of VRAM? Hmm?

I wonder why that is... Must be a bug, as you clearly can't have performance drops due to a lack of sufficient VRAM, oh no... (please note, for the slower minds, that last sentence is sarcasm)

Six-month-old games already needing more than 8GB of VRAM for 4K. Consoles getting double the memory in a few weeks... Just how sufficient is that whopping 10GB of VRAM exactly? ;)

(In before the army of 3080 owners and future owners defend their precious card with fantastic words... "My 3080 will be the next 1080 Ti, it will last for years and years!")
 
He probably has a C9, like most OLED owners who jumped on the G-Sync hype when it launched. LG refuses to allow FreeSync on the C9, but technically you can still enable FreeSync on the C9 by modifying some code in the TV's developer settings.
 
I've posted a bunch of sources debunking this claim in the '8GB vRAM enough for 3070' thread here: https://www.overclockers.co.uk/forums/posts/34122548 along with a deeper analysis of what exactly Doom Eternal is doing with that vRAM (hint: it's reserving it as a memory pool for texture swaps, not increasing texture resolution).
 
You nutter.

I never once said the LG CX OLED didn't have FreeSync.
 
So we're basically a bit worried about 4K, but do we all agree 10GB is more than enough for 1440p?

From what I see, 99% of games are using around 4-6GB at 1440p at the highest settings, and around 6-8GB at 4K.

With true next-gen games, maybe that 1440p usage moves to 6-8GB, and 4K to something like 8-10GB or 8-12GB.

This is all purely speculative, but I'm hoping these cards last a while... As far as I know, even the 780 Ti lasted about three years before it couldn't run most games at ultra settings any more.

We're not being gimped as badly as we were by the 780 Ti: its 3GB was roughly 37% of the PS4's total memory, while the 3080's 10GB is roughly 62% of the PS5's total memory pool.
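For anyone checking the arithmetic on those ratios (using 3GB for the 780 Ti, 8GB for the PS4 and 16GB for the PS5, and remembering the consoles share that pool with the CPU and OS):

```python
# Card vRAM as a share of the console's total unified memory pool.
print(f"780 Ti vs PS4: {3 / 8:.1%}")   # 3GB vs 8GB   -> 37.5%
print(f"3080 vs PS5:  {10 / 16:.1%}")  # 10GB vs 16GB -> 62.5%
```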
 
DLSS 2.1 in quality mode is at times better than native 4K while using a much lower rendering resolution. So you don't really need the vRAM for much higher resolutions; DLSS 2.1 takes care of that problem.

Control is a great example.



DLSS 1.9 wasn't very good, but DLSS 2.1 is the real deal. You can even run 4K on a 2060 now.
 