
10GB vram enough for the 3080? Discuss..

Well, they might be cheaper, but they're using older/slower memory and a smaller memory bus, giving them substantially less memory bandwidth. That keeps prices under control, but it also means they need to find a way to reduce memory bandwidth usage, otherwise they're going to starve the GPU of data and run into pretty severe bottleneck issues. Whether they've pulled that off remains to be seen.

Yep. GDDR6 is slower than GDDR6X: it runs at about 16Gbps per pin, whereas 6X runs at about 19-20Gbps. With very fast high-end GPUs you need to be able to serve them data quickly enough to keep them busy, otherwise they get bottlenecked by the memory. That total throughput is the memory bandwidth, and it's the product of two things: the memory speed (16Gbps vs 19Gbps) and the memory bus width (the width of the data path between vRAM and GPU, 256-bit vs 320-bit). Multiply the two together and, with AMD opting for both slower memory and a narrower bus, their overall memory bandwidth is a lot smaller: about 512GB/sec compared to 760GB/sec on the 3080.
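Just to make that arithmetic concrete, here's a quick Python sketch of the calculation (using the rough per-pin speeds and bus widths quoted above, not exact spec-sheet figures):

Code:
# Rough memory bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte
def bandwidth_gb_per_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_per_s(19, 320))  # 3080-style GDDR6X on a 320-bit bus -> 760.0 GB/s
print(bandwidth_gb_per_s(16, 256))  # GDDR6 on a 256-bit bus             -> 512.0 GB/s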

Okay so, what does it mean for games? Will the 3080 use less vRAM compared to a 6800XT?
Or is 10GB of GDDR6X like 13-14GB of GDDR6?
In a practical sense and in layman's terms, what benefit does having G6X give Nvidia?
 
Flight sim?

The 11gb 2080Ti still gets beat by the 3080 with "only" 10gb of vram.

I don't think any amount of vram will get the 3080 to a decent performance level.

https://www.guru3d.com/articles-pages/geforce-rtx-3090-founder-review,21.html

This is not a vram limitation.
Don't waste your time, he will not see reason. He keeps quoting the same two games over and over like a broken record, and by the way, the 3080's 10GB is enough for those right now. He went and got a 16GB Radeon and look how that turned out. Lol.

Even though I made it clear to him that I'll be upgrading again when Hopper comes out, he still banged on about how 10GB won't be enough until then. Like, what the ef. It works in every game now. How many games do you really think will come out between now and the release of Hopper that won't work on 10GB, and how many of those will be games one actually wants to buy and play? The answer is a handful at most! In those games I'll drop down one texture setting, job done. Blimey! :D:p
 
Some really good points you make. It's nice when someone comes to the table with something well thought out and presented, rather than the "Nvidia is just better than AMD, sux it loozers" that the majority here seem to bring :)
I think the truth lies somewhere between both our views; only Jensen and his board will ever truly know.

Nvidia definitely had to pull the 20GB versions, though when it was on the table I did ask myself "Why? Makes no sense at this point". With such delays to the 10GB cards it would have been a lynching for Nvidia had they dropped 20GB versions in December. It also means that next year, when Nvidia drop the 7nm Super with 20GB, they can go back to charging 2080Ti money and justify it with marketing spin that the masses will thank them for, because Beeeehhhhrrrrr (<- sheep impression ;-0).

Yeah, I mean I've been doing this since before Nvidia and ATI were a thing, back in the 3dfx days, and I learned not to be a fanboy: just discuss the ideas and facts and don't get too attached.

The reason they want to release a 20GB version is, I think, because of this very thread. Not literally this thread, but what this thread indicates, which is that there's market demand for more memory. That demand doesn't have to be sensible or logical, it just needs to exist; if they can make money from it then it's all profit to them.

I'll quote Darujhistan from the "Is 8GB enough for the 3070?" thread to illustrate the point.

8GB of VRAM for £500 is cheaping out, simple as that. They should be 16GB cards for that price, at the very least a 12GB card in the 70 class; the 80 class (and AMD equivalents held to the same standard) should be 16GB minimum. Whether it is needed (arguable) or sufficient (again arguable) is neither here nor there...

These people simply do not care what is needed or sufficient, they only care about the number being what they expect it to be, no justification needed.

Okay so, what does it mean for games? Will the 3080 use less vRAM compared to a 6800XT?
Or is 10GB of GDDR6X like 13-14GB of GDDR6?
In a practical sense and in layman's terms, what benefit does having G6X give Nvidia?

So the basic way this works is that memory size and memory speed are not interchangeable; they're two separate metrics that both matter for gaming.

1) Memory capacity/size - The size of the memory determines how much data you can hold in it: the number of unique textures, models and so on. As you turn up your visual settings in game and make it look prettier, you generally increase the amount of vRAM you need to hold those assets. Having more vRAM than a game demands does not give you a performance increase; if a game needs 6GB and you have 8GB, getting more vRAM won't affect your performance, it'll just sit there empty doing nothing.

2) Memory speed or data rate - This is the rate at which data can be read from and written to memory. The GPU (the processor on the video card) has to read values out of vRAM to do calculations on them, and then write values back into vRAM as it produces each new frame. As GPUs get faster and faster they need to read data out of memory faster, so memory bandwidth needs to increase to keep up. If you don't have enough memory bandwidth your GPU won't get the data it needs quickly enough and won't be able to run at full speed; you've run into a memory bottleneck. Having more memory bandwidth than you need also does not improve performance.

The kicker, of course, is that memory costs money: more memory capacity costs more, and faster memory costs more. And because you don't get any benefit from having more (or faster) memory than you need, you want only as much on the card as you need and no more; otherwise you're paying for something that provides no benefit. If your GPU can only make use of 10GB of memory, having 16GB costs you more money but gains you nothing.

So, to answer your questions with all that in mind: the 3080 will need just as much vRAM for the same workload as the 6800XT.

10GB of GDDR6X is not like 13-14GB of GDDR6. GDDR6X is faster than GDDR6, so it matters for point 2) above, not point 1). In other words, the 6800XT will have more vRAM capacity but less vRAM speed.

The practical benefit of GDDR6X for Nvidia is simply that it's faster: it runs at about 19-20Gbps whereas GDDR6 runs at about 16Gbps. Coming back to point 2) above, it's possible that AMD's slower GDDR6 won't be fast enough to keep the GPU supplied with data, which would mean the GPU can't run at full speed. Now I have to stress this is speculative; there are things AMD are doing to try to mitigate the problem, such as Infinity Cache, which you can google and read about. Whether that works well or not remains to be seen; we'll need real benchmarks to see what's happening.
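If it helps, here's a tiny sketch putting points 1) and 2) side by side for the two cards being discussed (the 6800XT figures are the announced specs people are quoting, so treat them as assumptions until reviews land):

Code:
from collections import namedtuple

# Capacity and bandwidth are independent metrics: one limits how much you can
# hold, the other limits how fast the GPU can read and write it.
Card = namedtuple("Card", ["name", "capacity_gb", "bandwidth_gb_s"])

cards = [
    Card("RTX 3080", 10, 760),    # 10GB GDDR6X at ~19Gbps on a 320-bit bus
    Card("RX 6800 XT", 16, 512),  # 16GB GDDR6 at ~16Gbps on a 256-bit bus
]

for c in cards:
    print(f"{c.name}: {c.capacity_gb}GB capacity, ~{c.bandwidth_gb_s}GB/s bandwidth")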
 
Flight sim?

The 11gb 2080Ti still gets beat by the 3080 with "only" 10gb of vram.

I don't think any amount of vram will get the 3080 to a decent performance level.

https://www.guru3d.com/articles-pages/geforce-rtx-3090-founder-review,21.html

This is not a vram limitation.

When measured with the in-game dev tools, FS2020 at 4K Ultra needs about 9.5GB of vRAM, and that's confirmed by people testing with the new MSI Afterburner beta that can read "real" per-process usage. Prior to these measurements it was thought to be using about 12.5GB, but that figure is memory allocated, not memory actually used. Which is why the 3080 tidily beats the 2080Ti: it isn't vRAM constrained. At higher frame rates, around 60fps or above, FS2020 looks CPU bound, which makes sense given it's a simulator with extremely high demands on the primary CPU thread. A lot of that is apparently draw calls, showing why DX11 is aging and needs to go. Anyone with a keen eye for benchmarking can see it's CPU constrained because at lower resolutions, 1080p vs 1440p, there's no frame rate difference on high-end hardware.
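That resolution-scaling check is easy to do yourself; here's a minimal sketch of the idea (the frame rates below are made up for illustration, not measured results):

Code:
# If average fps barely changes as the resolution drops, the GPU isn't the limit:
# something else (usually the CPU) is. Illustrative numbers only.
results = {"3840x2160": 58, "2560x1440": 61, "1920x1080": 62}  # avg fps per resolution

fps = list(results.values())
spread = max(fps) - min(fps)

if spread < 0.1 * max(fps):
    print("Frame rate barely scales with resolution -> likely CPU bound")
else:
    print("Frame rate scales with resolution -> likely GPU bound")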

Those Guru3D numbers seem high. In real in-game benchmarks at 4K Ultra we've seen 3080s running at about 25fps, not 42fps; I'm not sure they're completely maxed out on settings when they say "ultra". But this comes back to what I've said all along: by the time you're anywhere close to 10GB of vRAM usage, the GPU is already struggling to keep up. 42fps in their benchmark is less than ideal, and other reviews showing much less, around the 25fps mark, are at unplayable levels.
 
The practical benefit of GDDR6X for Nvidia is simply that it's faster: it runs at about 19-20Gbps whereas GDDR6 runs at about 16Gbps. Coming back to point 2) above, it's possible that AMD's slower GDDR6 won't be fast enough to keep the GPU supplied with data, which would mean the GPU can't run at full speed. Now I have to stress this is speculative; there are things AMD are doing to try to mitigate the problem, such as Infinity Cache, which you can google and read about. Whether that works well or not remains to be seen; we'll need real benchmarks to see what's happening.

Again agree.

The way GDDR6X moves data across the memory controller is its perk, and it's why AMD are using Infinity Cache to chase the same goal at reduced cost. One solution swaps data in and out of memory so fast on demand that it doesn't need more than 10GB (this is where most people currently fall down: they assume the tech works like last gen, when it has moved on), while the other uses a cache plus a larger pool of slower memory to swap data in and out. Same goal, different approach: one needs a bigger RAM volume, the other doesn't (at least under current application loads and anything foreseeable over the life of these cards).

So, like you highlight, the majority of people on here spout verbatim that 10GB of 6X isn't enough for a 3080, with no clue how the tech has changed or how it works; they see allocated memory hit 9.2GB and assume it's about to run out... Nothing could be further from the truth.


Then the ones who realise they're wrong but can't admit it go on about how they run a custom mod of some obscure game from three years ago that requires 8K textures that maybe 0.1% of the player base might use or care about. Ironically, those 8K textures do nothing to improve visuals outside of screenshots, as you won't see them in game with the human eye.

The fact is, 10GB on the 3080 is perfectly fine: the way things work on this card, it will run out of horsepower way before it runs out of vRAM, and by that point we'll be buying 4xxx cards...

And still we have idiots making threads like "Is 8GB enough for the 3070?", and still the 3080 "Is 10GB enough?" thread continues to fill with idiots who don't understand the memory on the 3080 or how it works.

These 3080/3070 cards, even over the next four years, will run out of compute power (lowering FPS) way before vRAM is a problem. Any forced reduction in future game quality settings is going to come from a lack of computation power, not vRAM, and naturally, like every single card since the Voodoo 2, they will require reduced in-game settings as games evolve and the card ages...

All this "sky is falling" stuff irks me, because the same idiots will be happy for AMD to fail so they can say "I told you so", and Nvidia will jack prices up to £1500 again...
 
Then the ones who realise they're wrong but can't admit it go on about how they run a custom mod of some obscure game from three years ago that requires 8K textures that maybe 0.1% of the player base might use or care about. Ironically, those 8K textures do nothing to improve visuals outside of screenshots, as you won't see them in game with the human eye.

The fact is, 10GB on the 3080 is perfectly fine: the way things work on this card, it will run out of horsepower way before it runs out of vRAM, and by that point we'll be buying 4xxx cards...

10-15 years ago, when I'd have these sorts of discussions, people were very familiar with the notion of bottlenecks and understood it was all about keeping things in balance. I'm not sure what happened to that notion; today it seems a lot less common.

It's like putting racing tires on a Volkswagen Beetle: cool... but unless you're also going to put a huge engine in there, who the hell cares. People who talk about the increased vRAM usage of future games but ignore the increased GPU load of those same games are being disingenuous. I think that's only really obvious to those of us who've been through this same cycle time and time again.
 
So the basic way this works is that memory size and memory speed are not interchangeable; they're two separate metrics that both matter for gaming.

Thanks for the explanation!
 
Again agree.


The fact is, 10GB on the 3080 is perfectly fine: the way things work on this card, it will run out of horsepower way before it runs out of vRAM, and by that point we'll be buying 4xxx cards...

Oh okay, so you think if someone wants to make the card last 3-4 years at 1440p he/she should be good then?

I personally wouldn't mind turning down textures as time goes on :p
These 3080/3070 cards, even over the next four years, will run out of compute power (lowering FPS) way before vRAM is a problem. Any forced reduction in future game quality settings is going to come from a lack of computation power, not vRAM, and naturally, like every single card since the Voodoo 2, they will require reduced in-game settings as games evolve and the card ages...
Yea that makes a lot of sense
 
I mean we are technically not being gimped too bad right?

When the PS4 was released with 8GB of shared memory, the 780Ti, the flagship at the time, had only 3GB of memory, which was around 37% of the PS4's memory.

The PS5 has 16GB of shared memory, and the 3080 has 10GB of super-fast GDDR6X, which is about 63% of the PS5's total shared memory.
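(Quick sanity check on those ratios from the raw capacities:)

Code:
# Flagship GPU vRAM as a fraction of the contemporary console's shared memory pool
print(3 / 8 * 100)    # 780Ti vs PS4 -> 37.5%
print(10 / 16 * 100)  # 3080 vs PS5  -> 62.5%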

Also it has been said numerous times you will run out of horsepower before you run out of vram.

Let's look at next-gen games, which are going to run on totally new engines and might use a lot more polygons per pixel (I might be using the wrong terminology here, that's just what I read somewhere), so games will look a lot better and might need more than 10GB of vRAM at max settings at 4K. But if we need to render that many polygons, won't we need better GPUs too? Let's assume a true next-gen game that's very demanding might actually *use* (not just allocate) around 12-16GB of vRAM at max settings at 4K (this probably won't happen, but for the sake of argument let's say it does). If it's using a new engine, won't the 3080 be too weak to make use of that much vRAM anyway and still hit 4K 60fps, or 1440p 144fps?

You will naturally reduce texture settings, which will bring down the vRAM usage, and you'll be under the 10GB on the 3080.

Then there's DLSS, which will likely be in almost every game that would otherwise use absurd amounts of vRAM; DLSS brings vRAM usage down, so using it will also help longevity.

I have read all 78 pages of this thread, and to me it looks like 16GB and 12GB cards are just a product of competition, not of what games will actually need.
I really doubt any game will need 16GB of vRAM, even at 4K (actual usage, not allocation), until the end of this generation, lol (maybe 2026+ when the PS6 is around the corner).

I'm gonna say 10GB will be enough for all the cross-gen titles coming out in 2020 and 2021, and should still run true next-gen games coming out in 2022 at relatively good settings (high/ultra) at 4K. So 2-3 years at high/ultra; depending on the title, some might max out the 10GB so you might need to drop to high.

And for 1440p it should be enough for the next 3-4 years at ultra (for 90% of games), I presume, but it really depends on how the new Unreal Engine 5 and similar engines turn out; only time will tell.

Again, I feel like if true next-gen games really do need 10GB+ of memory, the 3080 will be too weak to drive them at high refresh rates anyway, so you'll be dropping settings regardless.

https://www.resetera.com/threads/msi-afterburner-can-now-display-per-process-vram.291986/page-2

This guy has an RTX 3080, and he benchmarked several games with MSI Afterburner's new beta and could see the actual usage of vRAM.

Games are using 4-6GB (Horizon Zero Dawn uses 7GB, it's an outlier) at 3440x1440, which is roughly 60% of the pixel count of 4K. So I doubt even 4K at maxed settings will use more than 6-8GB in any game.

So going by this, it looks like the new trend for 4K is going to be 8-12GB, so 10GB doesn't seem too bad.

Probably going to be 6gb-10gb for lower resolutions.

According to someone in that thread, 10GB is not enough: they're using 13-16GB running a heavily modded Skyrim at 4K and getting 60fps on a 3090, which means a 3080 would not be able to run that.
 
Modders definitely want more than 10GB; vRAM is a major limitation when running high-quality Skyrim mods, and other games won't be different.

I suspect this new generation won't be much different in vRAM requirements due to console limitations - for example, the Xbox Series X only has 10GB of dedicated fast vRAM. Games on PC won't require much more than that until PC games ship with better, more complicated graphics than those consoles.
 
Oh okay, so you think if someone wants to make the card last 3-4 years at 1440p he/she should be good then?

I personally wouldn't mind turning down textures as time goes on :p

Well...yes and no, sort of.

As games evolve they will provide nicer-looking graphics options but demand more from hardware and thus run at lower frame rates. They'll demand more from vRAM and more from GPU processing speed, and those two demands tend to increase roughly in proportion to each other. The real question being posed in this thread is: will the 10GB of vRAM be the bottleneck first, or will the GPU?

Having said that, the 3080 is a very fast card; it can handle 4K no problem, so 1440p, being far fewer pixels, it deals with very well. At 1440p many of today's games run at such high frame rates that you can afford to take an FPS hit in future games and still have a playable frame rate. There's less headroom for that at, say, 4K.

Eventually, no matter what card you have, you have to drop some settings to keep future games playable. But what we're really discussing here is: are you dropping settings because you've run out of vRAM, or because you've run out of GPU processing power? My contention is that in modern game engines the demands on both scale together, so when future, prettier games become too demanding on the GPU and you lower the settings to maintain a playable frame rate, you also lower the demands on vRAM and stay inside your vRAM budget.

According to someone in that thread, 10GB is not enough: they're using 13-16GB running a heavily modded Skyrim at 4K and getting 60fps on a 3090, which means a 3080 would not be able to run that.

It's not clear from reading that thread whether the person claiming high vRAM usage in Skyrim is actually measuring vRAM used rather than vRAM allocated. When we've seen claims of vRAM usage of that size in this thread and investigated them, they've turned out to be wrong. The whole point of that ResetEra thread is that "real" memory use is much less than what most popular tools report.

Game modding comes with its own set of problems in this discussion: engines are generally designed and optimized around assets of a certain size and complexity. If you cram an old engine with assets it was never optimized to handle, you can blow through vRAM budgets. But the vRAM usage you're seeing might not be sensible. It might be sticking full-res 4K textures on assets so far from your point of view that they're indistinguishable from 1K textures, where a modern engine would reduce vRAM usage with more aggressive LODs and things of that nature.
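To put a rough number on that, here's a back-of-the-envelope sketch of uncompressed texture memory (real games use compressed formats and stream mips, so treat these as worst-case assumptions):

Code:
# Uncompressed RGBA8 texture size, with ~33% extra for the full mipmap chain.
def texture_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * 1.33 / (1024 ** 2)

print(round(texture_mb(1024, 1024), 1))  # ~5.3 MB for a 1K texture
print(round(texture_mb(4096, 4096), 1))  # ~85.1 MB for a 4K texture, 16x the memory
# Sticking 4K textures on distant props burns vRAM for detail you can't actually see.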
 
Modders definitely want more than 10GB; vRAM is a major limitation when running high-quality Skyrim mods, and other games won't be different.

I suspect this new generation won't be much different in vRAM requirements due to console limitations - for example, the Xbox Series X only has 10GB of dedicated fast vRAM. Games on PC won't require much more than that until PC games ship with better, more complicated graphics than those consoles.

If my 1080Ti has 11GB and I play a modded game that goes over 10GB, does that mean a 3080 will crash since it doesn't have 11GB? Or does the speed of the RAM translate to less usage of vRAM?
 
If my 1080Ti has 11GB and I play a modded game that goes over 10GB, does that mean a 3080 will crash since it doesn't have 11GB? Or does the speed of the RAM translate to less usage of vRAM?

Get the MSI Afterburner beta using this link and follow the instructions to measure the "per process" memory allocated: https://www.resetera.com/threads/msi-afterburner-can-now-display-per-process-vram.291986/

If your modded games are showing 10-11GB of memory allocated, the odds are they're way under 10GB in real usage and will be fine on a 3080.
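If you'd rather poll it from a script instead, here's a minimal sketch using the pynvml bindings to NVIDIA's NVML (assuming pynvml is installed; note that NVML reports memory allocated on the card, so it carries the same allocated-vs-used caveat discussed above):

Code:
import pynvml  # pip install pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
# These figures are vRAM reserved on the device, not the game's real working
# set, so they will overstate what the game truly needs.
print(f"total: {mem.total / 1024**3:.1f} GB")
print(f"used : {mem.used / 1024**3:.1f} GB")
print(f"free : {mem.free / 1024**3:.1f} GB")

pynvml.nvmlShutdown()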
 
Get the MSI Afterburner beta using this link and follow the instructions to measure the "per process" memory allocated: https://www.resetera.com/threads/msi-afterburner-can-now-display-per-process-vram.291986/

If your modded games are showing 10-11GB of memory allocated, the odds are they're way under 10GB in real usage and will be fine on a 3080.

They're not, but I'm just trying to understand, since someone was playing Skyrim on a 3090 and going over 13-16GB of vRAM, so in extreme cases it is possible.

But say I was going over 10GB in a game: since my 1080Ti has 11GB it's fine and won't crash, but would that still be the case with a 3080? Would it crash due to not having that extra gigabyte the 1080Ti has, or would it work out as consuming less vRAM than a 1080Ti due to the memory being faster?
 
They're not, but I'm just trying to understand, since someone was playing Skyrim on a 3090 and going over 13-16GB of vRAM, so in extreme cases it is possible.

But say I was going over 10GB in a game: since my 1080Ti has 11GB it's fine and won't crash, but would that still be the case with a 3080? Would it crash due to not having that extra gigabyte the 1080Ti has, or would it work out as consuming less vRAM than a 1080Ti due to the memory being faster?

If a card exceeds its total vRAM, it takes a performance penalty when accessing the data that doesn't fit in vRAM.
 
They're not, but I'm just trying to understand. Say a modded game was taking over 10GB - since someone was playing Skyrim on a 3090 and going over 13-16GB of vRAM, in extreme cases it is possible. Say I was going over 10GB: since my 1080Ti has 11GB it's fine, but would that still be the case with a 3080, or would it work out as consuming less vRAM than a 1080Ti because of the faster RAM speed the 3080 has?

No, vRAM speed isn't the limiting factor if you've exceeded your onboard vRAM capacity. The issue is that any assets that can't fit into the vRAM on the video card have to be fetched from further down the chain (HDD/SSD), and those devices are insanely slow by comparison in memory-speed terms, which can lead to very bad in-game performance. The vRAM speed is really just the speed between the GPU and the vRAM; you can imagine the connection between the components like this.

HDD/SSD <----> vRAM <----> GPU

The speed of the vRAM only applies between the vRAM and the GPU; the bottleneck between the vRAM and the HDD/SSD is how fast you can read assets off the HDD/SSD, and that is always orders of magnitude slower.
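To give a rough sense of those orders of magnitude, here's a small sketch with ballpark transfer rates I'm assuming purely for illustration (not measured figures):

Code:
# Very rough, assumed transfer rates in GB/s for each hop in the chain above.
rates_gb_s = {
    "SATA SSD": 0.5,
    "NVMe SSD": 5.0,
    "PCIe 4.0 x16 link": 32.0,
    "GDDR6X vRAM to GPU": 760.0,
}

# Time to move 1GB of assets at each stage: the further from the GPU the data
# lives, the more a miss costs, which is why spilling out of vRAM hurts so much.
for name, rate in rates_gb_s.items():
    print(f"{name}: ~{1000 / rate:.0f} ms per GB")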
 
Yeah, if this rumour about no 20GB version is true then it's a bit of a setback for me on a 1080Ti. I'd like to have more RAM than the previous model.
I could always go AMD, but I'm on a G-Sync monitor, plus I was looking forward to playing some RT games like Cyberpunk and Control.

I could always wait for these 7nm cards, but they could be another 6 months away. :confused:

Just do what I'm doing, wait for Hoppit.

Got plenty of games to keep me going until then.
 
They're not, but I'm just trying to understand, since someone was playing Skyrim on a 3090 and going over 13-16GB of vRAM, so in extreme cases it is possible.

But say I was going over 10GB in a game: since my 1080Ti has 11GB it's fine and won't crash, but would that still be the case with a 3080? Would it crash due to not having that extra gigabyte the 1080Ti has, or would it work out as consuming less vRAM than a 1080Ti due to the memory being faster?

Hey, when you run out of vRAM in a game your frame rate will drop drastically. (It doesn't crash.)

You could be getting a steady 60fps and then, when you run out of vRAM, it would drop to 5-10fps.

You would certainly know when it happens. I've had a 3080 since launch, tested many different games, and not one has run out of vRAM.

I think people are worried about nothing.
 