
10GB VRAM enough for the 3080? Discuss..

nVidia have been working on 5nm products (including one that seems to be tightly under wraps - no idea if it is even GPU related - that kind of sensitivity is probably for a commercial customer) and have loads of 7nm stuff in production. I don't see them being on the back foot.

Half the stuff people are talking about when it comes to nVidia and TSMC is utter rubbish that they've talked themselves into believing due to confirmation bias, and it has been repeated so many times that it's now accepted as fact.
 
Which mythical unannounced rumour of a 3080 variant are people waiting for this week?

RTX 3090 Ti using GA100: 48GB HBM2, 16,000 CUDA cores, 128 SM units, 128 RT cores, 512 Tensor cores, clocked at 1.7GHz boost with a 500W TDP.

Estimated performance: 40% faster than the RTX 3090.
 
My understanding is that the current process is not a true 7nm and is more an iteration of the 8nm process. AMD's roadmap is to drop to 5nm in 2021, which is why Nvidia is actually on the back foot and knee-jerked the 3080 launch: the 10GB launch failed, and they have now cancelled the 20GB version, realising it won't improve their market position for the extra cost, and have moved to counter-punch with Ti/Super cards on 7nm in early 2021. What worries Nvidia most, I think, is the 5nm launch AMD have planned for 2021, because right now Nvidia are behind.

This really does seem like an Intel/AMD situation, where Nvidia sat back for half a second and got caught napping.

I don't think Nvidia knee-jerked the 30-series launch; I think the move to Samsung's fab because of lower costs meant they couldn't make the cards as fast as they'd want, but in no other respect is it really a knee-jerk launch. It's not like this would be the first "paper launch" we've ever seen from either camp; it's really normal that new hardware simply isn't available at "launch". The only difference this time around is that demand is demonstrably higher, and supply is slower, almost certainly because of Samsung's fab speed.

It's true of both Samsung and TSMC that they drop to genuinely new nodes and then spin off the next few iterations as modifications of the previous technology, which makes sense, but it's just another way of saying that the real difference between 8nm and 7nm is basically nothing.

I wouldn't read too much into rumours of products, and then rumoured cancellations of said products. What we do know is that the driving reason for Nvidia to move to Samsung 8nm was price: TSMC refused to budge, Nvidia threatened to go elsewhere, TSMC called their bluff, so Nvidia switched. Then TSMC did the sensible thing and dropped their prices, realising they're not the only game in town. Now that prices are more reasonable again, Nvidia are moving back, which likely means re-engineering their future line-up to work with TSMC 7nm, and that likely means delays for unreleased parts. I wouldn't expect to see those new parts in early 2021; if I had to bet, I'd say Q3 is more likely.

This whole 16-20GB card thing is going to end up in tears, I swear. There are going to be benchmarks of games over the next few years, and hopefully the major benchmark channels are being urged towards proper memory usage measurements rather than just allocation, and we'll see a trial by fire: did spending all that extra money on the memory actually benefit you as a gamer? I mean, we've seen some of the bigger channels simply step forward and brazenly say that products like the 3090 are simply not gaming cards.

Usually Nvidia does a closed implementation first, then Microsoft adds an equivalent to the DirectX API, and then AMD supports it years later, like with RTX/DLSS etc.

If they add a CPU, I suspect they will market it under the PhysX brand.

It's worth noting with RTX that it's a suite of ray tracing effects, and it needed changes to the DirectX API right out of the gate, so the DirectX portion (the DXR instructions) was done up front and is ready to go. DLSS doesn't rely on DirectX to my knowledge; it just uses the Tensor cores to run their deep learning algorithm, and it's all done post-rendering, like other post-processing types of AA or upscaling.
 
It's not that big a long shot: if you don't consider ultra graphics and high resolutions a must, you'll be more than happy with a 3080 for many years. I'm in the same boat in a way; I'm back to running a 3440x1440 monitor on an RX 480 while I wait for Big Navi.
My 480's still doing okay all things considered, although I've had to drop the settings a little more than I'd like to maintain good frame rates in games like Dirt Rally 2.0, to the point that the car now looks like it's floating above textureless ground. :rolleyes:
Well, I would absolutely love to play at ultra textures in all the games released up to this point, and at ultra/high settings in games that release for the next 2-3 years at least. After that I don't mind playing at high/medium, because I'll probably have to if I want 60+fps in the latest AAA games.

I'm not sure if I want to go AMD or Nvidia. AMD seems like a good choice, but if they don't implement DLSS-like tech (no, not RIS) I might go Nvidia, because only they offer DLSS right now.
 
This whole 16-20GB card thing is going to end up in tears, I swear. There are going to be benchmarks of games over the next few years, and hopefully the major benchmark channels are being urged towards proper memory usage measurements rather than just allocation, and we'll see a trial by fire.
I honestly hope no-one serious in the tech reviewer world places any sort of importance on low level data. Measuring low level instruction calls, allocated vs. used memory, PCI-E transfer speeds, all that sort of guff is useless to us as consumers as we have no idea how they're used, and we have no control over anything but resolution, refresh, and general detail levels. What that translates to under the hood is irrelevant, as we simply need to know which card gives us the best gaming experience in general. Inferring anything from low-level data is risky at best, as not all of it is observable, or out of the ordinary.
 
I mean, we are technically not being gimped too badly, right?

When the PS4 was released with 8GB of shared memory, the flagship 780 Ti had only 3GB of memory, which was around 35% of the PS4's memory.

The PS5 has 16GB of shared memory, and the 3080 has 10GB of super-fast GDDR6X, which is around 65% of the PS5's memory.
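For what it's worth, a quick sanity check of those ratios (just the arithmetic on the figures quoted above):

```python
# Flagship-GPU VRAM as a share of console shared memory, per the figures above.
ps4_shared_gb, gtx_780ti_gb = 8, 3
ps5_shared_gb, rtx_3080_gb = 16, 10

print(f"780 Ti vs PS4: {gtx_780ti_gb / ps4_shared_gb:.1%}")  # 37.5% ("around 35%" above)
print(f"3080 vs PS5:   {rtx_3080_gb / ps5_shared_gb:.1%}")   # 62.5% ("around 65%" above)
```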

Also, it has been said numerous times that you will run out of horsepower before you run out of VRAM.

Let's look at next-gen games, which are going to run on totally new engines and might use a lot more polygons per pixel (I might be using the wrong terminology here; that's just what I read somewhere), so games will look a lot better and might need more than 10GB of VRAM at max settings at 4K. But if we need to render that many polygons, won't we need better GPUs too? Let's assume a very demanding true next-gen game actually *uses* (not just allocates) around 12-16GB of VRAM at max settings at 4K (this won't happen, but for the sake of argument let's say it does). If it's running on a new engine, won't the 3080 be too weak to feed that much VRAM anyway and still get 4K 60fps, or 1440p 144fps?

You will naturally reduce texture settings, which will bring VRAM usage down, and you'll be under the 10GB on the 3080.

Then there is DLSS, which will likely be in almost every game that uses absurd amounts of VRAM; DLSS brings VRAM usage down, so using it will also help with longevity.
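To illustrate why DLSS tends to lower VRAM use, here's a rough, hypothetical sketch of how the per-pixel render targets shrink when the internal resolution drops. The buffer list and bytes-per-pixel figures are invented for illustration, not taken from any real engine, and textures/geometry are unaffected:

```python
# Hypothetical illustration only: per-pixel buffers scale with the internal
# render resolution, which DLSS Quality drops to roughly 67% per axis of the output.
def render_target_mib(width, height, bytes_per_pixel_list):
    return width * height * sum(bytes_per_pixel_list) / 2**20

# Invented example buffers: HDR colour (8B), depth (4B), two G-buffer layers
# (8B each), motion vectors (4B) = 32 bytes per pixel.
targets = [8, 4, 8, 8, 4]

print(f"native 4K (3840x2160):         ~{render_target_mib(3840, 2160, targets):.0f} MiB")
print(f"DLSS Quality internal (2560x1440): ~{render_target_mib(2560, 1440, targets):.0f} MiB")
```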

I have read all 78 pages of this thread, and to me it looks like 16GB and 12GB cards are just a product of competition, not what games will actually need.
I really doubt any game will need 16GB of VRAM even at 4K (actual usage, not allocation) until the end of this generation, lol (until maybe 2026+ when the PS6 is around the corner).

I am going to say 10GB will be enough for all the cross-gen titles coming out in 2020 and 2021, and should still run true next-gen games coming out in 2022 at relatively good settings (high/ultra) at 4K. So 2-3 years at high/ultra; depending on the title, some might max out 10GB, so you might need to drop to high.

And for 1440p it should be enough for the next 3-4 years at ultra (for 90% of games) I presume, but it really depends on how the new Unreal Engine 5 and similar engines turn out; only time will tell.

Again, I feel like if true next-gen games really will need 10GB+ of memory, the 3080 will be too weak to feed that much memory at high refresh rates anyway, so you will be dropping settings.

https://www.resetera.com/threads/msi-afterburner-can-now-display-per-process-vram.291986/page-2

This guy has an RTX 3080, and he benchmarked several games with MSI Afterburner's new beta and could see the actual VRAM usage.

Games are using 4GB-6GB (Horizon Zero Dawn uses 7GB; it is an outlier) at 3440x1440, which has roughly 40% fewer pixels than 4K. So I doubt even 4K at maxed settings will use any more than 6GB-8GB in any game.

So going by this it looks like the new trend for 4K is going to be 8GB-12GB, so 10GB doesn't seem too bad.

Probably 6GB-10GB for lower resolutions.
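If anyone wants to reproduce that kind of per-process measurement without Afterburner, here's a minimal sketch using the pynvml bindings to NVIDIA's NVML. It assumes an NVIDIA GPU and the nvidia-ml-py/pynvml package; the numbers won't necessarily match Afterburner exactly, but it does separate the per-process figure from the device-wide one:

```python
# Minimal sketch: device-wide VRAM vs per-process VRAM via NVML (pynvml).
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

    # Device-level counter: everything resident on the card, not just the game.
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"device-wide: {mem.used / 2**30:.1f} GiB used of {mem.total / 2**30:.1f} GiB")

    # Per-process view: closer to what an individual game is actually using.
    for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
        used = proc.usedGpuMemory  # may be None if the driver can't report it
        used_str = f"{used / 2**30:.1f} GiB" if used is not None else "n/a"
        print(f"pid {proc.pid:>6}: {used_str}")
finally:
    pynvml.nvmlShutdown()
```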
 

You didn't read the thread properly, then. There are examples of already-released games that need more than 8GB at 4K to run max settings. Doom Eternal is one example; Flight Sim 2020 is another.

Apply an ounce of common sense to the situation: the market is screaming out for a 3080 Ti with 20GB of VRAM on a TSMC 7nm process. You can bet your bottom dollar it's coming.

That will be the card that goes down in history as the "one" to get, just like the 1080 Ti compared to the 1080.

A few groups of people:

Group 1. Have a 3080 already (FE @ £650 being the best option). They'll upgrade to the 3080 Ti on release and lose a little of their £650.

Group 2. Easily influenced people who bought into the hype. £800-£900 3080 on order, spending their autumn hitting F5 and arguing on forums about when it will arrive. They'll lose loads of value when the 3080 Ti comes out, and the card will not last long in 2021 with only 10GB at 4K. (Note for some people who can't read: 4K is not 1080p or 1440p; 10GB is perfectly fine for those lower resolutions.)

Group 3. Wait for AMD Big Navi, see what's what, and make a decision then.

Group 4. 3090 owners. I had one on preorder for a few weeks, then realised the error of my ways and decided to wait for Big Navi/3080 Ti and decide then.

Group 5. Wait for the 3080 Ti and profit.
 
Flight Sim is an outlier too; it represents 1% of the games in the gaming industry. It's worth considering, but running that game at maxed-out settings uses around 9GB of VRAM and you also get below 30fps, so wouldn't you practically want to play at a higher framerate and drop settings? That brings it into the 6GB-8GB territory with better fps.

Doom Eternal's Ultra Nightmare settings do use 8GB of VRAM, but I personally cannot see any difference between Ultra and Ultra Nightmare, with Ultra using VRAM in the 6GB-8GB range.

A 20GB card might get released next year, but I really doubt any game coming out this generation (until the PS6) will ever come close to using that amount of memory.

It will also very likely cost upwards of $1,000, so not really good value.

I'm not saying 10GB is plenty; they definitely should have added more VRAM, but it's not something that will render the card useless in the coming 2-4 years if you want to play AAA games.
 

Fair enough, each to his own. Though it sounds like you're emotionally invested in the 10GB card, as you're already willing to make sacrifices:

"Doom Eternal's Ultra Nightmare settings do use 8GB of VRAM, but I personally cannot see any difference between Ultra and Ultra Nightmare"

So you admit Doom Eternal exists, isn't an outlier, and needs more than 8GB to run at 4K. Though you say you can't tell the difference, so you're happy to lower settings to play.

You can use that same argument to lower the resolution from 4K to 1440p; there'll be people who say they can't tell the difference. High to medium: also people who can't tell the difference.

Down and down the rabbit hole, until you end up at a 1080p 60Hz monitor from 10 years ago.

The 3080 and 3090 are 4K cards, and not worth the cost for lower resolutions like 1080p and 1440p IMO.
 

Yes, I am not happy with 10GB either, but it seems like the only option if I want to buy in the next 3 months and I want DLSS, superior ray tracing, and good drivers out of my GPU, which AMD cannot guarantee; their answer is software-accelerated ray tracing.

I did all that research so I could justify my GPU purchase. Also, I dunno man, Doom Eternal runs at like 200+fps at Ultra Nightmare, lol, it's extremely optimised. Even if all the latest games start using 8GB+, that is still under 10GB, so not a huge issue at 4K, as we are still going to be in the cross-gen phase for at least 1-2 years. And if issues arise, they will likely be solved by DLSS.

I am going to play at 1440p personally, so I don't think I have a lot to worry about, as I don't think any game will require more than 12GB any time soon at 1440p.
The 3080 is marketed as a 4K card, but that doesn't mean it has to be used for 4K. Barely 5% of users are on 4K, so most people buying the 3080 will be at 1440p, I'd say.

Again, it depends on the use case: if you are a heavy modder, wait for the 3080 Ti or buy a 6800 XT with 16GB of VRAM; if you absolutely hate reducing textures, even in 2 games out of 10, go for the 3080 Ti or 6800 XT.

I am coming from integrated graphics; I have never played at anything other than low. I would say for me, whether I buy a 3080 Ti, a 3080 or a 6800 XT, each one of them is going to make me a very happy man.

What do you personally think? 10GB ought to be enough for 1440p, right? I want it to last at least 4 years, lol.
 
nVidia have been working on 5nm products (including one that seems to be tightly under wraps - no idea if it is even GPU related - that kind of sensitivity is probably for a commercial customer) and have loads of 7nm stuff in production. I don't see them being on the back foot.

Half the stuff people are talking about when it comes to nVidia and TSMC is utter rubbish that they've talked themselves into believing due to confirmation bias, and it has been repeated so many times that it's now accepted as fact.


Nice. So when is this 5nm GPU hitting our shelves? :D :D :D
 
I don't think Nvidia knee-jerked the 30-series launch; I think the move to Samsung's fab because of lower costs meant they couldn't make the cards as fast as they'd want, but in no other respect is it really a knee-jerk launch. It's not like this would be the first "paper launch" we've ever seen from either camp; it's really normal that new hardware simply isn't available at "launch". The only difference this time around is that demand is demonstrably higher, and supply is slower, almost certainly because of Samsung's fab speed.

It's true of both Samsung and TSMC that they drop to genuinely new nodes and then spin off the next few iterations as modifications of the previous technology, which makes sense, but it's just another way of saying that the real difference between 8nm and 7nm is basically nothing.

I wouldn't read too much into rumours of products, and then rumoured cancellations of said products. What we do know is that the driving reason for Nvidia to move to Samsung 8nm was price: TSMC refused to budge, Nvidia threatened to go elsewhere, TSMC called their bluff, so Nvidia switched. Then TSMC did the sensible thing and dropped their prices, realising they're not the only game in town. Now that prices are more reasonable again, Nvidia are moving back, which likely means re-engineering their future line-up to work with TSMC 7nm, and that likely means delays for unreleased parts. I wouldn't expect to see those new parts in early 2021; if I had to bet, I'd say Q3 is more likely.

This whole 16-20GB card thing is going to end up in tears, I swear. There are going to be benchmarks of games over the next few years, and hopefully the major benchmark channels are being urged towards proper memory usage measurements rather than just allocation, and we'll see a trial by fire: did spending all that extra money on the memory actually benefit you as a gamer? I mean, we've seen some of the bigger channels simply step forward and brazenly say that products like the 3090 are simply not gaming cards.

It's worth noting with RTX that it's a suite of ray tracing effects, and it needed changes to the DirectX API right out of the gate, so the DirectX portion (the DXR instructions) was done up front and is ready to go. DLSS doesn't rely on DirectX to my knowledge; it just uses the Tensor cores to run their deep learning algorithm, and it's all done post-rendering, like other post-processing types of AA or upscaling.


How is it going to end up in tears when the 16GB equivalent AMD card costs the same as the NVIDIA 10GB/8GB cards?
In the end, there is only going to be one winner when it comes to comparisons, sadly for NVIDIA. ;)
 
Does GDDR6X play a role here? Like, is 10GB of GDDR6X comparable to 16GB of GDDR6?
 

Well, they might be cheaper, but they're using older/slower memory and a smaller memory bus, giving them substantially less memory bandwidth. That means they can keep prices under control; it also means they need to find a way to reduce memory bandwidth usage, otherwise they're going to starve the GPU of data and have pretty severe bottleneck issues. Whether they've successfully done that remains to be seen.

That's a big gamble: adding more memory that has no benefit just to sell cards to people who think more GBs = more speed, while introducing bottlenecks because of it and potentially struggling to keep up in benchmarks where it actually matters, is where it could end in tears. Especially if their solution to the low memory bandwidth (Infinity Cache) turns out to require a lot of per-game optimisation to work well, given their reputation for not having the best drivers. I mean, we'll have to see; it's too much speculation at this point.

Does GDDR6X play a role here? Like, is 10GB of GDDR6X comparable to 16GB of GDDR6?

Yep. GDDR6 is slower than GDDR6X: it runs at about 16Gbps, whereas 6X runs at about 19-20Gbps. With very fast high-end GPUs you need to be able to serve them data fast enough to keep them busy, otherwise they get bottlenecked by the memory. That total speed is the memory bandwidth, and it's a product of two things: the memory speed (16Gbps vs 19Gbps) and the memory bus (the width of the data transfer from VRAM to the GPU), which is 256-bit vs 320-bit. Multiply the two together and, with AMD opting for both slower memory and a smaller bus width, their overall memory bandwidth is a lot smaller: about 512GB/s compared to 760GB/s on the 3080.
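Those bandwidth figures are just bus width multiplied by the effective data rate; a quick check using the commonly quoted speeds:

```python
def peak_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth = bus width (bits) x data rate (Gbit/s per pin) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gb_s(256, 16))  # 512.0 GB/s - e.g. 16 Gbps GDDR6 on a 256-bit bus
print(peak_bandwidth_gb_s(320, 19))  # 760.0 GB/s - 3080: 19 Gbps GDDR6X on a 320-bit bus
```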
 


You make some really good points; it's nice when someone comes to the table with something well thought out and presented, rather than the "Nvidia is just better, AMD sux it loozers" that the majority here seem to bring :-)
I think the truth lies somewhere between both our views; only Jensen and his board will ever truly know.

Nvidia definitely had to pull the 20GB versions, though when it was on the table I questioned it myself: "Why? Makes no sense at this point." With such delays to the 10GB cards it would have been a lynching for Nvidia had they dropped 20GB versions in December, and it does mean that next year, when Nvidia drop a 7nm Super with 20GB, they can go back to charging 2080 Ti money and justify it with marketing spin that the masses will thank them for, because Beeeehhhhrrrrr (<- sheep impression ;-0).

So a 7nm Super in early 2021 and a 5nm Ti version in 2022? Then a full-on 5nm next gen in 2023?
 
I honestly hope no-one serious in the tech reviewer world places any sort of importance on low level data. Measuring low level instruction calls, allocated vs. used memory, PCI-E transfer speeds, all that sort of guff is useless to us as consumers as we have no idea how they're used, and we have no control over anything but resolution, refresh, and general detail levels. What that translates to under the hood is irrelevant, as we simply need to know which card gives us the best gaming experience in general. Inferring anything from low-level data is risky at best, as not all of it is observable, or out of the ordinary.

Yes and no. Yes, you're right that real benchmarks of today's games will tell you how today's cards and today's games perform today, and I agree that's the best way of measuring that.

However, people are making a speculative argument that IN FUTURE games will need more VRAM, and therefore that to future-proof the card so it can run games at max settings we need X amount of VRAM. To have a discussion about that, you do actually need to talk about what's going on under the hood. First you need to measure VRAM accurately to see how much is really being used; then we can make some predictions about the growth of games' demand for VRAM and extrapolate to take a guess at what they might need.

Part of the rebuttal to whether you need more than 10GB has been that games today already use more than 10GB of VRAM, and that's wrong. It's wrong because the tools we've used to measure them aren't reporting what the games use, but rather what they allocate. To actually test that claim we need accurate measurements that look under the hood.

*edit*

Sorry, and more to the point of the post you were referring to: what I was saying is that we'll only know for sure whether that 16-20GB of VRAM was worth it by waiting 2-3 years and then measuring VRAM usage in future games (at playable settings). If those games are only using 10GB and not 20GB, then you know the extra VRAM was a waste of money. You can only do that if you can measure VRAM usage accurately.
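As a toy version of that "measure now, extrapolate, check later" argument (the starting figure and growth rates below are purely hypothetical placeholders, not measurements):

```python
# Toy extrapolation of real (per-process) 4K VRAM usage. The 8 GB baseline and
# the growth rates are hypothetical; the point is how sensitive the
# "is 10GB enough in a few years?" conclusion is to the growth assumption.
def projected_vram_gb(base_gb, annual_growth, years):
    return base_gb * (1 + annual_growth) ** years

for growth in (0.05, 0.15, 0.30):
    in_three_years = projected_vram_gb(8.0, growth, 3)
    verdict = "over 10 GB" if in_three_years > 10 else "still under 10 GB"
    print(f"{growth:.0%}/year -> {in_three_years:.1f} GB in 3 years ({verdict})")
```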
 