Well if it's within +/-25% I'd guess it's irrelevant and performs the same.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
At a cost I believe it is; the instant we accept a $700 GPU is a "compromised GPU", we are #######.
Just to be clear: Turning down settings because of the vram buffer is bad, but turning down settings for any other resource on the card is fine in your view?
But a $700 card has to have the settings turned down sometimes.
Is turning down settings only a "problem" when it's because of vram?
But I said (and you even quoted) "only a "problem" when it's because of vram?"
When it's a $700 card, yes.
The card seems to have been engineered that way. From everything I have read, the situations where the size of the buffer is the weakest link are few and far between.
If it was the majority of games I would agree, but when it's one or two games (mainly one) in two years, which is a rival-sponsored title and not even a particularly good one at that, then I think it's a non-issue.
Tbh, I would have to say yes, I would hope that I am having to turn down vram-impacting settings last. That indicates an appropriately configured product. I would much rather be extracting max utilisation and output from the GPU at all times.
Not so keen on optimising to extract max utilisation from vram at all times.
Like having an under-spec airbox choking your 600hp Ferrari, just unnecessary when it otherwise would still have more to give.
I think what @humbug is trying to get at is that turning down settings isn't a problem, but vram shouldn't be the limiting factor on performance on a graphics card of $700. At that price, whilst architecture will determine the performance of the card, vram should be fairly easy and cheap to increase. It is more that at that cost, vram shouldn't be the limiting factor.
If it's ***only*** when it's because of vram that turning down settings is a problem on a $700 card, how do you feel when the same $700 card needs settings turned down... and it has nothing to do with vram?
Not a problem if it's not vram?
Whilst obviously moar is better, I think 10gb is probably about right. The 12gb variant was released with a tweak and a 300 quid uplift. No chance was that ram 300 quid to add, so as a value proposition not worth it to me (if the original 3080 was available at msrp, which is another point entirely).
Bill is sensible. Be like Bill!
I'm pretty sure that's wrong. That's not how it works; it either needs those assets in the buffer or it doesn't.
Memory on GPUs is a hierarchy, just as it is on CPUs.
It starts with the fastest memory of all, your L1 cache. If data doesn't fit in there it's evicted to L2, and if it doesn't fit in there it gets evicted to L3 cache. RDNA2 GPUs have this; they call it Infinity Cache, a level 3 cache on the GPU. On your Ampere GPUs the next level down is your GDDR6 memory, and if it doesn't fit in there it gets evicted to your system RAM, which is very much slower than your GDDR6 buffer, so lower performance. It can even cause frame stalls: stutters, on your £700 GPU.
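To put rough numbers on that spill-over cost, here is a toy back-of-the-envelope model in Python. It is only a sketch: the 10GB buffer size and the bandwidth figures are illustrative assumptions (ballpark GDDR6X and practical PCIe 4.0 x16 numbers), not measurements of any particular card or game.

```python
# Toy model of the memory hierarchy point above: once a frame's working set
# no longer fits in VRAM, the overflow has to come over PCIe from system RAM,
# and that slice of the data is served at a fraction of the bandwidth.
# All figures below are illustrative assumptions, not measurements.

VRAM_GB = 10          # assumed buffer size (3080-class card)
VRAM_BW_GBPS = 760    # ballpark GDDR6X bandwidth, GB/s
PCIE_BW_GBPS = 25     # ballpark practical PCIe 4.0 x16 bandwidth, GB/s

def streaming_time_ms(working_set_gb: float) -> float:
    """Time to touch the whole working set once, in milliseconds."""
    resident = min(working_set_gb, VRAM_GB)        # served from GDDR6
    spilled = max(working_set_gb - VRAM_GB, 0.0)   # evicted to system RAM
    seconds = resident / VRAM_BW_GBPS + spilled / PCIE_BW_GBPS
    return seconds * 1000.0

for ws in (8, 10, 11, 12):
    print(f"{ws:2d} GB working set -> ~{streaming_time_ms(ws):5.1f} ms just moving data")
```

The exact numbers don't matter; the point is the cliff. The gigabyte or two that spills over PCIe costs far more time than everything that stayed resident in the buffer, which is where the frame stalls and stutters come from.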
Consoles have more accessible memory than most of Nvidia's line-up. Halo Infinite is a console game, built for consoles and then ported to PC. It's nothing to do with AMD; your GPU just isn't as good as a console.
I think what @humbug is trying to get at is that turning down settings isn't a problem, but vram shouldn't be the limiting factor on performance on a graphics card of $700. At that price, whilst architecture will determine the performance of the card, vram should be fairly easy and cheap to increase. It is more that at that cost, vram shouldn't be the limiting factor.
I agree with that, but I'd also point out that in 99% of cases it's not a limiting factor, so I don't see it as an issue. It may be in a couple of years, so there may be longevity issues, but in my opinion I don't think it'll be any more of an issue than my 1070 - it had 8GB, and I didn't upgrade due to a vram limitation.
I probably stopped reading this thread 200 pages ago and I've had my 3080FE over a year now. 10GB has been fine. Zero problems with VRAM.
At what res?
1440p 144Hz
I think at that res it's fine. I'm on 4K so it wouldn't be enough.
It would not be an issue at 1440p. So for 1080p and 1440p the answer to the question is yes, 10GB is enough. At 4K the answer is sometimes no.
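As a rough illustration of why the answer changes with resolution, here is a back-of-the-envelope Python sketch of per-frame render-target memory. The buffer counts and formats are invented for the example (five 32-bit G-buffer targets and two 64-bit HDR targets); real engines vary widely, and textures, which don't scale with resolution, still make up most of a 10GB buffer.

```python
# Back-of-the-envelope render-target maths: the buffer counts and formats here
# are invented for illustration; real engines differ, and textures are the
# bulk of vram usage anyway.

def target_mb(width: int, height: int, bytes_per_pixel: int) -> float:
    """Size of one full-resolution render target in MiB."""
    return width * height * bytes_per_pixel / (1024 ** 2)

for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    gbuffer = 5 * target_mb(w, h, 4)   # five RGBA8-style targets (assumed)
    hdr = 2 * target_mb(w, h, 8)       # two FP16 RGBA targets (assumed)
    print(f"{name}: ~{gbuffer + hdr:.0f} MB of per-frame render targets")
```

Going from 1440p to 4K multiplies the pixel count by 2.25x, so everything that scales with resolution grows accordingly while the texture pool does not, which is broadly why 10GB is comfortable at 1440p and only occasionally tight at 4K.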