
NVIDIA ‘Ampere’ 8nm Graphics Cards

It's very difficult to run scenarios when folks are not presenting any kind of underlying theory as to how VRAM works in most games.

The theory I've forwarded is quite simple. The purpose of putting data into VRAM is so the GPU can use that data to construct the next frame, and the more complex the scene the GPU has to render, the longer it takes to do it, and thus the frame rate goes down. That is to say, as you put more data into VRAM your performance is going to go down, because the GPU has more work to do. Then it becomes a simple matter of bottlenecks: which gives out first? Do you run out of VRAM with performance headroom left, or do you run out of performance with VRAM headroom left? The answer to date is the latter. As we load up modern games at 4K with Ultra presets we see the GPUs struggling with frame rate while VRAM usage sits well below 10GB.

The major advancement in our understanding of this whole thing also has to do with now having tools that more accurately report the VRAM actually in use, rather than what is allocated, as the two values can differ substantially, and typically do.
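
For anyone who wants to watch this on their own card, here is a minimal sketch using the pynvml bindings (installable via pip as nvidia-ml-py). Note that NVML itself reports memory the driver has allocated on the device, which is exactly why the allocated-vs-in-use distinction above matters, and the per-process figure can come back as None on some Windows setups.

# Minimal sketch: poll device-wide and per-process VRAM with pynvml.
# NVML reports ALLOCATED memory, so treat these numbers as an upper bound
# on what is actually in use by any one game.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
try:
    for _ in range(10):  # sample roughly once a second for ~10 seconds
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"device-wide: {mem.used / 1024**2:.0f} MiB used of {mem.total / 1024**2:.0f} MiB")
        for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
            used = proc.usedGpuMemory  # may be None under WDDM on Windows
            shown = "n/a" if used is None else f"{used / 1024**2:.0f} MiB"
            print(f"  pid {proc.pid}: {shown}")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()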
 
Seems performance isn't so far behind the 3080, at least in this game: 12% at 1080p, 15% at 1440p and 20% at 4K.

This would make sense, as the 3080's scaling below 4K is not great because of some characteristics of the 102 chip: it's more of a compute chip than the gaming chip a 104 has traditionally been.
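
To put those percentages into frame-rate terms, here is a quick back-of-the-envelope conversion; the baseline fps figures below are invented for illustration, not taken from any benchmark.

# Illustration only: the 3080 baseline fps values are made up; the deficits
# are the 12/15/20% quoted above.
baseline_3080 = {"1080p": 150.0, "1440p": 110.0, "4K": 65.0}  # hypothetical fps
deficit = {"1080p": 0.12, "1440p": 0.15, "4K": 0.20}

for res, fps in baseline_3080.items():
    slower = fps * (1 - deficit[res])
    print(f"{res}: 3080 at {fps:.0f} fps -> roughly {slower:.0f} fps when {deficit[res]:.0%} behind")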
 
The theory I've forwarded is quite simple. The purpose of putting data into VRAM is so the GPU can use that data to construct the next frame, and the more complex the scene the GPU has to render, the longer it takes to do it, and thus the frame rate goes down. That is to say, as you put more data into VRAM your performance is going to go down, because the GPU has more work to do. Then it becomes a simple matter of bottlenecks: which gives out first? Do you run out of VRAM with performance headroom left, or do you run out of performance with VRAM headroom left? The answer to date is the latter. As we load up modern games at 4K with Ultra presets we see the GPUs struggling with frame rate while VRAM usage sits well below 10GB.

The major advancement in our understanding of this whole thing also has to do with now having tools that more accurately report the VRAM actually in use, rather than what is allocated, as the two values can differ substantially, and typically do.

I don't see any kind of framework.
There are some assumptions to be made about a fixed VRAM budget, which can include things like reservations for frame buffers, drivers, etc. (I don't know, some expert has to give us some data here; I am just building a framework).
Next I would need some kind of theory on the types of workloads correlated with cache misses, to potentially build an argument about the variable component of VRAM.
And eventually make a statement like "Guys, at 60 FPS at such resolutions with RTX on, the probability of exceeding 10GB is less than 1%".
Until then I won't consider the matter conclusively settled (but the thread just refuses to die).
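
For what it's worth, the "fixed budget" part can at least be sketched with back-of-the-envelope arithmetic; the buffer list, formats and counts below are assumptions made for illustration, not data from any real engine.

# Rough, illustrative estimate of the "fixed" render-target portion of a VRAM
# budget at 4K. The buffers and formats listed are assumptions, not measurements.
WIDTH, HEIGHT = 3840, 2160
pixels = WIDTH * HEIGHT

bytes_per_pixel = {
    "swapchain (3x RGBA8)": 3 * 4,
    "HDR colour (RGBA16F)": 8,
    "depth/stencil (D32S8)": 5,
    "G-buffer (4x RGBA8)": 4 * 4,
    "motion vectors (RG16F)": 4,
}

total_mb = 0.0
for name, bpp in bytes_per_pixel.items():
    size_mb = pixels * bpp / (1024 ** 2)
    total_mb += size_mb
    print(f"{name:26s} ~{size_mb:5.0f} MiB")
print(f"{'render targets total':26s} ~{total_mb:5.0f} MiB")

Even with fairly generous assumptions this lands in the hundreds of megabytes, which suggests the bulk of a 10GB budget is the variable part (textures, geometry, RT acceleration structures) that the post is asking about.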
 
I don't see any kind of framework.
There are some assumptions to be made about a fixed VRAM budget, which can include things like reservations for frame buffers, drivers, etc. (I don't know, some expert has to give us some data here; I am just building a framework).
Next I would need some kind of theory on the types of workloads correlated with cache misses, to potentially build an argument about the variable component of VRAM.
And eventually make a statement like "Guys, at 60 FPS at such resolutions with RTX on, the probability of exceeding 10GB is less than 1%".
Until then I won't consider the matter conclusively settled (but the thread just refuses to die).

You're probably making it more complicated than it needs to be. What happens precisely inside VRAM is an interesting discussion, but for performance, and for measuring whether you have enough, it's not too important. All we really need is the observation that as you load more into VRAM, performance goes down, and then metrics on what performance the card gets as you approach its VRAM limit. And the answer to that so far has been that no game can even approach the 10GB limit without running into GPU bottlenecks.

The thread is full of interesting conversation about the topic, both for and against the idea; not sure why we'd want it to die. Open debate on these things is healthy, especially when we all learn something.
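
If someone did want to turn that observation into numbers, one simple way (a sketch only; the capture log file name and columns are hypothetical) would be to bin average fps against VRAM in use from a combined log:

import csv
from collections import defaultdict

# Sketch: bin average fps by VRAM in use, to see whether frame rate has already
# collapsed before usage approaches the 10GB mark. Assumes a per-second capture
# log "capture_log.csv" with "vram_mib" and "fps" columns (hypothetical format).
bins = defaultdict(list)
with open("capture_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        gib_bin = int(float(row["vram_mib"]) / 1024)  # 1 GiB wide bins
        bins[gib_bin].append(float(row["fps"]))

for gib in sorted(bins):
    fps = bins[gib]
    print(f"{gib}-{gib + 1} GiB: avg {sum(fps) / len(fps):.1f} fps over {len(fps)} samples")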
 
We are in this transitional period moving from raster to RT, which means we have more silicon in use. I'm not trying to defend Nvidia here, but let's not lose sight of what you get with these GPUs: raster, RT and Tensor hardware all rolled into one chip. I'd imagine they would still want £350-£400 for a chip solely focused on legacy raster performance.

AMD's hybrid design suits this period better, as they can apparently balance performance dynamically between RT and raster, albeit to the detriment of RT.

Looks like we now have a choice: if RT is more important, pay a premium to Nvidia; otherwise AMD are looking very promising.

But that is Nvidia forcing you to pay for something that you do not need! The consumer doesn't have to pay for the progression; it would be different if people were burning down houses for ray tracing, but they are not. It's the other way around: Nvidia want you to buy into it, literally.
 
But that is Nvidia forcing you to pay for something that you do not need!
I will quote this, and if the prices of the 12-16GB cards are higher than expected I can say "but AMD are forcing you to buy more VRAM than you need!" :p;):D
 
I will quote this, and if the prices of the 12-16GB cards are higher than expected I can say "but AMD are forcing you to buy more VRAM than you need!" :p;):D

Having more VRAM is not the same as buying into something that supports a handful of games and makes the card cost 30% more to make than it would without it. Just look at the 1060 3GB and 6GB versions: you paid a bit more for the 6GB. This was done by Nvidia with Pascal (4 years ago)!

You also forget (I will remind you) that I also mine on my GPU, so the extra VRAM is utilised for compute... :cool::rolleyes::p
 
But that is Nvidia forcing you to pay for something that you do not need! The consumer doesn't have to pay for the progression; it would be different if people were burning down houses for ray tracing, but they are not. It's the other way around: Nvidia want you to buy into it, literally.

It's a choice. If you don't want the best available RT then go AMD. I'm sure there were some people saying Voodoo cards were rubbish at the time :D
 
Having more VRAM is not the same as buying into something that supports a handful of games and makes the card cost 30% more to make than it would without it. Just look at the 1060 3GB and 6GB versions: you paid a bit more for the 6GB. This was done by Nvidia with Pascal!
I never said it was the same thing, but still, more VRAM means it costs more; it is not free.

I am not defending Nvidia; I would much rather have the choice, like you say. Choice is a good thing, which is why I don't get why people keep banging on about the 10GB on the 3080 in the manner they do.
 
It's a choice. If you don't want the best available RT then go AMD. I'm sure there were some people saying Voodoo cards were rubbish at the time :D

No, you could also go Nvidia with the 1660, for example; that is choice. Do they offer the 3680? No, but they should; that would be choice.
 
I never said it was the same thing, but still, more VRAM means it costs more; it is not free.

I am not defending Nvidia; I would much rather have the choice, like you say. Choice is a good thing, which is why I don't get why people keep banging on about the 10GB on the 3080 in the manner they do.

I said you do not need something; I never mentioned the word free. Are you strawmanning me? :p

Currently, when you buy an RTX 3000 card (should one physically exist), you have bought into their ecosystem with no opt-out; you must have the Tensor cores, RT cores, whatever. Is DLSS free? No, it will need paying for somehow.

Do devs need to implement DLSS or ray tracing in their games? No.

So when a game doesn't offer ray tracing or DLSS, your GPU is literally wasting those components until you call on them.
 
No, you could also go Nvidia with the 1660, for example; that is choice. Do they offer the 3680? No, but they should; that would be choice.

We don't know what they will offer, as the series hasn't fully launched yet, but what would be the point of buying such a card when even the consoles would outperform it?
 
You're probably making it more complicated than it needs to be. What happens precisely inside VRAM is an interesting discussion, but for performance, and for measuring whether you have enough, it's not too important. All we really need is the observation that as you load more into VRAM, performance goes down, and then metrics on what performance the card gets as you approach its VRAM limit. And the answer to that so far has been that no game can even approach the 10GB limit without running into GPU bottlenecks.

The thread is full of interesting conversation about the topic, both for and against the idea; not sure why we'd want it to die. Open debate on these things is healthy, especially when we all learn something.

Because that's the only way you can size an infrastructure requirement (look up SLAs on Google).
Otherwise we'd be saying "yes, I haven't come across any game that pushes VRAM at 60 FPS, hence the conclusion",
which is fine as an opinion,
but then you are endlessly debating around your opinion without providing additional insight.
That kind of looks like a dead end to me (hence wanting the thread to die).
Can you at least provide us time-series data on VRAM usage in some of the heavier VRAM games, so that we can conclude for ourselves (not in this thread)
the chances of exceeding 10GB in real-world scenarios?
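
A rough sketch of how such data could be summarised into exactly that kind of statement, assuming someone logs per-second VRAM usage to a CSV (the file name and column are hypothetical):

import csv

# Minimal sketch: summarise a VRAM usage log into a "probability of exceeding
# 10GB" figure. Assumes one sample per row in a column named "used_mib";
# the file name and format are hypothetical.
THRESHOLD_MIB = 10 * 1024  # 10 GiB in MiB

with open("vram_log.csv", newline="") as f:
    samples = [float(row["used_mib"]) for row in csv.DictReader(f)]

over = sum(1 for s in samples if s > THRESHOLD_MIB)
p99 = sorted(samples)[int(0.99 * (len(samples) - 1))]

print(f"samples: {len(samples)}")
print(f"share of samples above 10 GiB: {over / len(samples):.2%}")
print(f"99th percentile usage: {p99:.0f} MiB")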
 
Currently, when you buy an RTX 3000 card (should one physically exist), you have bought into their ecosystem with no opt-out; you must have the Tensor cores, RT cores, whatever. Is DLSS free? No, it will need paying for somehow.

Do devs need to implement DLSS or ray tracing in their games? No.

So when a game doesn't offer ray tracing or DLSS, your GPU is literally wasting those components until you call on them.
Just saw the rest of your post. So what you're saying is that, by my bringing up the fact that AMD may force you to pay more for going 16GB and not offer a lower, cheaper VRAM option (mainly just to pull your leg), you automatically assumed I did not know all of the above? :p

Yeah, I know all that; I would prefer an approach that is more flexible, like the one AMD is reportedly taking. Let's see what AMD bring out and how it performs.
 
NVIDIA GeForce RTX 3070 3DMark performance leaked

https://videocardz.com/newz/nvidia-geforce-rtx-3070-3dmark-performance-leaked
 
Just saw the rest of your post. So what you're saying is that, by my bringing up the fact that AMD may force you to pay more for going 16GB and not offer a lower, cheaper VRAM option (mainly just to pull your leg), you automatically assumed I did not know all of the above? :p

Yeah, I know all that; I would prefer an approach that is more flexible, like the one AMD is reportedly taking. Let's see what AMD bring out and how it performs.

No, it was your man wrinkly who, if you read that post, was quite blasé about it:
I'm not trying to defend Nvidia here, but let's not lose sight of what you get with these GPUs: raster, RT and Tensor hardware all rolled into one chip

I know you know the technology, but what I was highlighting is that, well, you have all that; it's not like you can get anything but what's on the PCB, so to make it sound like Nvidia are doing you a favour is surreal, as they have already charged you up front for it when you make the transaction. My point is, if it's only in 10 games, you had better be playing them round and round again, on medium, hard and ultra, to get your money's worth, because if the game doesn't have it, the hardware is sitting dormant.
 
I wonder if used 2080 Tis will drop to near pre-3080-release levels again once the 3070 is out (stock levels notwithstanding). They will have more OC headroom and an extra 3GB of memory as a bonus.
 