NVIDIA ‘Ampere’ 8nm Graphics Cards

I paid 1300 for my 2080 Ti. I think if these new cards are more expensive I'm out, and I hope the 30-35% figure isn't true, or it's going to be a big killer for me. And I'll not be pre-ordering this time; the wait was a killer.

you bought a card significantly more expensive than the last gen with poor performance uplift vs previous generations.

not only that, you paid more than you needed to.

what’s wrong with doing it again? Shame you (and others) didn’t have this mindset last time.

the cards will almost certainly be more expensive, that is not an issue per se; the problem is if the performance isn't worth it.
 
Lots of fast system RAM and ultra fast NVMe disks are good things, and if you look at the next gen consoles this is the way they are going: they are going for 5.5GB/sec SSDs because they're going to be more interested in streaming assets off the disk in time for them to be needed in vRAM. You've given an example where you've crippled a PC to having no fast NVMe drive or system RAM. <snip>
You must acknowledge a problem with your logic here.

First you tell me those things aren't necessary*, then when I remove them from the system you say the system is "crippled." (*fast sys RAM, fast NVMe).

You are trying to argue both ways at the same time.

So basically, if you have less VRAM (and you want software to try to compensate for missing hardware) then you need those other bits of hardware: fast NVMe and fast + plentiful sys RAM.
 
You must acknowledge a problem with your logic here.

First you tell me those things aren't necessary*, then when I remove them from the system you say the system is "crippled." (*fast sys RAM, fast NVMe).

You are trying to argue both ways at the same time.

So basically, if you have less VRAM (and you want software to try to compensate for missing hardware) then you need those other bits of hardware: fast NVMe and fast + plentiful sys RAM.

I think this is the best answer to some here, as you will have more chance of getting water out of a stone.

 
8GB VRAM is the absolute bare minimum I would even use for 1440p+ gaming these days, let alone 4K, especially with next gen console games on the horizon. I remember Watch Dogs 1 and the 780 Ti: the lousy 3GB of VRAM was just not enough even on the day it launched, and it should have had at least 4GB at the time. Nvidia just loves skimping on VRAM while AMD are usually pretty generous, but not always.

As a result, this was the experience I had in that game at the time, as well as in others: a bloody stuttery mess due to the VRAM running out. The Titan Black and its 6GB never had an issue because it had plenty. Just hope the 10GB cards don't end up having stuttering issues like this in future titles because of Nvidia being cheap.

 
You can't calculate an answer to a question that doesn't make sense.

Not sure if serious?

Are you saying that any GPU can make use of any amount of VRAM? (In the context of scaling performance... simply cramming a bunch of stuff into the VRAM and never using it doesn't count.)
 
We *really* need a way to calculate this metric.

We don't have a way. You can't compare two different GPUs with different VRAM.

You need to have the same GPU model but with different VRAM, for example one card has 8GB and the other has 16GB.
OR
You need to be able to allocate VRAM - i.e. the card has 16GB, but let's only allocate 8GB to the game's .exe process to test performance.

If you don't have those two options then it's impossible to test what effect VRAM has on performance.

Is there any software that can artificially limit how much VRAM the game is allowed to use?
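
For what it's worth, one crude way to approximate that second option is to pin a chunk of VRAM with a dummy process before launching the game, so the game sees less free memory. Below is a minimal sketch in Python, assuming an NVIDIA card with a working PyTorch/CUDA install; the function name and the 6GB figure are just examples, and since the driver can still page allocations around, this is an approximation rather than a hard cap.

```python
# Hypothetical helper, not a real tool: hold a dummy CUDA allocation so that a
# game launched afterwards has less VRAM available to it.
import time

import torch


def hold_vram(gigabytes: float) -> None:
    """Allocate and hold roughly `gigabytes` of VRAM until the process exits."""
    n_floats = int(gigabytes * 1024**3) // 4          # float32 = 4 bytes per element
    blocker = torch.zeros(n_floats, dtype=torch.float32, device="cuda")
    held_gb = blocker.element_size() * blocker.nelement() / 1024**3
    print(f"Holding ~{held_gb:.2f} GB of VRAM. Launch the game now; Ctrl+C to release.")
    try:
        while True:
            time.sleep(60)                            # keep the process (and allocation) alive
    except KeyboardInterrupt:
        pass                                          # memory is freed when the process exits


if __name__ == "__main__":
    hold_vram(6.0)  # e.g. make a 16GB card behave more like a ~10GB one for testing
```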
 
Not looking to pay much more than £1250, which is what I paid for my 2080 Ti that I sold a month ago.

If the 3090 is more than that I'll probably get the nearest 3xxx price-wise, or even another 2080 Ti.
 
We don't have a way. You can't compare two different GPUs with different VRAM.

You need to have the same GPU model but with different VRAM, for example one card has 8GB and the other has 16GB.
OR
You need to be able to allocate VRAM - i.e. the card has 16GB, but let's only allocate 8GB to the game's .exe process to test performance.

If you don't have those two options then it's impossible to test what effect VRAM has on performance.

It was shown with some of the GPUs like the GTX960/R9 380 in 2015. The 2GB models were behind the 4GB models at 1080p, and this was an era where the high-end Fury X and GTX980 had 4GB of VRAM. We all know how anaemic a GTX960/R9 380 was:
https://www.computerbase.de/2015-12...mm-geforce-gtx-960-frametimes-gta-v-1920-1080
 
Okay. At least now we are getting to my point.

How do we calculate how much VRAM a given GPU can actually make use of?

Anyone care to hazard even a guess? No one has answered what I thought was a rather simple question.

I kinda did answer this. It's pretty simple. You have a GPU with a certain amount of processing power; you take a bunch of games, you load them up, and you keep cranking the games' visual settings up, which increases demand on vRAM and also lowers your frame rate. You keep doing this and benching until you're into unplayable frame rate territory, and then you look at what vRAM usage you're at. You do that across a bunch of different games. You pick some sensible average and that's that.

And then all the people can say, hey but I can load up this game such that it's using more vRAM, and sure you can, but you don't have playable frame rates, so no one cares.
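
Spelled out as a rough sketch of that procedure (all the game names and numbers below are invented, purely to show the shape of the calculation): for each game you keep the highest-VRAM run that still hits your playability cutoff, then average across games.

```python
# Hypothetical benchmark results: (vram_used_gb, avg_fps) at increasing settings, per game.
# Every number here is made up for illustration.
runs = {
    "Game A": [(5.1, 110), (6.8, 78), (8.9, 52), (10.4, 31)],
    "Game B": [(4.2, 130), (6.0, 95), (7.7, 61), (9.8, 27)],
    "Game C": [(5.5, 100), (7.3, 70), (9.1, 44), (11.0, 24)],
}

PLAYABLE_FPS = 40  # whatever you personally consider the playability floor


def usable_vram(results, min_fps):
    """Average, across games, the most VRAM used by a run that still hits min_fps."""
    per_game = [
        max(vram for vram, fps in samples if fps >= min_fps)
        for samples in results.values()
    ]
    return sum(per_game) / len(per_game)


print(f"This GPU can usefully feed about {usable_vram(runs, PLAYABLE_FPS):.1f} GB")
```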

You must acknowledge a problem with your logic here.

First you tell me those things aren't necessary*, then when I remove them from the system you say the system is "crippled." (*fast sys RAM, fast NVMe).

You are trying to argue both ways at the same time.

So basically, if you have less VRAM (and you want software to try to compensate for missing hardware) then you need those other bits of hardware: fast NVMe and fast + plentiful sys RAM.

That's just a fact: they're not necessary for the technique of swapping items in and out of vRAM. People have been using slower disks for that for years on open world game engines and it's not a problem at all; just go and play an open world game running off a SATA disk... surely you've done that in the past at some point? How is this controversial and subject to debate? The reason I said you crippled the PC specs is because you shot for something with a small amount of system RAM, and that's needed for all sorts of things: the OS, the game itself, any other apps you're running. That would cripple the performance of a game for reasons other than those related to vRAM fetching, and in fact most assets do not go through system RAM to get to vRAM; it's straight from disk to vRAM. Most games will just tell you the recommended amount of system RAM you need for the game to run, and it's typically 16GB these days for a fancy high spec AAA game.

With some possible exceptions to that, I think the idTech5+ engine might actually borrow system RAM because it has an extremely sophisticated version of texture caching which allows it to use something like >300GB worth of textures. That's a microcosm of discussion all on its own, because they do some very clever things to make that work, and they largely did it because they wanted to get this super-large-texture game onto consoles that, without those innovations, couldn't have run it. That likely has a big impact on the consoles' move towards far quicker storage devices, because it works really well and you want to leverage those systems as much as humanly possible, especially if you're running middle of the road hardware for like 6 years.

The point is we've been doing this for a long time already. It doesn't require super fast storage, but you can do the same trick better with faster storage, and that's what is probably going to be big this next generation, with the consoles taking the lead and gearing up their hardware deliberately to support it.
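
A toy illustration of the swapping technique being described (nothing to do with any particular engine; the asset sizes, budget and disk speeds are invented): the core of it is just a residency cache with a fixed VRAM budget that evicts the least recently used assets to make room, and the main thing disk speed changes is how long a miss stalls.

```python
from collections import OrderedDict


class VramCache:
    """Toy model of streaming assets in and out of a fixed VRAM budget (LRU eviction)."""

    def __init__(self, budget_gb, disk_gb_per_s):
        self.budget = budget_gb
        self.disk_speed = disk_gb_per_s      # ~0.5 for a SATA SSD, ~5.5 for next-gen console SSDs
        self.resident = OrderedDict()        # asset name -> size in GB, oldest first
        self.stall_s = 0.0                   # total time spent waiting on the disk

    def request(self, asset, size_gb):
        if asset in self.resident:           # already in VRAM: just mark it recently used
            self.resident.move_to_end(asset)
            return
        while self.resident and sum(self.resident.values()) + size_gb > self.budget:
            self.resident.popitem(last=False)            # evict the least recently used asset
        self.stall_s += size_gb / self.disk_speed        # time to stream the asset off the disk
        self.resident[asset] = size_gb


# Same access pattern against two different disks: only the stall time changes.
for speed in (0.5, 5.5):
    cache = VramCache(budget_gb=8.0, disk_gb_per_s=speed)
    for frame in range(1000):
        cache.request(f"tile_{frame % 40}", 0.3)  # 40 texture tiles of 0.3GB cycled through
    print(f"{speed} GB/s disk -> {cache.stall_s:.0f} s spent streaming in total")
```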
 
you bought a card significantly more expensive than the last gen with poor performance uplift vs previous generations.

not only that, you paid more than you needed to.

what’s wrong with doing it again? Shame you (and others) didn’t have this mindset last time.

the cards will almost certainly be more expensive, that is not an issue per se; the problem is if the performance isn't worth it.
I was wondering that myself.


I think the only ones who can answer that are the game makers.
 
Not sure what would actually provide the best results, an XBOX Series X or an RTX 3080? I feel like a Series X will be cheaper but will probably be on par with what a PC with a 3080 can do?

Makes it awkward when considering whether to pre-order an RTX 3080.
 
Not sure what would actually provide the best results, an XBOX Series X or an RTX 3080? I feel like a Series X will be cheaper but will probably be on par with what a PC with a 3080 can do?

Makes it awkward when considering whether to pre-order an RTX 3080.

Maybe wait and see how both companies' products do this generation? The XBox would be at best RTX 2080/RTX 2080 Super level performance IMHO.

8GB VRAM is the absolute bare minimum I would even use for 1440p+ gaming these days, let alone 4K, especially with next gen console games on the horizon. I remember Watch Dogs 1 and the 780 Ti: the lousy 3GB of VRAM was just not enough even on the day it launched, and it should have had at least 4GB at the time. Nvidia just loves skimping on VRAM while AMD are usually pretty generous, but not always.

As a result, this was the experience I had in that game at the time, as well as in others: a bloody stuttery mess due to the VRAM running out. The Titan Black and its 6GB never had an issue because it had plenty. Just hope the 10GB cards don't end up having stuttering issues like this in future titles because of Nvidia being cheap.


By then they will launch the Super range with more VRAM when GDDR6X is cheaper for them! ;)
 
I kinda did answer this. It's pretty simple. You have a GPU with a certain amount of processing power; you take a bunch of games, you load them up, and you keep cranking the games' visual settings up, which increases demand on vRAM and also lowers your frame rate. You keep doing this and benching until you're into unplayable frame rate territory, and then you look at what vRAM usage you're at. You do that across a bunch of different games. You pick some sensible average and that's that.

And then all the people can say, hey but I can load up this game such that it's using more vRAM, and sure you can, but you don't have playable frame rates, so no one cares.

I mean ahead of time. The calculations would probably involve some GPU-power metric (like TFLOPS, maybe?), then bus bandwidth, and target frame rate.

A stronger GPU can move more data across a wider bus. If it gets more time to move each chunk of data (lower frame rates), each of those chunks can be larger.
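
As a back-of-the-envelope version of that idea (the 600 GB/s bandwidth figure is just an example, not a claim about any particular card): the most data a GPU could even touch in a single frame is bounded by memory bandwidth divided by frame rate, so lower target frame rates leave room for a larger per-frame working set.

```python
def max_gb_touched_per_frame(bandwidth_gb_s, fps):
    """Upper bound on how much data the GPU can read from VRAM within one frame."""
    return bandwidth_gb_s / fps


for fps in (144, 60, 30):
    gb = max_gb_touched_per_frame(600, fps)   # hypothetical card with ~600 GB/s of bandwidth
    print(f"{fps:>3} fps -> at most ~{gb:.1f} GB readable per frame")
```

That is only a ceiling, of course; how much VRAM is actually useful also depends on how much of the resident data a frame really touches.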
 
With regards to the VRAM conversation, have there been any benchmarks showing scenarios where a 1080 Ti gains ground on or beats a 2070S or 2080 at higher resolutions/quality settings that could be attributed to the extra 3GB of memory? Where it normally wouldn't, that is (I know in some games it has an advantage over the 2070S regardless).
 
XBOX Series X won't even be close to a 3080

From what I've read the 3080 will be around 15 TFLOPS FP32, with the X thingy being around 12. So the X thingy will probably be about as fast as a 3070, which is kinda what the consoles tend to go for: about mid range hardware on launch date. The big difference here, though, is that from what I've read the ray tracing on these next gen console cards is very light, not really enough to be used in rendering but more likely for things like 3D audio; I don't think we'll see any mixed rendering effects, at least not on performance intensive AAA games. The RTX series has huge portions of the GPU dedicated to doing non-rasterization work like DLSS and ray tracing, so it's a harder comparison: while the rasterization is likely to be ballpark 3070, you're going to get some very nice effects on the PC that the consoles simply cannot do in real time.
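
Taking those rumoured figures at face value, 12 / 15 works out to roughly 80% of the 3080's raw FP32 throughput, which is what puts it in ballpark-3070 territory in that reading; raw TFLOPS is a crude yardstick, though, and says nothing about the dedicated ray tracing and DLSS hardware mentioned above.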
 