
NVIDIA ‘Ampere’ 8nm Graphics Cards

Not sure what would actually provide the best results, an Xbox Series X or an RTX 3080? I feel like a Series X will be cheaper but will probably be on par with what a PC with a 3080 can do?

Makes it awkward when considering whether to pre-order an RTX 3080
Xbox games come to PC anyway, so buy a PS5 :)
 
That's just a true fact.
I've listened to what you've had to say and, frankly, you've not offered much in the way of data or proof.

Everything you've said basically amounts to, "Guys, you don't need more VRAM. The software will make up for nVidia's lack of VRAM. You just need the rest of your system to pick up the slack."

So in order not to "cripple" the system, we're back to needing a good dollop of fast system RAM (you've indicated that 8 GB isn't enough and I suggested 32 previously), plus a fast NVMe. Because we're talking about 2020-and-later games, not games I was playing before SSDs were a thing.

I'm not going to discuss playing open-world games back in 2012 before I had my first SSD - it's not even slightly relevant.

The "true fact" is that firstly you said fast system RAM + NVME wasn't necessary but now you've back-tracked and said, well, actually it is. In 2020. Not 2012.

I tell you for sure - it's made more necessary when nVidia skimp on their VRAM, eh? Like always.

Anyway, we'll wait and see what AMD offer. I'm sure if they have good offerings with plenty of VRAM you'll not be too critical of their decision to offer more than nVidia, and scoop up some sales from the green team. I'm sure you'll applaud them and recognise the merit of getting more for your money, aye?
 
Techspot did some tests a few years ago, with the GTX 1060 3GB and GTX 1060 6GB. At 1440p the GTX 1060 6GB system could get away with 8GB of system RAM to reach maximum performance, but the GTX 1060 3GB needed 16GB of RAM to do the same. What you would see is worse 1% lows if there isn't enough system RAM.
 
It's not surprising.

Basically nVidia is saving themselves money by skimping on VRAM, and betting on their customers being willing to up-spec the rest of the system until they reach peak performance. Something they may already have done, but it's basically nVidia being cheap and attempting to shift costs away from their GPU, whilst continuing to charge premium prices for decidedly less-than-premium products (in the VRAM dept at least!)
 
I mean ahead of time. The calculations would probably involve some GPU-power metric (like TFLOPS maybe?), then bus bandwidth, and target frame rate.

A stronger GPU can move more data across a wider bus. If it gets more time to move each chunk of data (lower frame rates), each of those chunks can be larger.
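To put rough numbers on that, a minimal sketch: the per-frame budget is just link bandwidth divided by frame rate. The 16 GB/s figure below is an assumption (roughly PCIe 3.0 x16), not a quoted spec.

```python
# Rough illustration only: per-frame transfer budget = link bandwidth / fps.
# The 16 GB/s link figure is an assumption (roughly PCIe 3.0 x16).

def per_frame_budget_mb(bandwidth_gb_s: float, fps: int) -> float:
    """How much data can cross the link in the time one frame takes."""
    return bandwidth_gb_s / fps * 1024

for fps in (30, 60, 144):
    print(f"{fps:>3} fps -> ~{per_frame_budget_mb(16.0, fps):.0f} MB per frame")
```

On those assumed numbers, a 30 fps target allows each chunk to be roughly twice the size a 60 fps target does, which is the "more time per chunk" point above.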

When you say ahead of time, do you mean with games that aren't released yet? First of all, there are games out now that can seem to push past the 10GB limit, and when we look at their performance it confirms what I'm saying: FS2020 can be cranked to 12.5GB of vRAM usage and it's nowhere near a playable frame rate at that point.

In general though, what you do is measure the relationship between vRAM usage and GPU speed using a decent sample size of games, then draw a trend line through those points and project forward using that known relationship. I don't personally know of any articles or testing that has ever done this on the public side of things (if anyone knows of any, please post a link), but you can be absolutely sure that Nvidia's engineering team are doing these kinds of metrics on both current games and unreleased titles they have access to through collaboration. They're going to do this in order to pick a sensible amount of vRAM for their products. It's in Nvidia's best interest to do this, because if they over-provision vRAM they're making their products unnecessarily expensive and driving customers away.
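Purely as an illustration of what that trend-line approach could look like (every number below is an invented placeholder, not a benchmark result):

```python
# Sketch of the trend-line idea: relate GPU speed to observed vRAM usage
# across a sample of games, then project forward. All data points below
# are invented placeholders, not real benchmark figures.
import numpy as np

gpu_tflops = np.array([6.5, 9.8, 13.4, 20.3, 29.8])  # hypothetical GPUs
vram_gb    = np.array([3.9, 4.8, 5.5, 6.9, 8.2])     # hypothetical usage

slope, intercept = np.polyfit(gpu_tflops, vram_gb, 1)  # least-squares fit
future = 40.0  # a hypothetical next-gen GPU, in TFLOPS
print(f"Projected vRAM need at {future} TFLOPS: {slope * future + intercept:.1f} GB")
```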

You bring up data rate, which is memory clock speed x bus width, and the main calculation here is how fast we need that to be so we're not bottlenecking the GPU. The GPU's job is not just to read stuff from vRAM; its job is to do a bunch of calculations to render the next frame. The difficult part is the trillions of calculations it needs to do. When it comes to memory, all you do is provide enough bandwidth to make sure you're not starving the GPU when it's working flat out at 100% load.
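For concrete numbers on that data-rate calculation, the 3080's 19 Gbps GDDR6X on a 320-bit bus works out like this:

```python
# Data rate: effective memory speed per pin x bus width.
# RTX 3080 figures: 19 Gbps GDDR6X on a 320-bit bus.

def bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8  # divide by 8: bits -> bytes

print(bandwidth_gb_s(19.0, 320))  # 760.0 GB/s, matching the 3080's spec
```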
 
It's not surprising.

Basically nVidia is saving themselves money by skimping on VRAM, and betting on their customers being willing to up-spec the rest of the system until they reach peak performance. Something they may already have done, but it's basically nVidia being cheap and attempting to shift costs away from their GPU, whilst continuing to charge premium prices for decidedly less-than-premium products (in the VRAM dept at least!)

Well, they have only been hovering at between 60% and 65% margins over the last year, which is more than Apple or Intel, AFAIK.

:p
 
I'll have an interesting consideration tomorrow when this is announced. I ended up with a free 2080 Ti FE (Nvidia's partner who handles sales distribution for the UK refunded me in error, and the ultimate resolution was for me to keep the card, lucky me I know!), and I've sold the 5700 XT 50th Anniversary I had in another system, really as a backup, for £380. So I'm already funded, with the 5700 XT money plus the £1,149 refund covering the expected £1,400 3090 price.

So I definitely find myself in a fortunate position for this launch, but... is the performance difference going to be worth it? I guess time will tell, and tomorrow should certainly be interesting, that's for sure.
 
I've listened to what you've had to say and, frankly, you've not offered much in the way of data or proof.

Everything you've said basically amounts to, "Guys, you don't need more VRAM. The software will make up for nVidia's lack of VRAM. You just need the rest of your system to pick up the slack."

So in order not to "cripple" the system, we're back to needing a good dollop of fast system RAM (you've indicated that 8 GB isn't enough and I suggested 32 previously), plus a fast NVMe. Because we're talking about 2020-and-later games, not games I was playing before SSDs were a thing.

Over the last 20-30 pages of this discussion, I've used Steam hardware survey stats to talk about the average vRAM in people's systems, and I've used console hardware stats to show how this is mirrored in the console space in an important way. I've used benchmarks of vRAM usage in games showing an average of about 5GB in modern games, rarely more than 8GB, with those over 10GB being rare exceptions. And I've used those exceptions, like FS2020, where we have high amounts of vRAM usage over that 10GB mark, to show that you cannot play at that level; you have to lower your settings because the GPUs aren't anywhere close to fast enough.

More to the point, I didn't say anything like the rest of the system would pick up the slack; paraphrasing it that way shows me you don't really understand what is happening here. I specifically said that you do not need loads of system RAM or fast NVMe drives to do intelligent swapping of game assets; that has been done from the disk for more than a decade in PC games.

You're not "picking up the slack" with other system components, what you're doing is you're only putting into vRAM what you need and when you need it which keeps vRAM usage down and keeps the cost of GPUs down. This doesn't put more demand on RAM because games dont stream assets from RAM they stream is straight from the disk.

I'm not going to discuss playing open-world games back in 2012 before I had my first SSD - it's not even slightly relevant.

Yeah, I know you're not, because it annihilates your twisted straw man of what I've said. Open-world games which feature significantly more game assets installed to disk than can fit into vRAM have been a staple of PC gaming for at least a decade, and streaming those assets from disk to vRAM, so that we can have lots of game assets without needing 100+GB of vRAM, is just a simple, uncontroversial true fact.

The "true fact" is that firstly you said fast system RAM + NVME wasn't necessary

And they're not. We've been doing this for years. Simple fact.

but now you've back-tracked and said, well, actually it is. In 2020. Not 2012.

No I did not. Quote me where I said that. Stop straw manning my position on this please.

I tell you for sure - it's made more necessary when nVidia skimp on their VRAM, eh? Like always.

Anyway, we'll wait and see what AMD offer. I'm sure if they have good offerings with plenty of VRAM you'll not be too critical of their decision to offer more than nVidia, and scoop up some sales from the green team. I'm sure you'll applaud them and recognise the merit of getting more for your money, aye?

You do understand that by adding more memory the card will cost more. How products are priced in the market is: you look at how much it costs to build the card and then add a profit margin on top of those costs. If you put more vRAM on a card, the card will cost more. See, it seems obvious to me now that you think that by "skimping" on vRAM Nvidia is somehow pocketing those costs. They're not; it allows them to sell their cards at a cheaper price. But I'm starting to see why you think the way that you do. You start out with the conclusion that Nvidia want to skimp on vRAM because they benefit from that somehow, and the rest is kind of mental gymnastics where you ignore things that don't fit your narrative.
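Taking that cost-plus framing at face value, with invented numbers (not real BOM or margin figures):

```python
# Cost-plus pricing sketch. All numbers are invented for illustration.

def shelf_price(bom_cost: float, margin: float) -> float:
    return bom_cost * (1 + margin)

base_bom = 450.0      # hypothetical build cost of a 10 GB card
extra    = 10 * 12.0  # hypothetical extra 10 GB of vRAM at ~$12/GB
margin   = 0.60       # gross margin applied on top of cost

print(shelf_price(base_bom, margin))          # 720.0
print(shelf_price(base_bom + extra, margin))  # 912.0
```

On those made-up numbers, the extra vRAM doesn't just add its $120 of cost; the margin compounds it into a ~$190 higher shelf price.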
 
You do understand that by adding more memory the card will cost more. How products are priced in the market is: you look at how much it costs to build the card and then add a profit margin on top of those costs. If you put more vRAM on a card, the card will cost more. See, it seems obvious to me now that you think that by "skimping" on vRAM Nvidia is somehow pocketing those costs. They're not; it allows them to sell their cards at a cheaper price.
Just lol.

"Cheaper price". Hilarious. Cheaper than a Ferrari, I'll give you that. Or a private jet. Or a villa in Monaco.

Also, you are again adopting two positions at once. Firstly that I "crippled" the spec above by not having a fast NVMe (etc.), then also that those things aren't necessary.

I think I'll just leave you to your PR work, whilst waiting for AMD to deliver an alternative to nV's fleecing. We're not going to agree on this, and you've got your defensive position to hold re nVidia.

e: Also, you pointed to the fast NVMe in the new consoles and said, "This is the way things are going," whilst again saying you don't need one in a PC.

It's really confused messaging on your part.

It's a desperate defence of the indefensible (nVidia's planned obsolescence to keep people upgrading each gen, even at the £800+ price points, which might previously have had some longevity, but not now).

e2: You can also bet your ass that the gen after this (40xx) will have more VRAM, as an incentive to upgrade, and nobody will be saying, "Nah, you'll never need more than 8 GB".
 
You're not "picking up the slack" with other system components, what you're doing is you're only putting into vRAM what you need and when you need it which keeps vRAM usage down and keeps the cost of GPUs down. This doesn't put more demand on RAM because games dont stream assets from RAM they stream is straight from the disk.
In which case I can say with certainty:

The less VRAM you have, the fewer assets you can use in your scene (potentially affecting things like draw distance etc).

The more you need assets that aren't in VRAM (you say "intelligent swapping", but it's really just a cache miss), the more you depend on the speed of the medium you're getting those assets from, and the speed of the link between the GPU and that medium. If they're in system RAM you're going to have a better time than if they're only on the disk.
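Some rough, typical-2020 bandwidth figures (approximate, and ignoring latency and transfer overheads) show how much the tier matters:

```python
# Back-of-envelope: time to fetch a missed 1 GB asset from each tier.
# Bandwidths are rough typical-2020 figures, not measurements.
tiers = {
    "vRAM (GDDR6X)":              760.0,  # GB/s
    "system RAM over PCIe 3 x16":  16.0,
    "NVMe SSD (PCIe 4.0)":          7.0,
    "SATA SSD":                     0.55,
}
for name, gb_s in tiers.items():
    print(f"{name:<28} ~{1.0 / gb_s * 1000:7.1f} ms per GB")
```

The slower the tier a missed asset lives on, the longer the stall before it can be used.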

With more VRAM you can have more complex scenes and better draw distance. Or you can have those things without tanking framerates when you don't have an asset in VRAM.

Software isn't magic that makes getting assets from a slower medium somehow as fast as having them already in VRAM. Not having enough VRAM is a limitation. I think everyone can see this.
 
From what I can tell from the image on this wccftech page, a 20GB 3080 is due in October.

16GB would have been the sweet spot for both the 3080 and 3090 and then they wouldn’t have to faff around like this :p.

24GB on the 3090 is extremely pointless outside of rendering workloads and is just added cost.

Although I suppose it's nVidia, so milk someone for a 10GB 3080 and then milk them six months later when they need more vRAM.
 
The day is here

It certainly is :D

 