NVIDIA ‘Ampere’ 8nm Graphics Cards

It's got nothing to do with being able to afford it or not; it's got more to do with the perceived worth.

Having a card that is 40-50% faster than its predecessor at roughly the same price = worth it. This is how it's been for a very long time.

Having a card that is only 30% faster than its predecessor with a price increase of 20-60% = not worth it, even if I can afford the new price.

I could name a thousand things that aren't worth the extra cost. Heinz beans aren't worth the extra money, for example. Shelves in every shop are stacked with things that aren't worth the extra money. Graphics cards are just one of those things. If the 3090 is too expensive, then get a 3080, a 3070, or a 3060.
 
Don't undervalue the epeen, I loved my 64GB of RAM :)

That is a factor; I have done that myself over the years, sadly, most recently putting in 2 NVMe drives in RAID 0 for an extra ~300-400MB/sec of sequential read, with most of the rest of the benefit being lost to the bottleneck of the PCIe x4 bandwidth. Partly because the cost difference was very, very small, but also partly because it's a bit of an epeen thing. Is there enough epeenery to justify a 2nd product line, all the engineering overheads that go into that, and the resulting more expensive card? I dunno, maybe. I suspect not.
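As a rough sketch of why that gain is so small, here's the back-of-envelope arithmetic; the drive and link speeds are assumed ballpark figures, not measurements:

```python
# Back-of-envelope: RAID 0 of two NVMe drives behind a shared x4-class link.
# All figures are illustrative assumptions, not benchmarks.
single_drive_gbs = 3.5                        # assumed sequential read of one drive, GB/s
raid0_theoretical_gbs = 2 * single_drive_gbs  # ideal striping doubles throughput
shared_link_gbs = 3.9                         # assumed usable PCIe 3.0 x4-class bandwidth

effective_gbs = min(raid0_theoretical_gbs, shared_link_gbs)
gain_mbs = (effective_gbs - single_drive_gbs) * 1000
print(f"Ideal RAID 0:  {raid0_theoretical_gbs:.1f} GB/s")
print(f"Link-capped:   {effective_gbs:.1f} GB/s")
print(f"Real gain:     ~{gain_mbs:.0f} MB/s over a single drive")
```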

Which isn't necessarily true, is it?

If you cache assets you don't need to use them any more than if you loaded everything "just in time". It just means they're available to be used when needed, without delay and without needing to be streamed from a much slower medium.
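A minimal sketch of that distinction (all names hypothetical, not any real engine's API): a cache only changes where an asset comes from when it's requested, not how often it gets used.

```python
from collections import OrderedDict

class AssetCache:
    """Minimal LRU asset cache sketch. A hit returns instantly;
    a miss falls through to the much slower storage medium."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()  # asset_id -> (data, size)

    def get(self, asset_id, load_from_disk):
        if asset_id in self.entries:            # cached: no disk access, no stall
            self.entries.move_to_end(asset_id)  # mark as most recently used
            return self.entries[asset_id][0]
        data = load_from_disk(asset_id)         # "just in time": the slow path
        size = len(data)
        while self.entries and self.used + size > self.capacity:
            _, (_, freed) = self.entries.popitem(last=False)  # evict least recent
            self.used -= freed
        self.entries[asset_id] = (data, size)
        self.used += size
        return data
```

Either way the caller receives the same data; the cache just removes the stall on repeat requests.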

It's not always true, but it's generally true. Bit for bit, what you load into vRAM can have significantly different impacts on performance. You can add 1 new texture at a few MB and see a very minor performance impact, or you could load a few-KB pixel shader which does something extremely complex and halves your frame rate. What is useful is talking about this in aggregate; you only need look at the evolution of GPUs over the last 20+ years to see that as GPUs get faster, we add more vRAM.

I did acknowledge what you're saying, in my prior post, where I talked about vRAM being mostly necessary for what you're actually rendering right around you at the moment, and about most game engines prefetching and caching assets they know or predict will be needed in future. Although that's really not as big of a deal as you'd imagine. People are correct in saying that vRAM is a way faster source of data than having to fetch from disk, but also 10GB of vRAM (of which only some smaller subset is actually used for raw game assets) is not a lot of data; you can fill that from a modern disk super fast, relative to human perception.
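To put numbers on "super fast, relative to human perception", here's an illustrative calculation; the sequential speeds are assumed ballpark figures for each storage tier:

```python
# How long it takes to stream 10 GB of asset data from different storage tiers.
# Speeds are assumed ballpark sequential figures in GB/s, not benchmarks.
assets_gb = 10
tiers_gbs = {"HDD": 0.15, "SATA SSD": 0.55, "PCIe 3.0 NVMe": 3.5, "PCIe 4.0 NVMe": 7.0}
for tier, speed in tiers_gbs.items():
    print(f"{tier:>14}: {assets_gb / speed:5.1f} s")
```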

Caching large volumes of assets you're not using right now is a bit of a waste of vRAM; it's a solution to a problem that costs money, because vRAM costs money. Whereas software can intelligently use small amounts of cache in an effective and efficient way to achieve basically the same thing, which is what game engines have been perfecting for 20+ years. My GTA V install is 92GB, which when uncompressed in memory after loading would be way more, and my 8GB video card handles that game just fine; I can zip around the island in a jet and it has zero problems seamlessly streaming assets in and out intelligently.

I'm talking broadly here. What people do in reality is they get a game, they crank up the settings, they check their frame rate, and if it's too choppy they lower the settings, which lowers vRAM usage on aggregate, until they get a playable frame rate. If you have a 20GB card and you lower your settings to be playable, and at those settings you're using 10GB of vRAM, then you're wasting the other 10. There is a relationship between the GPU and vRAM, and how much vRAM is appropriate for any given GPU, above which you're only adding cost and not really any actual benefit.
 
Remember when everyone said the 1080 Ti was going to be an £850 GPU and a cut-down Titan because there was no competition, then it launched at a cheaper price of £719 and offered more performance than the Titan, because Nvidia were on the fence about Vega's performance? Well yeah, here we are again: Big Navi is around the corner and I'd be very surprised if Nvidia release the 3080 at £800.
 
Do the new consoles have any system RAM? Or are they going to have to use a portion of the 16GB of GDDR as system RAM?

If they have to use it as system RAM as well, then it might not leave much more than 8GB as VRAM.

The console specs are not directly comparable to PC specs. Differences in APIs, resolution, lack of modding, plus techniques/cheats like checkerboarding will effectively reduce the amount of grunt and memory needed by the consoles.

In other words you need more on PC to run these "ports" properly, as you've come to expect on your 1440p/4k screen, with ultra settings :p

I think it stands to reason that if the consoles have had a major upgrade, then continuing to use the same amount of VRAM on PC GPUs that we've had since 2015 is at best a gamble, at worst planned obsolescence from nVidia.
 
You don't need a blisteringly fast NVMe drive à la the consoles to stream from, if the game has the option of simply caching more of its assets.

You will need NVMe, because even if you have 256 GB of VRAM it will take forever to start the game (to have all the map data available at a moment's notice), and per GB, NVMe storage is much cheaper than VRAM and will remain so.
 
During the last console gen, PC had 8GB-11GB and people saw their VRAM get used up by these "console ports", while the consoles had way less in terms of GPU grunt and memory/VRAM.

During the next console gen we're sticking with 8GB on PC even though the consoles are now upgraded...

What I'm saying is PC probably needs more than to just barely match console spec. Especially if you want to mod your games.

We've had 8 GB on PC since before 2015 and now things are moving on. The consoles are moving on.

If nVidia refuses to move on, there should be nobody justifying this. They're only extracting every last $$$ and giving you the bare minimum in return.

I don't think the bar has been raised at all. Consoles won't use the full spec from the start. The current consoles will last 5-6 years. During that time we'll have the 3090, then in 2022 we'll get the 4090, then in 2024 we'll get the 5090, followed by the 6090 in the year 2026. PCs will be light years ahead. Don't panic.
 
Caching large volumes of assets you're not using right now is a bit of a waste of vRAM; it's a solution to a problem that costs money, because vRAM costs money. Whereas software can intelligently use small amounts of cache in an effective and efficient way to achieve basically the same thing, which is what game engines have been perfecting for 20+ years. My GTA V install is 92GB, which when uncompressed in memory after loading would be way more, and my 8GB video card handles that game just fine; I can zip around the island in a jet and it has zero problems seamlessly streaming assets in and out intelligently.
Not sure if you've been following this launch but costs are already going up :p That's mostly to do with nV's reported 60% profit margin tho :p

So now, as well as an £800 GPU and a new 1kW PSU, we'll also need a brand new best-in-class NVMe drive, plus 32 GB of fast system RAM, so nV can cache the video assets on other parts of your PC and avoid having to eat into their profits by putting a decent amount of VRAM on their cards.

Great for them, not so great for us :p
 

Not sure if you've been following this launch but costs are already going up :p That's mostly to do with nV's reported 60% profit margin tho :p

Nvidia was well over 60% last year, but Mellanox cost a few billion USD, which gamers helped out with.

So now, as well as an £800 GPU and a new 1kW PSU, we'll also need a brand new best-in-class NVMe drive, plus 32 GB of fast system RAM, so nV can cache the video assets on other parts of your PC and avoid having to eat into their profits by putting a decent amount of VRAM on their cards.

Great for them, not so great for us :p

Nobody needs more than 640KB of RAM! ;)
 
Ahh, but I do love them tubers that churn out them classic "build a console killer for half the price" vids. Yeah, it's easy to do that with the PC platform, especially as more time passes. :)
 
I could name a thousand things that aren't worth the extra cost. Heinz beans aren't worth the extra money, for example. Shelves in every shop are stacked with things that aren't worth the extra money. Graphics cards are just one of those things. If the 3090 is too expensive, then get a 3080, a 3070, or a 3060.

Well, up until Turing we had decent uplifts in performance for pretty much static costs and tiers, so yes, the upgrades have been worth it. Now, not so much. It just means I'm waiting longer for that 50% performance uplift at the same price point, because gaming just isn't worth that much of my money, despite being able to afford these frankly ridiculous price increases for the performance on offer.

The console specs are not directly comparable to PC specs. Differences in APIs, resolution, lack of modding, plus techniques/cheats like checkerboarding will effectively reduce the amount of grunt and memory needed by the consoles.

In other words you need more on PC to run these "ports" properly, as you've come to expect on your 1440p/4k screen, with ultra settings :p

I think it stands to reason that if the consoles have had a major upgrade, then continuing to use the same amount of VRAM on PC GPUs that we've had since 2015 is at best a gamble, at worst planned obsolescence from nVidia.

I appreciate that consoles work at a lower level than PC, but game data that is normally stored in system RAM is still needed on the consoles, especially the newer ones that are touting this instant game switching etc. I can easily see them requiring at least 6GB of that 16GB total, which would leave about 10GB for the GPU at best. The Xbox is basically running a version of Windows 10.

I agree Nvidia seem to be stingy in offering only 10GB in this day and age on their 80-class GPU. However, this is Nvidia we are talking about, so I'm certainly not surprised.
 
I could name a thousand things that aren't worth the extra cost. Heinz beans aren't worth the extra money, for example. Shelves in every shop are stacked with things that aren't worth the extra money. Graphics cards are just one of those things. If the 3090 is too expensive, then get a 3080, a 3070, or a 3060.

That kind of talk's not popular round here fella!

"Buy what you can afford and you think is worth it"???

Lynch him!!!
 
Running out of VRAM will cause a more significant reduction in frame rates (and potential hitching issues) than increasing the amount of work the GPU has to do due to an increase in asset quality. Your reasoning also seems to be based on the idea that as soon as asset quality increases past 8GB of VRAM usage, render times will increase to the point where the frame rate is unplayable on current GPUs.

The current generation of consoles realistically had 4-6GB available to them for the GPU (8GB total). Yet we have people reporting around 8GB used on current AAA titles. We are about to double how much RAM is available to the console GPU, as well as add asset streaming direct from their SSDs. Do you honestly think that this won't increase VRAM usage for PC games?

I know: if you have to fetch something from disk that you need right now for the next frame, you can get 1-2 second hitches, which devastates average frame rate. Modern game engines typically deal with this problem by having some level of prediction of what assets are needed and might be needed soon in order to prevent this, because game assets typically dwarf available vRAM anyway.
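Some illustrative frame-budget arithmetic on why a blocking fetch reads as a hitch; the stall duration is an assumed figure from the 1-2 second range above:

```python
# A synchronous disk fetch mid-frame versus the per-frame time budget.
target_fps = 60
frame_budget_ms = 1000 / target_fps      # ~16.7 ms available per frame
stall_ms = 1500                          # assumed 1.5 s blocking fetch from disk
frames_missed = stall_ms / frame_budget_ms
print(f"Frame budget: {frame_budget_ms:.1f} ms; a {stall_ms} ms stall "
      f"swallows ~{frames_missed:.0f} frames' worth of time")
```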

What I'm saying is that as render asset quality increases, vRAM usage increases and GPU render time increases (frame rate goes down). This is not at all controversial. Load up a game that displays predicted vRAM usage in the settings menu, like GTA V, and mess with your settings, turning them up and down; it will give you a predicted vRAM usage which goes up and down with the settings, and as you increase your settings your frame rate goes down. For any given GPU with a certain amount of fixed processing power there is a ceiling on the maximally useful amount of vRAM to put on the card. Which is how you pick how much vRAM to put on a card when you design it: you don't want less than what can be used or it'll bottleneck the GPU, and you don't want any more than necessary because it will add cost but no benefit.
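A toy sketch of how a settings-menu estimate like that could be computed; the option categories and MB figures are invented for illustration, not GTA V's actual numbers:

```python
# Hypothetical settings-menu VRAM estimator: sum assumed per-option footprints.
TEXTURES_MB = {"normal": 1200, "high": 2400, "very high": 4200}
SHADOWS_MB = {"normal": 300, "high": 600, "very high": 1100}
FRAMEBUFFER_MB = {"1080p": 700, "1440p": 1200, "4k": 2600}

def predicted_vram_mb(textures, shadows, resolution):
    """Predicted usage rises with each setting, just like the in-game meter."""
    return TEXTURES_MB[textures] + SHADOWS_MB[shadows] + FRAMEBUFFER_MB[resolution]

card_mb = 8 * 1024  # pretend 8GB card
used_mb = predicted_vram_mb("very high", "high", "4k")
print(f"Predicted: {used_mb} MB of {card_mb} MB ({used_mb / card_mb:.0%})")
```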

Obviously in the real world not all vRAM usage is equal, I've acknowledged that: shaders are tiny in vRAM but can have a large impact on FPS, while texture assets are massive in comparison and have much less impact on frame rate. So you go based on aggregates across average users, behaving in average ways, in average games. You don't let exceptions and outliers push up the cost of your product for everyone else. And in the big picture, people aren't giving lists of loads of games using 8GB+ of vRAM; they're giving exceptions. I have nearly 1000 Steam games and a 1080, which I typically play at 4k, and basically most of my games are far, far below the 8GB limit.

I think you're probably overestimating the consoles. On the PS4, 2GB of the 8GB is reserved for the OS/system, leaving the devs with 6GB to do everything else. If you look at the RAM usage of a cross-platform engine on the PC, say Unreal Engine, you're easily looking at 2GB in game. I think the upper bound of the current gen console for graphics-related things (that a PC would use vRAM for) is probably 4GB, but likely less.
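The arithmetic behind that estimate, spelled out; the engine-side figure is the assumption stated above:

```python
# Illustrative split of the PS4's 8 GB unified memory, per the estimate above.
total_gb = 8
os_reserved_gb = 2       # system/OS reservation cited in the post
engine_cpu_side_gb = 2   # assumed "system RAM"-style engine/game data
graphics_gb = total_gb - os_reserved_gb - engine_cpu_side_gb
print(f"~{graphics_gb} GB left for what a PC would hold in vRAM")
```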

Of course the consoles upping their usage will increase the vRAM usage for PC games; I specifically and deliberately made that point, and estimated they weren't likely to go over 8-10GB, and that the really high memory usage examples people are giving here for PC games are exceptions and not the norm. I mean, I'd bet on them not getting anywhere near 10GB in practice, simply based on the fact they'll be using what at launch will be mid-range GPUs, and simply cannot load up on high quality assets to that degree and keep a playable frame rate. If you look at the PS5's GPU at about 10TFLOPS as a really rough measure of horsepower, and compare that to a modern PC video card of the same power, say a 1080/Ti, those cards are 8GB cards. I have a 1080 (well, 2, but SLI doesn't work in a lot of games and doesn't double your usable vRAM anyway) and a 4k monitor, and I run out of GPU power way, way before vRAM in basically all the games I own (nearly 1000 Steam titles).
 
That kind of talk's not popular round here fella!

"Buy what you can afford and you think is worth it"???

Lynch him!!!

But it hasn't been that, has it? It's been the same few saying it's purely "you can't afford it, stop moaning", nothing to do with perceived worth. It seems they can't comprehend actually having the cash available but not purchasing; it's as if it's an alien concept.

I didn't buy a 20 series because frankly they offered appalling uplifts in performance for their price, especially at the higher tiers. My money was there waiting, but Nvidia never got a penny of it because they failed my criteria.
 
Well, up until Turing we had decent uplifts in performance for pretty much static costs and tiers, so yes, the upgrades have been worth it. Now, not so much. It just means I'm waiting longer for that 50% performance uplift at the same price point, because gaming just isn't worth that much of my money, despite being able to afford these frankly ridiculous price increases for the performance on offer.

I appreciate that consoles work at a lower level than PC, but game data that is normally stored in system RAM is still needed on the consoles, especially the newer ones that are touting this instant game switching etc. I can easily see them requiring at least 6GB of that 16GB total, which would leave about 10GB for the GPU at best. The Xbox is basically running a version of Windows 10.

I agree Nvidia seem to be stingy in offering only 10GB in this day and age on their 80-class GPU. However, this is Nvidia we are talking about, so I'm certainly not surprised.

Well, Moore's law (observation) is basically starting to fail, and will fail in the long run, and by long run I mean actually not that far from now. Most of the doubling in speed for the same cost that we've enjoyed in the past has come from shrinking the transistor size per area. Whereas, while that's still happening now, most of the performance increase is coming from larger die sizes and more aggressive clocking. I think that's something we'll all eventually have to come to terms with: increasingly we won't be able to expect big leaps at the same cost, and those days are getting closer and closer to being over.

I don't really class it as stingy; it sort of implies that Nvidia makes off with more cash, or something like that. If they put more vRAM on the cards they wouldn't just eat that cost for us like a champ; they'd pass it onto the consumer with higher video card prices. Their interests are aligned with their customers'; they're not going to dump unnecessary amounts of vRAM onto a card if they don't think it'll be used. And as I said, based on how slow the GPUs are in the next gen consoles (about the speed of a high(ish) end card from 2 generations ago, which is an 8GB card whose vRAM is mostly unused), I think 10GB is a real stretch that only the exceptions will need; 8GB I think is a realistic maximum. Devs are going to load that bad boy up with 10GB of high end assets and find they have 10fps.
 
We just have to hope that AMD bring the competition and don't skimp on VRAM. To me it's confusing why Nvidia chose this strategy with the VRAM.

It's pretty straightforward really: the less VRAM the cards come with, the sooner the need to upgrade, and that means more cash for Nvidia, as most people buy their cards.
 
Of course the consoles upping their usage will increase the vRAM usage for PC games; I specifically and deliberately made that point, and estimated they weren't likely to go over 8-10GB, and that the really high memory usage examples people are giving here for PC games are exceptions and not the norm. I mean, I'd bet on them not getting anywhere near 10GB in practice, simply based on the fact they'll be using what at launch will be mid-range GPUs, and simply cannot load up on high quality assets to that degree and keep a playable frame rate. If you look at the PS5's GPU at about 10TFLOPS as a really rough measure of horsepower, and compare that to a modern PC video card of the same power, say a 1080/Ti, those cards are 8GB cards. I have a 1080 (well, 2, but SLI doesn't work in a lot of games and doesn't double your usable vRAM anyway) and a 4k monitor, and I run out of GPU power way, way before vRAM in basically all the games I own (nearly 1000 Steam titles).
So current gen games can hit 8-10GB but you consider these exceptions.

Future releases targeting the new console baseline will be using more VRAM, you agree.

So therefore the exceptions will very much start becoming the norm.

But somehow you estimate that 10 GB will be enough on a 2080 Ti class card. And by extension may I presume you think 8 GB will be plenty on the 3070 and 6 GB plenty for the 3060?

I'd love to know where your confidence in these numbers comes from. OK you already said that nVidia knows best, so perhaps that is it?
 
It's pretty straightforward really: the less VRAM the cards come with, the sooner the need to upgrade, and that means more cash for Nvidia, as most people buy their cards.
What's confusing to me is the consumers that actually defend this, and want nVidia to squeeze them for every last penny! "Please Sir, can I have less?"
 
Strange justification for them being stingy on the RAM, I reckon. Just because you do not see it (in your library of games, with your GPU) doesn't mean it's not a thing. This is what we mean when we say it's sometimes not a devil's advocate approach; more so that people are quick to excuse companies like Nvidia, when in fact you should be challenging them to keep them on their toes.
 