
Do AMD provide any benefit to the retail GPU segment?

However, I much prefer AMD as a company. Nvidia are just the absolute worst for all the reasons @tommybhoy laid out, and more. AMD are no angels, far from it, but ##### me....
Exactly. I always used Nvidia, a few SLI setups too, until my brother on a 3870 kept banging on about how they were better for the price, so I bought a 4870 Toxic, then a 5870 through work. I didn't look back through three generations of CrossFire up to 290Xs, as they decimated Nvidia on cost, and that had me firmly in the AMD camp. However, I got fed up not getting all the features, AMD went to **** and almost went to the wall, so I sold up and went Nvidia: can't beat 'em, join 'em. I get every feature and haven't looked back since... well, until they started flushing textures. :p

So yes, AMD are no angels, but if AMD even remotely want to increase market share, they need to be as ruthless as Nvidia: take control of influencers/press, lock FSR 3 to AMD only, and launch their 78-series ASAP.

Personally, I don't think they give a **** about the PC space. They're already the industry leader in the AAA gaming hardware space, taking consoles and PC together; there's more AMD in gaming hardware than Nvidia, and they are raking it in. If they were interested in PC, they would have launched the 78's already, or announced a release hot on the tail of the 4070.
 
They're not interested whatsoever; they just use PC as an additional revenue stream, a by-product, with console/APU being their main concern. Otherwise a 9% market share (between just two companies) is just plain embarrassing.
Exactly, as long as their dGPU profit continues, that's all they care about. Their CPU and console division is where they coin it in; dGPU is an afterthought at this stage. I mean, FSR has been co-developed for PC and console, and adding FSR 3 to consoles is a massive win for AMD in securing every console going forward.
 

Let's hope that actually happens....

*cough* FSR going to be in every console game!!! *cough*


Probably will but give it a few years.

EDIT:

Also, if it is ****, which is highly likely, don't expect devs to be adding it until issues are sorted, i.e. add another few years :p
 

Oh yes, instead of arguments to counter a point, you've stooped to mockery.

Here you go, a game that looks good texture-wise, probably good enough for 4GB of VRAM at 1080p for its entire gameplay.



How about another Sony console exclusive, more open-world, which runs fine on 8GB of VRAM at 5760x1080 and also doesn't need a crazy-fast CPU to decompress textures?


Heck, look at CP2077 path tracing: it runs better on a 3070 8GB than on a 7900 XTX 24GB, in a huge, dense open world with path tracing, and it looks better "overall" than anything else currently out. Since the issues shown in Forspoken, TLOU 1 (although I think most of these issues have been fixed now by the devs, so much for not being a game issue though ;)) and so on are never game issues and 100% down to the hardware, then I guess we can say the same for the 3070 vs 7900 XTX in CP2077 path tracing :p ;) :D

There are plenty of examples of games that have good textures and don't need a beefy VRAM buffer or crazy CPU requirements to unpack an over-the-top compression algorithm for the sake of it. It's just poor management elevated to the rank of normality, simply because AMD has a slight advantage.
Isn't that after the update which wrecked rasterisation performance on Navi? But if 8GB is enough, how come the RTX4070 series now has 12GB and the RTX4080 has 16GB? Nvidia knows something you don't! :cry:

Because 8GB was fine in the past, is still fine for 1080p, maybe 1440p, in games that actually have the slightest optimisation process going on, but it won't be in the future if game devs actually bother to bring next-gen stuff out. This is not a talk about how a 4xxx-series card would be fine with 8GB, but about how previous graphics cards should be fine in general. Look at HZD above: it runs out of grunt on my 2080 8GB card, not VRAM, at higher resolution, and it does so thanks to its proper porting.
FC6 starts fine at 5760x1080 on 8GB, but drops to single digits and stays there thanks to crappy memory management. Even at 1080p, where it should be fine with the high texture pack (according to the in-game memory usage meter it should be under 7GB, if I remember correctly), it just keeps adding to VRAM usage until it goes over the available memory and stays there. Again, bad memory management.

BTW, how much good has the 16/12GB of VRAM on AMD cards done in RT/PT games so far? How does the 6800 XT fare against the 10GB 3080?
Apparently even the 20GB of the 7900 XT is not doing that well against the aging 3080 once you put it through a proper RT scenario.


[Chart: Cyberpunk 2077 RT benchmark, 2560x1440]


Here's one from an older review, so we don't have the wrecking of performance:

[Chart: Cyberpunk 2077 RT benchmark, 2560x1440, older review]
 


Horizon gives you low-res textures when you run out of VRAM, i.e. on 10GB-and-under GPUs.

It's not my fault Nvidia screwed you over and kept giving you the 8GB of VRAM that GPUs had 9 years ago. Don't blame me if you're in denial; just buy a better GPU.
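
(As an aside, here's a minimal Python sketch of the kind of fallback being described, i.e. a streamer dropping textures to lower-resolution mips once it blows past a VRAM budget. The texture names, sizes, budget and the greedy strategy are all made up for illustration; this is not how Guerrilla's engine actually works.)

# Hypothetical sketch only: one way an engine *might* pick texture mip levels
# against a VRAM budget. All numbers and names below are invented.

def mip_bytes(width, height, bytes_per_pixel=4):
    # Size of one uncompressed mip level.
    return width * height * bytes_per_pixel

def pick_mips(textures, budget_bytes):
    # Start every texture at its full-resolution mip (level 0), then keep
    # dropping whichever texture currently costs the most until we fit.
    chosen = {name: 0 for name in textures}

    def cost(name):
        w, h = textures[name]
        return mip_bytes(w >> chosen[name], h >> chosen[name])

    while sum(cost(n) for n in textures) > budget_bytes:
        chosen[max(textures, key=cost)] += 1   # halve that texture's resolution

    return chosen

# Three big textures squeezed into a deliberately tiny 64 MiB budget:
textures = {"rock": (4096, 4096), "cliff": (4096, 4096), "grass": (2048, 2048)}
print(pick_mips(textures, 64 * 1024**2))   # -> {'rock': 1, 'cliff': 1, 'grass': 0}

The point of the sketch is just that when the budget is too small, something has to give, and what gives is texture resolution.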
 
But between 2009 and 2016 we went from 512MB/1GB to 8GB in the mainstream, and from 1.5GB/2GB to 11GB at the high end.

Since 2016, we have mostly stagnated at 8GB in the mainstream (apart from two cards), while the high end went from 11GB to 24GB.

If the mainstream had kept up with the high end, we should have been at 12GB/16GB by last generation.

Yet during the period since 2009, we have also had 3.5 console generations. The processing power of dGPUs has increased massively too: an R9 390 8GB has far less processing power than an RTX 3070 8GB. SSDs are much faster now, so PCs can read textures much more quickly as well.

So despite all this, we are still stuck with only 8GB? It's not only a limitation on texture size and quantity, but also on the amount of detail in them. Imagine if desktop PCs hadn't increased system RAM since 2016? Or SSD size?

It's becoming a limitation now. Even as a modder, I now tend to have to mod specifically to keep textures within 8GB!

I'd say there's plenty of detail for an under-4GB VRAM buffer.

 

Using that logic, you could argue lighting and reflections look good enough, so why bother with RT? For very little visual gain you are wrecking performance.
I played most of Cyberpunk 2077 on a non-RT card. Atomic Heart looks great and they never included RT despite Nvidia showcasing it in the demo.
Games such as HZD look fantastic already, but as said above, HZD streams low-res textures if you don't have enough VRAM. But as you said, it looks good, so why bother with RT, which then needs you to switch on upscaling, frame generation, etc.?
You can't dismiss a metric just because one brand has an advantage in it, and then lean on it when the other brand has the advantage. But it is Nvidia's fault for not adding a few more GB of VRAM, and as I said, why are they including more now? Whether you like it or not, VRAM is going to be required more and more. This was always going to happen once we moved past cross-generation titles.
I have an RTX 3060 Ti FE myself and I don't regret my purchase, because it was the best-value card in the UK in 2021 and I mostly play older and indie games. The RDNA2 cards were poorly priced in the UK. But I certainly knew that coming into year three, 8GB would start to be an issue. Even when modding some of the Bethesda games with some of the newer texture mods, I have to be careful now, as @KompuKare will know.
 
Horizon gives you low-res textures when you run out of VRAM, i.e. on 10GB-and-under GPUs.

It's not my fault Nvidia screwed you over and kept giving you the 8GB of VRAM that GPUs had 9 years ago. Don't blame me if you're in denial; just buy a better GPU.

It doesn't at my end. Even the in-game meter doesn't go over 8GB, it doesn't drop quality in game, and once I tune settings for 60fps, VRAM usage goes even lower. On the other hand, I don't play with a browser open.

Plus you've ignored all the other examples.

Using that logic, you could argue lighting and reflections look good enough, so why bother with RT? For very little visual gain you are wrecking performance. I played most of Cyberpunk 2077 on a non-RT GTX 1080. Games such as HZD look fantastic already.
It's more or less "little" depending on the scene. For me, overall it's not that little.

But, following your train of thought, does TLOU have better texture work than the original A Plague Tale I've posted pictures from? Does it use some film-quality assets?
 
HZD had multiple patches to address the texture issue (it was also happening on AMD GPUs; in fact, more so in the videos I posted a while back). Also, that claim originated from some randomer on Reddit comparing two completely different-generation GPUs, with captures taken months apart, i.e. before the patches fixed the issue :o
 
It doesn't at my end. Even the in-game meter doesn't go over 8GB, it doesn't drop quality in game, and once I tune settings for 60fps, VRAM usage goes even lower. On the other hand, I don't play with a browser open.

Plus you've ignored all the other examples.


It's more or less "little" depending on the scene. For me, overall it's not that little.

But, following your train of thought, does TLOU have better texture work than the original A Plague Tale I've posted pictures from? Does it use some film-quality assets?
But again, your argument is that devs should just never get any more VRAM to work with. So there will be an upper limit on the size and quality of textures in the future. That means they need to spend more and more time (as the UE5 dev said) trying to manage things. Lots of games use multiple texture sets which apply at different draw distances. They also target the better-quality textures at the places where you might look. If you mod games you will appreciate this.

Then there's the whole need to use different cells for large internal and external worldspaces.

But Nvidia is increasing VRAM now? They know what is coming.

But look at some of the UE5 demos where the textures look amazing. So what is the excuse when the next-generation engines come out? Devs will start pushing the need to use DirectStorage, more VRAM, etc.

It also goes to how the RT crowd go on about RT reducing work for devs, yet it craters performance for the end user. It also increases VRAM and CPU usage. For what? Slightly better reflections and different lighting? Some more reactive puddle reflections?
Then you need to degrade image quality by using some form of upscaling, insertion of lower-quality frames, etc. I would argue that most people, in most so-called RT games, would only recognise the RT-on setting from the FPS dip.

But it is the "future", so expect that, just like increasing VRAM requirements, it will be a thing.

And as KompuKare jokingly said in one post, a properly lit dungeon would be pitch black. People would complain, just as they moaned about Doom 3 being too dark!

Edit!!

We wouldn't have been having these arguments to the same level years ago. Since 2016 it's quite clear BOTH companies have been trying to sell more for less, but it was slowly happening from 2011. The same sorts of jumps in VRAM requirements happened decades ago, as did the introduction of new features which affected performance. But the difference is that, on average, we tended to get good jumps in performance and price/performance from top to bottom.

Now companies have to consult the accountants to ask for permission.
 
Games seem to just use brute force, which uses a lot more VRAM than is needed. Consoles have 12GB (I think, not looked it up) total (RAM+VRAM). The PCIe bus is hardly used, which is clearly visible in how little difference dropping to gen 3/2 makes. The only way devs will stop this is if they are forced to, as it takes more work to get asset streaming to work well. Not all games are like this: some use a relatively small amount and still look very good, others take the **** on purpose to get attention.

Edit:
One frame at 4K 32-bit = 33.75MB. 8GB can store 242.726 such 4K 32-bit frames.
90%+ of VRAM is used for asset buffers (vertex/index/textures...). This sort of data can be streamed in and out as needed very quickly. Games use more RAM than VRAM because needed assets are kept in RAM so they can be re-sent to the GPU quickly if required. Even keeping assets in RAM is not 100% necessary, as storage is fast enough to stream the data if done correctly.
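
(Quick Python sketch to sanity-check those numbers. Assumption: "4K 32-bit" is taken as DCI 4K, i.e. 4096x2160 at 4 bytes per pixel, since that's what gives the 33.75MB figure; consumer UHD at 3840x2160 works out to about 31.6MB per frame instead.)

# Frame-buffer arithmetic from the post above. Assumption: "4K" = 4096x2160
# (DCI 4K) and "32-bit" = 4 bytes per pixel.
width, height, bytes_per_pixel = 4096, 2160, 4

frame_bytes = width * height * bytes_per_pixel         # one uncompressed frame
frame_mib = frame_bytes / 1024**2                       # 33.75 MiB
vram_mib = 8 * 1024                                     # an 8 GiB card

print(f"One frame: {frame_mib:.2f} MiB")                # 33.75
print(f"Frames that fit in 8 GiB: {vram_mib / frame_mib:.1f}")   # ~242.7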
 

Consoles have 16GB of shared memory. The PS5 has 16GB of GDDR6 but also an extra 512MB of DDR4; the OS and background tasks run on this 512MB section, leaving the remaining 16GB for the game to use as it chooses. VRAM or RAM, the developer has the freedom to use the 16GB buffer as they wish.


Fun fact: Mark Cerny said several game developers were begging him to put 32GB of memory into the PS5 a few years ago, when they were designing the machine and consulting with game studios to see what they wanted for their future games.
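
(To illustrate what that freedom looks like in practice, here's a rough Python sketch of a unified-memory budget for a single 16GB pool. The split percentages are invented purely for illustration; they don't come from Sony or any real game.)

# Illustrative only: a made-up budget for a console with one 16 GiB GDDR6
# pool (OS/background tasks living on the separate 512 MiB DDR4, as above).
TOTAL_GIB = 16.0

budget = {
    "textures / geometry (VRAM-like)":      0.55,
    "render targets / buffers (VRAM-like)": 0.15,
    "game logic / world state (RAM-like)":  0.20,
    "audio + streaming cache":              0.10,
}
assert abs(sum(budget.values()) - 1.0) < 1e-9   # shares must cover the whole pool

for use, share in budget.items():
    print(f"{use:40s} {share * TOTAL_GIB:5.2f} GiB")

# On PC the same data has to be split across fixed pools (e.g. 8 GiB of VRAM
# plus system RAM), which is where the mismatch this thread argues about comes from.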
 
But again, your argument is that devs should just never get any more VRAM to work with. So there will be an upper limit on the size and quality of textures in the future. That means they need to spend more and more time (as the UE5 dev said) trying to manage things. Lots of games use multiple texture sets which apply at different draw distances. They also target the better-quality textures at the places where you might look. If you mod games you will appreciate this.

Then there's the whole need to use different cells for large internal and external worldspaces.

But Nvidia is increasing VRAM now? They know what is coming.

But look at some of the UE5 demos where the textures look amazing. So what is the excuse when the next-generation engines come out? Devs will start pushing the need to use DirectStorage, more VRAM, etc.

It also goes to how the RT crowd go on about RT reducing work for devs, yet it craters performance for the end user. It also increases VRAM and CPU usage. For what? Slightly better reflections and different lighting? Some more reactive puddle reflections?
Then you need to degrade image quality by using some form of upscaling, insertion of lower-quality frames, etc. I would argue that most people, in most so-called RT games, would only recognise the RT-on setting from the FPS dip.

But it is the "future", so expect that, just like increasing VRAM requirements, it will be a thing.

And as KompuKare jokingly said in one post, a properly lit dungeon would be pitch black. People would complain, just as they moaned about Doom 3 being too dark!

Edit!!

We wouldn't have been having these arguments to the same level years ago. Since 2016 it's quite clear BOTH companies have been trying to sell more for less, but it was slowly happening from 2011. The same sorts of jumps in VRAM requirements happened decades ago, as did the introduction of new features which affected performance. But the difference is that, on average, we tended to get good jumps in performance and price/performance from top to bottom.

Now companies have to consult the accountants to ask for permission.
My argument is that devs should make the best possible use of the available resources. For instance, making a crappy port with high VRAM usage and an overly complicated compression/decompression algorithm, versus using the same high VRAM amount for some stunning games with normal decompression needs.

I really do hope that devs will actually make good use of the extra vram available. I don't have a problem with that.

HZD looks the same texture-wise on single vs multiple monitors on an 8GB card, no drop in quality. Don't know what the problems were at launch, as it wasn't my cup of tea then, but it's fine now.

And yes, perhaps there should have been more VRAM on the cards, but that's rather easier to justify if you're modding the game (and I tend to agree here) vs (again) a crappy port.

BTW, 8GB is fine with RT on at 5760x1080 with performance-mode DLSS for 30-40 fps. Or path tracing for the same frame rate at 1080p. It lacks power, not VRAM...

And, on the same 8GB card, the new A Plague Tale runs fine as well.

So... use high VRAM if necessary, BUT make it count!
At 1440p I'm having to run reduced graphics settings in Rust, significantly reduced graphics settings, to stop VRAM overflow hitching.

A game from 2013.

AMD's fault....
It isn't. It was/is just crappy if such an old game needs more than 8GB at 1440p.

Funny thing though is that they launched 4GB Fury cards (plenty of talk back then about 4GB being enough for 4K), then 8GB cards such as Vega and the 5700 XT. Why, hadn't they heard about Rust? Or was it simply because they ignore outliers/badly optimised games?
 
Consoles have 16GB of shared memory. The PS5 has 16GB of GDDR6 but also an extra 512MB of DDR4; the OS and background tasks run on this 512MB section, leaving the remaining 16GB for the game to use as it chooses. VRAM or RAM, the developer has the freedom to use the 16GB buffer as they wish.


Fun fact: Mark Cerny said several game developers were begging him to put 32GB of memory into the PS5 a few years ago, when they were designing the machine and consulting with game studios to see what they wanted for their future games.
And why didn't he put 32GB in, then?
 