
Do AMD provide any benefit to the retail GPU segment?

"Used to?" What makes you think Nvidia aren't behaving like this today as well? Their absolute slew of proprietary value adds would suggest otherwise whereas AMD's equivalents are always open access. You can't tell me that e.g. Control and Cyberpunk's RT settings weren't configured with disadvantaging AMD cards in mind, either. You're basically expecting AMD to bring a knife to a gun fight.

But my issue with Nvidia here is that their problem is actually trivially resolvable: just offer higher-VRAM alternatives to their current products. It's an easy thing to do, requires zero rearchitecting and doesn't even cost very much. But they don't want to do that, because ultimately Nvidia want those parts to get crippled by VRAM requirements so that you buy their next gen as well... It's just coming home to roost a bit faster than anticipated.
Cyberpunk with RT off runs equally well or better on AMD cards than on Nvidia cards. There are tons of AMD-sponsored games that run way worse on Nvidia than on the equivalent AMD card, and not because of VRAM: AC Odyssey, Dirt, Forza, plenty and plenty of examples. So no, I don't think Nvidia is doing anything shady today.

Yes, the RT difference in Cyberpunk is huge, but that's because the actual RT capabilities of the cards are not close. How do I know? There are independent fully path-traced benchmarks that show a much bigger difference between Nvidia and AMD than Cyberpunk does. So, silly as it sounds, Cyberpunk actually runs well on AMD. Guru3D tests those; go check a GPU review from them.

If Nvidia wanted you to replace your card faster due to VRAM, then their sponsored games wouldn't run absolutely fine and play absolutely great on a measly 5GB of VRAM. It's only AMD-sponsored games that hog VRAM, so how exactly is Nvidia trying to make you change your card? Are you saying Nvidia is paying AMD to make AMD-sponsored games hog VRAM?

And of course you are still missing the point. OK, let's say Nvidia now puts a minimum of 48GB of VRAM on everything; now what? The AMD-sponsored games still look like crap while using 20GB of VRAM. How does that solve the issue of games hogging VRAM while looking like trash compared to games that don't?

I think you haven't played A Plague Tale: Requiem. Please do me a favor and go try it. The game puts everything else to shame in terms of visuals while using 4.5 to 5.5GB of VRAM.
 
See, you're cherry-picking now. Basically all of those arguments can be applied to VRAM the other way around: turn down the texture/resolution slider.
 
I think at this point you are purposefully ignoring the point. Of course you can turn down the textures. The point is, again, for the 19th time, that A Plague Tale: Requiem uses 4.5 to 5.5GB at 4K Ultra and absolutely puts everything out there to shame in visuals. It's not even a contest. Meanwhile, I saw TLOU using 9.5GB at 720p. I kid you not.

So the freaking question is, why the hell should anyone pay for more VRAM to get worse visuals? It just doesn't make sense to me. I don't mind AMD sponsoring games that hog VRAM, but at least put that VRAM to good use and give us good visuals. Not Forspoken and Godfall, for God's sake.
 

It does seem like everyone is putting an awful lot of stock in TLOU, when it seems to me that it is just an incredibly poorly optimised game.
 
Well, most of the games that hog VRAM are: Godfall, FC6, Forspoken, TLOU. They are all AMD-sponsored and they all perform like crap, and not just in the VRAM department.

I can brute-force TLOU because of the 16 cores on my 12900K, but if I turn off the E-cores and just use it as a normal 8-core, performance is dreadful. I mean the average fps is high, but it's so full of microstuttering.
 
I think at this point you are purposefully ignoring the point. Of course you can turn down the textures. The point is, again, for the 19th time, that A Plague Tale: Requiem uses 4.5 to 5.5GB at 4K Ultra and absolutely puts everything out there to shame in visuals. It's not even a contest. Meanwhile, I saw TLOU using 9.5GB at 720p. I kid you not.

So the freaking question is, why the hell should anyone pay for more VRAM to get worse visuals? It just doesn't make sense to me. I don't mind AMD sponsoring games that hog VRAM, but at least put that VRAM to good use and give us good visuals. Not Forspoken and Godfall, for God's sake.
I'm not really, I just don't agree. Some ports are going to be bad and use resources they shouldn't. That's life.

But in the spirit of criticising both companies, the thing that really grinds my gears with AMD is the whole thing where they nerfed their GPGPU support on desktop cards to try to prop up their professional cards, thereby cutting hobbyists/enthusiasts out. Which is like 70% of the reason that I've got an Nvidia card and will probably buy one again in the future... Just not this gen.
 
OK, let's put a stop to this silliness: Middle-earth: Shadow of War required over 8GB at 1080p back in 2017, so I'd say we've known at least since that year that 8GB of VRAM was going to age soon.
No one ever argued that games won't use more than 8GB. In fact, Warzone was using 19GB on my 3090 a couple of years ago. Hogwarts managed to hit 21GB!

The question is, do they look like games that require/use 20GB of VRAM, or is it just poor optimization? The only game I've seen that uses lots of VRAM and actually looks stunning is Forza Horizon. Every other VRAM-hogging game looks very meh.
 
I'm not really, I just don't agree. Some ports are going to be bad and use resources they shouldn't. That's life.

But in the spirit of criticising both companies, the thing that really grinds my gears with AMD is the whole thing where they nerfed their GPGPU support on desktop cards to try to prop up their professional cards, thereby cutting hobbyists/enthusiasts out. Which is like 70% of the reason that I've got an Nvidia card and will probably buy one again in the future... Just not this gen.
Nvidia pulled the same crap with their 90-class cards, which aren't really a Titan replacement for productivity workloads as far as I know.
 
No one ever argued that games won't use more than 8GB. In fact, Warzone was using 19GB on my 3090 a couple of years ago. Hogwarts managed to hit 21GB!

The question is, do they look like games that require/use 20GB of VRAM, or is it just poor optimization? The only game I've seen that uses lots of VRAM and actually looks stunning is Forza Horizon. Every other VRAM-hogging game looks very meh.
First of all, looks are subjective, so I won't argue about that.
Second, genre and the number of assets on screen impact performance and VRAM usage a lot. I posted ME:SoW because it's a typical edge case of a game that puts a lot of stuff on screen, so it does require a lot of memory to pull it off.

That said, we're derailing the thread again; IMHO this should have its own separate discussion.

As for AMD, IMHO it offers great value in the lower tiers of the market, just as it historically did; sub-€500 cards don't have the grunt for RT, so it makes sense to prioritize other features there.
 
But my issue with Nvidia here is that their problem is actually trivially resolvable: just offer higher-VRAM alternatives to their current products. It's an easy thing to do, requires zero rearchitecting and doesn't even cost very much. But they don't want to do that, because ultimately Nvidia want those parts to get crippled by VRAM requirements so that you buy their next gen as well... It's just coming home to roost a bit faster than anticipated.
It's not always as simple as "just offer higher VRAM", though. On the 4070 and 4070 Ti, the only option would be to double the number of VRAM chips, since they're already using the largest 2GB modules, and that would come with a substantial increase in power draw. Whatever it costs Nvidia to buy, you can double or even triple that for the cost to the end user, because they're not going to lower their margins specifically for the higher-VRAM cards.

Personally, I would much prefer a power-efficient 12GB 4070/4070 Ti to a power-hungry 24GB version for an extra £100. I can quite happily skip the games from lazy devs who can't be bothered to optimize their console ports.
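To put rough numbers on the chip-count point, here's a back-of-the-envelope sketch; the 2GB-per-32-bit-channel layout matches the published 4070 Ti configuration, but treat the exact figures as my assumption:

```python
# Back-of-the-envelope VRAM maths for a 192-bit card (e.g. the 4070 Ti),
# assuming 2GB (16Gbit) GDDR6X modules, each on its own 32-bit channel.
BUS_WIDTH_BITS = 192        # memory bus width of the card
CHANNEL_BITS = 32           # each GDDR6/GDDR6X package uses a 32-bit interface
MODULE_CAPACITY_GB = 2      # largest module density currently shipping in volume

modules = BUS_WIDTH_BITS // CHANNEL_BITS        # 6 packages
capacity_gb = modules * MODULE_CAPACITY_GB      # 6 x 2GB = 12GB

# With no denser modules available, the only way to add VRAM without widening
# the bus is clamshell mode: two modules share each 32-bit channel, which
# doubles the chip count (and the memory power draw) in one jump.
clamshell_gb = 2 * capacity_gb                  # 24GB

print(capacity_gb, clamshell_gb)                # 12 24
```

So on that bus it really is a 12GB-or-24GB choice; there's no cheap 16GB middle option without moving to a wider memory interface.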
 

They could offer more VRAM by not selling an RTX 3060 replacement at the price of higher-end dGPU tiers. The RTX 4070 Ti should be using an AD103-based dGPU with 16GB of VRAM. The reality is this is just another Turing, and if Nvidia does another refresh like they did with Turing, it would involve pushing tiers down again. A lot of the FOMO people who defended Turing V1 pricing and tiers looked very foolish when Nvidia re-released it; the ones who waited got a much better deal.

PCMR needs to look at how dGPU sales, and PC sales in general, are declining. It's a buyer's market.

If you are spending £600~£800 on dGPUs and running a full ATX system, power draw is less of a problem, and you are more likely to want to turn up settings or use higher resolutions. People struggling with energy bills won't be spending £800 on dGPUs and gaming for dozens of hours a week. How many are running overclocked CPUs, inefficient motherboards, oversized PSUs, huge monitors and so on? I could understand it more if people were building tiny mini-ITX rigs (like I do).
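To put the power-draw argument in perspective, here's a rough illustration; the 100W difference, 20 hours a week and £0.30/kWh are my own assumed figures, not anything quoted in this thread:

```python
# Rough illustration of what an extra 100W of GPU board power costs over a year
# of fairly heavy gaming. All three inputs are assumptions for the example.
EXTRA_DRAW_W = 100            # assumed extra draw vs a more efficient card
HOURS_PER_WEEK = 20           # assumed weekly gaming time
PRICE_PER_KWH_GBP = 0.30      # assumed electricity price

extra_kwh_per_year = EXTRA_DRAW_W / 1000 * HOURS_PER_WEEK * 52
extra_cost_gbp = extra_kwh_per_year * PRICE_PER_KWH_GBP

print(f"{extra_kwh_per_year:.0f} kWh/year, roughly £{extra_cost_gbp:.0f}/year")
# -> 104 kWh/year, roughly £31/year
```

A few tens of pounds a year is real money, but it's small next to a £600~£800 card, which is the point.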

If people are that concerned with power draw, the RX 6600 XT can be undervolted to run at 95W, and you could turn all the settings down to tax the dGPU even less. Or how many desktops can compare to a gaming laptop in efficiency?

We have had high energy costs since late 2021, and yet many "concerned people" were quite happy not to buy the more efficient RDNA2 cards, which were cheaper. It was the same argument when people used power draw to justify buying a GTX 970 (which really only had 3.5GB of fast VRAM) over an R9 390 8GB, or when the same lot, running full ATX rigs (not even SFF systems), would rather get a "high end" £190 GTX 960 4GB than an AIB partner R9 290 4GB.

It was the same over a decade ago. When ATI was a bit less efficient with the HD 4000 series, it was all about Nvidia being better with power. Then, when the HD 5000 series hammered Fermi on efficiency, suddenly that wasn't important and it was all about overclocking and tessellation. In the end this is Apple-level marketing, really.

In the end, Nvidia wants people to buy their higher-end dGPUs. These have 16GB to 24GB of VRAM, so they have no problem if devs optimise to that level.

So don't think it's all about lazy devs: if the higher-end Nvidia dGPUs can handle it, and the consoles can, then Nvidia will be fine with it. If AMD then decides to price its higher-VRAM dGPUs cheaper, that is a good business move from them.
 
It's not always as simple as "just offer higher VRAM", though. On the 4070 and 4070 Ti, the only option would be to double the number of VRAM chips, since they're already using the largest 2GB modules, and that would come with a substantial increase in power draw. Whatever it costs Nvidia to buy, you can double or even triple that for the cost to the end user, because they're not going to lower their margins specifically for the higher-VRAM cards.

Personally, I would much prefer a power-efficient 12GB 4070/4070 Ti to a power-hungry 24GB version for an extra £100. I can quite happily skip the games from lazy devs who can't be bothered to optimize their console ports.
That's a reasonable point, although the margins on them must be fairly massive to begin with. A 192-bit bus on an £800 card has to be some kind of record in GPU miserliness.
 

People are just rationalising things. Just like I saw people with their faulty iPhones actually thinking they were "holding them wrong".
 
They could offer more VRAM by not selling an RTX 3060 replacement at the price of higher-end dGPU tiers. The RTX 4070 Ti should be using an AD103-based dGPU with 16GB of VRAM.
The name of the card is irrelevant; if they had called it a 4060, it would still have the same drawback of increased power draw to get more than 12GB of VRAM.
 
They could offer more VRAM by not selling an RTX 3060 replacement at the price of higher-end dGPU tiers. The RTX 4070 Ti should be using an AD103-based dGPU with 16GB of VRAM.
But if they had used the AD103 die for the 70 Ti, the card would have been faster in raster and much faster in RT, with a bunch of extra features, while being cheaper than the 7900 XT. That doesn't really make sense for Nvidia, does it?
 
The name of the card is irrelevant; if they had called it a 4060, it would still have the same drawback of increased power draw to get more than 12GB of VRAM.

It is when it comes to performance. An AD103-based RTX 4070 Ti wouldn't need to be clocked as high, because it has more shaders, and it would perform much better. Plus, with a 256-bit memory bus, the power increase wouldn't be as high as you think, since they wouldn't need such highly clocked memory modules either; GDDR6 would have been fine.

If you compare the RTX 3060 12GB and the RTX 4070 12GB, the 4070 consumes 20W more power; the RTX 4070 Ti 12GB consumes 100W more. This is because both cards are clocked higher than they need to be, to justify Nvidia selling them at a higher tier. The AD103-based RTX 4080 consumes barely 25W more than an RTX 4070 Ti, despite having more of everything and a larger die too.

The RX 6800, despite being on an inferior process node, has a 256-bit GDDR6 memory bus with 16GB of VRAM and consumes a whole 30W more than the RTX 4070 12GB, and they both have roughly the same memory bandwidth. On TSMC's 4N 5nm-class node, Nvidia could easily have given the card a 256-bit bus with lower-clocked GDDR6, which consumes less power. They chose not to, because they wanted to save money here to make more margin.
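The "same memory bandwidth" point checks out if you run the numbers; the memory speeds below (21Gbps GDDR6X for the RTX 4070, 16Gbps GDDR6 for the RX 6800) are the commonly quoted figures, so treat them as my assumption:

```python
# Quick sanity check of the bandwidth comparison between a narrow bus with fast
# GDDR6X and a wide bus with slower GDDR6.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = bus width (bits) * data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

rtx_4070 = bandwidth_gbs(192, 21)   # 192-bit bus, 21Gbps GDDR6X
rx_6800 = bandwidth_gbs(256, 16)    # 256-bit bus, 16Gbps GDDR6

print(rtx_4070, rx_6800)            # 504.0 512.0 GB/s -- effectively a wash
```

So a wider bus fed with cheaper, slower memory lands in the same place, which is exactly the trade-off being described.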

This sort of stuff has been done several times during the last 20 years or so.
 