
AMD RDNA 4 thread

Actually, the opposite. It's much lighter on VRAM than UE4 (thanks to Nanite) and than most AAA games released up to now, relative to asset quality. I'd worry more about custom engines in that regard (in particular ports from consoles, cough Sony cough). Its problems are the perennial ones for UE: how to properly handle open-world asset streaming without breaking the game, and how to make better use of the CPU. Basically, defeating the stutterfest it's plagued by. More so for smaller external devs, because there are a lot of ways to trip yourself up as a dev using UE, and it's much harder to actually modify it to run well for your use case when you don't have an elite programming team that specializes in it (like The Coalition).

CDPR, for example, has completely solved this problem for themselves, and now that they're on UE5 and major contributors to it, perhaps that will put UE5 on the right track. Ubisoft also does very well on this front (though their older approaches tend to lean heavier on VRAM), but then they have huge teams and tune their engines specifically for open-world games. Out of all the current open-world games that have (at least some form of) RT, the best mix of HQ assets + good LoD management + no undefeatable stutter is Watch Dogs: Legion. And if we look at how that handles VRAM, we can see it's eminently playable on even an 8 GB card (with minimal difference vs ultra HD textures & max streaming budget), so for sure 12 GB will be fine for the remainder of this generation.

In fact, the cross-gen titles that aren't completely gimped to low settings (or lacking some key graphical feature entirely, like say a basic GI) already prove to be more than enough to stress even the PS5/XSX, so we're not going to see games push memory requirements that much higher when all the assets are made with the consoles in mind. If anything, we see stuff like path tracing being pushed as the stressor option on PC, and that's super heavy on compute but no heavier on VRAM than basic RT.
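As a rough back-of-the-envelope sketch of why texture quality and streaming budgets dominate VRAM use (the assumptions are mine, not figures from any of these games: BC7 compression at 8 bits per texel, a full mip chain, and an illustrative three maps per material):

```python
# Rough back-of-the-envelope texture memory estimate (illustrative only).
# Assumes BC7 block compression (8 bits per texel) and a full mip chain,
# which adds roughly one third on top of the base level.

def texture_mib(resolution: int, bits_per_texel: int = 8, mips: bool = True) -> float:
    """Approximate VRAM for one square texture, in MiB."""
    base_bytes = resolution * resolution * bits_per_texel / 8
    if mips:
        base_bytes *= 4 / 3  # full mip chain ~= +33%
    return base_bytes / (1024 ** 2)

for res in (1024, 2048, 4096):
    per_map = texture_mib(res)
    # e.g. albedo + normal + packed roughness/metalness map
    per_material = 3 * per_map
    print(f"{res}x{res}: ~{per_map:5.1f} MiB per map, ~{per_material:6.1f} MiB per 3-map material")
```

Halving texture resolution quarters the footprint, which is why a streaming budget or texture-quality slider moves VRAM use so much while the card's compute load barely changes.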

For UE5 you can see CPU usage issues but modest vram usage:

And if we look at Remnant 2, which just launched as a UE5 title with Nanite (but no Lumen), we can see it does very well with little VRAM while having very HQ assets on show (which makes sense; it's basically what Nanite exists for):

So are you saying the RTX4080 and RTX4090 would be fine with 12GB? That an RTX4070 with 8GB might be fine? That the RTX4060TI 8GB is perfectly OK? It seems rather weird that Nvidia would put so much VRAM on the higher-end cards if 12GB is perfectly fine!

I don't think anyone really believes that. A few observations:
1.) You are only looking at early UE5 titles, and lots of devs say VRAM requirements are going up. Just compare early UE4 titles with current ones. You can't just look at early titles and assume that's it for the next few years.

2.) What happens when devs want to target even higher-resolution textures and simply want to use more individual textures? Are 8GB and 12GB going to be fine at QHD and 4K in all UE5 titles for the next few years?

3.) People buy £800 cards to play games at 4K or to have greater longevity at QHD. Most people I know in the real world keep dGPUs (especially expensive ones) for 3-5 years. Many on here upgrade every year or two. So you need to consider longer lifespans, and that is going to run into newer generations.

4.) The consoles are three years old now. The PS5 Pro is out next year, and it's most likely the Xbox Series X2 will be out earlier than expected (no Xbox Series X refresh). The RTX5000 series arrives in 2025. So what happens in 2025, which is barely 18 months away?

5.) We have gone through people saying 256MB was fine (8800GT 256MB vs HD3870 512MB), 3GB was fine (GTX1060 and GTX780) and 4GB was fine (AMD Fury X vs GTX980TI), but it wasn't in the end. History has a good track record of showing that weird SKUs with imbalanced memory subsystems have problems. The RTX4070TI is one of them.

6.) I have an RTX3060TI 8GB and I correctly surmised 8GB would be an issue at some point (the RX6700XT was nearly £500, so it was poor value). This is why I didn't buy the RTX3070: I knew it would not last any longer. It has the same playability once it hits that wall.

But I never expected that, so soon after Nvidia launched the 12GB RTX4070, the RTX3070/RTX3070TI would hit a rock.

So I expect the same of the RTX4070TI when the RTX5070/RTX5070TI 16GB launches in 2025. I also expect the gap between the RTX4070TI and the RTX4080 16GB to start growing after this.

7.) Remember again, this isn't a £550 dGPU but an £800 one. Like the RTX3070/RTX3070TI, it's a fast core gimped by a lack of VRAM and a cut-down memory subsystem. Once it hits that issue it will be no better than an RTX4070! People need to stop trying to find diamonds in a coal run-off heap.

The RTX4070TI 12GB and RX7900XT are really the RTX4070 12GB and RX7800XT 20GB. They should be £600. But seriously, the RTX4070TI 12GB is as cynical as the RTX4060TI 8GB. It's the same as the RTX3070TI 8GB IMHO.

The RTX4060 8GB, RTX4070 12GB, RTX4080 16GB and RTX4090 24GB might be overpriced, but they seem somewhat more balanced. This is like Apple putting 8GB on entry-level PCs and saying it's OK!

Now you might disagree with all the points above. That is your prerogative. But I don't agree with you that 12GB on the RTX4070TI is acceptable when it's £800. If I were spending that much money I would rather get an RTX4070 12GB and spend the rest on beer. The RX7900XT is only marginally better because it at least has more than 16GB. Of course, if Nvidia wasn't so greedy and had made the RTX4070TI 24GB, AMD wouldn't even have that.
 
I've never understood why people advocate for minimal hardware resources. I have yet to find anyone who can give me a reasoned explanation for it; it's always just doubling down on the given blanket statement. I don't get it. It's like they are afraid of progress or are weirdly defensive of hardware vendors.
 
Maybe to justify their purchases?

Personally I try to get the best value for my money when buying stuff; currently that's AMD, much to the chagrin of some (the usual suspects) on this forum.
 
It doesn’t make it better due to the daft pricing but shows just how far Nvidia are ahead in both RT and raster and that they could crush AMD if they wanted to. AMD are now back to where they were with RDNA1 where the 5700/XT were competing with Nvidia’s 70, the difference now is that names have changed and prices have doubled.
But how much is due to chiplets vs monolithic?

While I think AMD were too early to go chiplet, especially in the midrange, when you have a far smaller market share taping out lots of monolithic dies is not viable. The real question is what could AMD have done with a 500mm² monolithic die? I suspect it would have landed between the 4080 (AD103 - 379mm²) and the 4090 (AD102 - 609mm²). Yes, maybe AMD's approach to RT isn't the best (they don't waste much die space on it, but I'd expect a bigger discount vs Nvidia for that), but in raster they are fine. The main reason they are far behind this gen is the stubborn insistence on chiplets everywhere except the lowest-end part.
You keep posting this but only ever look at the actual specs rather than the price/perf. It doesn't matter how technically good Nvidia is compared to AMD if, price vs price, the AMD GPU is better value. If Nvidia are just going to keep pricing the way they are, then you need to suck it up. We all know the only reason YOU want AMD to drop prices is to force Nvidia to sell you a 4080 at £700. Well, good luck with that, because Nvidia aren't budging on price. From now on your £600 will get you an xx60-series GPU and you will like it :D

AMD moved up one tier (6800 > 7900) on price and perf.
Nvidia moved up two tiers (3080 > 4080) on price and perf.
Yes, there is no such thing as a bad product, just a bad price. There is little point in admiring Nvidia's technical ability, as it just makes it easier for them to rip off consumers. AMD took the savings from going chiplet and pocketed them; Nvidia moved all products down two tiers and pocketed the extra profit.
 
Going chiplets is fine and all, but AMD should have had a backup option in case it didn't work out, similar to how 3D V-Cache on the CPUs came later once they had got the standard models working and out the door. Even porting RDNA2 to 5nm would likely have yielded better results.
 
Many who are buying it are probably buying the RTX4070TI at £800 over the RTX4070 in the hope it will last longer. However, if the PS5 Pro has more RAM, and the Xbox Series X replacement arrives in 2025 with more RAM, then it will be another case of a decent core held back by cost-cutting. Nvidia will then release a 16GB RTX5070. 16GB of VRAM can't be that expensive if the PS5 has been profitable since last year - Nvidia even put GDDR6X on an RTX3060TI because there was too much of it that couldn't be sold.
Don't forget the Series S only has 8GB of RAM available to devs for both VRAM and system RAM (i.e. everything other than what you'd put into VRAM).
 
Going chiplets is fine and all, but AMD should have had a backup option in case it didn't work out, similar to how 3D V-Cache on the CPUs came later once they had got the standard models working and out the door. Even porting RDNA2 to 5nm would likely have yielded better results.
Yes, while we are often dismissed as armchair CEOs/CTOs, it is not unreasonable to ask: who did their due diligence and risk analysis? Was it the same person who thought HBM would be a good idea in a mainstream GPU?

(Yes, I know HBM was very much an AMD and Hynix thing - but in the end AMD benefited less from all that work than Nvidia, simply because it is an enterprise/data-centre thing and AMD's presence in data-centre GPUs is minimal. And yes, cracking chiplets in GPUs is great engineering, but not if it leaves them at a competitive disadvantage - if your volumes are low or tiny then you might save something with fewer tape-outs and re-usability, but why not aim for higher volumes in the first place?)
 
Don't forget the Series S only has 8GB of RAM available to devs for both VRAM and system RAM (i.e. everything other than what you'd put into VRAM).
But that didn't stop recent games using more than 8GB of VRAM, and the Series S will be running very low settings with upscaling too. It was the same before, when 4GB cards made 2GB ones look poor in 2015. The consoles will be superseded by faster ones in the next two years.

After all, we are talking about an £800 card. The whole point of an £800 card is to run at 4K or QHD with minimal upscaling for a few years. Is it just a coincidence that the RTX4070 12GB is released and suddenly devs care less about 8GB? What happens when the RTX5070 16GB releases in 2025? The 12GB cards will be pushed down the priority list. This is not such an issue on cheaper cards. But the RTX4070TI is not a cheap card. It has the same memory arrangement as an RTX3060 or RX6700XT!

If Nvidia had put 24GB of the very cheap GDDR6X VRAM on it then it would not have this issue and AMD would have nothing. It reminds me of the RTX3070TI in some ways.

It's almost like both companies are not wanting to step on each other's toes!
 
The 4070ti is more like a (too) expensive 1440p (or thereabouts) card if you think about the long run.

The 4070 would be more in the 1080p area, and everything below that will require turning details/settings down or off.

12GB should be fine IF devs are not lazy. If they are, sure, 20GB minimum for 1080p to see different blades of grass and rocks, as that adds big time visually (said no one, except one dev at some point on the MLID show). We really don't need vegetation to be affected by physical, external factors like wind or objects passing through it. What SW: The Force Unleashed did with its physics system is not something next gen. Nor do we need advanced physics like in Red Faction. Different blades of grass and different rocks are next gen. :)

People should buy the 7900xt if they're happy with that. There is an option. Most likely RDNA4 will have an option as well. The whole talk about VRAM is pointless, since the problematic games so far are more a case of bad coding than anything else.

Consoles are weak too. They're barely scraping by with the same poorly made games, with anything higher than 60fps meaning either a relatively low-res approach or dismissing it altogether.
 
It's almost like both companies are not wanting to step on each other's toes!
While there is some of that, I think Nvidia's modus operandi is to offer as little as they can get away with and, more importantly (since they are at near 90% market share), get people to upgrade soon. And upgrade again. And again.

They do have to be cautious sometimes. Like when the new consoles and RDNA2 launched: they couldn't quite predict things, hence why the 3080 was reasonable (although 10GB for that performance in 2020!). Now that they have found out that AMD will only contemplate one lower-margin thing - consoles - and isn't interested in volume on desktop GPUs, they probably think they can do whatever they want.
 

But the reality, again, is that it's an £800 card. The memory configuration is the same as the £250 RTX3060 or RX6700XT. An RTX3060 in 2021 had more expensive VRAM than an RTX4070TI does in 2023.

Even you got a 16GB RTX4080, because even you realise 12GB on an £800 card is kind of rubbish. But the reality is:
1.) RTX4060 8GB = RTX4050 8GB at under £250
2.) RTX4060TI 16GB = RTX4060 16GB at £300
3.) RTX4070 12GB = RTX4060TI 12GB at £400
4.) RTX4070TI 12GB = RTX4070 12GB at under £600
5.) RTX4080 16GB should be under £900
6.) RX7600 8GB = RX7500XT 8GB at £200
7.) RX7900XT 20GB is an RX7800XT 20GB at under £600
8.) RX7900XTX 24GB should be an RX7900XT at under £800

The reality is that from 2009 to 2016 VRAM on the mainstream went from 512MB to 8GB. At the high end it went from 1GB to 11GB.

Now look at what has happened since 2016. At the mainstream, under £400, we have gone from 8GB to 12GB in 7 years. Under £750, Nvidia went from 11GB to 12GB and AMD went from 8GB to 20GB. So, on the Nvidia side, not even a doubling.
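Taking those start and end points at face value, a quick sketch of the implied growth rates makes the slowdown concrete (the calculation is mine; the figures are the ones quoted above):

```python
# Growth-rate check on the VRAM figures quoted above (the thread's numbers).
# Compound annual growth rate: (end / start) ** (1 / years) - 1.

def cagr(start_gb: float, end_gb: float, years: int) -> float:
    return (end_gb / start_gb) ** (1 / years) - 1

periods = {
    "Mainstream 2009-2016 (0.5GB -> 8GB)":    (0.5, 8, 7),
    "Mainstream 2016-2023 (8GB -> 12GB)":     (8, 12, 7),
    "High end 2009-2016 (1GB -> 11GB)":       (1, 11, 7),
    "Nvidia <£750 2016-2023 (11GB -> 12GB)":  (11, 12, 7),
}

for label, (start, end, years) in periods.items():
    print(f"{label}: {end / start:.1f}x total, ~{cagr(start, end, years):.1%} per year")
```

Roughly 48% per year of capacity growth in the earlier period versus about 6% (and barely 1% for Nvidia under £750) since.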

Stagnation - every other consumer product is increasing memory capacities. The laziness is not entirely from devs. It's from Nvidia and AMD acting like a cartel and skimping on VRAM, especially in an era of dirt-cheap VRAM. Consoles shouldn't even be a consideration, but the fact that PCMR is now having issues is telling.

When the Xbox 360 and PS3 came out, within two years we had the 8800GT 512MB/HD3870 512MB, which cost significantly less than a console, had as much VRAM as the consoles had system RAM, and were twice the performance. Nobody cared about bad console ports because the hardware was simply quicker.

I personally think Nvidia put as little VRAM as they can on cards to try to stop people using gaming cards for production work, as they charge extortionate amounts for what is essentially (an example that may not be 100% accurate) a 3060 with 16GB of RAM for £1,000.

It's also because then, next generation, they will sell the RTX5070 16GB. Then many games will suddenly need a bit more than 12GB of VRAM?

Interesting how 8GB of VRAM suddenly had problems once Nvidia launched the 12GB RTX4070? When the £650 RTX5070 16GB is out, how many will wax lyrical about how it thrashes the £800 RTX4070TI? Then someone will mod 24GB onto an RTX4070TI and find it probably is losing due to lack of VRAM!
:cry:


Low-VRAM cards such as the RTX4070TI are OK if they are cheap. Not when they are £800. It's an Apple-level move, because Apple does the same thing.
 

I did say it was an overpriced card. They all are, no discussion about that. Should there have been more VRAM on it? Sure. Is the 7900xt faster than the 4080 because it has 20GB vs 16GB? No. Even the 7900xtx does about the same even though it has 50% more VRAM, and it gets destroyed by the 4090, which has similar VRAM.

Consoles' VRAM only matters up to a point; after that there isn't enough grunt in the GPU for it to matter.

The better question about RDNA4 is whether it will finally address AMD's weakness with RT.
 
Or will hardware RT die out to be replaced by software like in Unreal5?

I think hardware RT is too inefficient and a waste of resources.
 
I did say it was an overpriced card. They all are, no discussion about that. Should there have been more VRAM on it? Sure. Is the 7900xt faster than the 4080 because it has 20GB vs 16GB? No. Even the 7900xtx does about the same even though it has 50% more VRAM, and it gets destroyed by the 4090, which has similar VRAM.

Consoles' VRAM only matters up to a point; after that there isn't enough grunt in the GPU for it to matter.
But it's quite clear the RTX4080 can use 16GB of VRAM, and by extension so can the RX7900XTX, since it has more than 12GB and has similar performance. But if you look at the TPU rasterised and RT charts, the RTX4070TI seems to lose ground against a cheaper RX7900XT as you increase resolution. The same against the RTX4080. The RX7900XT and RTX4070TI are not that much slower than the RX7900XTX and RTX4080. The RTX4080 doesn't seem to have as much of an issue.

But even weak cards can be VRAM-limited. The GTX1060 3GB and GTX960 2GB/R9 380 2GB showed worse performance than their doubled-VRAM variants:

The RX480 8GB outlived the RX480 4GB.


In Ratchet and Clank the RTX4060TI 16GB gets within striking distance of an RTX4070! In The Last of Us, the RTX4060TI 16GB is noticeably faster than an RTX4060TI 8GB. In RE4 and Forspoken it's faster. In CB2077 it's faster. With DLSS and FG, it is noticeably faster. So if an RTX4060TI can benefit, so will an RTX4070TI. The RTX4070TI is sold as a premium model; the RTX4060TI isn't!

This is why, long term, I expect the VRAM will be the limiting factor - the core isn't the issue. We can see this with the DIY doubled-VRAM RTX3070 that was made.

The better question about RDNA4 is whether it will finally address AMD's weakness with RT.

It will ultimately depend on what MS/Sony want. If they really want to push RT then it will be addressed, but only if it appears in a console.

While there is some of that, I think Nvidia's modus operandi is to offer as little as they can get away with and, more importantly (since they are at near 90% market share), get people to upgrade soon. And upgrade again. And again.

They do have to be cautious sometimes. Like when the new consoles and RDNA2 launched: they couldn't quite predict things, hence why the 3080 was reasonable (although 10GB for that performance in 2020!). Now that they have found out that AMD will only contemplate one lower-margin thing - consoles - and isn't interested in volume on desktop GPUs, they probably think they can do whatever they want.

I get that, but I have no clue why PCMR needs to defend any of this from either company. Especially when the top 10 dGPUs on Steam are stagnating in performance, and consoles can still be comparable to mainstream hardware three years later. Sales are down even compared to the early 2019 crash, when there was a glut of stock from the 2018 mining boom.

Just look at Q4 2019 and Q4 2022. People are quite clearly fed up. A 4.5 million card drop is enormous!

How is making the mainstream cards barely better than the old ones actually helping the PC install base? Or how is limiting VRAM?

image-12.png
 
Or will hardware RT die out to be replaced by software like in Unreal5?

I think hardware RT is too inefficient and a waste of resources.
It won't. RT/PT is the holy grail of graphics; you can only cheat your way so far with rasterisation.
Lumen can't really render proper reflections unless it is doing so in hardware, i.e. actual ray tracing. Also, try to do shadows from each light in a game like CB2077, GTA, whatever, and see how the performance goes, whereas in path tracing they are practically "free".

This is only an issue because most cards can't do RT/PT at decent speeds yet. Too bad Nvidia decided to go crazy with the prices this generation.
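To make the shadow point concrete, here is a very simplified sketch; the numbers (2048x2048 D32 maps, the light counts, one map per spot light) are illustrative assumptions of mine, not taken from any particular game:

```python
# Very simplified sketch of why per-light shadow maps get expensive as the
# light count grows, which is the point being made about path tracing above.

def shadow_map_budget(num_lights: int, resolution: int = 2048,
                      bytes_per_texel: int = 4, cube_faces: int = 1):
    """Memory for one depth map per light (D32 float assumed) plus the extra
    geometry passes needed to render those maps each frame."""
    bytes_per_map = resolution * resolution * bytes_per_texel * cube_faces
    total_mib = num_lights * bytes_per_map / (1024 ** 2)
    passes_per_frame = num_lights * cube_faces
    return total_mib, passes_per_frame

for lights in (8, 32, 128):
    mib, passes = shadow_map_budget(lights)
    print(f"{lights:3d} spot lights: ~{mib:6.0f} MiB of shadow maps, {passes} extra depth passes/frame")

# A path/ray tracer resolves visibility per ray instead, so shadow cost scales
# with rays per pixel rather than with the number of shadow-casting lights.
```

Point lights make it worse still (six cube faces each), which is why rasterised games cap the number of shadow-casting lights so aggressively.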
Again, the 4070/ti are sold at a higher tier than they really are. The 4070ti isn't really a 4K card, so naturally it won't do too well there.
Let's take a look at a UE5 game with Nanite: 4K is around 10GB, 1440p is around 8.3GB. Plenty of headroom up to 12GB, and that's without upscalers. Add some DLSS or FSR and the VRAM usage will fall. That card won't last long at 4K, and if Remnant II is anything to go by, plenty of cards will have problems in future games at higher resolutions regardless of VRAM size - the 3090ti with its 24GB is only ~2fps faster. Kind of a waste of 12GB, isn't it? :)

The 4080 uses about 14.5GB in CB77 at 4K native with path tracing (downsampled to 1080p since I don't have a 4K screen). Add DLSS Performance so you can actually play the game and it drops to around 11.5GB.
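A rough illustration of the mechanism behind that drop (the per-pixel byte figures below are placeholder assumptions of mine, not measurements from the game): DLSS Performance renders internally at half the output resolution per axis, so resolution-dependent buffers shrink about 4x while textures and the BVH stay resident at full size.

```python
# Rough illustration (placeholder numbers, nothing measured) of why upscaling
# cuts VRAM: DLSS "Performance" renders internally at half the output
# resolution per axis, so resolution-dependent buffers (G-buffer, depth,
# history, denoiser and RT targets) shrink ~4x, while textures and the BVH
# stay the same size.

def per_pixel_buffers_mib(width: int, height: int, bytes_per_pixel: int) -> float:
    return width * height * bytes_per_pixel / (1024 ** 2)

output = (3840, 2160)        # 4K output
internal = (1920, 1080)      # DLSS Performance internal resolution

for bpp in (64, 256):        # placeholder per-pixel footprints
    native = per_pixel_buffers_mib(*output, bpp)
    upscaled = per_pixel_buffers_mib(*internal, bpp)
    print(f"assumed {bpp:3d} B/pixel: 4K native ~{native:5.0f} MiB, "
          f"1080p internal ~{upscaled:4.0f} MiB, saved ~{native - upscaled:5.0f} MiB")
```

The real per-pixel footprint depends entirely on the renderer (a path tracer with denoising carries far more per-pixel state than a plain raster pass), but the ~4x scaling with internal resolution is the part that matters.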

Or do you think AMD should add even more than 20-24GB to its next 8800xt card? That is, if they actually bother to make an x800xt-class card. :)


performance-2560-1440.png


vram.png
 
It doesn't matter whether they are sold at a higher tier, or AMD vs Nvidia. It is an £800 card and nobody should find it acceptable that it has the same memory layout as a £250-£300 RTX3060/RX6700XT. £800 cards in the real world are bought by people who want to run games at 4K with minimal upscaling, or who want very long lifespans. Like 5 years, etc.

If an RTX4060TI can use more VRAM, and an RTX3060 12GB can beat an RTX4060 8GB, it's quite clear VRAM/memory subsystems can be a limiting factor. The RTX4070TI also has no increase in memory bandwidth or VRAM over an RTX4070, which costs well under £600. The fact is the statement was made that UE5 will never use more than 12GB for years.

So that means, for the average person who uses their dGPU for between 3 and 5 years, there should be no VRAM limitation at 4K with 12GB, and the extra VRAM of competitor products will make zero difference. Also, none of this "until the next generation is released" stuff. The next generation is only 18 months away. A lot of people skip generations, especially if they spend £800 on a new card.

The RTX4080 is only 20% faster than an RTX4070TI, i.e. 48 TFLOPs vs 40 TFLOPs. You yourself went from an 8GB RTX2080 to a 16GB RTX4080 after 5 years. So even you don't believe 12GB is enough.

Also the first UE4 game was Daylight in 2014.

d_vram.jpg


This is what UE4 games such as Hogwarts Legacy and Jedi Survivor use now.

DbN0CLl.png


8Oal73q.png

Who are people trying to fool with "12GB will be fine for years with UE5"? Over 9 years we went from 3GB to 14GB being used on Nvidia cards with UE4 at 4K, or nearly a 5X increase in VRAM usage at QHD and 4K. So nobody can promise that UE5 will only need 12GB for years - if not, when the RTX5070 16GB comes out in 2025 we will see how the tune changes.

This is the same argument as 8GB on the RTX3070/RTX3070TI being fine for years. It wasn't. It was OK on cheaper cards, not on a £600 RTX3070TI.

If people thought 12GB was fine for years, they would just buy expensive 12GB cards right now and keep them for 3-5 years. Oh, it never happens. Everyone seems to buy cards with more than that amount of VRAM, for other reasons.

Even if Nvidia or AMD come up with super-duper VRAM compression, wait and see how it needs a new generation of cards to work.

So I am not going to agree with the statements about 12GB being fine for years on expensive cards which cost £800. Not even Nvidia themselves believe it, because their higher-tier cards have more. If some of you want to have the last word that 12GB is fine, sure. 12GB is sub-£600 territory at best, not for a card which was meant to be a £900 "RTX4080 12GB" (like the other stupid upsells too) in an era of dirt-cheap GDDR6:
Dk8M1cM.png

Not going to change my view on it.
 
Interestingly, the recent Baldur's Gate 3 uses very little VRAM, and it was developed for PC first then ported to console. Compare that to Diablo 4, which has worse graphics and a smaller world but uses a huge amount of VRAM on PC because it was designed for consoles first.

Whereas all the other high-VRAM hogs released in recent times are PS5 console games that get ported to PC. A lot of these games also have their VRAM usage decrease several months after their PC release as developers rework things.

This to me is an indication that developers are making full use of the memory on consoles, which then, when the games get ported, results in them using a high amount of VRAM on PC. Because many PC GPUs don't have as much VRAM as the consoles, this leads to complaints of poor optimisation.

Modern multiplatform games using high VRAM on PC is a symptom of these games being developed for the PlayStation 5 first before getting ported.

This also shows us that developers in general will just use the resources available, whether they are required or not, because it's less work on optimisation. If the PS5 has as much as 16GB of memory available, around 12 to 14GB of which ends up dedicated to video output in most games, then developers will use every single megabyte so they don't have to optimise or come up with data compression techniques that might also lower the framerate.
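A quick comparison sketch using the figures quoted in this thread rather than official spec sheets (and the "fits/over budget" verdict is obviously a simplification, since ports can rebalance what lives in VRAM versus system RAM):

```python
# Comparison sketch using the figures quoted in this thread (not spec sheets):
# on console the memory pool is unified, so whatever a game dedicates to
# GPU-side data has to fit into a PC card's discrete VRAM when ported, while
# CPU-side data can spill into system RAM instead.

console_gpu_side_gb = {
    "PS5 (per the post above)": 12.0,   # "around 12 to 14" of the unified pool
    "Series S (per the thread)": 8.0,   # 8GB available to devs in total
}

pc_cards_gb = {"8GB card": 8, "12GB card": 12, "16GB card": 16}

for console, need in console_gpu_side_gb.items():
    for card, vram in pc_cards_gb.items():
        headroom = vram - need
        verdict = "fits" if headroom >= 0 else "over budget"
        print(f"{console:28s} -> {card:9s}: {headroom:+.1f} GB ({verdict})")
```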
 