AMD RDNA 4 thread

Interestingly, the recent Baldur's Gate 3 uses very little VRAM, and it was developed for PC first and then ported to console. Compare that to Diablo 4, which has worse graphics and a smaller world but uses a huge amount of VRAM on PC because it was designed for consoles first.

Whereas all the other high-VRAM hogs released in recent times are PS5 console games that get ported to PC. A lot of these games also see their VRAM usage decrease several months after their PC release as developers rework things.

To me this is an indication that developers are making full use of the memory on the consoles, which, when the games get ported, results in them using a high amount of VRAM on PC because many PC GPUs don't have as much VRAM as the consoles, leading to complaints of poor optimisation.




Modern multiplatform games using high VRAM on PC is a symptom of these games being developed for the PlayStation 5 first and then ported.

This also shows us that developers in general will just use the resources available, whether they are required or not, because it's less work on optimisation. If the PS5 has as much as 16GB of memory available, around 12 to 14GB of which ends up dedicated to video in most games, then developers will use every single megabyte so they don't have to optimise or come up with data compression techniques that might also lower framerate.
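To put rough numbers on that, here's a minimal back-of-the-envelope sketch using the figures above; the 13GB game budget is just the middle of the "12 to 14GB" estimate quoted, treated as an assumption rather than an official spec:

```python
# Rough comparison of an assumed console graphics budget vs common PC VRAM sizes.
# Both console figures below are assumptions for illustration, not official specs.
CONSOLE_UNIFIED_GB = 16.0      # PS5 total GDDR6 pool
CONSOLE_GAME_BUDGET_GB = 13.0  # middle of the "12 to 14GB" estimate above

def pc_shortfall(pc_vram_gb: float) -> float:
    """How much of that assumed console graphics budget a PC card cannot hold locally."""
    return max(0.0, CONSOLE_GAME_BUDGET_GB - pc_vram_gb)

print(f"Console: {CONSOLE_UNIFIED_GB:.0f}GB unified, ~{CONSOLE_GAME_BUDGET_GB:.0f}GB assumed for graphics")
for vram in (8, 10, 12, 16):
    print(f"{vram:>2}GB card: {pc_shortfall(vram):.1f}GB must be cut, compressed or streamed")
```

Even with those rough assumptions, an 8GB or 12GB card has to shave several GB off what the console version simply keeps resident, which is exactly the "unoptimised port" complaint.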

Also, consoles like the PS5 have dedicated I/O hardware which doesn't use additional CPU or GPU resources. DirectStorage and RTX IO actually need dGPU resources, i.e. resources your dGPU can't use for rendering. The problem is the first console refresh is next year, and MS might just release the Xbox Series X2 earlier than expected. If this is happening with the old generation of consoles, then what happens with the newer ones? I am concerned about the new consoles having much faster CPUs, because the current consoles are CPU limited IMHO.

Another problem is VRAM paging into system RAM. Reviewers mostly test these cards on DDR5 and PCI-E 5.0 systems, so they are a best-case scenario.

Plus, as devs have literally said, having to muck around with cheapskate dGPU companies limiting VRAM increases for 7 years costs them more upfront. So why should devs be forced to spend even more money because companies can't be bothered to add a few more GB of VRAM to an £800 card? Why should game studios' margins be decreased to increase dGPU companies' margins? For a few dollars of extra VRAM on already overpriced cards?

From 2009 to 2016 we saw mainstream VRAM increase from 512MB to 8GB. That was a 16X increase. At the high end we went from 1GB to 11GB/12GB, i.e. 12X. From 2016 to 2023 we went from 6GB/8GB to 12GB at the mainstream. At the high end we went from 11GB/12GB to 24GB, i.e. 2X.

So mainstream VRAM increases are now lagging relative to the high end, unlike in 2009~2016. Even if Nvidia made the RTX4070TI 24GB, that would barely be a 2X increase over the GTX1080TI after 7 YEARS! An RX7900XT having 20GB isn't a big deal, not after such a long time and with GDDR6 being so cheap.
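Just to make those multipliers explicit, here's a quick sketch; the start/end capacities are the ones quoted above, treated as rough generational baselines:

```python
# Generational VRAM multipliers implied by the figures above (rough baselines).
periods = {
    "2009-2016 mainstream": (0.5, 8),   # 512MB -> 8GB
    "2009-2016 high end":   (1, 12),    # 1GB -> 12GB
    "2016-2023 mainstream": (6, 12),    # 6GB -> 12GB
    "2016-2023 high end":   (11, 24),   # 11GB -> 24GB
}
for label, (start_gb, end_gb) in periods.items():
    print(f"{label}: {end_gb / start_gb:.0f}X")
```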

All of this is corporate cost cutting and upselling to increase margins. No wonder consumer dGPU sales are at record lows.

If this nonsense continues for the next few years, it's going to be dark days for the mainstream dGPU market.
 
No wonder AMD changed plans:

The leaked diagram showcases a large package substrate that accommodates four dies: three AIDs (Active Interposer Dies) and one MID (Multimedia and I/O die). It appears that each AID would house as many as 3 SEDs (Shader Engine Dies). This complex configuration represents the alleged RDNA4 architecture, or at least a segment of the GPU that was intended for future release. Notably, the diagram only presents one side of the design, omitting the complete picture. MLID notes that there should also be memory controller dies on each side, although their exact number remains unknown.

The proposed Navi 4C GPU would have incorporated 13 to 20 chiplets, marking a substantial increase in complexity compared to RDNA3 multi-die designs such as Navi 31 or the upcoming Navi 32. Interestingly, a similar design was identified in a patent titled “Die stacking for modular parallel processors” discovered by a subscriber of MLID, which showcased ‘Virtual Compute Die’ interconnected through a Bridge Chip.

13~20 chiplets?

[Image: leaked AMD Navi 4C package diagram]

 
It doesn't matter whether they are sold at a higher tier, or AMD vs Nvidia. It is an £800 card, and nobody should find it acceptable that it has the same memory layout as a £250~£300 RTX3060/RX6700XT. £800 cards in the real world are bought by people who want to run games at 4K with minimal upscaling, or who want very long lifespans, like 5 years.

If an RTX4060TI can make use of more VRAM, and an RTX3060 12GB can beat an RTX4060 8GB, it's quite clear VRAM/memory subsystems can be a limiting factor. The RTX4070TI also has no increase in memory bandwidth or VRAM over an RTX4070, which costs well under £600. The fact is the statement was made that UE5 will never use more than 12GB for years.

So that means for the average person who uses their dGPU for 3~5 years, there should be no VRAM limitation at 4K with 12GB, and the extra VRAM of competitor products will give zero difference. Also, none of this "until the next generation is released" stuff. The next generation is only 18 months away. A lot of people skip generations, especially if they spend £800 on a new card.

The RTX4080 is only 20% faster than an RTX4070TI, i.e. 48 TFLOPs vs 40 TFLOPs. You yourself went from an 8GB RTX2080 to a 16GB RTX4080 after 5 years. So even you don't believe 12GB is enough.

Also, the first UE4 game was Daylight, in 2014.

[Image: Daylight VRAM usage]


This is what UE4 games such as Hogwarts Legacy and Jedi Survivor use now.

[Image: VRAM usage chart]


[Image: VRAM usage chart]

Who are people trying to fool with "12GB will be fine for years with UE5"? Over 9 years we went from 3GB to 14GB on Nvidia cards being used with UE4 at 4K, or nearly a 5X increase in VRAM usage at qHD and 4K. So nobody can make promises that UE5 will only need 12GB for years - if not, when the RTX5070 16GB comes out in 2025 we will see how the tune changes.

This is the same argument as "8GB on the RTX3070/RTX3070TI will be fine for years". It wasn't. It was OK on cheaper cards, not on a £600 RTX3070TI.

If people really thought 12GB was fine for years, they would just buy expensive 12GB cards right now and keep them for 3~5 years. Oh, but it never happens. Everyone seems to buy cards with more than that amount of VRAM for "other reasons".

Even if Nvidia or AMD make some super-duper VRAM compression, wait and see how it needs a new generation of cards to work.

So I am not going to agree with the statements about 12GB being fine for years on expensive cards which cost £800. Not even Nvidia themselves believe it, because their higher-tier cards have more. If some of you want to have the last word that 12GB is fine, sure. 12GB is sub-£600 territory at best, not for a card which was meant to be a £900 RTX4080 12GB (like the other one, a stupid upsell too) in an era of dirt-cheap GDDR6:
[Image: GDDR6 pricing chart]

Not going to change my view on it.

PS4 and XBOX had no 4GB variants.

In your examples, also look at what FPS the cards can deliver now in those crappy games and whether they are actually playable at the settings producing the vRAM usage you posted. Basically, they're not that great and you have to drop settings, which means dropping vRAM usage.




Even a 4090 can't deliver a steady 60fps experience.
Now drop to around 1440p, where the 4070ti should be, and vRAM usage goes down by a lot.

Furthermore, look at how much vRAM AMD uses even at 1440p. You can see the same high usage in your screenshots.
9GB vs 10.9GB for the 12GB cards... that's almost 2GB extra, or 21% more.



Yes, you could add 24GB or whatever to the 4070ti; maybe it will help further down the road (3-4 years from now if you keep it that long), at the very least to alleviate poor choices in game design. You're still not gonna play at 4K due to lack of grunt. But make it even easier: don't buy it! Take the 7900xt or a 6xxx-series 16GB card from AMD if you think that will help.

With that said, launching 8GB cards now is the best joke ever from both companies. As a joke, perhaps AMD should have launched those cards with at least 10GB since it needs more anyway :p.

I'm not sure how all this will translate to the 8xxx / 6xx series, but besides adding more vRAM (where needed), you still need a lot of power - even more so when questionable games are released.

LE: I went with the 4080 instead of the 4070ti because the latter doesn't have the grunt. vRAM was only secondary.
 
The sad thing is almost no games have actual 4K textures.

We set the resolution of our PC games to 4K, but the game's actual texture files are still often a much lower resolution.

People think the game they are playing is like a 4K Blu-ray on their 4K TV, when really they are just watching a 1080p Blu-ray on that 4K TV.
 
Take, for example, the just-released performance review from TPU of Atlas Fallen.


See how the VRAM usage doesn't really change much between 1080p and 4K? That's because the game doesn't actually have 4K textures; you're rendering the same low-resolution assets no matter what resolution monitor you use.

So if we ever get to a point where our games use actual 4K textures, just imagine how much VRAM we'll need :cry:
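For a sense of scale, here's a rough, idealized estimate of what a single texture costs at different source resolutions; it assumes a square texture with a full mip chain (~4/3 of the base level), about 1 byte per texel block-compressed (e.g. BC7) versus 4 bytes per texel uncompressed RGBA8, and it ignores engine and driver overhead:

```python
# Rough, idealized per-texture memory cost (assumptions noted above).
def texture_mib(size_px: int, bytes_per_texel: float) -> float:
    # base level * mip chain factor, converted to MiB
    return size_px * size_px * bytes_per_texel * 4 / 3 / (1024 ** 2)

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: {texture_mib(size, 1.0):5.1f} MiB compressed, "
          f"{texture_mib(size, 4.0):5.1f} MiB uncompressed")
```

Every doubling of texture resolution roughly quadruples the memory per texture, which is why "real" 4K assets would blow current VRAM budgets apart.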

 
Or will hardware RT die out, to be replaced by software RT like in Unreal Engine 5?

I think hardware RT is too inefficient and a waste of resources.
I suspect that we are in the pre-T&L days of GPUs and that RT will have to be done cleverer. Much cleverer.

Silicon nodes just aren't advancing enough anymore (and transistors/cost is barely moving), whereas brute-force RT requires that entry level has the RT performance of a ~4090. Consoles are not going to dedicate that much silicon to the GPU, so a different approach will have to be found.

Maybe a mix of raster and RT, as currently RT seems too keen to abandon all the raster techniques. I'm unsure about the technicalities, but let's not forget that Nvidia's tensor cores were created for the big cash cows of AI and the data centre, and they have to justify their presence on consumer cards, hence their upscalers using them. I wouldn't be surprised if Nvidia RT is similar - brute-forcing RT because it gives them an advantage. It is even possible that Nvidia have thought of cleverer ways of doing RT, but like the excess tessellation: as long as the present model sells Nvidia cards, they do not care!

When the Xbox 360 and PS3 came out, within two years we had the 8800GT 512MB/HD3870 512MB, which cost significantly less than a console, had as much VRAM as the consoles had system RAM, and were twice the performance. Nobody cared about bad console ports because the hardware was just quicker.
Nodes were moving like crazy back then though: the Xbox 360 was 90nm at first, and the 8800GT was 65nm. The console GPU was only ~200 million transistors, whereas G92 was 750 million. And it wasn't even that large. No idea what cost per transistor was back then, but it must have been cheap enough for Nvidia to sell a 324mm2 part for $200 or so.

That is largely over: huge dies are expensive, and what is relatively cheap is VRAM. Nvidia (and to a far lesser extent AMD) could be far more generous with VRAM, but in Nvidia's case that is very much planned obsolescence, and their reward for that tactic is near-90% market share.
 
Well, if you're gonna use RT, and then wonder why you have to keep buying new cards...

;)

The 7900xt (a 20GB card) will do worse. If you're at the limit now for around 60fps or less, how's that gonna last 3-4 years at the same settings so that a lot more vRAM makes sense? Next-gen cards have to bring a lot more to the table besides vRAM.

+

[Image: 3840x2160 performance chart]



Take, for example, the just-released performance review from TPU of Atlas Fallen.


See how the VRAM usage doesn't really change much between 1080p and 4K? That's because the game doesn't actually have 4K textures; you're rendering the same low-resolution assets no matter what resolution monitor you use.

So if we ever get to a point where our games use actual 4K textures, just imagine how much VRAM we'll need :cry:


Not only vRAM, but also storage space. Say hello to 500GB games :))
Anyway, with the exception of DirectStorage, how many games try (at least) to mimic streaming to overcome possible vRAM issues or actually allow more complex assets? "Just dump everything into memory" seems to be the mantra.
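For anyone wondering what "mimic streaming" means in practice, here's a minimal, purely illustrative sketch of the idea: keep assets resident under a fixed VRAM budget and evict the least-recently-used ones instead of dumping everything into memory. All names and sizes below are made up for illustration; real engines do this at much finer granularity (mips, tiles, virtual texturing).

```python
# Toy residency cache: evict least-recently-used assets when over budget.
from collections import OrderedDict

class ResidencyCache:
    def __init__(self, budget_mb: float):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()   # asset name -> size in MB

    def request(self, name: str, size_mb: float) -> None:
        if name in self.resident:
            self.resident.move_to_end(name)     # touched: now most recently used
            return
        # evict oldest entries until the new asset fits in the budget
        while self.resident and sum(self.resident.values()) + size_mb > self.budget_mb:
            evicted, _ = self.resident.popitem(last=False)
            print(f"evict {evicted}")
        self.resident[name] = size_mb           # pretend upload to VRAM

cache = ResidencyCache(budget_mb=8000)
for asset, size in [("city_block_a", 3000), ("city_block_b", 3000),
                    ("boss_arena", 2500), ("city_block_a", 3000)]:
    cache.request(asset, size)
print(list(cache.resident))
```

The point is that the work (and the hitching risk) moves to the eviction/upload path, which is exactly the optimisation effort a big unified console memory pool lets developers avoid.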
 
I suspect that we are in the pre-T&L days of GPUs and that RT will have to be done cleverer. Much cleverer.

Silicon nodes just aren't advancing enough anymore (and transistors/cost is barely moving) whereas brute force RT requires that entry-level has RT performance of ~4090. Consoles are not going to dedicate that much silicon to the GPU, so a different approach will have to be found.

Maybe a mix of raster and RT as currently RT seems to be too keen to abandon all the raster techniques. Unsure about the technicalities but lets not forget that Nvidia's tensor sensors were created for the big cashcows of AI and data centre and they have to justify their presence on consumer cards hence their upscallers using them. I wouldn't be surprised if Nvidia RT is similar - brute forcing RT because it gives them advantage. It is even possible that Nvidia have though of cleverer ways of doing RT but like the excess tessellation: as long as the present model sells Nvidia card they do not care!



This. RT isn't working and it's never going to work as-is.
 
The 7900xt (a 20GB card) will do worse. If you're at the limit now for around 60fps or less, how's that gonna last 3-4 years at the same settings so that a lot more vRAM makes sense? Next-gen cards have to bring a lot more to the table besides vRAM.

+

[Image: 3840x2160 performance chart]





Not only vRAM, but also storage space. Say hello to 500GB games :))
Anyway, with the exception of DirectStorage, how many games try (at least) to mimic streaming to overcome possible vRAM issues or actually allow more complex assets? "Just dump everything into memory" seems to be the mantra.


But that's not everybody's extreme experience though; not everybody wants to play at 4K ultra+RT - in fact that's actually a tiny minority. And as I implied earlier, if you simply have to play everything at absolute extreme settings then you're never going to have enough GPU, and you're always going to be looking at charts and thinking "Gee, how much have I gotta spend in 2 years for anything playable?"

So it's a bit self-inflicted, it's a bit bad tech implementation ("I am Ray Tracing, destroyer of frame rates"), it's a bit predatory NV/AMD giving you less for more gen on gen, and there's bad game dev coupled with lousy console ports.

Yeah, you're right, storage is now also getting out of hand and will probably get even worse!
 
@Poneros those Remnant 2 textures look quite flat and not high-res. They also seem to be lacking any Specular, Gloss and Metallic maps. I can see Albedo, Normal and Dirt maps but that's about it; it's why they look flat.
I'm currently playing Sniper Elite 5, and the textures in that look miles better than they do in your example.
Did you check in-game? I can assure you that's not true, and certainly when we look holistically at a scene, taking into account the LoD differences, Remnant 2 will do much better. SE5 does do great work with textures though, as Rebellion does in general, so it's good that they are competitive with each other in the first place.

PS: you would only use asset streaming if you didn't have enough VRAM to begin with, and it's not cost-free, even with the fastest CPUs and SSDs.
You always use asset streaming in a modern game; there's just never going to be enough VRAM to bypass that. So it's not a question of "if", it's a question of how well you do it. Unfortunately UE4 was awful at that.

The best performance always comes from streaming directly out of VRAM. Nanite is a method of reconstructing tessellated geometry on the fly; it does away with LOD asset swapping.
Yes, and how does it "appear" in VRAM? In reality, even with DirectStorage there's still lots of traffic through the CPU and RAM.

So are you saying the RTX4080 and RTX4090 would be fine with 12GB? That an RTX4070 8GB might be fine? The RTX4060TI 8GB is perfectly OK?
What I'm saying is very simple and clear: The 4070 Ti's 12 GB is enough for it to play (in all currently available games) well at a settings level commensurate with its compute capabilities.

Seems rather weird Nvidia would put so much VRAM on the higher end cards if 12GB is perfectly fine!!
Premium products need to deliver on premium expectations. Needs to be more than fine.

I don't think anyone really believes that.
Belief is not relevant, I'm talking about real results.

A few observations:
1.) You are only looking at early UE5 titles
And what should we look at, titles years from now when the user will be looking to already have upgraded by then? I can't judge reality by hypotheticals because they're infinite.

...and lots of devs say VRAM requirements are going up.
And they are, but that doesn't mean 12 GB won't be enough, especially on UE5.

Just compare early UE4 titles with current ones.
Why? It's irrelevant. Nanite changes the game when it comes to vram usage, it can't be compared like that. Moreover if you go test with the highest quality assets in UE5 vs lower ones you can see the memory differences are minimal, thanks to Nanite. So going forward an increase in AQ (which isn't feasible anyway) won't dramatically increase vram requirements.

You can't just look at early titles and surmise that is it for the next few years.
And you can't use the past to predict the future, but I can use the present to judge the present. Besides, why should anyone care if in 4 years, let's say, a 4070 Ti user might run into a title (which he might not even care about) where it turns out he has to turn down textures a notch? In the meantime he'd have enjoyed the card for 4 years!

2.) What happens when devs want to target even higher-resolution textures and simply want to use more individual textures? Are 8GB and 12GB going to be fine at qHD and 4K in all UE5 titles for the next few years?
What happens is Nanite. You should look into it.

3.) People buy £800 cards to play games at 4K or to have greater longevity at qHD. Most people I know in the real world keep dGPUs (especially expensive ones) for 3~5 years. Many on here upgrade every year or two. So you need to consider longer lifespans, and that is going to run into newer generations.
That's fine, but people should make sure their expectations align with reality though.

4.) The consoles are three years old now. The PS5 Pro is out next year, and it's most likely the Xbox Series X2 will be out earlier than expected (no Xbox Series X refresh). The RTX5000 series is due in 2025. So what happens in 2025, which is barely 18 months away?
What happens to people who will have enjoyed a card for 2 years when a new console launches, the baseline is still Series S, and games are planned (and take) 4+ years? My guess - nothing.

5.) We have gone through people saying 256MB was fine (8800GT 256MB vs HD3870 512MB), 3GB was fine (GTX1060 and GTX780) and 4GB was fine (AMD Fury X vs GTX980TI), but it wasn't in the end. History has a good track record of showing that weird SKUs with imbalanced memory subsystems have problems. The RTX4070TI is one of them.
It's not one of them, we have data to prove the opposite. That you think it may in the future prove to be one of them, well, that's just pure speculation.

But I don't agree with you that the 12GB on the RTX4070TI is acceptable because its £800.
It's acceptable if people buy it, but my points merely revolve around whether it was better than the 7900 XT (and I've argued it is), then whether 12 GB is enough (all our data shows that it is).

If I was spending that much money I would rather get an RTX4070 12GB and spend the rest on beer.
Sure, and that's up to the individual to judge how to spend their money. I'm not saying people should or shouldn't buy it.

I've never understood why people advocate for minimal hardware resources.
No one is advocating for that.

Interestingly, the recent Baldur's Gate 3 uses very little VRAM, and it was developed for PC first and then ported to console. Compare that to Diablo 4, which has worse graphics and a smaller world but uses a huge amount of VRAM on PC because it was designed for consoles first.
BG3 uses little VRAM because the assets are lower resolution and much less complex; moreover it's not an instanced shared world with other players (plus some of Blizz's own failings on the programming side, but that isn't the bulk of the issue), and in terms of rendering features on show they're also fewer and of lower quality. When you engage split-screen you can see memory requirements go up accordingly, and since this is also a title where that feature is mandatory on consoles (particularly the Series S), it's understandable why you see lower VRAM requirements for solo gameplay. Plus, to think that BG3 wasn't developed with consideration for consoles from the start is misguided, since they had planned to release it simultaneously on consoles (and were developing with that in mind); it's only in the past year that they decided to forgo that and instead focus on one release at a time.
 
I suspect that we are in the pre-T&L days of GPUs and that RT will have to be done cleverer. Much cleverer.

Silicon nodes just aren't advancing enough anymore (and transistors/cost is barely moving), whereas brute-force RT requires that entry level has the RT performance of a ~4090. Consoles are not going to dedicate that much silicon to the GPU, so a different approach will have to be found.

Maybe a mix of raster and RT, as currently RT seems too keen to abandon all the raster techniques. I'm unsure about the technicalities, but let's not forget that Nvidia's tensor cores were created for the big cash cows of AI and the data centre, and they have to justify their presence on consumer cards, hence their upscalers using them. I wouldn't be surprised if Nvidia RT is similar - brute-forcing RT because it gives them an advantage. It is even possible that Nvidia have thought of cleverer ways of doing RT, but like the excess tessellation: as long as the present model sells Nvidia cards, they do not care!


Nodes were moving like crazy back then though: the Xbox 360 was 90nm at first, and the 8800GT was 65nm. The console GPU was only ~200 million transistors, whereas G92 was 750 million. And it wasn't even that large. No idea what cost per transistor was back then, but it must have been cheap enough for Nvidia to sell a 324mm2 part for $200 or so.

That is largely over: huge dies are expensive, and what is relatively cheap is VRAM. Nvidia (and to a far lesser extent AMD) could be far more generous with VRAM, but in Nvidia's case that is very much planned obsolescence, and their reward for that tactic is near-90% market share.

Yet PCMR makes excuses for low VRAM amounts on £800 cards. It reminds me exactly of Apple, and people saying low RAM and storage amounts on Apple PCs are fine.

But apparently now 12GB of VRAM is fine for 4K for the next few years, because Nvidia wants to sell you an £800 RTX4070TI. Yet none of the people defending 12GB as fine for 4K for years own a 12GB dGPU. If these people bought 12GB dGPUs to play games at 4K for 3~5 years, then it might show some actual belief.

But the issue is that Nvidia is not only skimping on VRAM but also on the relative transistor increase per tier for their roughly 300mm2 dGPUs.

So compare GA106 (just under 300mm2) and AD104 (just under 300mm2):
1.)https://www.techpowerup.com/gpu-specs/nvidia-ga106.g966
2.)https://www.techpowerup.com/gpu-specs/nvidia-ad104.g1013

3X the amount of transistors, and a 2.4X increase in performance for the fully enabled AD104 over a slightly cut-down GA106.

Now compare GA106 to TU116:
1.)https://www.techpowerup.com/gpu-specs/nvidia-ga106.g966
2.)https://www.techpowerup.com/gpu-specs/nvidia-tu116.g902

Both have similar die sizes. A nearly 2X increase in transistors:

And a nearly 50% increase in performance. So there you go - it would be quite easy for Nvidia to deliver a 40% to 50% performance increase with a smaller die and still make lots of money.
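Running those cited numbers through a quick sketch (the transistor counts are the approximate TechPowerUp figures: TU116 ~6.6B, GA106 ~12B, AD104 ~35.8B; the performance ratios are the ones stated above):

```python
# Perf-per-transistor scaling implied by the figures cited above (approximate).
comparisons = [
    ("TU116 -> GA106", 6.6, 12.0, 1.5),   # ~2X transistors, ~50% faster
    ("GA106 -> AD104", 12.0, 35.8, 2.4),  # ~3X transistors, ~2.4X faster
]
for label, t_old_b, t_new_b, perf_ratio in comparisons:
    t_ratio = t_new_b / t_old_b
    print(f"{label}: {t_ratio:.1f}X transistors for {perf_ratio:.1f}X performance "
          f"({perf_ratio / t_ratio:.2f}X perf per transistor)")
```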

This is almost mirrored by AMD. Navi 32 should be the Navi 22 replacement, and there should be another 40% increase (RX6800XT over RX6700XT).

The reality is BOTH are just doing enough to appear to be competing.



What I'm saying is very simple and clear: The 4070 Ti's 12 GB is enough for it to play (in all currently available games) well at a settings level commensurate with its compute capabilities.

Since most people keep cards for 3~5 years, that means the 12GB will be fine for that time at 4K with maximum settings. OK.

But oh wait... "current games" at a "settings level commensurate"...

So don't tell me you already have get-out clauses? And despite the RTX4080 16GB only having 20% more TFLOPs than an RTX4070TI, it is nearly 30% faster at 4K and slightly over 20% faster at qHD. Weird, that. So basically that distance won't increase and won't change in the next few years? OK, good to hear.

But how do you know it will be OK in a game 6 months from now? Or two years? You don't - you surmise it might be OK.



Premium products need to deliver on premium expectations. Needs to be more than fine.


Belief is not relevant, I'm talking about real results.

But you just stated a belief and a promise to everyone on this forum. Since most people keep cards for 3~5 years, that means the 12GB will be fine for that time at 4K with maximum settings? OK.
And what should we look at, titles years from now when the user will be looking to already have upgraded by then? I can't judge reality by hypotheticals because they're infinite.
No - people keep dGPUs for 3~5 years. Maybe you upgrade every year or two, but most don't.

But you are looking at "hypotheticals", because you promised 12GB of VRAM will be fine for years.
And they are, but that doesn't mean 12 GB won't be enough, especially on UE5.

But how do you know that? You're not a developer. UE4 VRAM usage went up a lot too.

Why? It's irrelevant. Nanite changes the game when it comes to vram usage, it can't be compared like that. Moreover if you go test with the highest quality assets in UE5 vs lower ones you can see the memory differences are minimal, thanks to Nanite. So going forward an increase in AQ (which isn't feasible anyway) won't dramatically increase vram requirements.


And you can't use the past to predict the future, but I can use the present to judge the present. Besides, why should anyone care if in 4 years, let's say, a 4070 Ti user might run into a title (which he might not even care about) where it turns out he has to turn down textures a notch? In the meantime he'd have enjoyed the card for 4 years!

But that is your "hypotheticals". So again, how do you know your belief that it will be fine is correct? Don't call out others for belief when you are stating nothing but belief as fact.

What happens is Nanite. You should look into it.

More belief - maybe you should read up on game development. For someone who presents themselves as an expert: even UE4 saw a huge increase in VRAM usage over the years.


That's fine, but people should make sure their expectations align with reality though.

No, people should align their expectations with cost and stop defending low VRAM amounts and upselling. I called out AMD repeatedly for its nonsense, especially as some want to use "inflation" to defend why the RX7900XT wasn't a £600 (or less) RX7800XT, or Navi 32 being sold as an RX7800XT. Nvidia doesn't get a free pass.

The fact that you are defending an RTX4070TI 12GB which has the same memory subsystem as a £500 card, and the same memory subsystem as a £250 RTX3060, during an era of cheap VRAM, is ridiculous.

@Joxeon has stated this many times.
What happens to people who will have enjoyed a card for 2 years when a new console launches, the baseline is still Series S, and games are planned (and take) 4+ years? My guess - nothing.
It's not one of them, we have data to prove the opposite. That you think it may in the future prove to be one of them, well, that's just pure speculation.

But all you are stating is pure speculation, with no real data to prove anything. And when data disproves what you are saying, it's down to "bad devs" or "poor console ports" or something else. That you think 12GB will be fine in the future on an £800 dGPU, well, that's just pure speculation.


It's acceptable if people buy it, but my points merely revolve around whether it was better than the 7900 XT (and I've argued it is), then whether 12 GB is enough (all our data shows that it is).

No, because you are making a promise that 12GB will be fine for years on an £800 dGPU. Not a £500 one, but one closing in on £1000.

An RTX4070 having 12GB might be OK, but not a card costing £100s more. But I now get that you really want to buy an RTX4070TI.

I remember people saying 8GB was fine on the RTX3070/RTX3070TI for years, or even 10GB on the RTX3080. How is that working out? Oh yes, many people just quietly upgraded for "other reasons". These arguments happen again and again; the same thing happens every time.

But my own experience is that the "low VRAM is OK" crowd never show faith in the products. They just happen to upgrade to higher-VRAM cards very quickly because of "other reasons", or don't bother with them at all.

So you will say 12GB is fine, until the RTX5070 16GB is out in 18 months.

So again, I don't agree with anything you are saying. You can attempt to "address" my points, but none of your arguments really sway me - just like you never really addressed what @ICDP said. And Nvidia does not agree with you: the RTX4080 16GB is barely 20% faster yet has more VRAM, and the RTX4090 has 24GB.

OFC, you can go on for the next 10 pages trying to sell "12GB is all you will ever need" on a premium-priced dGPU. Just like 512MB, 1GB, 2GB, 4GB, 6GB, 8GB, etc. was all you ever needed.

You do you! :)

 
Some of you have a lot of time on your hands

I find it interesting how one user spun a thread about RDNA4 information into "12GB is all you ever want" after this comment:

For someone who is not interested in AMD, maybe they should just go and get that RTX4070TI they lust over? :p
 