Not much excitement for RDNA3, compared to the RTX 4000 series

Pff, RT is a nice-to-have but not a game breaker for me. I tried to get a 6800 XT at the time but just couldn't get one; the AMD site was awful for UK users. I ended up with a 3080 FE, which I am more than happy with.

The keys for AMD will be availability, performance and price. There's absolutely no need to match the 4090; they just need to own the mid-to-high range at decent prices.

It's not a game breaker but it's a distinguishing factor.

Both GPU manufacturers at their higher tier of pricing will have enough rasterisation performance for most people who aren't running something crazy like 8K or triple 4K.

So the distinguishing factors have to be:
1. Price
2. Raytracing support
3. Upscaling technologies
4. Additional features

Price is probably the most important factor, but for others, the RTX features will be the icing on the cake.


If AMD can't match points 2, 3 and 4, then ideally it'd be very nice if they at least excel on point 1. It's not like it'll be hard to undercut the overpriced 4080 (unless the cost of wafers is truly crazy and NVDA aren't taking advantage, but it's just a reflection of modern times).

For the sake of £100-£150 on a £1,000 purchase, I personally don't mind paying 10-15% more for 2, 3 and 4.
 
So, in summary, you'll be buying Nvidia no matter what?


I'm buying the most powerful GPU possible as I run triple 4K screens.

I don't mind if it's AMD or NVDA, but I am very certain AMD will not match NVDA, so a 4090 seems like my best bet, sadly. On the more demanding games (performance-wise), I feel a bit more confident in being able to use DLSS or FSR to try and alleviate that load, and the motion interpolation NVDA have might help me with the most demanding games, if they support it.

I'm probably in a more unusual situation than most people, in that I genuinely need the horsepower if I am going to run three 4K screens.

I'd happily move to AMD, especially if they have better multi-monitor support, because NVDA Surround is awful. However, I don't think many (or any) outlets are reporting that AMD will be competing with the 4090 and matching its feature set.
 
I had a look at the relative die sizes of the new Nvidia dGPUs, and I think the AD104 naming is throwing people off. It's no longer the second dGPU in the line-up, but now the third... just like a 106 series used to be! This is somewhat confirmed by the relative die area compared to the top dGPU, and by the memory bus too.

4nm/5nm:

The AD104 (192-bit bus) is 48.5% of the area of the AD102. The AD103 (RTX 4080 16GB, 256-bit bus) is 62.3% of the area of the AD102.

8nm:

The GA106 (192-bit bus) was 44% of the area of the GA102. The GA104 (256-bit bus) was 62.4% of the area of the GA102.

16nm/12nm:

The TU106 (192-bit bus) was 59% of the area of the TU102. The TU104 (256-bit bus) was 72.3% of the area of the TU102.

The GP106 (192-bit bus) was 42.45% of the area of the GP102. The GP104 (256-bit bus) was 66.67% of the area of the GP102.

28nm:

The GM206 (128-bit bus) was 38% of the area of the GM200. The GM204 (256-bit bus) was 66% of the area of the GM200.

The GK106 (192-bit bus) was 39% of the area of the GK110. The GK104 (256-bit bus) was 52.4% of the area of the GK110.

40nm:

The GF106 (128-bit bus) was 45.4% of the area of the GF100/GF110. The GF104 (256-bit bus) was 63.8% of the area of the GF100/GF110.

The AD104 is the smallest 104-series dGPU relative to the 102-series dGPU in the last 12 years. The 104 series has now shifted to the third position, where the 106 used to be, because of the 103 series.

All the 106-series (third in the line-up) dGPUs had less than 256-bit memory buses, while the second die in the line-up always had at least a 256-bit memory bus. The 104 series also tends to be around two thirds of the area of the top chip, so by those standards the AD104's relative size is closer to a 106-series dGPU. The "AD104" is basically an "AD106" rebranded one level up, because there is an AD103. When it gets released, let's see its performance relative to the RTX 4090. I expect it will be more like a 60-series dGPU.
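
If anyone wants to sanity-check those percentages, here's a minimal Python sketch. The die areas (in mm²) are the commonly reported public figures, so treat the exact ratios as approximate:

```python
# Commonly reported die areas in mm^2 (public figures; approximate).
DIE_AREAS = {
    "AD102": 608.5, "AD103": 378.6, "AD104": 294.5,  # Ada (TSMC 4N)
    "GA102": 628.4, "GA104": 392.5, "GA106": 276.0,  # Ampere (Samsung 8nm)
    "TU102": 754.0, "TU104": 545.0, "TU106": 445.0,  # Turing (TSMC 12nm)
}

TOP_DIE = {"AD": "AD102", "GA": "GA102", "TU": "TU102"}

# Express each die as a percentage of the biggest die in its generation.
for name, area in DIE_AREAS.items():
    top = TOP_DIE[name[:2]]
    print(f"{name}: {100 * area / DIE_AREAS[top]:.1f}% of {top}")
```

That prints roughly 48% for the AD104 against 44% for the GA106 and 62% for the GA104, which is the whole point: the AD104 sits where a 106-class die historically did.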

So if Nvidia makes it an RTX 4070 series card, they will have managed to fool people into pushing a small core up another level! :mad:

But my concern is that RDNA3 might not be as good as we think, as Nvidia usually tries these stunts when they know they can get away with it! :(
 
I'm buying the most powerful GPU possible as I run triple 4K screens.

I don't mind if it's AMD or NVDA, but I am very certain AMD will not match NVDA, so a 4090 seems like my best bet, sadly. On the more demanding games (performance-wise), I feel a bit more confident in being able to use DLSS or FSR to try and alleviate that load, and the motion interpolation NVDA have might help me with the most demanding games, if they support it.

I'm probably in a more unusual situation than most people, in that I genuinely need the horsepower if I am going to run three 4K screens.

I'd happily move to AMD, especially if they have better multi-monitor support, because NVDA Surround is awful. However, I don't think many (or any) outlets are reporting that AMD will be competing with the 4090 and matching its feature set.

So buy a 4090. The majority aren't that interested in a £1,700+ card. :)
 
So buy a 4090. The majority aren't that interested in a £1,700+ card. :)

I thought that too, yet it's sold out :(

I was literally at work, hovering over the buy button at 2:20pm; I saw loads in stock and thought, 'I'll wait till I get home and have a proper think about it...'
And bam, sold out!
 
Yeah, if they push power consumption then it could be strong; outside of RT, RDNA2 was very competitive.

Where are you seeing these rumours of weak performance, though?
As we know, the 4090 is dominating the charts in almost every instance.
I am saying that even with a 50% gen-on-gen gain, compared with Nvidia it is gonna be poor. Both Nvidia and AMD are now on similar TSMC fab nodes, so there is no edge in that respect. Also, the 50% gen-on-gen figure is likely to be for the top SKUs, and the lower parts of the stack will see smaller gains.

From the 4090 we know the new TSMC node is extremely efficient; for a like-for-like GPU comparison there is a HUGE efficiency gain.

Anyway, looking at the 6700 XT and 6800 XT against their supposed equivalents, the 3070 and 3080, they are behind by a decent margin. So with a 50% boost, and assuming the 4080 12GB is actually the 4070, I don't think the 7700 XT is gonna get close to the 4070; the 7800 XT might be close to the 4080 16GB, but the RT performance will hit hard.

I think AMD needs to be looking at much larger raster gains than 50%, because that's what Nvidia did with the 4090 over the 3090 Ti. The alternative to that is a 50% gain with no increase over the original MSRP, and the 7900 XT needs to be below £1k. Anyway, let's see what they've got.
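
As a rough sanity check on that, here's a back-of-envelope sketch. The 4K raster index numbers are my own illustrative assumptions (loosely in line with typical launch-review averages), not figures from any review:

```python
# Illustrative 4K raster indices (3090 Ti = 100); rough assumptions only,
# loosely based on typical launch-review averages, not measured data.
perf_3090_ti = 100
perf_4090 = 165      # reviews put the 4090 very roughly 60-70% ahead at 4K
perf_6950_xt = 95    # assumed: RDNA2 flagship a touch behind the 3090 Ti

rdna3_gain = 1.50    # AMD's "at least 50%" claim, taken at face value
perf_rdna3 = perf_6950_xt * rdna3_gain

print(f"4090 over 3090 Ti: {perf_4090 / perf_3090_ti - 1:+.0%}")
print(f"Projected RDNA3 flagship vs 4090: {perf_rdna3 / perf_4090:.0%}")
```

On those assumptions, a straight 50% uplift lands around 85-90% of a 4090, which is why the pricing would have to do the heavy lifting.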

I ain't holding my breath for AMD or Nvidia to come up with value offerings this time round. Especially AMD; you just need to see their Zen 4 pricing to see that they are not interested in "value for money".
 
I'm really interested to see where the bargain-bin end of the market starts, price-wise, for this gen. I shan't be buying one, as I tend to be a 60/70 kind of guy, but I wonder if they'll end up where the old mid-range was, as it's clear NV have already moved the mid-range up a price tier and gimped the 4080; well, the 4080 that's not a 4060 Ti, that is...
 
Yup, you don't need a crystal ball to predict the sun coming up, and you don't need a crystal ball to predict AMD offering inferior RT performance and an inferior feature set compared to Nvidia either.

I know, that's my sarcastic point. You were implying you made some magical prediction nobody else did, yet every leak for months was saying Nvidia would be better for RT.
 
It's not a game breaker but it's a distinguishing factor.

Both GPU manufacturers at their higher tier of pricing will have enough rasterisation performance for most people who aren't running something crazy like 8K or triple 4K.

So the distinguishing factors have to be:
1. Price
2. Raytracing support
3. Upscaling technologies
4. Additional features

Price is probably the most important factor, but for others, the RTX features will be the icing on the cake.


If AMD can't match points 2, 3 and 4, then ideally it'd be very nice if they at least excel on point 1. It's not like it'll be hard to undercut the overpriced 4080 (unless the cost of wafers is truly crazy and NVDA aren't taking advantage, but it's just a reflection of modern times).

For the sake of £100-£150 on a £1,000 purchase, I personally don't mind paying 10-15% more for 2, 3 and 4.

You are entitled to your preference and, of course, to what you judge to be VFM.
 
I'm really interested to see where the bargain-bin end of the market starts, price-wise, for this gen. I shan't be buying one, as I tend to be a 60/70 kind of guy, but I wonder if they'll end up where the old mid-range was, as it's clear NV have already moved the mid-range up a price tier and gimped the 4080; well, the 4080 that's not a 4060 Ti, that is...

Well, Nvidia has rebranded a 106-series-class dGPU as the new AD104. Everything from the die size, memory bus width, relative transistor count and shader count makes the AD104 more of an RTX 4060 or RTX 4060 Ti, but Nvidia is trying to sell it as an RTX 4070 or an RTX 4080 with a price increase. So either we will get a very cut-down AD104-based RTX 4060, or more likely an AD106 that is really a rebranded 107-series die. I'd really like to see the relative performance of the "RTX 4060" we get versus the RTX 4090/RTX 4090 Ti.
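
The shader-count version of the same exercise tells a similar story. The full-die CUDA core counts below are from public spec listings (my numbers, added for illustration):

```python
# Full-die CUDA core counts from public spec listings (approximate framing mine).
CORES = {
    "AD102": 18432, "AD103": 10240, "AD104": 7680,  # Ada
    "GA102": 10752, "GA104": 6144,  "GA106": 3840,  # Ampere
}

# Express each die's shader count relative to the top die of its generation.
for name, count in CORES.items():
    top = "AD102" if name.startswith("AD") else "GA102"
    print(f"{name}: {100 * count / CORES[top]:.1f}% of {top}")
```

That puts the AD104 at about 42% of the AD102's shaders, closer to the GA106's ~36% of the GA102 than to the GA104's ~57%.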

I expect people will say it's "faster" than the previous generation, so it's an insta 21/10 buy, but then ignore how the percentage performance improvements in the mainstream have slowed down since 2016.

A GTX 1060 was half the performance of a GTX 1080 Ti at QHD and 1080p. Let's see if the RTX 4060 we get is even half the performance of the RTX 4090/RTX 4090 Ti.
 
All rumours point to lacklustre performance… probably the reason why there are no leaks.
BS - at least a 50% performance improvement, according to AMD. Generally, AMD don't fib about their performance estimates, nor do they advertise them loudly either.

They have said power consumption will increase too, so they are clearly scaling their design up, not just building RDNA2 on 5nm.
 
Imagine getting riled up over Fake Frames. We'll see how it is in November with less rasterization performance, embarrassingly poor RT performance and lack of features.

Edit: Can't wait for you guys to be singing AMD's praises when AMD comes up with their inferior version two years down the line, which will perform worse than Nvidia's current solution.
Are you worried RDNA3 will best your RTX 3090? Lol. Let's be honest, it's quite likely.

AMD just isn't that far behind in RT.
 
So if Nvidia makes it an RTX 4070 series card, they will have managed to fool people into pushing a small core up another level! :mad:

:(
I mean, it helps to be realistic. I don't care about the branding, just what they price the Founders Editions at.

It's not going to be £400-£500 for a card that has better performance than the RTX 3080 (as much as I wish it could be), even if RDNA3 is brilliant.
 
I know, that's my sarcastic point. You were implying you made some magical prediction nobody else did, yet every leak for months was saying Nvidia would be better for RT.

I was alluding to the fact that you don't need a crystal ball (I don't remember who brought up needing a crystal ball to predict that NVIDIA will have superior RT performance to AMD).
 
Yeah, me too, and neither am I. What do we win when we are proved right?
I know, that's my sarcastic point. You were implying you made some magical prediction nobody else did, yet every leak for months was saying Nvidia would be better for RT.

Oh, so you know that RDNA3 is gonna be all these things? Can I take a lend of your crystal ball for the lottery numbers?


For context, this was the post I was responding to when I 'predicted' the inferior performance from AMD.

Gerard seemed to think I needed a crystal ball to predict what I did.
 
BS - at least a 50% performance improvement, according to AMD. Generally, AMD don't fib about their performance estimates, nor do they advertise them loudly either.

They have said power consumption will increase too, so they are clearly scaling their design up, not just building RDNA2 on 5nm.
Let's see what happens. I think if they could get more, they would have advertised it or blown that trumpet as hard as they could. For me, I think 50% is probably an average (or an upper limit), with some odd titles seeing a disproportionate increase. AMD is as guilty of massaging figures as Intel and Nvidia.

Also, let's play the marketing game here. The 4090 review is out. Other than the pricing of the card being stupidly high, and DLSS 3 frame generation making up weird frames and increasing input latency, every aspect of that card is pretty impressive.

Why would AMD's marketing machine not put anything into the news cycle to counter this, especially to divert valuable high-end customers away from Nvidia? The only fathomable reason is that they cannot compete, so they are happy to let people speculate and wait on the false hope that something great will come along with better pricing than the 4090.
 
For context, this was the post I was responding to when I 'predicted' the inferior performance from AMD.

Gerard seemed to think I needed a crystal ball to predict what I did.

Wrong, I'm talking about the upcoming release, not the previous release, which was pretty much known not to have balls-out RT performance. You didn't "predict" anything; you're trying to this time around, when not much is known about RT performance.
 