*** The AMD RDNA 4 Rumour Mill ***

It is the same; people are bought into the "GeForce experience" ecosystem. The drivers are better, DLSS is better, the RT is better; it's no different.

Again, I agree that branding matters, but that's not the only thing that keeps people on iPhones. There can be a huge sunk cost with app store purchases, customization, accessories, etc.

Vendor lock-in is simply not a thing with graphics cards. There is absolutely no switching cost to pulling out an NV card and replacing it with an AMD one (or vice versa) in <5 minutes.

DLSS is a feature, not an ecosystem lock-in.

I'd be interested in comparing sales data for the 4070 + 4070 Super (combined) vs the AMD 7800 XT.

The German source below actually shows AMD cards outselling NV ones in Q1 2024 - take with a pinch of salt, as it's just based on one retailer who may have a vendor alignment (I know nothing about them). As @eeii and @NZXT30 said above, I suspect the real volume is in OEM desktops and laptops... we as PC builders are merely the sideshow :(

To answer your specific question for that time period, looks like it's roughly 45% share for the 4070+S vs. 55% for the 7800XT. I'm sure you could dig back historically to build up a full data set if you really wanted to :p

[Attached chart: German retailer GPU sales data]


 
Me thinks the reason AMD let go of the high end is to concentrate on RDNA 4 in APUs and laptop versions of GPUs. It’s a huge market they are missing out on.
 
I wonder how far out the next-gen consoles are. Even if they are still 2 years away, they might still be based on RDNA 4, as there will be a fair lead time between designing the SoC and launching the console. So maybe that's part of AMD's focus.
 
Me thinks the reason AMD let go of the high end is to concentrate on RDNA 4 in APUs and laptop versions of GPUs. It’s a huge market they are missing out on.
I rather suspect that RDNA 3.5 will have a long life in mobile APUs, and that its successor will be UDNA, once ray-tracing improvements and a node shrink make RT a viable prospect on a 25 W power budget.
 
Again, I agree that branding matters, but that's not the only thing that keeps people on iPhones. There can be a huge sunk cost with app store purchases, customization, accessories, etc.

Vendor lock-in is simply not a thing with graphics cards. There is absolutely no switching cost to pulling out an NV card and replacing it with an AMD one (or vice versa) in <5 minutes.

DLSS is a feature, not an ecosystem lock-in.
G-Sync would probably be the closest thing to vendor lock-in we've had in a while, but even that's probably not so much of an issue these days.
 
I wonder how far out the next-gen consoles are. Even if they are still 2 years away, they might still be based on RDNA 4, as there will be a fair lead time between designing the SoC and launching the console. So maybe that's part of AMD's focus.
I would have thought the PS6 at least will be UDNA. The development of both tends to run in parallel, and with the PS5 Pro having just been released I suspect it's a good 3 years away. The next Xbox is apparently 2026, so I guess that could still be RDNA 4.
 
G-Sync would probably be the closest thing to vendor lock-in we've had in a while, but even that's probably not so much of an issue these days.

DLSS 3 and FG were not available for Ampere and older gens, so whilst it was not a lock-in, it's an ecosystem that allows this. Although it was enough of a nudge to get people to upgrade, so it depends on how you look at it.
 
DLSS 3 and FG were not available for Ampere and older gens, so whilst it was not a lock-in, it's an ecosystem that allows this. Although it was enough of a nudge to get people to upgrade, so it depends on how you look at it.

We should also not forget PhysX as a vendor lock-in. In both cases the open-source model won out. G-Sync is still a thing, but it now supports FreeSync displays.
 
In fairness only frame gen was locked to 4000 series. Older cards still got DLSS 3.7 and Ray Reconstruction.

To be fair, he did mention older gens. Even the FG lockout for Ampere seems excessive, as AMD were able to release an open-source variant for many older GPUs. That's something a lot of people seem to forget: AMD brought decent levels of FG and upscaling to the masses on older-gen GPUs, and Intel were able to bring arguably better (than FSR) upscaling in XeSS to older-gen GPUs as well. The first iteration of FSR was far superior to the original early DLSS, which was terrible.

I can understand new, improved features that require new hardware; that is part of progress. It's when the new "features" are sold on nothing but future promises, or put behind unnecessary hardware upgrades, that I think "not at that price". At least this time DLSS 4 and the new FGx will work with older games automatically.

I just anticipate that the usual channels (on this forum and the wider web) will declare DLSS3 obsolete and “not good enough” and FSR4 as “too little too late”. Yet until now DLSS3 was declared “better than native”. The same voices were surprisingly silent when Nvidia locked G-Sync, PhysX, DLSS and FG behind a hardware paywall.

So I'll wait to see how FSR4 ends up, as well as how AMD improve or support improvements to older versions of FSR on pre-RDNA4 hardware. Because the cynic in me is… well, cynical. ;)
 
In fairness only frame gen was locked to 4000 series. Older cards still got DLSS 3.7 and Ray Reconstruction.

That's how I interpreted it at the time, and from a Twitter post too. It may have been smoothed out by now, but initially it was not.

Ampere and Turing can theoretically leverage DLSS 3, but they won't yield the same benefits. Catanzaro said that DLSS 3 likely won't boost frame rates on Ampere or Turing — on the contrary, owners would probably experience laggy gameplay and bad image fidelity.

Bit irrelevant now as I am looking at a Blackwell upgrade, but that is beside the point; they do this often enough that you notice it.
 
That's how I interpreted it at the time, and from a Twitter post too. It may have been smoothed out by now, but initially it was not.



Bit irrelevant now as I am looking at a Blackwell upgrade, but that is beside the point; they do this often enough that you notice it.

Exactly, AMD seem to be taking the same approach to FSR 4. Let’s see how it plays out because I think they are dipping their toes in the water to test how people react to their “news”.

It’s always just enough ifs, buts and maybes that push people towards getting that “upgrade”.
 
More rumours on price. I wish they would hurry up and announce it.

That matches the ocuk listing, so that's interesting.


https://www.overclockers.co.uk/pc-c...cs-cards/amd-radeon-rx-9070-xt-graphics-cards

link for those who missed it
 
The 9070 was deliberately renamed to mimic Nvidia's line-up; Frank Azor just said it. Nvidia's naming scheme is recognisable and AMD wants to be clear about what class of GPU people are looking at, so I'm guessing the 9070 is a ##70 class and the 9070 XT is a ##70 Ti.

The leaked performance is wrong; it is faster than that.

 
So $540 with the 12% VAT removed. Though to be honest it's hardly a big reveal, considering the 9070 XT is the replacement for the $475 7800 XT and that the 9070 XT is an AIB variant.
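For anyone checking the maths, here is a rough sketch of that conversion. It assumes the rumoured figure is VAT-inclusive at 12% and that "removed" means dividing the tax out rather than knocking 12% off the sticker price; the ~$605 input is just a hypothetical back-solved example, not a confirmed listing price.

# Rough sketch: converting a VAT-inclusive listing to an ex-VAT price.
# Assumption: the listing includes 12% VAT, so stripping it means dividing
# by 1.12 rather than subtracting 12% of the listed price.
VAT_RATE = 0.12

def ex_vat(price_inc_vat: float, rate: float = VAT_RATE) -> float:
    """Return the pre-tax price for a VAT-inclusive figure."""
    return price_inc_vat / (1 + rate)

# A hypothetical listing of ~$605 inc. VAT works out to roughly $540 ex-VAT.
print(round(ex_vat(605), 2))  # ~540.18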
 
The 9070 was deliberately renamed to mimic Nvidia's line-up; Frank Azor just said it. Nvidia's naming scheme is recognisable and AMD wants to be clear about what class of GPU people are looking at, so I'm guessing the 9070 is a ##70 class and the 9070 XT is a ##70 Ti.

The leaked performance is wrong; it is faster than that.


Yeah, their naming scheme slide confirmed this was the case. A lot of people understandably, but mistakenly, assumed it was a performance comparison.

I wonder if for the next gen AMD will do a Roman-numeral hybrid name (X for 10). The X070, or just X70, for example?

If they undercut the 5070 by $70-$80 with better performance, and the 5070 Ti by $150 with similar or better performance, then I would say they are in the right ballpark.
 
Yeah, their naming scheme slide confirmed this was the case. A lot of people understandably, but mistakenly, assumed it was a performance comparison.

I wonder if for the next gen AMD will do a Roman-numeral hybrid name (X for 10). The X070, or just X70, for example?

Yes, that slide has nothing to do with positioning performance; it's about positioning naming.

Also, FSR 4 is on-the-fly ML and RDNA 3 doesn't have the ML performance to do it, but they are looking at optimising it to try to get it working in some form on RDNA 3. The ML performance on RDNA 4 is massively higher than it is on RDNA 3; at CES they talked about Strix Halo having 2x the ML performance of the 4090, though I'm sure that is very cherry-picked.
 