
AMD RDNA3 unveiling event

Soldato
Joined
6 Feb 2019
Posts
17,857
Genius indeed.

In Europe they're equal in price (1,300 euros), while the 4080 is better.

Also, nah, better to just keep trying AMD because, you know, fanboyism. I mean, they've got no cards to process the RMA with, so better to wait around with that dud, huh?



Also true.


In Germany the RTX 4080 can currently be purchased for less than the 7900 XTX.
 
Last edited:
Associate
Joined
19 Sep 2022
Posts
563
Location
Pyongyang
The XTX will inch further up in price; a large quantity of units has been taken out of supply. Maybe explore a liquid cooling block (which I personally consider a waste of money).
 
Last edited:
Associate
Joined
27 Aug 2008
Posts
1,877
Location
London
Jensen, is that you? :D

Shhh, not so loud, I'm trying to get a feel for any interest in a GeForce leather range.
Just $299, billions of rays per second, maxed out multi-bounce indirect lighting effects, a feast for the eyes! And my bank account.
 
Last edited:
  • Haha
Reactions: TNA
Soldato
Joined
6 Feb 2019
Posts
17,857
Benchmarking the RDNA3 architecture


Some pretty interesting bits in there that you won't see in other reviews. I found the exploration of AMD's claimed dual-issue compute capability particularly interesting.

"RDNA 3’s dual issue mode has limited impact. It relies heavily on the compiler to find VOPD possibilities, and the compilers are frustratingly stupid at seeing very simple optimizations. For example, the FMA test uses one variable for two inputs, which should make it possible for the compiler to meet dual issue constraints. But obviously the compiler didn’t make it happen."

"It’ll be interesting to see how RDNA 3 performs once AMD has more time to optimize for the architecture, but they’re definitely justified in not advertising dual issue capability as extra shaders cores. Typically, GPU manufacturers use shader count to describe how many FP32 operations their GPUs can complete per cycle. In theory RDNA3's dual issue would double FP32 throughput per WGP with very little hardware overhead besides the extra execution units. But it does so by pushing heavy scheduling responsibility to the compiler. AMD is probably aware that its compiler technology is not up to the task. AMD can optimize games by replacing the shaders with hand-optimized assembly instead of relying on this compiler code generation."


So that's perhaps where RDNA3's rumoured gaming performance went missing: for a game to make full use of the RDNA3 architecture, game-specific hand-written code is needed. Perhaps games need to be built from the ground up for it, or else AMD needs to do it themselves by shipping hand-written per-game optimisations in its drivers.

AMD's software and driver teams are going to be very, very busy in 2023 if AMD intends to eventually unlock RDNA3's full performance potential.
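To put rough numbers on the "doubled FP32 throughput per WGP" claim, here's a quick back-of-the-envelope sketch (my own illustration, not from the article; the 96 CU and ~2.5 GHz figures are the commonly quoted Navi 31 / 7900 XTX specs). The headline ~61 TFLOPS figure only materialises if the compiler manages to pair instructions into VOPD dual issues every cycle; if it rarely finds pairs, effective throughput sits much closer to the single-issue number.

```python
# Back-of-the-envelope peak FP32 throughput for Navi 31 (7900 XTX),
# with and without VOPD dual issue. Illustrative only: real shader
# throughput depends on how often the compiler actually finds VOPD pairs.

CUS           = 96      # compute units (48 WGPs x 2 CUs per WGP)
SIMDS_PER_CU  = 2       # SIMD32 units per CU
LANES         = 32      # lanes per SIMD32
BOOST_CLOCK   = 2.5e9   # ~2.5 GHz boost clock (commonly quoted figure)
FLOPS_PER_FMA = 2       # a fused multiply-add counts as two FLOPs

def fp32_tflops(dual_issue: bool) -> float:
    """Peak FP32 TFLOPS assuming every lane retires an FMA each cycle."""
    issue_factor = 2 if dual_issue else 1  # VOPD co-issues a second FP32 op
    ops_per_clock = CUS * SIMDS_PER_CU * LANES * FLOPS_PER_FMA * issue_factor
    return ops_per_clock * BOOST_CLOCK / 1e12

print(f"Single issue: {fp32_tflops(False):.1f} TFLOPS")  # ~30.7
print(f"Dual issue:   {fp32_tflops(True):.1f} TFLOPS")   # ~61.4, the headline figure
```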
 
Last edited:
Soldato
Joined
14 Aug 2009
Posts
2,889
Benchmarking the RDNA3 architecture


Some pretty interesting bits in there that you won't see in other reviews. I found the exploration of AMD's claimed dual-issue compute capability particularly interesting.

"RDNA 3’s dual issue mode has limited impact. It relies heavily on the compiler to find VOPD possibilities, and the compilers are frustratingly stupid at seeing very simple optimizations. For example, the FMA test uses one variable for two inputs, which should make it possible for the compiler to meet dual issue constraints. But obviously the compiler didn’t make it happen."

"It’ll be interesting to see how RDNA 3 performs once AMD has more time to optimize for the architecture, but they’re definitely justified in not advertising dual issue capability as extra shaders cores. Typically, GPU manufacturers use shader count to describe how many FP32 operations their GPUs can complete per cycle. In theory RDNA3's dual issue would double FP32 throughput per WGP with very little hardware overhead besides the extra execution units. But it does so by pushing heavy scheduling responsibility to the compiler. AMD is probably aware that its compiler technology is not up to the task. AMD can optimize games by replacing the shaders with hand-optimized assembly instead of relying on this compiler code generation."


So that's perhaps where RDNA3's rumoured gaming performance went missing: for a game to make full use of the RDNA3 architecture, game-specific hand-written code is needed. Perhaps games need to be built from the ground up for it, or else AMD needs to do it themselves by shipping hand-written per-game optimisations in its drivers.

AMD's software and driver teams are going to be very, very busy in 2023 if AMD intends to eventually unlock RDNA3's full performance potential.
Same thing as before, when you could only get the best out of their cards by using Mantle.
Considering the market share AMD has, it's pretty safe to say that not a lot of studios will bother to code specifically for them.
Best of luck to the drivers team and to players hoping that their favourite games get the necessary attention.
 
Soldato
Joined
21 Oct 2012
Posts
10,852
Location
London/S Korea
Same thing as before, when you could only get the best out of their cards by using Mantle.
Considering the market share AMD has, it's pretty safe to say that not a lot of studios will bother to code specifically for them.
Best of luck to the drivers team and to players hoping that their favourite games get the necessary attention.
That's a very wild assumption. It completely ignores that AMD is used in all consoles except the Switch, and that their implementations are all open source, so cheaper to adopt.
 
Soldato
Joined
14 Aug 2009
Posts
2,889
That's a very wild assumption. It completely ignores that AMD is used in all consoles except the Switch, and that their implementations are all open source, so cheaper to adopt.
Open source is meaningless; unless it's actively pushed by them it's not going to make much of a difference.

Consoles are different. Back in the R9 290 days I only got the best out of that card with Mantle. The hardware scheduler and async compute only worked at their best when code was written specifically for them. Usually it wasn't, and you could see it when DX11 code stumbled on draw calls and the like in heavy scenes, where Nvidia was more stable.

LE: consoles are RDNA 2, not RDNA 3, so... pointless.
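To make the draw-call point a bit more concrete, here's a tiny illustrative sketch (the per-call costs are hypothetical numbers picked for illustration, not measured DX11 or Mantle figures): when submission is CPU-bound, per-draw-call overhead alone puts a ceiling on frame rate, and a thinner API raises that ceiling.

```python
# Illustrative sketch of how per-draw-call CPU overhead caps frame rate.
# The microsecond costs are hypothetical, not measured DX11/Mantle numbers.

def fps_ceiling(draw_calls: int, us_per_call: float) -> float:
    """Upper bound on FPS when the CPU submission thread is the bottleneck."""
    frame_ms = draw_calls * us_per_call / 1000.0
    return 1000.0 / frame_ms

HEAVY_SCENE = 10_000  # draw calls in a busy scene

for label, cost_us in [("high-overhead driver path (5.0 us/call)", 5.0),
                       ("thin, low-overhead path (0.5 us/call)", 0.5)]:
    print(f"{label}: ~{fps_ceiling(HEAVY_SCENE, cost_us):.0f} fps ceiling")
```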
 
Last edited:

TNA

Caporegime
Joined
13 Mar 2008
Posts
28,278
Location
Greater London
Same thing as before, when you could only get the best out of their cards by using Mantle.
Considering the market share AMD has, it's pretty safe to say that not a lot of studios will bother to code specifically for them.
Best of luck to the drivers team and to players hoping that their favourite games get the necessary attention.

I don't get why AMD are not interested in gaining market share, even if it is at the expense of some profits for a gen or two. Surely they would have improved market share quite a bit had they called the 7900 XTX a 7800 XT and priced it $799 at most.

In one or two gens they may end up at 5% market share the way they're going :cry:
 
Soldato
Joined
21 Oct 2012
Posts
10,852
Location
London/S Korea
Open source is meaningless; unless it's actively pushed by them it's not going to make much of a difference.

Consoles are different. Back in the R9 290 days I only got the best out of that card with Mantle. The hardware scheduler and async compute only worked at their best when code was written specifically for them. Usually it wasn't, and you could see it when DX11 code stumbled on draw calls and the like in heavy scenes, where Nvidia was more stable.
It is being actively pushed. We are seeing FSR and the rest appearing in most games.

Consoles aren’t that different. Only the operating system. How the game operates on the hardware is just the same.

AMD will optimise their drivers for all games over time. As has been pointed out in reviews, the games that AMD performed really well in had driver optimisations, so we know the potential is there.
 
Last edited:
Caporegime
Joined
4 Jun 2009
Posts
31,325
I wish people would stop saying AMD being in consoles will benefit them; we (including me! :p) have been saying that since the day they got into consoles, yet here we are still waiting for it... If anything, Nvidia actually benefit more when you look at Sony's/PS ports :cry: It does make you wonder, though: if AMD weren't in consoles, how much worse would games run on the desktop side? :eek:


I did raise the concern a while back about this chiplet design and who would have to optimise for it to get the best from it:

Of course drivers could improve it, but I suspect there are going to be quite a few issues with the chiplet design. That's only natural given it's the first time with new tech, and there will be work to be done not just by AMD but also by game developers etc. to get the best from it, essentially much like CrossFire/SLI required good support from both AMD/Nvidia and developers. The problem is how long it will take for AMD/developers to iron out the kinks and get the best from this new design...

And even before that, but I can't find the post now.

Open source is meaningless; unless it's actively pushed by them it's not going to make much of a difference.

Consoles are different. Back in the R9 290 days I only got the best out of that card with Mantle. The hardware scheduler and async compute only worked at their best when code was written specifically for them. Usually it wasn't, and you could see it when DX11 code stumbled on draw calls and the like in heavy scenes, where Nvidia was more stable.

AMD prefers open source as it means they can take an over-the-fence approach and let others do some of the work for them (this has always been clear ever since Roy made a comment more or less saying so). That's why open source is great: you get a whole community looking into your code and able to recommend improvements, i.e. create a branch from the main branch; add, remove or change code; then open a pull request and have someone from AMD (or others who have been given the permissions) review the changes and, if all looks good, merge them into the main branch. That, and given AMD's market share, they wouldn't be able to get away with closed source.
 
Last edited:
Soldato
Joined
14 Aug 2009
Posts
2,889
I don't get why AMD are not interested in gaining market share, even if it is at the expense of some profits for a gen or two. Surely they would have improved market share quite a bit had they called the 7900 XTX a 7800 XT and priced it $799 at most.

In one or two gens they may end up at 5% market share the way they're going :cry:
They have a "holy" duty, apparently, towards their investors to maximize profits now, with not much regard for the future. Since they are "luxury" items, perhaps we are seeing this wrong: the fewer people who have them, the more their value increases! So AMD GPUs are actually worth more than Nvidia's due to lower market share! :))
It is being actively pushed. We are seeing FSR and the rest appearing in most games.

Consoles aren’t that different. Only the operating system. How the game operates on the hardware is just the same.

AMD will optimise their drivers for all games over time. As has been pointed out in reviews, the games that AMD performed really well in had driver optimisations, so we know the potential is there.
Consoles are RDNA 2, not 3.
 
Soldato
Joined
21 Oct 2012
Posts
10,852
Location
London/S Korea
They have a "holy" duty, apparently, towards their investors to maximize profits now, with not much regard for the future. Since they are "luxury" items, perhaps we are seeing this wrong: the fewer people who have them, the more their value increases! So AMD GPUs are actually worth more than Nvidia's due to lower market share! :))

Consoles are RDNA 2, not 3.
At the moment. The Xbox and PS5 updates are expected to be RDNA 3.

On profits, AMD still make big money. They exceeded Nvidia on gaming in the last financial results ($1.60bn vs $1.57bn). Nvidia become a small player in personal computing and consoles once you look at the whole market, where Intel is obviously the dominant player for both CPU and GPU on PC, followed by AMD for PC and consoles. If anything, that creates pressure on Nvidia, as it's their market to lose and they don't operate in the other personal computing segments. Nvidia know this, which is why they tried to buy ARM, but that fell through. Nvidia do win again once you look at industrial/enterprise, AI and cloud computing, which is arguably their biggest business.
 
Last edited:
Soldato
Joined
14 Aug 2009
Posts
2,889
At the moment. The Xbox and PS5 updates are expected to be RDNA 3.

On profits, AMD still make big money. They exceeded Nvidia on gaming in the last financial results ($1.60bn vs $1.57bn). Nvidia become a small player in personal computing and consoles once you look at the whole market, where Intel is obviously the dominant player for both CPU and GPU on PC, followed by AMD for PC and consoles. If anything, that creates pressure on Nvidia, as it's their market to lose and they don't operate in the other personal computing segments. Nvidia know this, which is why they tried to buy ARM, but that fell through. Nvidia do win again once you look at industrial/enterprise, AI and cloud computing, which is arguably their biggest business.
Expected is one thing. Released, with games actually made with that architecture in mind and making use of RDNA3 (and to what degree?), is another. Until then, the next or next-next series of cards will be out.
 

GAC

Soldato
Joined
11 Dec 2004
Posts
4,688
At the moment. The Xbox and PS5 updates are expected to be RDNA 3.
And the consoles will still cost the same, no doubt, but will turn a better profit for AMD, Sony and MS: new process, smaller dies, better cooling, savings all around. Apart from the buyer, because money :D
 

TNA

Caporegime
Joined
13 Mar 2008
Posts
28,278
Location
Greater London
They have a "holy" duty, apparently, towards their investors to maximize profits now, with not much regard for the future. Since they are "luxury" items, perhaps we are seeing this wrong: the fewer people who have them, the more their value increases! So AMD GPUs are actually worth more than Nvidia's due to lower market share! :))

[image: l5Uv2sn.jpg]
 
Suspended
Joined
17 Mar 2012
Posts
48,325
Location
ARC-L1, Stanton System
Benchmarking the RDNA3 architecture


Some pretty interesting bits in there that you won't see in other reviews. I found the exploration of AMD's claimed dual-issue compute capability particularly interesting.

"RDNA 3’s dual issue mode has limited impact. It relies heavily on the compiler to find VOPD possibilities, and the compilers are frustratingly stupid at seeing very simple optimizations. For example, the FMA test uses one variable for two inputs, which should make it possible for the compiler to meet dual issue constraints. But obviously the compiler didn’t make it happen."

"It’ll be interesting to see how RDNA 3 performs once AMD has more time to optimize for the architecture, but they’re definitely justified in not advertising dual issue capability as extra shaders cores. Typically, GPU manufacturers use shader count to describe how many FP32 operations their GPUs can complete per cycle. In theory RDNA3's dual issue would double FP32 throughput per WGP with very little hardware overhead besides the extra execution units. But it does so by pushing heavy scheduling responsibility to the compiler. AMD is probably aware that its compiler technology is not up to the task. AMD can optimize games by replacing the shaders with hand-optimized assembly instead of relying on this compiler code generation."


So that's perhaps where RDNA3's rumoured gaming performance went missing: for a game to make full use of the RDNA3 architecture, game-specific hand-written code is needed. Perhaps games need to be built from the ground up for it, or else AMD needs to do it themselves by shipping hand-written per-game optimisations in its drivers.

AMD's software and driver teams are going to be very, very busy in 2023 if AMD intends to eventually unlock RDNA3's full performance potential.

It does make you wonder if AMD have forgotten that they had a dual issue architecture once before and moved away from it for good reason.
 