
AMD RX 7900XT, 90% to 130% faster than 6900XT, MCM, Q4 2022.

I thought it was shown when RDNA 2 was first released that the gains in efficiency between RDNA 1 and 2 were too great to be down to just being on a better node?

RX 6800 (non-XT) used about 10% more power than the 5700XT with about +60% performance, on the same node.

Again, as with Intel 14nm vs GloFo 14nm, I'd like to see people try to explain that away without conceding that architecture has anything to do with it.
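To put those figures in perf-per-watt terms (the +60% performance and +10% power numbers above are the poster's rough estimates, not measurements), a quick sketch:

```python
# Rough perf-per-watt comparison, RX 6800 vs 5700XT,
# using the estimates from the post (+60% performance, +10% power).
perf_ratio = 1.60    # ~60% faster
power_ratio = 1.10   # ~10% more power

ppw_ratio = perf_ratio / power_ratio
print(f"Perf/watt improvement: {ppw_ratio:.2f}x")  # ~1.45x
```

So even on the same node, that would be roughly a 45% perf-per-watt gain from architecture alone.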
 
I am not saying Nvidia is going to be more efficient or that AMD is going to be more efficient.

All I am saying is that we don't know, and we can't use rumours or the current gen of cards to predict which company will have the better power efficiency.

And by power efficiency, I mean performance per watt.
 
I am not saying Nvidia is going to be more efficient or that AMD is going to be more efficient.

All I am saying is that we don't know, and we can't use rumours or the current gen of cards to predict which company will have the better power efficiency.

And by power efficiency, I mean performance per watt.

Sure, and architecture matters just as much as, if not more than, the node it's on when it comes to power efficiency. I'll say it again: the 1800X, with 8 cores and 16 threads, had the same performance as Intel's 8-core, 16-thread equivalent but used half the power, despite Intel being on an arguably better node. That is a massive difference in architectural efficiency.
It's one of the reasons Intel cannot keep up with AMD in the data centre today, not even close. Intel are at least two generations behind.
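The 1800X claim can be framed the same way (the "half the power" figure is the poster's claim, not a benchmark result):

```python
# Equal performance at half the power doubles perf/watt.
# The "half the power" figure is the poster's claim, not a measurement.
performance = 1.0   # normalised: roughly equal performance
intel_power = 1.0   # normalised Intel power draw
amd_power = 0.5     # ~half, per the post

advantage = (performance / amd_power) / (performance / intel_power)
print(f"AMD perf/watt advantage: {advantage:.1f}x")  # 2.0x
```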
 
RX 6800 (non-XT) used about 10% more power than the 5700XT with about +60% performance, on the same node.

Again, as with Intel 14nm vs GloFo 14nm, I'd like to see people try to explain that away without conceding that architecture has anything to do with it.

RDNA2 performance per watt was hugely increased over RDNA1. Part of that was, imo, the 7nm process, but also architectural improvements for sure. AMD had not focused on power efficiency for many generations of GPUs and had previously been behind Nvidia in this metric, which is why Nvidia were the better choice for laptop GPUs for many years. It looks like they woke up, made all the changes they needed in a single generation, and took the lead in this area to a significant degree.

We can discuss it till the cows come home, but until the cards are in the hands of independent reviewers we just will not know.
 
RDNA2 performance per watt was hugely increased over RDNA1. Part of that was, imo, the 7nm process, but also architectural improvements for sure. AMD had not focused on power efficiency for many generations of GPUs and had previously been behind Nvidia in this metric, which is why Nvidia were the better choice for laptop GPUs for many years. It looks like they woke up, made all the changes they needed in a single generation, and took the lead in this area to a significant degree.

We can discuss it till the cows come home, but until the cards are in the hands of independent reviewers we just will not know.
5700XT and RX 6800 are on the same node :)
 
Sure, and architecture matters just as much if not more than what node its on when it comes to power efficiency, i'll say it again, the 1800X, 8 cores 16 threads had the same performance as Intel's 8 core 16 thread equivalent but used half the power despite Intel being on an arguably better node, that is a massive difference in architectural efficiency.
Its one of the reasons Intel cannot keep up with AMD in Data-centre today, not even close. Intel are at least 2 generations behind.

Why are you comparing with Intel and CPUs? Nvidia isn't Intel and CPUs aren't GPUs.

Second, I never said that RDNA 2's architecture was good or bad. And I know it's important. But we know that the Samsung 8nm node Ampere is using isn't even close to being as good as the original TSMC 7nm process that AMD used in RDNA 1, and the node RDNA 2 is on is better again.

You can't fully compare the architectures and say how much more efficient one is than the other without them being on roughly the same node. Even your CPU example is proof of that: we don't know how much more efficient AMD's architecture actually is than Intel's in that case. On the same node, AMD's efficiency lead would be even greater.

Surely you have to admit that Nvidia's Ampere GPUs would be much more power efficient if they were on the same node as AMD's RDNA 2?
 
5700XT and RX 6800 are on the same node :)


I just checked and you are right. It had slipped past me what node the 5700 cards were on; I assumed it was a lesser one because performance was mediocre.

AMD really did shake some special sauce onto RDNA2 and caught up in a single gen. Very impressive.
 
I just checked and you are right. It had slipped past me what node the 5700 cards were on; I assumed it was a lesser one because performance was mediocre.

AMD really did shake some special sauce onto RDNA2 and caught up in a single gen. Very impressive.

My original rumour:

90% to 130% faster than 6900XT (Rasterization)
375 watts to 450 watts

Not at all unreasonable. Let's say 400 watts at 100% faster: it's on 6nm, and even with architectural improvements not half as big as the RDNA1-to-RDNA2 jump, 2x as fast at 400 watts... absolutely.
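Taking the rumoured numbers at face value, and assuming the 6900XT's ~300W reference board power as the baseline (the rumour itself gives no baseline wattage), the implied perf/watt works out as:

```python
# Implied perf/watt gain vs 6900XT for the rumoured card.
# Assumes the 6900XT's ~300 W reference board power as the baseline;
# the perf/power pairs below are the rumour's figures.
BASE_POWER = 300.0  # watts, 6900XT reference

for perf_ratio, watts in [(1.90, 375.0), (2.00, 400.0), (2.30, 450.0)]:
    ppw = perf_ratio / (watts / BASE_POWER)
    print(f"{perf_ratio:.2f}x perf at {watts:.0f} W -> {ppw:.2f}x perf/watt")
```

Every point in the rumoured range lands around a ~1.5x perf/watt uplift over the 6900XT.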
 
5700XT and RX 6800 are on the same node :)

I don't think they are on the same process. They are both called N7, but I believe that's because TSMC just kept the N7 name for the improved process to avoid confusion. There definitely were node improvements between RDNA 1 and 2.
 
I just checked and you are right. It had slipped past me what node the 5700 cards were on; I assumed it was a lesser one because performance was mediocre.

No, you were right, there was definitely a node difference too. And you are doubly right because RDNA 2 was a big jump architecturally as well.
 
Why are you comparing with Intel and CPUs? Nvidia isn't Intel and CPUs aren't GPUs.

Second, I never said that RDNA 2's architecture was good or bad. And I know it's important. But we know that the Samsung 8nm node Ampere is using isn't even close to being as good as the original TSMC 7nm process that AMD used in RDNA 1, and the node RDNA 2 is on is better again.

You can't fully compare the architectures and say how much more efficient one is than the other without them being on roughly the same node. Even your CPU example is proof of that: we don't know how much more efficient AMD's architecture actually is than Intel's in that case. On the same node, AMD's efficiency lead would be even greater.

Surely you have to admit that Nvidia's Ampere GPUs would be much more power efficient if they were on the same node as AMD's RDNA 2?

I was using it as an example of just how much an architectural difference can make.

While on the subject of Intel, these two have history and with it Intel know too well that AMD with a bit of money for R&D are deadly, they are an extremely capable lot, underestimate them at your peril.
 
Oh, I totally think the TSMC 7nm process is excellent. So many excellent, power-efficient chips use it that I cannot ignore it as a large factor. AMD have also really focused on efficiency across all their products, and that makes a big difference as well.

Looking at the past can only give us so many clues to what the future will be. I will just wait and see, and make a decision when I have all the facts. Power usage will be one of the main factors in choosing my next GPU; I cannot justify a 600W GPU for my own personal use.
 
I'm looking forward to a proper fight between Nvidia and AMD, and this next round will be one. I think AMD's hardware will be better, but it's not just about that.
 
No, you were right, there was definitely a node difference too. And you are doubly right because RDNA 2 was a big jump architecturally as well.
The slide says it is the same. If there were any efficiency gains in the node between RDNA 1 and 2, they were probably so small that AMD didn't even bother to mention them.

[attachment: AMD slide showing the RDNA node]
 
A recent 6nm RDNA2 GPU that should not be mentioned, because it's recycled e-waste, would run 3GHz clocks if AMD had not put a 2870MHz clock limit on it.

I think RDNA3 will be the first 3GHz+ GPU out of the box.
 
Assuming you are talking about current GPUs, I believe it is both, weighted more towards architecture, as AMD stripped out the compute stuff that was in GCN.
Ok, makes sense; all those Tensor and CUDA cores must take up some space. Thinking about it, I did hear something about AMD splitting their cards. Makes sense if you're just looking at it for gaming: keep the compute for the professional users.
 