RDNA 3 rumours Q3/4 2022

I had to google, but isn't AMD's 6800XT at 300W/7nm, while Nvidia's 3080/8nm sits at 320W?
It's AMD making the perf/watt gains though, as the 3080 is 25-30% faster than a 2080 Ti while using 25-30% more power.

Whereas the 6700 XT is 35% faster than the 5700 XT while using just 2% more power.

This was achieved on the same 7nm node too, while Nvidia went from 12nm to 8nm.
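
As a rough sanity check on those percentages (approximate forum figures; the sketch below uses the midpoints of the quoted ranges), the generational perf/watt gain is just the performance ratio divided by the power ratio:

```python
# Rough perf/watt comparison using the approximate figures quoted above.
# relative perf/watt gain = (relative performance) / (relative power draw)

def perf_per_watt_gain(perf_gain_pct: float, power_gain_pct: float) -> float:
    """Return the generational perf/watt multiplier, e.g. 1.32 = +32%."""
    return (1 + perf_gain_pct / 100) / (1 + power_gain_pct / 100)

# RTX 3080 vs RTX 2080 Ti: ~27% faster for ~27% more power (midpoint of 25-30%)
print(f"Ampere vs Turing: {perf_per_watt_gain(27, 27):.2f}x perf/watt")  # ~1.00x

# RX 6700 XT vs RX 5700 XT: ~35% faster for ~2% more power
print(f"RDNA2 vs RDNA1:   {perf_per_watt_gain(35, 2):.2f}x perf/watt")   # ~1.32x
```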
 
Look how insanely good RDNA2 can be at perf/watt if you tune it:
https://www.youtube.com/watch?v=rZz6BXXGjVc

I think the reported power usage is for the GPU chip only, but that is under 200W for Big Navi. That means with more optimizations they can add more hardware in the same power budget.
This is from a website friendly to Nvidia:

https://www.techpowerup.com/review/amd-radeon-rx-6800-xt/36.html
 
At 2.05 to 2.1GHz that's downclocked a bit, but not much; add 10% to that and you're at about 2.3GHz.

130 to 135 watts, that is stupidly good.
 
135W is for the chip only; probably around 200W for the whole board. But it is still good, and it shows there is still room for AMD to improve perf/watt. Nvidia has this problem: even if they move to a much smaller node, they will need to improve perf/watt a lot. AMD just needs to add more hardware to their chips.
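
Roughly, using the figures above: 135W is the chip-only reading, and the ~65W allowance for memory, VRM losses and fans in the sketch below is an assumption to get to the ~200W board estimate, not a measured number.

```python
# Back-of-the-envelope numbers from the posts above (estimates, not measurements).

chip_power_w = 135       # reported GPU-chip-only draw in the tuned RDNA2 example
board_overhead_w = 65    # rough allowance for GDDR6, VRM losses, fans (assumption)
print(f"Estimated total board power: ~{chip_power_w + board_overhead_w} W")  # ~200 W

tuned_clock_ghz = 2.1    # roughly where the tuned card sits
print(f"+10% clock: ~{tuned_clock_ghz * 1.10:.2f} GHz")                      # ~2.31 GHz
```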
 
I can run the 6900 XT at the stock reference clock speed of 2350MHz (actual in-game speed) at only 0.950-0.975V. Stock voltage is 1.175V. This reduction in voltage reduces power draw considerably. I have never seen a GPU accept such a huge undervolt and still maintain stock performance.
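
As a rough idea of why such a drop in voltage cuts power so much: dynamic power scales roughly with the square of voltage at a fixed clock. The sketch below ignores leakage and memory power, so it overstates the real saving, but it puts the dynamic-power reduction at around a third.

```python
# Approximate dynamic-power saving from an undervolt at constant clock.
# Dynamic power ~ C * V^2 * f, so at fixed frequency it scales with V^2.
# This ignores leakage and memory power, so real savings will be smaller.

stock_v = 1.175
undervolt_v = 0.9625  # midpoint of the 0.950-0.975 V range reported above

scale = (undervolt_v / stock_v) ** 2
print(f"Dynamic power at the undervolt: ~{scale:.0%} of stock")  # ~67%
print(f"Rough dynamic-power saving:     ~{1 - scale:.0%}")       # ~33%
```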
 
Don't forget all of Nvidia's Ampere cards can undervolt extremely well too; 3080s can use at least 100W less power for pretty much the same performance, and I think the 3090 is similar. Of course AMD are still better for power efficiency this round, regardless of undervolting or not.

Do the tensor cores use more power?
 
Ah I see, yeah, I forgot the 135W is just the power of the GPU, not the memory.
 
Yes, but your 6900 XT is a bloody good sample, m8! I run 2490/2500MHz in game at 1.1V; that seems to give more consistent performance at that setting for me (not a great sample but still decent-ish).
 
Yes, I run my 3080 at 900mV/1920MHz. That works with CP2077 at 1440p running Psycho RT, DLSS Quality. I can clock it higher when no RT is in use, while keeping the same undervolt, but I have not tried without DLSS.

Who is buying a high-end GPU and worrying about power efficiency, though? Didn't the R9 290 use a fair bit of power? I bought one of those as well :)
 
Of course the tensor cores use power; every piece of hardware uses power if/when it is used.
In fact I think this is what makes the difference in this generation of cards. AMD designed a chip that works fine on a low power budget; the power consumption did not increase a lot compared with the 5700 XT.
Nvidia went berserk, allowed a huge power budget and put more hardware in their cards. They use much more power than Turing, which was on an even worse node.

The problem is you can't do this every time. Will the 4080 use 500W? And the 4090 use 4 power connectors? :D
So they will have to improve perf/watt a lot, and maybe this is why the next gen of cards from Nvidia will come later. For AMD it is easier, but as I said they have other problems. Their hardware is their smallest problem.
 
It is not about worrying about power efficiency; it is a hint about how things will evolve in the future. You can't keep increasing the power draw by a lot, you will need to stop somewhere.
 
I am not talking about power consumption. I am talking about perf/watt, because this can be a hint about the future.

AMD was on the same node and they had a huge perf gain vs their older generation, so they increased perf/watt a lot.
Nvidia moved to a better node and had a huge perf gain vs Turing, but also a good increase in power used. So they had a much worse perf/watt increase vs AMD, considering they also moved to a better node.

And it is fine if they use more power. The problem is you can't increase the power used forever; somewhere you need to stop and work on perf/watt. Some of this will be solved by moving to the 5nm node, but for a dramatic increase in performance they will need a huge increase in perf/watt.
AMD is not looking too bad, I mean they can move to 5nm and use 80-100 more watts' worth of hardware to match Nvidia.
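
To put that argument into numbers: performance is perf/watt multiplied by power, so any performance target has to be split between the two. The targets in the sketch below are purely illustrative, not claims about actual next-gen cards.

```python
# performance = (perf/watt) * power, so:
#   required perf/watt gain = target perf gain / acceptable power increase

def required_perf_per_watt(target_perf_gain: float, power_increase: float) -> float:
    return target_perf_gain / power_increase

# Illustrative targets only:
# to double performance while adding only 20% more power...
print(f"{required_perf_per_watt(2.0, 1.20):.2f}x perf/watt needed")  # ~1.67x

# ...versus doubling performance by simply doubling power draw
print(f"{required_perf_per_watt(2.0, 2.00):.2f}x perf/watt needed")  # 1.00x
```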
 
Power consumption affects heat, which in turn affects fan noise, so it's still an important factor even if you don't care about the electricity bill.

I've pretty much always run my 3080 undervolted, and with the summer coming, doing so will become even more of a necessity.
 
It's not as if Ampere (8nm) is on the same node as Turing (12nm).

Undervolting is simply a case of you binning it. There is a set silicon quality limit AMD / Nvidia bin their chips to; it's a balance of overall yields and power consumption, but every chip gets the same voltage, and that voltage is determined by the worst-quality chips. A better-quality chip will run at a lower voltage, so you have some headroom to overclock or undervolt, unless you are unlucky enough to get one that's at the quality the voltage is binned for, in which case you get little or nothing.

It's the same with CPUs. :)
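
A toy sketch of that binning point, with made-up numbers: the factory voltage has to cover the worst chip that passes the bin, so the undervolt headroom on any given card is simply how much better its silicon is than that worst case.

```python
import random

# Toy model of voltage binning: every chip in the bin ships at the same voltage,
# set by the worst-quality chip that still passes. Your undervolt headroom is
# how much better your particular chip is than that worst case.
# All numbers are made up for illustration.

random.seed(0)
min_stable_v = [round(random.uniform(0.95, 1.18), 3) for _ in range(10)]  # per-chip minimum stable voltage
shipped_v = max(min_stable_v)   # factory voltage must cover the worst chip in the bin

for v in sorted(min_stable_v):
    print(f"chip needs {v:.3f} V -> undervolt headroom {shipped_v - v:.3f} V")
```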
 
Look here at 8K in Doom Eternal:

3090: 420W average
6900 XT: 311W average

So again, AMD does not have a hardware problem. They can easily catch Nvidia by adding a lot more hardware and using the same power Nvidia does. It was probably a bad idea to bet on low power usage while Nvidia went full berserk, but that may help them in the future.
Nvidia needs to work on perf/watt because they can't increase the power usage forever, and the transition to 5nm will not be enough for a good performance increase in the next generation.
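
In ratio terms, those averages put the 3090 at roughly a third more power than the 6900 XT, which is about the extra budget AMD would have if it matched Nvidia's power draw:

```python
# Average power draw quoted above for Doom Eternal at 8K.
power_3090_w = 420
power_6900xt_w = 311

print(f"3090 draws {power_3090_w / power_6900xt_w:.2f}x the 6900 XT's power")       # ~1.35x
print(f"Headroom if AMD matched Nvidia's budget: ~{power_3090_w - power_6900xt_w} W")  # ~109 W
```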
 
Ouch, Nvidia more than 100% faster in Tomb Raider :D
 
Makes no sense, the 3080 arrived as speculated and well in advance of RDNA2.

Do I consider RT more important than raster performance? Yes! Rasterisation has hit levels above and beyond what is required. I'd have been happy with 1080 Ti raster performance and double the RT cores on my 3080. I've already said this before. Rasterisation should be considered legacy by now, something that only iGPUs hold on to.

No? Care to explain that one? :cry:

I didn't mention Quake. I did mention Quake 2 RTX, a fully path-traced engine and arguably our best example of raytracing within a game so far. I'd agree most gamers couldn't care less about it, as they have no understanding of what it is. Ignorance plagues tech forums.

Yes, Nvidia offers the better product, which was partly the point of my original post.

I'm not a pro gamer though, so FPS in legacy titles/engines has no interest for me. Both AMD and Nvidia offer cards that provide more FPS than required in that respect. I simply want new tech that doesn't leave games looking as though they are made from cardboard cutouts.


For me RT is important because more and more games are releasing with it,

and we're still at a stage where RT takes up most of the frametime on each frame. Therefore, in any game that does RT, RT GPU performance has much more importance than rasterisation, so I place a lot of value on RT performance, and it's only going to increase.

And for people who think that we'll soon get to the stage where RT performance doesn't matter: we're not even close. Almost everything in a game can be rendered to lifelike image quality with RT; we're only at the tip of the iceberg now. RT has incredible scaling: as GPUs get better at doing RT, the amount of RT effects, and the quality of those effects, put into games will just keep growing, so we're not going to hit the point where it flattens out for a long time. Maybe in 10 years RT performance won't matter any more.
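
To make the frametime point concrete with deliberately made-up numbers (a roughly 60/40 RT/raster split and a serial frame, neither of which is from a real measurement): speeding up the RT portion moves the frame rate noticeably more than the same speed-up in raster.

```python
# Hypothetical frame-time split to illustrate why RT throughput dominates
# in RT-heavy games. The split is made up, and the model assumes the RT and
# raster work happen serially within the frame.

rt_ms, raster_ms = 10.0, 6.7   # hypothetical 16.7 ms frame, RT-heavy

def fps(rt: float, raster: float) -> float:
    return 1000 / (rt + raster)

print(f"baseline:            {fps(rt_ms, raster_ms):.0f} fps")        # ~60 fps
print(f"RT part 30% faster:  {fps(rt_ms / 1.3, raster_ms):.0f} fps")  # ~69 fps
print(f"raster 30% faster:   {fps(rt_ms, raster_ms / 1.3):.0f} fps")  # ~66 fps
```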
 