What do gamers actually think about Ray-Tracing?

Who forces them? Even in the best of times, when AMD had clearly better GPUs, people still preferred to buy NVIDIA. So people got exactly what they worked so hard for, for so many years. :)
Not really a valid argument given I preferred to buy a 9800Pro, X800GTO, X1950Pro, HD3870, HD4850, HD5850 back in the day because as you say, AMD clearly had better GPUs. Yes, yes they did. I bought them. I don't buy them now, see what I mean?

That also makes your second point not really applicable either. AMD thought very much about money when they priced their inferior range so close to the better-performing chips at launch this generation; a bit of self-awareness and honesty about their product wouldn't have gone amiss there. But no, they were almost as bad as Nvidia.

So yes, I would much rather both teams make GPUs that compete against one another; I've had GPUs from both. It was easy: get whichever performed best. I still do that now.
 
The way I see this going: if AMD can get hold of the burgeoning handheld PC market and work their graphics cards up from there, I can see them making money and gaining a bit of market share. If I were in AMD's position I'd forget the high end; it's clearly high margin for Nvidia, but the volume is really in the 4070-series/7800 XT/7900 GRE-and-under market. AMD might be a bit more power hungry at the high end, but they are also quite performant and power frugal at the lower end of the scale.

True, but I also don't think it's dramatically different.

Gaming:
RTX 4070: 201 Watts
RX 7800 XT: 250 Watts

This is board power. The RX 7800 XT has one more memory controller, two more memory ICs, and a 10-phase reference power design versus 6 phases on the 4070; all of that uses power, and the 7800 XT is on 5 nm versus 4 nm for the 4070.
And this is not for nothing: there are games where the 4070 falls off a bit more at 4K versus 1080p and 1440p than the 7800 XT does, because the higher memory bandwidth helps a little more at very high resolutions. The downside is that it costs power even when you don't need it.

The 7800 XT is a beefier GPU on a beefier PCB built on a less advanced node vs the 4070.

The RX 7800 XT is also 7% faster.

AMD at an architectural level is no less efficient than Nvidia, IMO. The 7900 XTX has the same sort of PCB, the same 384-bit IMC and the same 24 GB as the 4090; it's 80% of the performance of the 4090 for 80% of the power.
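For a rough sense of what those figures work out to in perf-per-watt terms, here is a minimal back-of-envelope sketch using only the numbers quoted in this post (the 7% figure and the 80%/80% claim are the post's own; nothing here is a new measurement):

```python
# Rough perf-per-watt from the figures quoted in this post (relative
# performance, gaming power). These are the thread's numbers, not new tests.
cards = {
    "RTX 4070":   {"rel_perf": 1.00, "power_w": 201},
    "RX 7800 XT": {"rel_perf": 1.07, "power_w": 250},  # "7% faster"
}
for name, c in cards.items():
    print(f"{name:10s}  {c['rel_perf'] / c['power_w'] * 100:.3f} rel-perf per 100 W")

# The "80% of the performance for 80% of the power" claim about the
# 7900 XTX vs the 4090 implies an identical perf/W ratio between the two:
print("7900 XTX : 4090 perf/W ratio =", 0.80 / 0.80)
```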
 
There is one more critical thing that people need to understand.

AMD have hardware thread scheduling: Asynchronous Compute Engines (ACEs), four of them on my GPU. They are effectively like mini CPU cores on the GPU, and they use power.

Nvidia don't have those; they use software-based thread scheduling, in other words they use your CPU to do it. That also costs power, it just isn't counted with the GPU power, unlike AMD's ACE units.
 
AMD have hardware thread scheduling: Asynchronous Compute Engines (ACEs), four of them on my GPU. They are effectively like mini CPU cores on the GPU, and they use power.

Nvidia don't have those; they use software-based thread scheduling, in other words they use your CPU to do it. That also costs power, it just isn't counted with the GPU power, unlike AMD's ACE units.
The ACEs are also why AMD performs better than Nvidia (like for like) in FSR 3 frame generation.

If AMD don't **** up the FSR 3.1 launch and they partner with DF/HUB/GN, they should be revealing it on all AMD hardware for maximum marketing exposure.
 
True, but I also don't think it's dramatically different.

Gaming:
RTX 4070: 201 Watts
RX 7800 XT: 250 Watts

This is board power. The RX 7800 XT has one more memory controller, two more memory ICs, and a 10-phase reference power design versus 6 phases on the 4070; all of that uses power, and the 7800 XT is on 5 nm versus 4 nm for the 4070.
And this is not for nothing: there are games where the 4070 falls off a bit more at 4K versus 1080p and 1440p than the 7800 XT does, because the higher memory bandwidth helps a little more at very high resolutions. The downside is that it costs power even when you don't need it.

The 7800 XT is a beefier GPU on a beefier PCB built on a less advanced node vs the 4070.

The RX 7800 XT is also 7% faster.

AMD at an architectural level is no less efficient than Nvidia, IMO. The 7900 XTX has the same sort of PCB, the same 384-bit IMC and the same 24 GB as the 4090; it's 80% of the performance of the 4090 for 80% of the power.

The 4070 uses less than 200 W; the average looks to be around 185-195 W.


I haven't looked at CPU power usage stats; do you have any stats to show this?

The main issue with RDNA 3 from launch was that it had much higher power consumption, especially when more than one monitor was connected. I think that's been improved now but it's still not quite there yet; a far cry from the original claims of Lisa Su, though:



At the end of the day, though, none of this matters: as evidenced, AMD's current-gen GPUs are only matching equivalent 3+ year old Nvidia GPUs in RT, and that's not a good look no matter what factors are in play. Hopefully, with the rumours of RDNA 4 changing the RT approach, AMD will close this gap, as it's not good for anyone having one brand dominating in this area.
 
True, but I also don't think it's dramatically different.

Gaming:
RTX 4070: 201 Watts
RX 7800 XT: 250 Watts

This is board power. The RX 7800 XT has one more memory controller, two more memory ICs, and a 10-phase reference power design versus 6 phases on the 4070; all of that uses power, and the 7800 XT is on 5 nm versus 4 nm for the 4070.
And this is not for nothing: there are games where the 4070 falls off a bit more at 4K versus 1080p and 1440p than the 7800 XT does, because the higher memory bandwidth helps a little more at very high resolutions. The downside is that it costs power even when you don't need it.

The 7800 XT is a beefier GPU on a beefier PCB built on a less advanced node vs the 4070.

The RX 7800 XT is also 7% faster.

AMD at an architectural level is no less efficient than Nvidia, IMO. The 7900 XTX has the same sort of PCB, the same 384-bit IMC and the same 24 GB as the 4090; it's 80% of the performance of the 4090 for 80% of the power.


I don’t think you are right about this one.
 
Reference designs.
This is measured with a voltmeter at the 12 V rail, not with software.


Looking at TPU, this seems weird: in gaming it's what you show, yet the "maximum" reading for a 4070 is 197 W vs 252 W, "RT" is 187 W vs 250 W, and the "spikes (20 ms)" figure is 235 W vs 339 W?


What about the CPU power being higher with Nvidia? Where is this coming from?
 
Looking at TPU, this seems weird: in gaming it's what you show, yet the "maximum" reading for a 4070 is 197 W vs 252 W, "RT" is 187 W vs 250 W, and the "spikes (20 ms)" figure is 235 W vs 339 W?


What about the CPU power being higher with Nvidia? Where is this coming from?

197 watts vs 201 watts is margin of error; it's 2%. This is not proof of a conspiracy on TPU's part.

Nvidia measure power as "Total Graphics Power", that being 200 watts; AMD measure power as "Total Board Power", that being 260 watts. There is a crucial difference: Nvidia are measuring the graphics core, which is the traditional way of doing it, as what you're really communicating is the cooling needed for the core.
These days, however, the cooler isn't just cooling the core. At least in the case of the RX 7800 XT, the cooler is cooling the core, the power stages and the VRAM, and all of that is power that feeds into the cooler. I don't know to what extent the reference cooler of the 4070 covers the rest of the card, but I would think and hope that, just like AMD's, it is also cooling the power stages and the VRAM ICs.

But in any case, Nvidia are citing just the GPU die, excluding the memory ICs and power stages, while AMD are simply measuring the whole board, which even includes the RGB LEDs and fans...
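As a minimal sketch of why the two headline figures aren't directly comparable: the split of a ~260 W board figure below into core, VRAM, VRM losses and fans/RGB is purely illustrative (assumed proportions, not measured values), just to show what a "Total Board Power" number counts that a "Total Graphics Power" number doesn't:

```python
# Illustrative only: hypothetical breakdown of a ~260 W "Total Board Power"
# figure into its components. The proportions are assumptions for the sake
# of the example, not measured values.
board = {
    "gpu_core_w": 200,   # roughly what a "Total Graphics Power" spec would cover
    "vram_w":      30,
    "vrm_loss_w":  20,
    "fans_rgb_w":  10,
}
total_board_power = sum(board.values())
total_graphics_power = board["gpu_core_w"]

print(f"TBP (whole board): {total_board_power} W")
print(f"TGP (core only):   {total_graphics_power} W")
print(f"Difference:        {total_board_power - total_graphics_power} W "
      "comes from parts one vendor counts and the other doesn't")
```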
 
Looking at TPU, this seems weird: in gaming it's what you show, yet the "maximum" reading for a 4070 is 197 W vs 252 W, "RT" is 187 W vs 250 W, and the "spikes (20 ms)" figure is 235 W vs 339 W?


What about the CPU power being higher with Nvidia? Where is this coming from?
It also matters which part of a game and what settings you're testing. At times, path tracing can be less power hungry than raster, for instance.
 
I should add, I'm in no way saying TPU are under-reporting; they most definitely are not. They are measuring the power consumption at the board voltage rails, which is 100% accurate.

There is a reason they do this: software reporting like the MSI OSD is inherently inaccurate. On top of that, you also don't know if it's reading or reporting all sensors, or whether the sensor array is even complete enough to give you a full picture of power consumption.

At-source hardware power reporting with a voltmeter over the MSI OSD, every time...
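For anyone wondering what "measuring at the rails" actually amounts to: it's just voltage times current, summed over every supply rail feeding the card. A minimal sketch with made-up readings (the rail names and values are assumptions, not TPU's data):

```python
# P = V * I, summed over every rail that feeds the card (PCIe slot + 8-pin
# connectors). Readings below are made-up example values, not real data.
rails = [
    {"name": "PCIe slot 12V", "volts": 12.05, "amps": 4.2},
    {"name": "8-pin #1 12V",  "volts": 12.10, "amps": 9.8},
    {"name": "8-pin #2 12V",  "volts": 12.08, "amps": 8.5},
]
total_w = sum(r["volts"] * r["amps"] for r in rails)
for r in rails:
    print(f"{r['name']:14s} {r['volts'] * r['amps']:6.1f} W")
print(f"{'Total board':14s} {total_w:6.1f} W")
```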
 
I should add, I'm in no way saying TPU are under-reporting; they most definitely are not. They are measuring the power consumption at the board voltage rails, which is 100% accurate.

There is a reason they do this: software reporting like the MSI OSD is inherently inaccurate. On top of that, you also don't know if it's reading or reporting all sensors, or whether the sensor array is even complete enough to give you a full picture of power consumption.

At-source hardware power reporting with a voltmeter over the MSI OSD, every time...

I haven't really read into it, but I get what you are saying, although I don't think MSI AB is that inaccurate, as its figures are somewhat in line with what TPU show, if we assume margin of error and the variance of different testing scenarios.

What Calin mentions is very true though, in the sense that some scenes/games can be considerably more demanding, or hardly demanding at all. IIRC, Metro EE is the most demanding game in terms of guzzling power and really pushing the temps/fan speeds on my 3080.
 
I haven't really read into it, but I get what you are saying, although I don't think MSI AB is that inaccurate, as its figures are somewhat in line with what TPU show, if we assume margin of error and the variance of different testing scenarios.

What Calin mentions is very true though, in the sense that some scenes/games can be considerably more demanding, or hardly demanding at all. IIRC, Metro EE is the most demanding game in terms of guzzling power and really pushing the temps/fan speeds on my 3080.

I have no argument with what @Calin Banc said.

I know what you mean about Metro. Normally I'm clocking an average of 2.7 GHz with a board power of 270 watts (I've added 10% power to my GPU, or board; I don't know whether AMD increase the power to the GPU or to the whole board when you move that slider, I suspect it's the whole board), but in Metro it's clocking a lot lower, around 2.4 GHz, and sitting about 10 watts higher than usual, usual being about 260 watts. It's pushing the card hard, right up to the set board power limit; normally AMD like to run about 10 watts under that. TPU show that, and it's also my experience.
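For reference, the power-slider maths is just a percentage on top of the stock board power limit. A tiny sketch, assuming a 263 W stock limit purely for illustration:

```python
# How a +10% power-limit slider moves the cap. The 263 W stock limit is an
# assumed reference figure, purely for illustration.
stock_limit_w = 263
new_limit_w = stock_limit_w * 1.10
# the post above notes AMD cards tend to sit roughly 10 W under the cap
typical_sustained_w = new_limit_w - 10
print(f"new limit ~= {new_limit_w:.0f} W, typical sustained draw ~= {typical_sustained_w:.0f} W")
```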
 
Not really a valid argument given I preferred to buy a 9800Pro, X800GTO, X1950Pro, HD3870, HD4850, HD5850 back in the day because as you say, AMD clearly had better GPUs. Yes, yes they did. I bought them. I don't buy them now, see what I mean?

Do you identify as "people", though? If not, then you're in a minority (same in my case, as I had quite a few ATI/AMD cards back in the day, clearly better models than NVIDIA's at the time). What a minority does is largely irrelevant in the business world, though. Again, people have spoken and we are where we are because of that.

That also makes your second point not really applicable either.

My second point is my first point (as in, both are the same point), though. The sales numbers of both companies, past and present, do not lie.

So yes, I would much rather both teams make GPUs that compete against one another(...)
Maybe in some more distant future, but it's unlikely to happen anytime soon in the high end. The mid-range still seems to be a place of competition, though.
 
Do you identify as "people", though? If not, then you're in a minority (same in my case, as I had quite a few ATI/AMD cards back in the day, clearly better models than NVIDIA's at the time). What a minority does is largely irrelevant in the business world, though. Again, people have spoken and we are where we are because of that.



My second point is my first point (as in, both are the same point), though. The sales numbers of both companies, past and present, do not lie.


Maybe in some more distant future, but it's unlikely to happen anytime soon in the high end. The mid-range still seems to be a place of competition, though.
I think we largely agree; we both did the same thing back then because we knew the AMD card was the best bang for buck. And you're right, those sales figures don't lie. I just wish it were like back in the 9700 Pro days, where if you were in the know about what was good, you got a real bargain. Also back then you could unlock cards with video BIOS flashes that enabled extra pixel pipes and higher vcores.

Nothing like that now :(
 
There is one more critical thing that people need to understand.

AMD have hardware thread scheduling: Asynchronous Compute Engines (ACEs), four of them on my GPU. They are effectively like mini CPU cores on the GPU, and they use power.

Nvidia don't have those; they use software-based thread scheduling, in other words they use your CPU to do it. That also costs power, it just isn't counted with the GPU power, unlike AMD's ACE units.
AMD's GPUs also use more power outside the cores, memory etc., on the Infinity Fabric itself. I remember an old article on AnandTech about the Ryzen 7 2700X, where the IF used close to 28 W under full stress. Even before the 7000 series was released, that was known to cost quite a significant portion of the power budget, which could also explain why the 7000 series isn't as power efficient as NVIDIA's counterparts (aside from the process differences).

Good point about NVIDIA using the CPU's power for scheduling - I hadn't thought about it like that before.
 
It also matters which part of a game and what settings you're testing. At times, path tracing can be less power hungry than raster, for instance.
PT still uses the CPU a lot for calculations, and then the GPU has to wait for the CPU to finish, which lowers the power use of said GPU. That's much rarer in raster with a sensibly fast CPU.
 
I think we largely agree; we both did the same thing back then because we knew the AMD card was the best bang for buck. And you're right, those sales figures don't lie. I just wish it were like back in the 9700 Pro days, where if you were in the know about what was good, you got a real bargain. Also back then you could unlock cards with video BIOS flashes that enabled extra pixel pipes and higher vcores.

Nothing like that now :(
Good old times, eh? :) Sadly, corporations learned their lessons and now it's a race to the top - with pricing, whilst everything else is locked down.
 
I haven't really read into it, but I get what you are saying, although I don't think MSI AB is that inaccurate, as its figures are somewhat in line with what TPU show, if we assume margin of error and the variance of different testing scenarios.

What Calin mentions is very true though, in the sense that some scenes/games can be considerably more demanding, or hardly demanding at all. IIRC, Metro EE is the most demanding game in terms of guzzling power and really pushing the temps/fan speeds on my 3080.

I'd say the likes of MSI AB or GPU-Z are relatively good. You can get a more comprehensive view of the power draw in GPU-Z, including the board draw, which seems similar to MSI AB (although I'm not sure whether the OSD I've set up is from MSI or from HWiNFO).
Anyway, this is my card with a TDP of 320 W. The most I've seen in GPU-Z, having left it in the background to register the highest value, is around 320.9 W after playing for a while. That's simply a spike; at most it tends to stay around 307-308 W when heavily loaded, or just a bit over that otherwise. That seems to be about the same as presented by TPU, even for maximum power. Ergo, the software is pretty decent, and I don't see why it wouldn't be; in GPU-Z you can set the polling interval to 0.1 seconds if you're crazy about these things... After all, let's not forget one thing: the card "auto-tunes" its power and frequencies based on such values, so they need to be pretty accurate. And it does that pretty well.
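On the spike-versus-sustained point, logging the sensor to a file and summarising it tells you much more than a single max value. A minimal sketch, assuming a CSV-style sensor log with a "Board Power Draw [W]" column (the file name and column header here are assumptions, e.g. the kind of log GPU-Z can write):

```python
# Summarise a sensor log: average vs sustained vs one-off spikes.
# Assumes a CSV-style log with a "Board Power Draw [W]" column; the file
# name and column header are assumptions for this sketch.
import csv

powers = []
with open("gpu_sensor_log.csv", newline="") as f:
    for row in csv.DictReader(f, skipinitialspace=True):
        try:
            powers.append(float(row["Board Power Draw [W]"]))
        except (KeyError, ValueError):
            continue  # skip malformed rows

if powers:
    powers.sort()
    avg = sum(powers) / len(powers)
    p99 = powers[int(0.99 * (len(powers) - 1))]
    print(f"samples: {len(powers)}  avg: {avg:.1f} W  "
          f"99th percentile: {p99:.1f} W  max: {powers[-1]:.1f} W")
```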

As a side note, as it gets hotter, I wouldn't want to be sitting with a 4090 plus a heavily clocked, power-hungry CPU pulling another 200 W or so just by itself.
 
PT still uses the CPU a lot for calculations, and then the GPU has to wait for the CPU to finish, which lowers the power use of said GPU. That's much rarer in raster with a sensibly fast CPU.
Yeah, it depends on where exactly the limitation comes from. However, with the 4xxx cards, from what I've seen, you can save power by limiting your fps or just the power draw itself without much performance loss.

Quick example: about 55% power saved for a 4-5% loss. This is by default, no custom tuning. Oh, and a bit of saving on the CPU, of course.
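Putting that in fps-per-watt terms, here is a minimal sketch in the spirit of those numbers (roughly 55% power saved for a small fps loss); the absolute fps and wattage figures are assumptions, not measurements:

```python
# Rough fps-per-watt comparison for a power-capped card, using illustrative
# numbers in the spirit of the post (~55% power saved for a small fps loss).
# The absolute fps and wattage figures are assumptions, not measurements.
stock  = {"fps": 100.0, "power_w": 320.0}
capped = {"fps": 95.0,  "power_w": 145.0}   # ~5% fps loss, ~55% less power

for name, s in (("stock", stock), ("power-capped", capped)):
    print(f"{name:12s} {s['fps']:.0f} fps @ {s['power_w']:.0f} W "
          f"-> {s['fps'] / s['power_w']:.2f} fps/W")
```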

 