
AMD really need to fix this.

It's been an issue since they abandoned TeraScale/VLIW (the 5870/6870 days) and moved to GCN and then RDNA. I don't know the exact technical reason, but the VLIW cards had idle power draw of around 2 watts even on the higher-end stuff; when they switched to GCN that idle power efficiency went out the window.

I guess it comes down to manpower, budget and time at the end of the day. If AMD is late to market they suffer financially for it (think first-gen Vega, with its HBM2 memory, large die and 300 watt power draw, ending up battling a GTX 1080; it was likely sold with a tiny margin).
 
Sadly another reason why I'd not consider another AMD GPU. I bought their flagships for years, more recently the 390X, Fury, Vega 64 and Radeon VII. All came with high hopes of driver improvements that would eliminate the high power draw and improve performance. That sadly never fully materialised.

Much safer option is to buy Nvidia. You get 100% of the performance on launch day, stability and much lower idle/multi-monitor/media playback power consumption. Oh, and the ability to turn on RT in games with playable FPS at 4K.
 
In non-graphics-intensive games use Radeon Chill - has the reviewer not heard of this? It's been around since 2016.

Even Nvidia cards boost too high in older games.

On my RTX 3060 Ti, I reduce the power target, use a custom curve and use a framerate cap to cut power consumption by a decent amount.

Seems to be a slow news day in the tech press ATM. Weird that this concern with power consumption never existed when the RX 6000 series had better performance/watt than the RTX 3000 series.
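[Editor's note: for anyone wondering what "reduce the power target" looks like outside of Afterburner, here's a minimal sketch using NVML's Python bindings (the nvidia-ml-py package). It only covers the power-limit and telemetry side - curves and framerate caps live in the driver or tools like RTSS - and the set call needs admin/root rights, so it is commented out. The 75% figure is just an example, not the poster's actual setting.]

```python
import pynvml

# Minimal sketch: query the card's power draw/limit via NVML and (optionally)
# lower the limit - roughly what dragging the power target slider does.
pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

    # NVML reports everything in milliwatts.
    draw_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000
    limit_w = pynvml.nvmlDeviceGetPowerManagementLimit(gpu) / 1000
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
    print(f"drawing {draw_w:.0f} W, limit {limit_w:.0f} W "
          f"(board allows {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")

    # Example only: cap at ~75% of the current limit, clamped to what the board allows.
    target_mw = max(min_mw, int(limit_w * 1000 * 0.75))
    # pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target_mw)  # needs admin/root
finally:
    pynvml.nvmlShutdown()
```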
 
Sadly another reason why I'd not consider another AMD GPU. I bought their flagships for years, more recently the 390X, Fury, Vega 64 and Radeon VII. All came with high hopes of driver improvements that would eliminate the high power draw and improve performance. That sadly never fully materialised.

Much safer option is to buy Nvidia. You get 100% of the performance on launch day, stability and much lower idle/multi-monitor/media playback power consumption. Oh, and the ability to turn on RT in games with playable FPS at 4K.
To be fair, Vega was easily undervolted to be fairly efficient (had mine for around 3 years), though outside of undervolting its efficiency was woeful.
 
Honestly I can't see them *fixing this until they release next-gen cards. I guess this is news, and/or people like to point to it as a fail, because AMD said chiplets would reduce power draw but the opposite has happened. People's expectations weren't met, and that's on AMD for setting those expectations in the first place.

*I doubt they'll fix it, it will just be better.

I suspect the reason they can't simply fix it with a driver/firmware update is that, much like CPUs with chiplets, one of the big power draws is the Infinity Fabric connecting all the chiplets. I don't know enough about how RDNA 3's chiplets work in that respect, but I suspect they can't downclock or shut down parts of the interconnect - that they can't, for example, shut down half of the MCDs to save power. Maybe it's something they can address with the 8000 series, but I don't know enough about the design to know how downclocking or shutting down a load of the MCDs, along with the cache and memory attached to each of those, would affect things.

I guess it could be done, but I think something like that would have to be designed into the architecture itself, as the main compute die has to know what to do when parts of the cache and memory get switched on/off.
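[Editor's note: to make the reasoning above concrete, here is the arithmetic with completely invented numbers - none of these are measured figures for any real card. The point is only that an interconnect and memory dies which cannot be gated at light load put a hard floor under board power, however little the compute die is doing.]

```python
# Purely illustrative, invented numbers - not measurements of any real card.
GCD_LIGHT_LOAD_W = 40   # hypothetical compute die power at a capped 60fps
FABRIC_W = 30           # hypothetical always-on interconnect power
MCD_W = 8               # hypothetical power per memory/cache die
NUM_MCDS = 6

no_gating = GCD_LIGHT_LOAD_W + FABRIC_W + NUM_MCDS * MCD_W
half_gated = GCD_LIGHT_LOAD_W + FABRIC_W / 2 + (NUM_MCDS // 2) * MCD_W

print(f"no gating:       {no_gating} W")       # ~118 W floor even in an easy game
print(f"half MCDs gated: {half_gated:.0f} W")  # ~79 W if half could sleep
```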
 
In non-graphics-intensive games use Radeon Chill - has the reviewer not heard of this? It's been around since 2016.

Even Nvidia cards boost too high in older games.

On my RTX 3060 Ti, I reduce the power target, use a custom curve and use a framerate cap to cut power consumption by a decent amount.

Seems to be a slow news day in the tech press ATM. Weird that this concern with power consumption never existed when the RX 6000 series had better performance/watt than the RTX 3000 series.

What's Radeon Chill, and if it's so great why hasn't AMD just built it into its driver so it does this automatically?

Edit: oh, it's a dynamic frame rate limiter. Why would someone use it in an esports game though? No matter how small it is, it must add extra input lag, because you cannot instantly transition between, say, 30fps to save power and 120fps because you flicked 180 degrees to scope an enemy - part of your frames were generated at the lower fps, so it potentially takes longer to spot the enemy and react than if you were rendering at 120fps the whole time.
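[Editor's note: here is a toy sketch of how a Chill-style limiter behaves - the concept only, not AMD's actual implementation, and the fps values and frame counts are arbitrary. It also shows where the latency worry comes from: the frame already in flight when input arrives was still budgeted at the low cap.]

```python
import time

# Toy Chill-style dynamic frame cap: render slowly while the player is idle,
# jump to the high cap the moment input arrives.
FPS_IDLE, FPS_ACTIVE = 60, 144

def frame_budget(input_this_frame: bool, idle_frames: int) -> tuple[float, int]:
    """Return (seconds to spend on this frame, updated idle-frame counter)."""
    if input_this_frame:
        return 1 / FPS_ACTIVE, 0
    idle_frames += 1
    # Ease down to the idle cap after ~30 quiet frames rather than instantly.
    fps = FPS_ACTIVE if idle_frames < 30 else FPS_IDLE
    return 1 / fps, idle_frames

def run(frames_with_input: set[int], total_frames: int = 120) -> None:
    idle = 0
    for frame in range(total_frames):
        budget, idle = frame_budget(frame in frames_with_input, idle)
        render_start = time.perf_counter()
        # ... render the frame here ...
        leftover = budget - (time.perf_counter() - render_start)
        if leftover > 0:
            time.sleep(leftover)  # this sleep is where the power saving comes from

run(frames_with_input={5, 80, 81, 82})
```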
 
In non-graphics-intensive games use Radeon Chill - has the reviewer not heard of this? It's been around since 2016.

Even Nvidia cards boost too high in older games.

On my RTX 3060 Ti, I reduce the power target, use a custom curve and use a framerate cap to cut power consumption by a decent amount.

Seems to be a slow news day in the tech press ATM. Weird that this concern with power consumption never existed when the RX 6000 series had better performance/watt than the RTX 3000 series.
Radeon Chill isn't the solution to this problem. If the 7000 series is boosting more than it needs to, or drawing more than it needs to, then the problem is with the 7000 series - the video notes the 6950 and the 3090 were pretty much matched in the same situations. I suspect it's a side effect of moving to chiplets and some teething issues associated with it, tbh.

I wonder if they tested with FreeSync enabled. I remember my 580 being clock happy when FreeSync was on, to the extent that in older games I'd see it just max out its clock speed, and even in video playback I'd see the clocks go over 1GHz, whereas with FreeSync off it would downclock to 500MHz. It made the difference between it having to turn its fans on and off, or just never turning them on.

Radeon Chill, imho, is a nice feature, but I feel there should be an advanced setting allowing a curve of fps ranges rather than just a min/max.
 
In non-graphics-intensive games use Radeon Chill - has the reviewer not heard of this? It's been around since 2016.

Even Nvidia cards boost too high in older games.

On my RTX 3060 Ti, I reduce the power target, use a custom curve and use a framerate cap to cut power consumption by a decent amount.

Seems to be a slow news day in the tech press ATM. Weird that this concern with power consumption never existed when the RX 6000 series had better performance/watt than the RTX 3000 series.
You can only turn on Nvidia features to make their cards better; you are not allowed to do that with AMD...
 
You can only turn on Nvidia features to make their cards better; you are not allowed to do that with AMD...
Yep.

What's Radeon Chill, and if it's so great why hasn't AMD just built it into its driver so it does this automatically?

Edit: oh, it's a dynamic frame rate limiter. Why would someone use it in an esports game though? No matter how small it is, it must add extra input lag, because you cannot instantly transition between, say, 30fps to save power and 120fps because you flicked 180 degrees to scope an enemy - part of your frames were generated at the lower fps, so it potentially takes longer to spot the enemy and react than if you were rendering at 120fps the whole time.

I literally use a combination of custom curves, power limits and framerate limits on my RTX 3060 Ti in many games. I have saved up to 50W IIRC.

Radeon Chill isn't the solution to this problem. If the 7000 series is boosting more than it needs to, or drawing more than it needs to, then the problem is with the 7000 series - the video notes the 6950 and the 3090 were pretty much matched in the same situations. I suspect it's a side effect of moving to chiplets and some teething issues associated with it, tbh.

I wonder if they tested with FreeSync enabled. I remember my 580 being clock happy when FreeSync was on, to the extent that in older games I'd see it just max out its clock speed, and even in video playback I'd see the clocks go over 1GHz, whereas with FreeSync off it would downclock to 500MHz. It made the difference between it having to turn its fans on and off, or just never turning them on.

Radeon Chill, imho, is a nice feature, but I feel there should be an advanced setting allowing a curve of fps ranges rather than just a min/max.

Some of the comments here seem to want to imply it's all AMD cards forever, not just the RX 7900 series with its chiplet design. The video really needs a clearer title.

Although I am not surprised, after seeing the issues with Zen 1, and with Navi 32 being AWOL.

However, I find my RTX 3060 Ti boosts too high in less intensive games. I literally power limited the card by 50% in Fallout 4 and it makes no difference to framerates, but the fan is much quieter. Without the limit it was cycling up and down all the time. Since it is in an NCase M1 right next to me on my desk, that was definitely noticeable.
 
You can only turn on Nvidia features to make their cards better; you are not allowed to do that with AMD...
Neither card in that video has any technology limiting its fps to match the other card in order to limit power usage. It's an apples-to-apples comparison; throwing in Radeon Chill would not be an apples-to-apples comparison.

If someone suggested locking the Nvidia card to 60fps while running the AMD card at full throttle, people would say it's an unfair comparison - because it would be.

Looking at this point https://youtu.be/HznATcpWldo?t=161 you have to ask: what is Nvidia doing that allows it to run at the same performance as the AMD card while using 150W less? Or, on the flip side, what is AMD not doing? Why is the AMD card's utilisation so much higher? Is it an architectural flaw? Is it a software flaw? Is Nvidia doing some magic on either of those two fronts?

Some of the comments here seem to want to imply it's all AMD cards forever, not just the RX 7900 series with its chiplet design. The video really needs a clearer title.

Although I am not surprised, after seeing the issues with Zen 1.

However, I find my RTX 3060 Ti boosts too high in less intensive games. I literally power limited the card by 50% in Fallout 4 and it makes no difference to framerates, but the fan is much quieter. Without the limit it was cycling up and down all the time. Since it is in an NCase M1 right next to me on my desk, that was definitely noticeable.
Those posters obviously didn't watch the video and should be dismissed. It's an interesting issue IMHO, as it's probably not something that's tested much; AFAIK most reviews just fire up a demanding game and look at power usage, or run a benchmark. This scenario would be missed by that kind of testing, and it's something that really shouldn't be a thing TBH.

I mentioned my 580, and as I said, it seemed to be clock happy with FreeSync. I did notice a few posts on Reddit at the time mentioning the same thing, but it wasn't as bad as the example in the video. Partly because I don't think its power usage was as aggressive, and secondly because it had a smaller power budget - so even if it was more power hungry than its counterpart, it'd never be 200W different. Though as I said, the video mentions the 6950 behaving similarly to the 3090, so I'd assume it's a 7000 series issue, just like the more common high idle power draw, which they seem to be addressing, albeit at a glacial pace.

I'm fairly sure my 3060 Ti underclocks in FO4. It's been a while since I've played it, though, and from memory I didn't add graphical mods. It's already underclocked to 1860MHz with a 170W max power target, and I lock it to 60fps because I don't trust the game not to break, but I don't think it was even getting to 1860MHz when I played. It downclocks in Anno 1800, but then I limit that to 48fps, so it should downclock. Though if I'm using borderless fullscreen I don't think it does, or maybe not as much, whereas exclusive fullscreen does. Not sure I've noticed anything elsewhere, but then I've not really been looking. Does it run well and is it noisy are the two things I generally pick up on, maybe also whether it's pumping out a lot of heat.

My 580 would downclock to ~1100/700MHz when playing Black Mesa, except doing so caused stuttering. It was the only game I had to use ClockBlocker on to force it back up to full clocks and fix the stuttering, annoyingly boosting my power usage. If 700MHz had been smooth it would have been absolutely fine, because the fps was fine.
 
Those posters obviously didn't watch the video and should be dismissed. It's an interesting issue IMHO, as it's probably not something that's tested much; AFAIK most reviews just fire up a demanding game and look at power usage, or run a benchmark. This scenario would be missed by that kind of testing, and it's something that really shouldn't be a thing TBH.

I mentioned my 580, and as I said, it seemed to be clock happy with FreeSync. I did notice a few posts on Reddit at the time mentioning the same thing, but it wasn't as bad as the example in the video. Partly because I don't think its power usage was as aggressive, and secondly because it had a smaller power budget - so even if it was more power hungry than its counterpart, it'd never be 200W different. Though as I said, the video mentions the 6950 behaving similarly to the 3090, so I'd assume it's a 7000 series issue, just like the more common high idle power draw, which they seem to be addressing, albeit at a glacial pace.

I'm fairly sure my 3060 Ti underclocks in FO4. It's been a while since I've played it, though, and from memory I didn't add graphical mods. It's already underclocked to 1860MHz with a 170W max power target, and I lock it to 60fps because I don't trust the game not to break, but I don't think it was even getting to 1860MHz when I played. It downclocks in Anno 1800, but then I limit that to 48fps, so it should downclock. Though if I'm using borderless fullscreen I don't think it does, or maybe not as much, whereas exclusive fullscreen does. Not sure I've noticed anything elsewhere, but then I've not really been looking. Does it run well and is it noisy are the two things I generally pick up on, maybe also whether it's pumping out a lot of heat.

My 580 would downclock to ~1100/700MHz when playing Black Mesa, except doing so caused stuttering. It was the only game I had to use ClockBlocker on to force it back up to full clocks and fix the stuttering, annoyingly boosting my power usage. If 700MHz had been smooth it would have been absolutely fine, because the fps was fine.
My current playthrough is modded, so it is definitely more of a load than normal. It does downclock, but it's quite clear I can drop clockspeeds even further with tweaking and it has no effect on framerates. I use a 12 litre NCase M1 case, which sits on the desk next to me, so it is obvious when the fans do ramp up! :p

Before tweaking it would cycle up and down (and so would the fan), whereas now it doesn't really ramp up as much during the game. Even with just a custom curve the power draw does drop, and once I power limited the dGPU the framerates seemed unaffected, although GPU utilisation ends up being higher if you look at the stats in that state. I tested some more demanding games, and there the reduced power target etc. is definitely noticeable as reduced framerates; even then, you can tweak power, curves etc. and see similar framerates.
 
@Bencher also mentioned this before about Ryzen CPU power draw in low-usage scenarios being much higher than Intel's in the same low-usage scenarios. Just looking at peak power draw in Cinebench doesn't tell the whole story.
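[Editor's note: if you want to check that on your own box rather than trusting a single peak number, averaging package power over a quiet window is straightforward on Linux via the RAPL counters under powercap - a rough sketch below. The path shown is the common Intel one and may differ on other systems, and newer kernels may require root to read it.]

```python
import time
from pathlib import Path

# Rough sketch: average CPU package power over a window by sampling the RAPL
# energy counter exposed under powercap. A peak figure from a Cinebench run
# says nothing about what the chip burns while near-idle.
RAPL = Path("/sys/class/powercap/intel-rapl:0/energy_uj")  # package 0

def read_uj() -> int:
    return int(RAPL.read_text())

def average_power(seconds: float = 30.0) -> float:
    start_uj, start_t = read_uj(), time.monotonic()
    time.sleep(seconds)
    end_uj, end_t = read_uj(), time.monotonic()
    # The counter wraps eventually; ignored here for a short window.
    return (end_uj - start_uj) / 1e6 / (end_t - start_t)

if __name__ == "__main__":
    print(f"average package power: {average_power():.1f} W")
```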
 
TechPowerUp also tests 60fps with V-Sync, and the 4080 is better there. Messing around in the drivers or third-party apps shouldn't be something the user is required to do.
 
Sadly another reason why I'd not consider another AMD GPU. I bought their flagships for years, more recently the 390X, Fury, Vega 64 and Radeon VII. All came with high hopes of driver improvements that would eliminate the high power draw and improve performance. That sadly never fully materialised.

Much safer option is to buy Nvidia. You get 100% of the performance on launch day, stability and much lower idle/multi-monitor/media playback power consumption. Oh, and the ability to turn on RT in games with playable FPS at 4K.

That's not true, is it?

Remember, it's us you're talking to, dude.

Burning connectors, 3080s that were boosting beyond their means and gawd knows what else disagree with you. Oh, and 3.5GB-gate.

Oddly enough, of all the GPUs you mention, yes, a lot of them were turds. But to say Nvidia is a better bet is nuts.

The 7000 series is a new thing. As such, yeah, there are creases that need ironing out. Kinda like the creases with Ampere and, oh, 2080 Ti space invader mode.

I’m pretty sure I’ve made my point and don’t need to continue.
 
Neither card in that video has any technology limiting its fps to match the other card in order to limit power usage. It's an apples-to-apples comparison; throwing in Radeon Chill would not be an apples-to-apples comparison.

If someone suggested locking the Nvidia card to 60fps while running the AMD card at full throttle, people would say it's an unfair comparison - because it would be.
Oh good, can we stop talking about DLSS then?
 