The AMD Driver Thread

I'm on the EXPO profile for my memory, but I was doing that before this started. One day it's fine, the next day it crashes, and only ever in a game, nothing else. It seems to be Fortnite about 75% of the time, plus a couple of crashes in Starfield.
Starfield might not be the best test; I've heard people say they get odd crashes there. But a good starting point here would be to rule out the CPU and memory by running Prime95 for a wee bit. If that's OK, you can focus on the GPU next. Confirm temps, etc. That sort of thing.
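
If you want a record while Prime95 runs, here's a minimal sketch (assuming Linux with psutil reading lm-sensors data; sensor labels vary by board) that logs the hottest CPU sensor so you can line crashes up with temps:

```python
# Minimal temp logger to run alongside Prime95 (or any stress test).
# Assumes Linux with lm-sensors data exposed through psutil; sensor
# labels vary by motherboard, so this just takes the hottest reading.
import time
import psutil

def log_max_temp(duration_s=600, interval_s=5):
    end = time.time() + duration_s
    while time.time() < end:
        temps = psutil.sensors_temperatures()
        readings = [t.current for entries in temps.values() for t in entries]
        if readings:
            print(f"hottest sensor: {max(readings):.1f} C")
        time.sleep(interval_s)

if __name__ == "__main__":
    log_max_temp()
```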
 
I'm running an older Nvidia card on the same monitor, and even with high refresh rates the memory doesn't have to clock up. Well, it does if I run at 4K@144Hz, but it's fine at 120Hz. With my AMD card I have to drop down to 60Hz to get the memory to downclock.
Bear in mind that graphics memory technology influences this behaviour, e.g. Vega10 with HBM2 can idle relatively low with practically any display config. Not sure which older Nvidia (or even current AMD) GPU you're using as reference points.

With that said, a single 2160p display should be able to achieve low idle at 144Hz. Give us the display model, GPU and connectivity method (DP / HDMI+ version) and we can take a look.
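
While you're gathering that, if you happen to be on Linux you can watch the reclocking directly, since the amdgpu driver publishes the memory DPM table in sysfs. A rough sketch (card0 is an assumption; adjust for your system):

```python
# Reads amdgpu's memory-clock DPM table; the active state is the line
# marked with '*'. Linux-only, and "card0" is an assumption.
from pathlib import Path

def current_mclk(card="card0"):
    table = Path(f"/sys/class/drm/{card}/device/pp_dpm_mclk").read_text()
    for line in table.splitlines():
        if line.strip().endswith("*"):
            return line.strip()
    return None

print(current_mclk())  # at a proper idle this should be the lowest state
```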
 
Currently running an Nvidia 1080 Ti and an AMD 7900 XT.
I'm using two Philips Gaming 279M1RV displays. The 7900 XT is connected to one via HDMI 2.1 and the other via DP 1.4 (I thought it was 1.4a but the website says 1.4). The 1080 Ti is connected to each monitor with an HDMI-to-DP cable.

I realise for AMD the fact there are 2 monitors may be the issue (Nvidia is fine with it though).
 
The fact that there are two isn't an issue on its own.

They're two identical displays, so VBI is likely not the culprit either, but you're still driving two high-res, high-refresh displays on an ASIC with 24 GiB of GDDR6 (which behaves quite differently to GDDR5X).
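
For anyone wondering why VBI gets brought up: the memory controller can only reclock during vertical blanking without glitching the scanout, and at 144Hz that window is tiny. Back-of-the-envelope (the line counts below are assumed CVT-RB-style figures, not this monitor's actual EDID timings):

```python
# How long is the vertical blanking window at 2160p144?
v_active = 2160    # visible lines
v_total = 2222     # total lines incl. blanking -- assumed CVT-RB-ish figure
refresh_hz = 144

frame_us = 1e6 / refresh_hz                          # ~6944 us per frame
vblank_us = frame_us * (v_total - v_active) / v_total

print(f"frame: {frame_us:.0f} us, vblank: {vblank_us:.0f} us")  # ~194 us
```

Roughly 194 µs to start and settle a VRAM reclock, and with two displays the blanking windows rarely line up, so the driver often just pins the memory clock high.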
 
Thanks for replying, I do appreciate it. Not going to lie, it is disappointing to think that there might not ever be a solution and that I might hit this again in the future until we move off GDDR6 (presumably regardless of vendor).
 
It's a good point, whatever happened to HBM? Wasn't that supposed to be the next big thing a while back?

I know HBM3 is being used on Hopper in the AI space. The AMD Instinct cards also make use of HBM.

It never really had a good outing on PC. The Fury X was the poster child for it, and it seemed like they were having production issues, as the Fury X was never really in stock anywhere for the lifetime of the product, though weirdly the Nano and regular Fury weren't that hard to get hold of.

It reappeared on Vega in 2017 with HBM2, and that's the last we've seen of it on the consumer side.

Kinda odd AMD dropped it so suddenly, as it had been in the works there since 2008. Though the conversation about chiplets started at AMD around 2016, going by that Gamers Nexus video on RDNA 3 with one of the engineers from AMD. Maybe having chiplets and HBM together just wasn't feasible, so they had to use GDDR again. Looking at an RDNA 3 die, it basically looks like an HBM-enabled GPU, with the MCDs surrounding the GCD, so maybe squeezing HBM stacks into that space as well just wasn't possible?
 
Currently running an Nvidia 1080 Ti and an AMD 7900 XT.
I'm using two Philips Gaming 279M1RV displays. The 7900 XT is connected to one via HDMI 2.1 and the other via DP 1.4 (I thought it was 1.4a but the website says 1.4). The 1080 Ti is connected to each monitor with an HDMI-to-DP cable.

I realise for AMD the fact there are 2 monitors may be the issue (Nvidia is fine with it though).
It might be the cables. My 7900 XTX uses 60W at idle with two monitors connected over DP. That's already too high, but it goes much higher if one or both monitors are connected over HDMI.
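
If anyone on Linux wants to test the cable theory, amdgpu exposes board power through hwmon, so you can compare idle draw before and after swapping cables. Rough sketch (card0 is an assumption, and the sensor node name varies by ASIC):

```python
# Reads the GPU board power sensor (reported in microwatts) on
# Linux/amdgpu. "card0" is an assumption; the node is power1_average
# on older ASICs and power1_input on some newer ones.
from pathlib import Path

def board_power_w(card="card0"):
    hwmon = next(Path(f"/sys/class/drm/{card}/device/hwmon").glob("hwmon*"))
    for node in ("power1_average", "power1_input"):
        f = hwmon / node
        if f.exists():
            return int(f.read_text()) / 1e6  # microwatts -> watts
    return None

print("board power (W):", board_power_w())
```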


It's a good point, whatever happened to HBM? Wasn't that supposed to be the next big thing a while back?
Cost. However, with GPU prices now being so high that AMD/Nvidia are undoubtedly making plenty of profit, I would like to see a return to HBM so we get better value for our excessive quantities of money.
 

I can't see how AMD could bring it back with the chiplet setup; the die area is already incredibly busy with the GCD and multiple MCDs. Trying to shoehorn HBM in would probably make it a nightmare to manufacture.
 
You'll want a bit of overhead... Unless it's just a sensor/software issue, the highest TBP I've seen from my 7900 XT Pulse is 453W (just the card!) in HWMonitor.

That was in RDR2 with everything cranked (incl. 4x MSAA); it's usually around 350W in most games when not using vsync, from what I've noticed so far anyway.

I'll keep an eye on it, but it seems fine so far. The PSU can handle spikes well above its 650W rating. It's only running a 5600X, one NVMe, an AIO, the mobo and a couple of case fans.
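
For a rough sanity check on the headroom (the platform figure below is a ballpark guess for that parts list, not a measurement):

```python
# Crude PSU headroom estimate for the build above.
gpu_peak_w = 453     # worst TBP reported in HWMonitor above
platform_w = 130     # assumed ballpark: 5600X + NVMe + AIO + mobo + fans
psu_rating_w = 650

load_w = gpu_peak_w + platform_w
print(f"peak draw ~{load_w} W ({100 * load_w / psu_rating_w:.0f}% of rating)")
# ~90% at worst case: workable if the PSU handles transients, but tight.
```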
 