Serious design flaw on certain 4870/4890s

Old news. If you read the thread, only OCCT can cause the lockups, and in some rare cases FurMark. The card simply can't handle the current that gets pulled through it when running things like OCCT.

No one has been able to replicate this problem in any game since no game comes close to putting an 85A+ load on the core.

I actually came across this problem before that thread was created, when using OCCT with my card overvolted and heavily overclocked. Since it was the only software that triggered it, I ignored it; it isn't really a sign of artifacting or anything.
 
No one has been able to replicate this problem in any game since no game comes close to putting an 85A+ load on the core.

Can someone explain how you can put that much amperage through a card when a PSU will only manage about 80A on a 1kW unit? Thanks :)
 
Not 80A on the 12V PSU rail, but 80A at the GPU core voltage, which on a 4890 for example would be around 1.3-1.4V.
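To put rough numbers on that (the ~1.3V core voltage and ~85% VRM efficiency below are my assumptions, not figures from the thread), a quick conversion back to the 12V side:

Code:
#include <stdio.h>

int main(void)
{
    /* Illustrative numbers: core voltage and VRM efficiency are assumed. */
    double core_amps  = 82.0;   /* current on the GPU side of the VRMs */
    double core_volts = 1.3;    /* rough 4890 core voltage */
    double efficiency = 0.85;   /* typical switching-regulator efficiency */

    double core_watts = core_amps * core_volts;           /* ~107 W */
    double rail_amps  = core_watts / (12.0 * efficiency); /* 12 V side */

    printf("%.0f W at the core = about %.1f A on the 12 V rail\n",
           core_watts, rail_amps);
    return 0;
}

So 80-odd amps at the core works out to only around 10A on the 12V rail; the scary current figure only exists on the low-voltage side of the VRMs.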
 
OCCT, at least, used some pretty non-standard methods when testing with CUDA on nVidia cards, so it wouldn't surprise me if the problem is something with OCCT rather than ATI.

That said, early batches of the 4870 did have a similar problem with the power regulation stage at a much lower threshold, but that was fixed in all subsequent runs - it would be surprising if they hadn't left a bit of extra headroom after running into the problem the first time.
 
Not 80A on the 12V PSU rail, but 80A at the GPU core voltage, which on a 4890 for example would be around 1.3-1.4V.

Ah OK, thanks. I was wondering why Everest was reporting 115A on my X1950 XTX when it was running OCCT; I just dismissed it as an erroneous reading.
 
Looks like the non-reference cards that use 4-phase power regulation are unaffected by the high current draw of the GPU. If you're in the market for a 4870 or 4890, the best thing to do is grab a non-reference design card that uses more VRM phases.

As for the reference design cards, technically ATI owes those owners a recall, due to insufficient power circuitry on the cards.
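The reasoning is just current sharing across phases. A quick sketch, assuming roughly 3 phases on the reference design (my assumption) and using the 82A trip point from the thread:

Code:
#include <stdio.h>

int main(void)
{
    /* 82 A trip point from the thread; phase counts are illustrative
       (reference design assumed ~3-phase, non-reference 4+). */
    double core_amps = 82.0;
    int phases[] = { 3, 4, 5 };

    for (int i = 0; i < 3; ++i)
        printf("%d phases -> %.1f A per phase\n",
               phases[i], core_amps / phases[i]);
    return 0;
}

Every extra phase knocks several amps off what any single phase has to carry, which is why the beefier non-reference boards don't trip.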
 
The issue seems pretty multifaceted. On the one hand, there are a few results which show the reference cards (including the 4870X2, by the looks of it) crashing at 82 amps, though not all reference cards seem to do that (judging by the thread). There's also one guy who claims he managed to get nearly 84 amps under load using a different test without crashing, but upon using the required test settings, the system locked up quite quickly. I'll test my own card tomorrow, I think.
 
Wouldn't bother me - as long as it works in games, why stress it so much?

Serves them right for doing silly things with it.

Next week "News flash, core i7 vulnerable to blowtorch test"
 
The issue seems pretty multifaceted. On the one hand, there are a few results which show the reference cards (including the 4870X2, by the looks of it) crashing at 82 amps, though not all reference cards seem to do that (judging by the thread). There's also one guy who claims he managed to get nearly 84 amps under load using a different test without crashing, but upon using the required test settings, the system locked up quite quickly. I'll test my own card tomorrow, I think.

Wouldn't surprise me at all - their original CUDA tests used some appalling methods that ended up locking up the card - not nVidia's fault at all...
 
Hmm, a hardware design flaw in an electronic device brought to light by some software.

I don't like the sound of that. What if in, say, 6 months' or a year's time some game actually uses the same amount of power when running?

It's defo going to be a problem then, isn't it? (People's cards might be out of warranty.)

The ATI hardware design engineers should have given a bit of thought to having a slightly bigger buffer between the max power they could simulate and the absolute max the card could take.
 
Hmm, a hardware design flaw in an electronic device brought to light by some software.

I don't like the sound of that. What if in, say, 6 months' or a year's time some game actually uses the same amount of power when running?

It's defo going to be a problem then, isn't it? (People's cards might be out of warranty.)

The ATI hardware design engineers should have given a bit of thought to having a slightly bigger buffer between the max power they could simulate and the absolute max the card could take.

I don't think any game will use the cards at 100%, but I'm not sure about folding.
 
Hmm, a hardware design flaw in an electronic device brought to light by some software.

I don't like the sound of that. What if in, say, 6 months' or a year's time some game actually uses the same amount of power when running?

It's defo going to be a problem then, isn't it? (People's cards might be out of warranty.)

The ATI hardware design engineers should have given a bit of thought to having a slightly bigger buffer between the max power they could simulate and the absolute max the card could take.

It would be a very bland-looking game indeed.

The test runs one algorithm that utilises all of the units on the GPU at the same time to do a single task, which would never happen in a game.
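For illustration only - made-up constants, and a plain-C CPU loop standing in for a shader - the per-thread workload of a test like that is basically one long dependent math chain with no memory traffic:

Code:
#include <stdio.h>

int main(void)
{
    /* A dependent multiply-add chain: the arithmetic unit never gets an
       idle cycle. A GPU stress test runs something like this on every
       shader unit at once; a real game interleaves texture fetches,
       branches and idle units instead. */
    float x = 0.5f;
    for (long i = 0; i < 100000000L; ++i)
        x = x * 0.9999f + 0.0001f;   /* converges toward 1.0, stays finite */

    printf("%f\n", x);   /* keep x live so the loop isn't optimised away */
    return 0;
}

Run something like that on every shader unit simultaneously and nothing ever stalls, which is how the current draw climbs past anything a game produces.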
 
If my CPU crashed in Prime95 at stock settings I would sure as hell return it, and I don't see why a GPU should be any different. It's good to see we have a similar test for GPUs now.

Even if this hasn't been the cause of many reported crashes in today's games, it could well be exposed in future games or in GPGPU software, so if I were the owner of such a card I would want it fixed or replaced.
 
If my CPU crashed in Prime95 at stock settings I would sure as hell return it, and I don't see why a GPU should be any different. It's good to see we have a similar test for GPUs now.

Even if this hasn't been the cause of many reported crashes in today's games, it could well be exposed in future games or in GPGPU software, so if I were the owner of such a card I would want it fixed or replaced.

A bad comparison, as Prime95 tests many more aspects of the CPU to find which parts are not up to the task; otherwise there would be no need to run it for more than a minute.

The GPU is not failing; it's the power circuitry on the card that's not up to the job in an unrealistic scenario. And seeing as every CPU has bugs that will cause errors under some specific scenario, even the one you're using, by that logic you'd better return your CPU and not use a PC at all.
 
It's a totally unrealistic situation, the cards are fine in ALL games, no news here.

Seems to be a reality, since a lot of different people have tried the test and it crashes their cards too, so the flaw is real. It's not really an unrealistic situation either, seeing as how non-reference cards with a better power regulation design don't have the problem.

It kind of hurts AMD even more now, since they recently fobbed off Palit/Gainward for bringing out non-reference design cards before the higher-tier AMD partners like Sapphire etc. got a chance to bring out theirs.

This is why I always give preference to non-reference design cards, since they usually have better power regulation circuits than the reference stuff.

One thing is for sure: AMD have known about this from the beginning. I'm sure AMD has a tool that can create a true 100% load on the GPU, even more so than OCCT can, just like Intel had Linpack for their CPUs.
 