
Serious design flaw on certain 4870/4890's

One thing is for sure: AMD have known about this right from the beginning. I'm sure AMD has a tool that can create a true 100% load on the GPU, even more than OCCT can, just like Intel had Linpack for their CPUs.


AMD have known about this for a while... they redid the original 4870 design slightly soon after launch because of a related issue... as I said earlier, I'm surprised they didn't put in even more headroom at that time...

At the end of the day you get what you pay for: AMD cards are cheap, and you're unlikely to run into this problem in everyday usage. You don't really expect a £15 pair of shoes to last forever, but you'd expect to get more mileage out of a £50 pair.
 

I think you've rather missed the point of his post. Whilst it is a realistic scenario in one sense, that sense only arises under a very specific load-testing circumstance, as opposed to a real application where you want to use the hardware rather than test it. If you were to couple that particular load with any useful processing (i.e. rendering a model to the screen), you'd find that the overall load on the card would drop rather dramatically. The only reason you would run a calculation like the one in that test is to check a card's stability; there's no practical use for it otherwise.

Another interesting point: this only tests the shaders. The 4800 cards incidentally also score about 30-40% more FPS than their GTX 200 contemporaries, which indicates the test stresses the 4800 cards a lot more than it does the GTX 200 cards. And to truly test a card's stability, wouldn't you also have to load the texturing units and render back-ends, since the shader cores might be able to operate at a given clock speed but the other parts might not?

Anyway, I haven't run the test yet; I'm going out now, so I'll put it on later with something monitoring the temperature and the current going through the card, to pinpoint the moment of the crash, presuming it happens. It's a totally stock (black PCB aside) XFX 4870 1GB. I did run it for about 5 minutes last night and found the VRMs were getting to about 120 degrees while the current was hovering around 79 amps, so wish me luck. :\
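For anyone wanting to try the same thing, here's a minimal sketch of that sort of crash logging. The `read_sensors` callable is a hypothetical stand-in, not a real API — you'd have to wire it up to whatever tool actually exposes the VRM temperature and current on your card:

```python
import csv
import time

def log_until_crash(read_sensors, path="vrm_log.csv", interval=1.0):
    """Append timestamped sensor rows until the machine dies.

    read_sensors is a hypothetical callable returning (temp_c, amps).
    The last row flushed to disk pinpoints the conditions at the crash.
    """
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            temp_c, amps = read_sensors()
            writer.writerow([round(time.time(), 2), temp_c, amps])
            f.flush()  # make sure the row survives a hard crash/black screen
            time.sleep(interval)
```

After a black screen, the final line of `vrm_log.csv` tells you the temperature and current the card reached just before it dropped out.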
 
AMD have known about this for a while... they redid the original 4870 design slightly soon after launch because of a related issue... as I said earlier, I'm surprised they didn't put in even more headroom at that time...

At the end of the day you get what you pay for: AMD cards are cheap, and you're unlikely to run into this problem in everyday usage. You don't really expect a £15 pair of shoes to last forever, but you'd expect to get more mileage out of a £50 pair.

That's a weird comment considering all the failing chipset issues Nvidia have had in the past year or so.
 
I think you've rather missed the point of his post. Whilst it is a realistic scenario in one sense, that sense only arises under a very specific load-testing circumstance, as opposed to a real application where you want to use the hardware rather than test it. If you were to couple that particular load with any useful processing (i.e. rendering a model to the screen), you'd find that the overall load on the card would drop rather dramatically. The only reason you would run a calculation like the one in that test is to check a card's stability; there's no practical use for it otherwise.

Another interesting point: this only tests the shaders. The 4800 cards incidentally also score about 30-40% more FPS than their GTX 200 contemporaries, which indicates the test stresses the 4800 cards a lot more than it does the GTX 200 cards. And to truly test a card's stability, wouldn't you also have to load the texturing units and render back-ends, since the shader cores might be able to operate at a given clock speed but the other parts might not?

Anyway, I haven't run the test yet; I'm going out now, so I'll put it on later with something monitoring the temperature and the current going through the card, to pinpoint the moment of the crash, presuming it happens. It's a totally stock (black PCB aside) XFX 4870 1GB. I did run it for about 5 minutes last night and found the VRMs were getting to about 120 degrees while the current was hovering around 79 amps, so wish me luck. :\

This shouldn't happen regardless of how the test is done. The cause is cheaping out on the voltage regulation on these cards, since it's not affecting cards that use non-reference designs. All this test shows is that 82 amps seems to be the upper limit that can be pulled from cards with the reference three-phase power design. And 120°C on the VRMs is serious stuff, man; I've run OCCT overnight on my card and VRM temps were under 70°C, but that's a non-reference GTX 260 I have.

Can't you mod your cooling to get lower VRM temps on your card?
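For a rough sense of scale, that 82 A figure works out as below. Note the ~1.26 V core voltage is an assumption on my part for a reference 4870, and the 82 A trip point is just the figure reported in this thread:

```python
# Back-of-the-envelope power figures for the reference 4870 core VRM.
core_voltage = 1.26   # volts (assumed reference VID, not a confirmed spec)
ocp_limit = 82.0      # amps at the apparent OCP trip point (from this thread)
phases = 3            # reference three-phase power design

power_at_limit = core_voltage * ocp_limit   # watts through the core VRM
amps_per_phase = ocp_limit / phases         # load shared across the phases

print(f"~{power_at_limit:.0f} W core power at the limit")
print(f"~{amps_per_phase:.1f} A per phase")
```

So each of the three phases is carrying on the order of 27 A at the trip point, which is why the non-reference boards with beefier regulation don't hit the problem.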
 
Errr don't the cards need to be overclocked in order to reach that current?

You obviously haven't been reading the thread on the XtremeSystems forums; this happens at stock speeds on these cards, and that's why it's a serious matter. I'm waiting to see how AMD responds to this and how they will compensate users who are affected.
 
Another interesting point: this only tests the shaders. The 4800 cards incidentally also score about 30-40% more FPS than their GTX 200 contemporaries, which indicates the test stresses the 4800 cards a lot more than it does the GTX 200 cards.

That's a given, because they are synthetically loading the cards in a specific manner to hit peak FLOPS. You can't equate that directly to real-world performance, or to how much more (or less) it stresses the card compared to another architecture.
 
Why are you making fun of this? How can some people enjoy getting a black screen due to too much current draw? If you had a card affected by this issue, you would see things differently.

I think this is being blown out of proportion, really. To make this happen they have to synthetically load the cards in a specific manner that achieves peak efficiency from the SPs. This will never happen on ATI cards under normal circumstances; it's one of the weaknesses of the ATI architecture that it can never sustain peak performance in real-world situations, and as it happens, in this situation that's a good thing.
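The gap between peak and typical throughput is easy to see from the publicly quoted specs of the two architectures (the per-clock issue rates below are the advertised ones):

```python
# Theoretical peak shader throughput from the public specs.
# HD 4870 (RV770): 800 SPs @ 750 MHz, one MAD (2 flops) per SP per clock.
# GTX 280 (GT200): 240 SPs @ 1296 MHz, MAD + MUL (3 flops) per SP per clock.
rv770_peak = 800 * 750e6 * 2 / 1e12   # TFLOPS
gt200_peak = 240 * 1296e6 * 3 / 1e12  # TFLOPS

# A stress shader built from long chains of independent multiply-adds can
# approach these peaks; real rendering rarely keeps all the VLIW slots busy,
# which is why everyday games never pull this much current.
print(f"HD 4870 peak: {rv770_peak:.2f} TFLOPS")
print(f"GTX 280 peak: {gt200_peak:.2f} TFLOPS")
```

That's roughly 1.2 TFLOPS versus 0.93 TFLOPS on paper, so a test tuned to peak utilisation naturally pushes the RV770 harder than it pushes a GT200.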
 
I think this is being blown out of proportion, really. To make this happen they have to synthetically load the cards in a specific manner that achieves peak efficiency from the SPs. This will never happen on ATI cards under normal circumstances; it's one of the weaknesses of the ATI architecture that it can never sustain peak performance in real-world situations, and as it happens, in this situation that's a good thing.

Luckily, my three overclocked 3870s run the test just fine.
 
Obviously they have more headroom between what the core can demand and what the power stage can supply than the 4800 cards do. :D
 
Obviously they have more headroom between what the core can demand and what the power stage can supply than the 4800 cards do. :D

That's my point. So, in reference to a comment you made earlier about cheapness: they did not cheap out on the cards before the 48xx.
And it wouldn't bother me in any case, as gaming is more important than benching when it comes to graphics cards.
 
Just an update on the test I'm running: it's survived. Currently idle; I've been monitoring for the last 4 minutes. :)

Edit: rofl, the VRM temperatures went above 128 degrees and the readings wrapped around into negative values. :D

Also: it never breached 82 amps by the looks of it. Guess I got lucky with my card?
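A plausible explanation for the negative readings (an assumption on my part, not anything confirmed about the monitoring tool): the temperature is being stored as a signed 8-bit integer, so anything above 127 °C wraps around to a negative number:

```python
import struct

def as_signed_byte(raw: int) -> int:
    # Reinterpret an unsigned byte (0..255) as two's-complement signed (-128..127).
    return struct.unpack("b", struct.pack("B", raw & 0xFF))[0]

print(as_signed_byte(120))  # 120 -> still displayed correctly
print(as_signed_byte(129))  # -127 -> a 129-degree reading shown as negative
```

So the card most likely kept getting hotter past 127 °C; the display just couldn't represent it.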
 
Looking at it another way, maybe ATI are deliberately restricting the cards' overclockability by setting the OCP at a lower level than they used to.
 
That was as much an oblique poke at Nvidia and the problems with the VRMs on some 200-series cards, at a time when they were a lot more expensive than the equivalent ATI card, as it was a prod at ATI.
 
What's OcUK's official stance on this issue? Someone buys a 4870, takes it home, decides he wants to see how much of an FPS increase he gets in OCCT compared to his previous card, and boom, black screen. Grounds for a refund/exchange?

This is going to be really interesting. I doubt OcUK can give an official response without first consulting ATI on this matter.
 
Just an update on the test I'm running: it's survived. Currently idle; I've been monitoring for the last 4 minutes. :)

Edit: rofl, the VRM temperatures went above 128 degrees and the readings wrapped around into negative values. :D

Also: it never breached 82 amps by the looks of it. Guess I got lucky with my card?

Could you take a photo of the VRM section of your card and post it on here, pal? I'd like to see what sort of cooling is on there as standard.
 