Got some links you can share? I have not seen one yet.
Just Google "Cable melt 5080"
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
I see cables melting on 5080s now.
That's the big chunk of the problem too - it was a design choice by Nvidia that the device has no way to know how much current it pulls through each cable. It can easily be done; they chose not to - money and board size (apparently mostly the latter) for a few extra components. (...) and if/when such a situation occurs there's no way for the device to know about it.
It puzzles me that it doesn't violate some kind of electrical standard. I mean, the equation is pretty simple: you know the gauge of the wire, the number of wires, the wattage, and the fact that there's no balancing or safety mechanism. Buyers need to be protected by safety standards because sometimes it's clear that manufacturers just do not care. Buyers implicitly seem to trust that "if it's on sale, then it's safe". People still seem to be flat-out refusing to accept it's not safe and are still ordering 5090s. Tech influencers also (mostly) seem unwilling to just come out and say it.
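For anyone who wants to sanity-check that "pretty simple equation", here's the back-of-envelope maths. The 575 W figure, the 9.5 A per-pin rating and the "load skews onto three wires" scenario are assumptions for illustration, not spec references:

```python
# Rough per-wire current maths for a 12VHPWR cable (assumed figures, not a spec reference)
BOARD_POWER_W = 575        # assumed board power for a top-end card under load
RAIL_VOLTAGE_V = 12.0
WIRE_COUNT = 6             # six 12 V supply wires in the connector
PIN_RATING_A = 9.5         # commonly quoted per-pin rating

total_current = BOARD_POWER_W / RAIL_VOLTAGE_V      # ~47.9 A
balanced_per_wire = total_current / WIRE_COUNT      # ~8.0 A, inside the rating

print(f"Total current: {total_current:.1f} A")
print(f"Perfectly balanced: {balanced_per_wire:.1f} A per wire (rating {PIN_RATING_A} A)")

# Now assume a couple of pins make poor contact and the load skews onto the rest.
# With no per-wire sensing, the card has no way of knowing this is happening.
working_wires = 3
skewed_per_wire = total_current / working_wires     # ~16 A through each remaining wire
print(f"Load skewed onto {working_wires} wires: {skewed_per_wire:.1f} A each "
      f"({skewed_per_wire / PIN_RATING_A:.0%} of the pin rating)")
```

Balanced it's fine; skewed onto half the wires it's well past the rating, and nothing on the card notices.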
Then it's worthless. Why do I want a predicted frame that's older than my last real frame? That's just lag.
That's not how FG works at all, though. It's predicting nothing. As Nvidia themselves, together with many reviewers, have said, FG only compares two GPU-rendered images (the first and second frame) and then generates frames in between. There's no predicting anything involved; it's just trying to very quickly generate something in between, often with very bad quality (because it has to be done so quickly).
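To put the "older than my last real frame" point in concrete terms, here's a toy interpolation timeline. This is nothing like the actual DLSS FG internals (which aren't a naive blend) - it's only to show why the newest real frame ends up on screen later than it was rendered:

```python
# Toy frame-interpolation timeline (illustration only, not how DLSS FG is implemented)
import numpy as np

def interpolate(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Naive blend between two already-rendered frames (t in [0, 1])."""
    return (1.0 - t) * frame_a + t * frame_b

frame_time_ms = 20.0            # two "real" frames rendered 20 ms apart (50 fps native)
frame_n  = np.zeros((4, 4))     # stand-in for the frame rendered at t = 0 ms
frame_n1 = np.ones((4, 4))      # stand-in for the frame rendered at t = 20 ms

# The in-between frame can only be generated AFTER frame_n1 exists,
# so the output runs roughly one frame behind what the GPU has rendered.
mid = interpolate(frame_n, frame_n1, 0.5)
print(f"interpolated frame pixel average: {mid.mean():.1f} (halfway between 0 and 1)")

display_schedule = [
    ("frame_n",  frame_time_ms * 1.0),  # held back until frame_n1 was rendered
    ("mid",      frame_time_ms * 1.5),  # generated once both real frames existed
    ("frame_n1", frame_time_ms * 2.0),  # rendered at 20 ms, on screen at ~40 ms: that's the lag
]
for name, shown_at in display_schedule:
    print(f"{name:8s} displayed at ~{shown_at:.0f} ms")
```

The output is smoother, but everything you see is roughly one native frame behind what the GPU last rendered, which is the latency complaint above.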
It's literally what TVs have been doing for decades: frame interpolation. Nothing new.
That's fine if I'm not giving it a live input. Coronation Street can accept the extra ms.
The first fails, then the second, in a beautiful domino effect.
True, but it would be better than the alternative IMO. I'd much rather have to replace six fuses than £2k+ worth of GPU, or even worse. One thing's for sure: I wouldn't feel comfortable leaving it unattended while it was doing any heavy work.
Yeah, that wouldn't work. If anything it would guarantee failure. Any time there was enough of a spike to blow a fuse, the rest of the cables would just draw more power until every fuse had blown. I guess it would save the card, but you'd need to replace the fuses regularly!
Fuses don't work like that; transient spikes in the ms range wouldn't be enough to blow a fuse.
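The domino-effect argument is easy to sketch out. Toy model only: steady-state current, one fuse assumed already blown from the initial fault, and it ignores the point above that millisecond transients wouldn't actually take out a normal fuse:

```python
# Toy model of the "fuse domino" argument: the remaining wires share the full load
total_current_a = 48.0    # ~575 W at 12 V (assumed)
fuse_rating_a = 9.5       # hypothetical per-wire fuse, matched to the pin rating
wires = 6

intact = wires - 1        # assume one fuse has already gone from whatever caused the fault
while intact > 0:
    per_wire = total_current_a / intact
    if per_wire > fuse_rating_a:
        print(f"{intact} wires left -> {per_wire:.1f} A each: over the rating, another fuse blows")
        intact -= 1
    else:
        print(f"{intact} wires left -> {per_wire:.1f} A each: fuses hold")
        break

if intact == 0:
    print("Every fuse gone - the card loses power, but the fuses were cheaper than the GPU.")
```

So per-wire fuses would protect the copper and probably the card, at the cost of the whole cable going dark once one lets go.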
The PSUs with 12VHPWR cables - why are they even letting that much power go through a single wire?
As you say, currently (pun intended) there's no circuitry to actually test how much power is running over each wire, on either PSUs or GPUs.
Similarly, where an adaptor is used, it still means that one or more of the 8 pin power wires is supplying more power than it should. I think this is more easily solved at the PSU side.
As you say, currently (pun intended) there's no circuitry to actually test how much power is running over each wire, on either PSUs or GPUs.
The problem seems to be poorly made cables, but yes, additional safety circuitry would resolve the problem completely.
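In terms of what that "additional safety circuitry" would actually mean: roughly a shunt resistor in series with each 12 V wire plus something reading the voltage drop across each one. A sketch of the monitoring logic, entirely hypothetical (the shunt value, limit and readings are made up), just to show how little is involved:

```python
# Hypothetical per-wire current monitoring - roughly what "a few extra components" buys you.
# Assumes a small shunt resistor in each 12 V wire and an ADC reading each voltage drop.
SHUNT_OHMS = 0.005          # 5 milliohm shunt per wire (assumed value)
PER_WIRE_LIMIT_A = 9.5      # trip threshold per wire (assumed)

def read_shunt_voltages() -> list[float]:
    """Stand-in for the ADC reads; returns the voltage drop across each shunt."""
    return [0.040, 0.041, 0.039, 0.078, 0.012, 0.011]   # fake readings, wire 3 is overloaded

def check_wires() -> None:
    for i, v_drop in enumerate(read_shunt_voltages()):
        current = v_drop / SHUNT_OHMS
        if current > PER_WIRE_LIMIT_A:
            print(f"wire {i}: {current:.1f} A - over limit, throttle or shut down")
        else:
            print(f"wire {i}: {current:.1f} A - ok")

check_wires()
```

The hardware cost is a handful of shunts and an ADC channel per wire, plus the board space to route it, which is apparently exactly what got cut.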
The PSUs with 12VHPWR cables - why are they even letting that much power go through a single wire?
Because that's how PSUs are designed: when you see a 12V DC rail @ 58 amps, there's a great big honking busbar that all the outputs connect to.
If you wanted to limit the current on each wire separately, you'd need a separate rail for each of the six wires.
This seems like a PSU job though. What are OCP and OPP actually doing if they will let the full power go through a single wire?
Because currently they aren't set up to monitor on a per-wire basis - it's never really been an issue before. What you're asking for would be a significant increase in cost.
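Rough numbers on the OCP/OPP point: those protections watch the whole 12 V rail, not individual wires, so the trip point sits far above anything a single wire can safely carry. The figures below are assumptions (rail rating for a notional 1000 W single-rail unit, and a ~120% trip point - the real value varies by PSU):

```python
# Why rail-level OCP never notices one overloaded wire (assumed figures, varies by PSU)
RAIL_RATING_A = 83.0                # e.g. a 1000 W single-rail unit: ~83 A on the 12 V rail
OCP_TRIP_A = RAIL_RATING_A * 1.2    # assumed ~120% trip point

single_wire_current = 20.0          # badly imbalanced wire, roughly double its safe rating
total_draw = 48.0                   # whole card pulling ~575 W

print(f"OCP trips at ~{OCP_TRIP_A:.0f} A on the rail")
print(f"Card draws {total_draw:.0f} A total; one wire carries {single_wire_current:.0f} A")
print("Rail current is nowhere near the trip point, so OCP/OPP does nothing.")
```

So from the PSU's point of view everything is well within spec right up until the connector melts.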
You could also just monitor it and cut power if it goes over spec.
Sure, you could do anything you like, but are you really suggesting we should have an entirely new ATX power spec, that PSU manufacturers should redesign their entire line-up, and that anyone buying a 50-series GPU has to also buy one of these new PSUs, simply because Nvidia dropped the ball?
From what I understand (from that Intel engineer's Reddit post), it's generally considered better, and the responsibility of the device, to balance/monitor the load, not the power supply. I'm not an EE, so don't take my word for that.
It's equally viable to do on the GPU, so it will depend on whether this becomes costly for Nvidia/AIBs through replacing cards; the PSU isn't failing, so I doubt they'll do anything on that end. The increase in cost would be much easier for a £1-2k product to absorb than a £1-200 one.