NVIDIA RTX 50 SERIES - Technical/General Discussion

(...) and if/when such a situation occurs there's no way for the device to know about it.
That's the big chunk of the problem too - it was a design choice by Nvidia that the device has no way to know how much current it pulls through each cable. It could easily be done; they chose not to - money and board size (apparently mostly the latter) for a few extra components.
 
It puzzles me that this doesn't violate some kind of electrical standard. I mean, the equation is pretty simple: you know the gauge of the wire, the number of wires, the wattage, and the fact there's no balancing or safety mechanism. Buyers need to be protected by safety standards because sometimes it's clear that manufacturers just don't care. Buyers implicitly seem to trust that 'if it's on sale, then it's safe'. People still seem to be flat-out refusing to accept that it's not safe and are still ordering 5090s. Tech influencers also (mostly) seem unwilling to just come out and say it.
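To put numbers on that "simple equation", here's a quick back-of-envelope check. The wattage, wire count, and per-pin rating below are assumed round figures for illustration, not from any spec sheet:

```python
# Back-of-envelope check of the numbers being discussed. Assumed values:
# a ~575 W card on a 12 V rail, six current-carrying wires, and a ~9.5 A
# per-pin rating for the 12VHPWR/12V-2x6 connector.
POWER_W = 575.0      # assumed card power draw
VOLTAGE_V = 12.0     # nominal rail voltage
WIRES = 6            # 12 V wires in the connector
PIN_RATING_A = 9.5   # assumed per-pin current rating

total_current = POWER_W / VOLTAGE_V          # I = P / V
balanced_per_wire = total_current / WIRES    # ideal, evenly shared load

print(f"Total current: {total_current:.1f} A")
print(f"Per wire if perfectly balanced: {balanced_per_wire:.1f} A "
      f"(rating {PIN_RATING_A} A)")

# With no balancing, nothing stops one low-resistance path from taking
# most of the load, e.g. half the total on a single wire:
one_wire_worst = total_current / 2
print(f"One wire at half the total: {one_wire_worst:.1f} A -> "
      f"{'over' if one_wire_worst > PIN_RATING_A else 'within'} rating")
```

Perfectly shared, the load is comfortable; the problem is that nothing in the design enforces that sharing.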
 
That's not how FG works at all, though. It's predicting nothing. As Nvidia themselves, together with many reviewers, have said, FG only compares two GPU-generated images (the first and second frame) and then generates frames in between. There's no prediction involved at all; it's just trying to very quickly generate something in between, often with very bad quality (because it has to be done very quickly).
Then it's worthless. Why do I want a predicted frame that's older than my last real frame? That's just lag.
 
The first fails, then the second in a beautiful domino effect
True, but it would be better than the alternative IMO. I'd much rather have to replace six fuses than £2k+ worth of GPU, or even worse. One thing's for sure: I wouldn't feel comfortable leaving it unattended while it was doing any heavy work.
Yeah, that wouldn't work. If anything it would guarantee failure. Any time there was enough of a spike to blow a fuse, the rest of the cables would just draw more power till every fuse had blown. I guess it would save the card, but you'd need to replace the fuses regularly :P
Fuses don't work like that; transient spikes in the ms range wouldn't be enough to blow a fuse.
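For anyone curious why that is: a fuse element melts on accumulated energy, roughly characterised by its I²t (melting integral) rating, not on instantaneous current. A toy sketch with made-up numbers:

```python
# Rough illustration of the point above: fuses respond to the energy
# integral (I^2 * t), not to instantaneous current, so a millisecond
# transient well above the rated current can still sit far below the
# melting threshold. All numbers here are illustrative, not from any
# datasheet.
FUSE_MELT_I2T = 100.0   # assumed melting integral in A^2*s

def i2t(current_a: float, duration_s: float) -> float:
    """Energy integral of a constant-current pulse."""
    return current_a ** 2 * duration_s

spike = i2t(40.0, 0.001)     # 40 A transient lasting 1 ms
sustained = i2t(15.0, 2.0)   # 15 A sustained for 2 s

print(f"1 ms 40 A spike: {spike:.2f} A^2*s -> blows: {spike > FUSE_MELT_I2T}")
print(f"2 s 15 A load:   {sustained:.2f} A^2*s -> blows: {sustained > FUSE_MELT_I2T}")
```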
 
It's literally what TVs have been doing for decades: frame interpolation. Nothing new.

No, it doesn't wait for the next frame. The input lag would double if it did that, and it doesn't.
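For reference, the TV-style interpolation mentioned above can be sketched in a few lines: given two already-rendered frames, the in-between image is just a blend of the two. This is a deliberately naive toy, not a claim about how DLSS Frame Generation is implemented internally:

```python
# Toy illustration of classic frame interpolation: an in-between image
# is synthesised from two already-rendered frames. A plain linear blend
# is the simplest TV-style approach; real implementations use motion
# estimation, and this says nothing about DLSS FG's internals.
def interpolate(frame_a, frame_b, t=0.5):
    """Blend two frames (flat lists of pixel values) at position t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

frame1 = [0, 100, 200]   # rendered frame N
frame2 = [50, 150, 250]  # rendered frame N+1
print(interpolate(frame1, frame2))  # [25.0, 125.0, 225.0]
```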
 
The PSUs with 12VHPWR cables - why are they even letting that much power go through a single wire?

Similarly, where an adaptor is used, it still means that one or more of the 8-pin power wires is supplying more power than it should. I think this is more easily solved at the PSU side.

Every 12V pin should be throttled to a max amperage on the PSU side. For an 8-pin that means 84W max per pin (but only 50W per pin at the specified load), which when two pins are combined in an adaptor is 168W max (or 100W at the specified load).
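For anyone checking those figures, they fall out of simple arithmetic, assuming 12V, three 12V pins per 8-pin PCIe connector, a ~7A terminal rating, and the 150W PCIe 8-pin spec limit:

```python
# Where the 84 W / 50 W / 168 W / 100 W figures above come from,
# assuming a ~7 A per-pin terminal rating and the 150 W PCIe 8-pin
# connector limit spread over its three 12 V pins.
VOLTAGE_V = 12.0
PIN_MAX_A = 7.0            # assumed terminal rating per pin
PCIE_8PIN_SPEC_W = 150.0   # PCIe spec limit per 8-pin connector
POWER_PINS = 3             # 12 V pins in an 8-pin connector

pin_max_w = VOLTAGE_V * PIN_MAX_A            # 84 W physical max per pin
pin_spec_w = PCIE_8PIN_SPEC_W / POWER_PINS   # 50 W per pin at spec load

print(f"Per pin: {pin_max_w:.0f} W max, {pin_spec_w:.0f} W at spec load")
print(f"Two pins combined via adaptor: {2 * pin_max_w:.0f} W max, "
      f"{2 * pin_spec_w:.0f} W at spec load")
```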

 
As you say, currently (pun intended) there's no circuitry to actually measure how much power is running over each wire, on either PSUs or GPUs.

The problem seems to be poorly made cables, but yes, additional safety circuitry would resolve the problem completely.
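A sketch of what that per-wire circuitry could look like in principle: a small shunt resistor in series with each wire, with the current recovered from the voltage drop across it (I = V / R). The read function, shunt value, and limit below are all hypothetical stand-ins, not any shipping design:

```python
# Sketch of per-wire current sensing via series shunt resistors.
# read_shunt_voltage() is a hypothetical stand-in for whatever ADC the
# board would use; the shunt value and limit are assumed for the example.
SHUNT_OHMS = 0.005   # assumed 5 milliohm shunt per wire
WIRE_LIMIT_A = 9.5   # assumed per-wire current limit

def read_shunt_voltage(wire: int) -> float:
    """Hypothetical ADC read of the shunt voltage drop on one wire, in volts."""
    return [0.040, 0.041, 0.039, 0.042, 0.095, 0.040][wire]

for wire in range(6):
    amps = read_shunt_voltage(wire) / SHUNT_OHMS   # I = V / R
    flag = "OVER LIMIT" if amps > WIRE_LIMIT_A else "ok"
    print(f"wire {wire}: {amps:.1f} A ({flag})")
```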
 
This seems like a PSU job, though. What are OCP and OPP actually doing if they'll let the full power go through a single wire?
 
The PSUs with 12VHPWR cables - why are they even letting that much power go through a single wire?
Because that's how PSUs are designed: when you see a 12V DC rail @ 58 amps, there's a great big honking busbar that all the outputs connect to.

If you wanted separate voltages, each with their own amperage limit, you'd need separate rails for each of the six wires.
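Which is why rail-level OCP can't catch this: it only sees the busbar total. A toy illustration with made-up per-wire numbers:

```python
# Illustration of the single-rail point above: OCP only sees the total
# current off the busbar, so a badly imbalanced connector can sit well
# under the rail limit while one wire is far over its own rating.
# Per-wire values are invented for the example; OCP is assumed to trip
# at the rail's 58 A rating mentioned in the post.
RAIL_OCP_LIMIT_A = 58.0   # assumed single-rail OCP threshold
WIRE_RATING_A = 9.5       # assumed per-pin rating

wire_currents = [25.0, 5.0, 5.0, 5.0, 4.0, 4.0]  # hypothetical imbalance
total = sum(wire_currents)

print(f"Rail total: {total:.0f} A -> OCP trips: {total > RAIL_OCP_LIMIT_A}")
print(f"Worst wire: {max(wire_currents):.0f} A "
      f"(rating {WIRE_RATING_A} A) -> invisible to rail-level OCP")
```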
 
You could also just monitor it and cut power if it goes over spec.
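A minimal sketch of that monitor-and-cut idea, with hypothetical per-wire readings standing in for sensing hardware that today's spec doesn't require:

```python
# Minimal sketch of "monitor and cut power if it goes over spec":
# check per-wire readings against a limit and latch the output off on
# the first violation. Readings and the limit are invented examples.
WIRE_LIMIT_A = 9.5  # assumed per-wire limit

def over_spec(currents: list[float]) -> bool:
    """True if any wire exceeds the per-wire limit."""
    return any(amps > WIRE_LIMIT_A for amps in currents)

# Hypothetical readings: balanced at first, then one wire runs away.
samples = [
    [8.0, 8.1, 7.9, 8.0, 8.2, 8.0],
    [14.0, 7.0, 7.0, 6.5, 6.5, 7.0],
]
for reading in samples:
    if over_spec(reading):
        print("over spec -> cut 12 V output")  # stand-in for a real latch-off
        break
    print("ok")
```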
 
This seems like a PSU job, though. What are OCP and OPP actually doing if they'll let the full power go through a single wire?
Because currently they aren't set up to monitor on a per-wire basis, because it's never really been an issue before. What you're asking for would be a significant increase in cost.

It's equally viable to do on the GPU, so it will depend on whether this becomes costly for Nvidia/AIBs through replacing cards; the PSU isn't failing, so I doubt they'll do anything on that end. The increase in cost would be much easier for a £1-2k product to absorb than a £100-200 one.
 
You could also just monitor it and cut power if it goes over spec.
Sure, you could do anything you like, but are you really suggesting we should have an entirely new ATX power spec, that PSU manufacturers should redesign their entire line-up, and that anyone buying a 50-series GPU also has to buy one of these new PSUs, simply because Nvidia dropped the ball?

Because if so, that seems to be asking a lot from people and companies, especially when they've not caused the problem.
 
From what I understand (from that Intel engineer's Reddit post), it's generally considered better, and the device's responsibility, to balance/monitor the load, not the power supply's. I'm not an EE, so don't take my word for that.

I think the AIBs are equally to blame here. Their engineers absolutely know this isn't safe, and yet they continued to release their products.
 