
NVIDIA RTX 50 SERIES - Technical/General Discussion

What does true or false have to do with expectations?

If you're suggesting that the entire ATX specification should be changed simply because Nvidia can't design a connector, then maybe you're being a bit unreasonable. Can it be done? Sure, if we put our minds to it most things can. Should it be done? Heck no, because expecting an entire industry to bend over backwards just to accommodate a single organisation is ridiculous.
Did Nvidia design the connector on the PSU end as well?
 
Isn't every PSU that comes with 12VHPWR connectors/ cables "third party"?

Not in that sense, no. It's saying don't swap the proprietary 12VHPWR cables supplied with your PSU for any 3rd party cable to power a 5090 or 5080; use the one that came with your PSU.

Only use the adapter supplied with your GPU if you're using a PSU without a native 12VHPWR output.

You also can't swap proprietary 12VHPWR cables between different model PSUs, even if they fit a different model's/brand's modular slot.

BUT any PSU bought for the 40 series also isn't specced for this gen's 5090/5080; you need to buy another new PSU if you want to run a direct PSU 12VHPWR cable to the GPU, or use the box-supplied splitter.

It's messy and confusing (if folks are depending on tommybhoy from the OC'ers forum to explain how to interpret an article on staying safe when running a 5090/5080, and I've already had a typo lol), but more importantly it's an extremely dangerous **** show!
 
Still isn't great in the scheme of things
They've shrunk that 90 down so that it fits into SFF PCs.

Money over safety might bite them this time.

Edit: the 5080 is included along with the 5090 in that article; it's not just the 90 series this gen.
 
Because the 8-pin PCI-E cables/connectors are so overspecced, even the worst quality ones are still more than enough.
4x PCI-E connectors are officially rated at 600W, but as per the link above, in reality the limit is closer to double that, at 1152 watts.
Because, and as I've said before I'm no electrical engineer, you could run a PCI-E power connector with a single live wire and, at least from what I worked out, still be within the specs of what the wire is rated for.

You'd be right on the edge of what a single 16/18 gauge wire can handle (12V DC @ 150W = 12.5 amps), but on paper it wouldn't melt.

To do that with a 12VHPWR you'd need three 4 gauge wires, the sort you find attaching the battery to your car.
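The arithmetic behind those figures can be sketched out. This is my own back-of-envelope check, assuming the commonly cited 8A-per-terminal rating for high-current Mini-Fit pins and three live 12V wires per 8-pin connector (neither figure is stated in the post above):

```python
# Back-of-envelope check on the 8-pin PCI-E headroom claims above.
# Assumptions (mine, for illustration): 12V rail, 3 live pins per
# 8-pin PCI-E connector, 8A per-terminal current rating.

VOLTAGE = 12.0        # volts, PCI-E power rail
LIVE_PINS = 3         # live 12V wires in an 8-pin PCI-E connector
AMPS_PER_PIN = 8.0    # assumed per-terminal rating

# Official spec: each 8-pin connector is rated for 150W total.
official_amps = 150.0 / VOLTAGE              # 12.5A across the connector
amps_per_wire = official_amps / LIVE_PINS    # ~4.2A per live wire in practice

# "Real" limit if every pin carried its full terminal rating:
per_connector_watts = LIVE_PINS * AMPS_PER_PIN * VOLTAGE   # 288W
four_connector_watts = 4 * per_connector_watts             # 1152W

print(f"{amps_per_wire:.2f} A per wire at the official 150W rating")
print(f"{four_connector_watts:.0f} W theoretical limit for 4x 8-pin")
```

Which is where the 1152W figure comes from: each wire normally carries barely half of its terminal rating, hence the huge safety margin the post describes.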
Did Nvidia design the connector on the PSU end as well?
Did Molex, did PCI-SIG, did SATA-IO? No they designed the specs and the power supply companies followed them.
 
Because, and as I've said before I'm no electrical engineer, you could run a PCI-E power connector with a single live wire and, at least from what I worked out, still be within the specs of what the wire is rated for.

You'd be right on the edge of what a single 16/18 gauge wire can handle (12V DC @ 150W = 12.5 amps), but on paper it wouldn't melt.

To do that with a 12VHPWR you'd need three 4 gauge wires, the sort you find attaching the battery to your car.

Did Molex, did PCI-SIG, did SATA-IO? No they designed the specs and the power supply companies followed them.
My point is it's not nvidia's connector design that's the problem. The problem is trying to squeeze 50 DC amps into some pieces of spaghetti, as you've alluded to in this post :)
 
My point is it's not nvidia's connector design that's the problem. The problem is trying to squeeze 50 DC amps into some pieces of spaghetti, as you've alluded to in this post :)
So who tried to do that if not Nvidia? And who is not balancing the power draw across those pieces of spaghetti?
 
So who tried to do that if not Nvidia?
Still not the connector design though. Nvidia going over the power budget is a different story. They should be lobbying for an improved power supply and delivery standard (higher voltage) to cater for these high-power devices. They should NEVER be producing something with less than 10% headroom to the maximum spec of its wiring.
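The headroom complaint is easy to quantify: the 12V-2x6/12VHPWR connector is rated for 600W continuous, and the 575.7W draw quoted later in this thread for a 5090 under Furmark leaves only a few percent of margin. A quick sketch (the 575W figure is from the measurement quoted below in the thread; the 75W-per-8-pin comparison case is my own illustration):

```python
# Headroom comparison: 12V-2x6 on a 5090 vs a lightly loaded 8-pin.

SPEC_LIMIT_W = 600.0   # 12V-2x6 / 12VHPWR rated continuous power
CARD_DRAW_W = 575.0    # approximate RTX 5090 board power under load

headroom_pct = (SPEC_LIMIT_W - CARD_DRAW_W) / SPEC_LIMIT_W * 100
print(f"12V-2x6 headroom: {headroom_pct:.1f}%")   # ~4.2%

# Compare: an 8-pin rated at 150W feeding 75W would leave 50% headroom.
old_headroom_pct = (150.0 - 75.0) / 150.0 * 100
print(f"Half-loaded 8-pin headroom: {old_headroom_pct:.0f}%")
```

Roughly 4% margin on the new connector versus the comfortable margins the old connectors ran at, which is the "<10% headroom" objection in numbers.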
 
BUT any PSU bought for 40 series also isn't specced for this gens 5090/80, you need to buy another new PSU if you want to run direct PSU 12VHPWR cable to GPU or use the box supplied splitter.

That was my reading of it too; if you want to go direct with one single cable from the GPU to the PSU (one connector to one connector), the supposed ‘best practice’ is now to go from a new H++ female port on the PSU to the H++ female port on the GPU.

Or the other ‘least risky option’ is using the H++ splitter cable provided with your GPU (running into 4x standard PCIe cables from the PSU), e.g. use the octopus cable provided by your GPU manufacturer.

A ‘compatible mismatch’ (?!) going from a single H+ on the PSU side to H++ on the GPU side is ‘safer’ for the 40 series due to its wider tolerances, but the tighter tolerances leave even more scope for issues and less room for error on the 50 series.

… and it’s still not clear whether that’s correct or not because of the mixed messaging!!

My current hypothesis - purely because I think it’s the most rational explanation - is that the H++ cable spec hasn’t changed, yet Moddiy, Seasonic, Coolermaster and Cablemod are causing confusion by suggesting they have made some changes to cables because of a purported change in spec… when actually what has happened is they have made minor changes to make cables more robust which are not actually required by the H++ spec. But anyone’s guess is as good as mine.

Whatever the position, we’re all agreed - totally crap cables / connectors, what were they thinking, throw them in the sea.
 
Still not the connector design though. Nvidia going over the power budget is a different story. They should be lobbying for an improved power supply and delivery standard (higher voltage) to cater for these high power devices. They should NEVER be producing something with less than 10% headroom to maximum spec for some wiring.
You'll need to explain that one to me, how is it not the design of the connector? You mean its physical design, how it's designed to work, what?

Because if it was designed to work how most (all?) power connectors have been designed to work in the past then, like I said, it would be using something like three 4 AWG wires.
 
You'll need to explain that one to me, how is it not the design of the connector? You mean its physical design, how it's designed to work, what?

Because if it was designed to work how most (all?) power connectors have been designed to work in the past then, like I said, it would be using something like three 4 AWG wires.
What do you mean by connector? For me it's the plug. Do you mean the entire cable? Or the wiring standards from psu end to gpu end?

If you mean the cable itself, I agree it's underspecced. A single 4AWG wire is good for 85 amps, so one or two of those would be great. However, the pins at the end of it would need to be monsters, as a 4AWG cable has a conductor diameter of about 5mm, and the pins would need to be similarly sized!
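The 5mm figure checks out against the standard AWG formula, which is a simple geometric progression in gauge number. A small sketch (my own, just to show where the diameters come from):

```python
# AWG-to-diameter conversion, illustrating why 4AWG pins would be huge.
# Standard formula: d(mm) = 0.127 * 92 ** ((36 - awg) / 39)

def awg_diameter_mm(awg):
    """Bare conductor diameter in mm for a given AWG gauge."""
    return 0.127 * 92 ** ((36 - awg) / 39)

print(f"16 AWG: {awg_diameter_mm(16):.2f} mm")  # ~1.29 mm (12VHPWR spec wire)
print(f" 4 AWG: {awg_diameter_mm(4):.2f} mm")   # ~5.19 mm (car battery territory)
```

So a 4AWG conductor is roughly four times the diameter (and about sixteen times the cross-sectional area) of the 16AWG wire the connector actually uses.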
 
This is what Der8auer had to say about the whole cable situation on another overclocking forum:

I just want to clarify a few things :D I'll do an update video in 2 or 3 days but just to post it already.

The point of my video was to show that there can be issues with the cable/connector no matter how skilled you are. This "standard" simply doesn't allow for any kind of failures and also looking at the current Buildzoid video, the power design of 4090/5090 adds to the risk. I'm sure if I replace it with a new cable, it will be just fine.
But the point is that this kind of failure should not happen in the first place. A connector in our DIY industry should be fail-safe enough to compensate for this.

I probably experienced a rare case and most FE should be fine. My cable was performing badly, but not **** enough to directly melt. But I have the feeling that the burned connectors are exactly what I experienced, just a bit worse.

But reporting about cases like this creates awareness and if the industry improves this in the future, we all win. The current situation is just not good.
 
What do you mean by connector?
This...
However, I would exclude the wires from being part of the connector, outside of what their gauge should be, because despite what some oversensitive types may think, wires are wires: if they're made of the same material, have the same cross-sectional diameter, and are the same length, one wire is pretty much the same as another.

I'll be even more specific: it's not even the physical design of the connector, because it does what a connector is meant to do. It connects one thing to another, and as a means of connecting two things it does what it was designed to do (then again, a metal coat hanger could do that, so it's not saying much :))

The problem is that the connector, in its current serialised form, is being asked to operate way outside its design specs. It was designed, at least it started out, with 6/8 separate 8A × 12V pins running in parallel (iirc), with each pin capable of drawing 150W. When they changed it to a vertical connector they serialised the power delivery, so you no longer have 6/8 separate power pins (it's actually fewer than that, as they did build in some redundancy; iirc it was like three or four); you basically have one that's being fed by 6/8 separate under-specced wires.

Is that a fault of the connector? Not necessarily, as it's being asked to do something it wasn't designed for. You could go back to using it how it was designed to work, but that would mean re-engineering the devices that use it. You could simply use bigger gauge wires, but then they wouldn't fit in the connector as it's currently designed. And there are probably half a dozen more solutions that would work around the problem.

If you want to get really pedantic it's not the fault of the connector, it's the fault of the people who are using it in a manner it wasn't designed for, but seeing as they're pretty much one and the same, that seems to be splitting hairs.
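The "serialised" point above has a concrete electrical consequence: once the pins feed a common rail with no per-pin regulation, current divides between them in proportion to conductance, so the lowest-resistance contact hogs more than its share. A quick sketch of that current division (all resistance values below are made up for illustration; real contact resistances are in the low-milliohm range):

```python
# Current division across paralleled pins feeding one shared rail.
# With no per-pin balancing, I_k = I_total * G_k / sum(G), where G = 1/R.

def split_current(total_amps, resistances):
    """Split a total current across parallel paths by conductance."""
    conductances = [1.0 / r for r in resistances]
    g_total = sum(conductances)
    return [total_amps * g / g_total for g in conductances]

# Six 12V pins sharing ~48A (roughly a 575W card). Five contacts at a
# hypothetical 5 mOhm, one better-seated contact at 2.5 mOhm.
currents = split_current(48.0, [0.005] * 5 + [0.0025])
print([f"{i:.1f}A" for i in currents])
# Five pins carry ~6.9A each; the low-resistance pin carries ~13.7A,
# well above the ~9.5A each pin would see with a perfectly even split.
```

One slightly better contact (or five slightly worse ones) is enough to push a single pin far past its nominal share, which is exactly the failure mode being argued about in this thread.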
 
Some interesting comments here from a Reddit megathread on the fiasco, purportedly from a head of R&D at Corsair.

The TLDR is:

- Moddiy seemingly doesn’t understand the H++ spec or have been talking rubbish, either way they are confusing everyone.

- Der8auer’s vid was rushed; there was no way the temps he was recording were accurate, because the cable would have melted way before he could record them.

- The most likely cause of the high temps shown by Der8auer was a defect in that specific cable.

- The design of the cable standards isn’t great, but now everyone is in a frenzy because of his probably crappy cable.

Jonny-Guru-Gerow (Corsair Head of R&D)

Also a legendary PSU reviewer back in the 2000s and 2010s.


Some relevant comments:

It's a misunderstanding on MODDIY's end. Clearly they're not a member of the PCI-SIG and haven't read through the spec. Because the spec clearly states that the changes made that differentiate 12VHPWR from 12V-2x6 is made only on the connector on the GPU and the PSU (if applicable).
My best guess for this melted cable comes down to one of several QC issues. Bad crimp. Terminal not fully seated. That kind of thing. Der8auer already pointed out the issue with using mixed metals, but I didn't see any galvanic corrosion on the terminal. Doesn't mean it's not there. There's really zero tolerance with this connector, so even a little bit of GC could potentially cause enough resistance to cause failure. Who knows? I don't have the cable in my hands. :D
------

The MODDIY was not thicker gauge than the Nvidia. They're both 16g. Just the MODDIY cable had a thicker insulation.
------

That's wrong. Then again, that video is full of wrong (sadly; not being like Steve and looking to beat up on people, but if the wire was moving 22A and was 130°C, it would have melted instantly).
16g is the spec and the 12VHPWR connector only supports 16g wire. In fact, the reason why some mod shops sell 17g wire is because some people have problems putting a paracord sleeve over a 16g wire and getting a good crimp. That fraction of a mm going from 16g to 17g is enough to allow the sleeve to fit better. But that's not spec. Paracord sleeves aren't spec. The spec is 16g wire. PERIOD.
------

If it was that hot, he wouldn't be able to hold it in his hand. I don't know what his IR camera was measuring, but as Aris pointed out.... that wire would've melted. I've melted wires with a lot less current than that.
Also, the fact that the temperature at the PSU is hotter than the GPU is completely backwards from everything I've ever tested. And I've tested a lot. Right now I have a 5090 running Furmark 2 for an hour so far and I have 46.5°C at the PSU and 64.2°C at the GPU in a 30°C room. The card is using 575.7W on average.
Der8auer is smart. He'll figure things out sooner rather than later. I just think his video was too quick and dirty. Proper testing would be to move those connectors around the PSU interface. Unplug and replug and try again. Try another cable. At the very least, take all measurements at least twice. He's got everyone in an uproar and it's really all for nothing. Not saying there is no problem. I personally don't *like* the connector, but we don't have enough information right now and shouldn't be basing assumptions on some third-party cable from some Hong Kong outfit.
------

ABSOLUTELY. There is no argument that there is going to be different resistance across different pins. But no wire/terminal should get hotter than 105°C. We're CLEARLY seeing a problem where terminals are either not properly crimped, inserted, corroded, etc. what have you, and the power is going to a path of less resistance. But this is a design problem. I can't fix this. :-( (well... I can, maybe, but it requires overcomplicating the cable and breaking the spec)
------

They provide this if your PSU is not capable of more than 150W per 8-pin. If used with a PSU that CAN provide more than 150W per 8-pin, it just splits the load up across the four connections
There is no "6+2-pin to 12VHPWR". The cable is a 2x4-pin Type 4 or 5 to 12V-2x6. There is no disadvantage to using this as the 12VHPWR has 6 12V conductors and 6 grounds and two sense that need to be grounded. 2x Type 4 connection gives you up to 8x 12V and 8x ground. So, this is a non-issue.
12VHPWR to 12VHPWR is fine too. Just like the 2x Type 4 8-pin or 2x Type 5 8-pin, you have a one-to-one connection between the PSU and the GPU. That's why I don't like calling these cables "adapters". If it's one-to-one, it's not an adapter. It's just a "cable".
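The "bad crimp" theory in the comments above comes down to Joule heating at the contact: power dissipated in the junction scales as I²R, so a pin that both carries extra current and has extra contact resistance heats up disproportionately. A toy sketch (both resistance values are hypothetical, chosen only to show the scaling):

```python
# I^2 * R heating at a connector contact. Both contact resistances
# below are hypothetical illustration values, not measured figures.

def contact_watts(amps, contact_ohms):
    """Power dissipated in the contact junction itself."""
    return amps ** 2 * contact_ohms

healthy = contact_watts(9.5, 0.005)   # even load share, good 5 mOhm crimp
bad = contact_watts(20.0, 0.020)      # overloaded pin with a poor crimp

print(f"healthy pin: {healthy:.2f} W, bad pin: {bad:.1f} W")
```

Going from ~0.45W to 8W dissipated inside a tiny plastic-housed terminal is the difference between a warm connector and a melted one, which is consistent with the zero-tolerance point made in the quoted comments.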
 
One sold on the auction site for £9000

The buyer and the seller at some point before the auction:

 
IDK who Jon Gerow is, I think it's the person PCMag spoke to, however he seems knowledgeable (not that I'd know) and he has some more words to say on the subject for anyone who's interested...
He is the guy who started the johnnyguru website. An OG in the enthusiast scene and PSU industry. His website was the go-to for PSU analysis and reviews for years. He's been head of Corsair PSU R&D for a while now.
 