RTX 3080 lower quality capacitor issue

https://forums.evga.com/m/tm.aspx?m=3095238&p=1

EVGA admit it was an issue and the cause of the delay in getting their FTW3 cards out. Don't know why they couldn't have said that initially rather than blaming "unprecedented demand".

They say it took almost a week of testing to narrow down the issue. But personally I'm not convinced it's just down to these caps - that would be some real amateur-level design in the first place. Capacitors in this kind of usage are a cheap and effective way of hugely increasing your stability margin, to catch edge cases and/or real-world usage scenarios you can't easily predict.
 
Been following the 3080 for some time now. I think they should scrap boost clocks altogether and advertise only clocks confirmed stable through extensive testing.
If you want to overclock then use whatever software you prefer and do it yourself. It's like CPUs: some overclock better than others, and the same goes for GPUs.

I'm going to wait it out to see what AMD bring to the table and until the 3080 is sorted out, as I'm looking to do a total rebuild next year.
 
The AIBs must love nVidia right now - a lot seems to come down to a mixture of less-than-ideal AIB PCBs and poor binning for OC parts, with nVidia not giving them enough time to really work with the hardware, but they can't really bite the hand that feeds them :s
 
I think they should make graphics cards like mini motherboards, so that, for example, you could upgrade the VRAM, GPU, cooling or the board itself. Just like building a PC it would cost more, but you would get what you can afford, to the spec you want.
As graphics cards are getting more and more advanced, it makes sense.

Not sure if it's possible but it's an idea. ;)
 
AIB partners skimping on quality on cards costing from £700 to £1500, so efforts from AIBs to differentiate on price have come down to cutting corners elsewhere on the board and not conforming to the standard shown on the FE cards... it appears that if you win the silicon lottery you're not affected... hovering over the pre-order cancel button.
 
So if EVGA are saying 4 POSCAP + 20 MLCC (what Nvidia uses / what the FTW3 will use), and the Asus cards have 6 MLCC on the back, any idea what the other 14 are? MLCC?

Edit: I think 10 MLCCs in one block = 1 POSCAP, so 20 MLCCs form two blocks and therefore replace two POSCAPs.
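
To illustrate the rough arithmetic, here's a minimal Python sketch of how a block of MLCCs in parallel can stand in for a single POSCAP. The part values (47 µF per MLCC, 470 µF for the POSCAP, and the ESR/ESL figures) are purely illustrative assumptions, not numbers from any actual 3080 PCB or datasheet:

```python
# Hedged sketch: parallel combination of identical capacitors.
# All component values are made up for illustration, not from any real card.

def parallel_bank(capacitance_f, esr_ohm, esl_h, count):
    """Combine `count` identical caps in parallel: capacitance adds,
    while effective ESR and ESL divide by the number of parts."""
    return {
        "capacitance_f": capacitance_f * count,
        "esr_ohm": esr_ohm / count,
        "esl_h": esl_h / count,
    }

# Assumed parts: ten 47 uF MLCCs vs one 470 uF POSCAP.
mlcc_block = parallel_bank(47e-6, esr_ohm=0.005, esl_h=0.5e-9, count=10)
poscap = {"capacitance_f": 470e-6, "esr_ohm": 0.010, "esl_h": 1.5e-9}

print("10x MLCC block:", mlcc_block)   # same bulk capacitance as the POSCAP,
print("1x POSCAP     :", poscap)       # but lower effective ESR/ESL
```

On that crude model a block of ten MLCCs matches the POSCAP's bulk capacitance, which is presumably why one block can substitute for one POSCAP position.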
 
I have 25+ years' experience in electronics manufacturing and I've never heard of tantalum capacitors being called 'lower quality' or 'cheaper' than multi-layer ceramic caps. They are simply different types. In cost terms, multi-layer ceramic caps are probably more than 25x cheaper than tantalum caps. In turn, tantalum caps are larger and offer much bigger capacitance, so you can fit fewer of them on the board to achieve the same total capacitance. Boards like the Asus TUF with all-ceramic caps will likely be cheaper to build for that section, even considering the extra cost of SMT pick and place for 60 ceramic caps vs 6 tantalum caps.

They do have different characteristics like impedance, ESR etc., which may account for some boards being better than others. If the capacitors do turn out to be an issue, it's not going to be down to component quality or cost cutting; it's purely down to design, and presumably if Nvidia have a design spec that gives the OEM board makers a choice of what to fit, then the fault lies with Nvidia.
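
For anyone who wants to see what "different characteristics" means in practice, here's a rough Python sketch of the usual series-RLC impedance model for a decoupling cap. The ESR/ESL values are assumptions picked just to show the general trend (a parallel MLCC block tends to keep lower impedance at high frequency than a single tantalum-polymer cap of the same capacitance), not measured figures from any of these cards:

```python
import math

# Hedged sketch: |Z| of a capacitor modelled as C + ESR + ESL in series.
# All values below are assumed for illustration only.

def impedance_ohm(c_f, esr_ohm, esl_h, freq_hz):
    x_c = 1.0 / (2 * math.pi * freq_hz * c_f)  # capacitive reactance
    x_l = 2 * math.pi * freq_hz * esl_h        # inductive reactance
    return math.sqrt(esr_ohm ** 2 + (x_l - x_c) ** 2)

# Assumed: one 470 uF POSCAP vs a block of ten 47 uF MLCCs (same bulk capacitance).
poscap = dict(c_f=470e-6, esr_ohm=0.010, esl_h=1.5e-9)
mlcc_block = dict(c_f=470e-6, esr_ohm=0.0005, esl_h=0.05e-9)

for f in (1e5, 1e6, 1e7, 1e8):  # 100 kHz up to 100 MHz
    print(f"{f/1e6:8.1f} MHz  POSCAP {impedance_ohm(freq_hz=f, **poscap):.4f} ohm"
          f"   MLCC block {impedance_ohm(freq_hz=f, **mlcc_block):.4f} ohm")
```

With those made-up numbers both look similar at low frequency, but the MLCC block keeps a much lower impedance up at tens of MHz - which is the kind of trade-off being argued about; either choice can be a sound design depending on what the spec actually requires.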
Thank you for this - it's looking a bit caveman on some of the forums, with 'MLCC good, POSCAP bad' coming from people who had never heard of such a thing until today.
 
That happened before, many years ago.

Well, before the FE cards we had no Nvidia reference cards other than the review samples sent out. EVGA etc. bought the GPU and either had their own factory, or paid a factory, to put it on a PCB and fit the VRAM, SMDs, caps etc.

Nvidia gave them strict specs on what makes and values of parts to use, including Samsung VRAM.

This worked for years, then some took shortcuts (I forget which series it was), but one of the worst recalls was EVGA's: they had to recall cards and solder on higher-value resistors/SMDs to fix them, and they had also used non-Samsung VRAM.

That is when Nvidia took over: all cards were made for Nvidia, to Nvidia's spec, by a company (I can't remember the name), and EVGA etc. bought the whole card, stuck their branded sticker on the cooler and wrote/flashed their own BIOS onto it.

This went on for a while, but eventually they were allowed to change the cooler etc. - some still looked like the Nvidia reference blower, but you also got things like EVGA's ACX 1/2/3 coolers and the GTX 680 FTW blower cooler.


My advice: buy an FE, it is over-engineered.
 

Buildzoid's thoughts. TL;DW: He thinks it's plausible and places the blame firmly at Nvidia's feet, because Nvidia issues guidelines to the AIB partners on PCB design and component selection, approves all PCB designs in-house, and also designs the capacitor layout on the back of the GPU itself, providing AIBs with a few different options for how to populate it but not allowing them to actually change anything about its design. Which is why this is an issue across all sorts of custom cards, I suppose.
 

Buildzoid's thoughts. TL;DW: He thinks it's plausible and places the blame firmly at Nvidia's feet, because Nvidia issues guidelines to the AIB partners on PCB design and component selection, approves all PCB designs in-house, and also designs the capacitor layout on the back of the GPU itself, providing AIBs with a few different options for how to populate it but not allowing them to actually change anything about its design. Which is why this is an issue across all sorts of custom cards, I suppose.
On the flip side, the AIBs chose not to follow Nvidia's example, cheaped out on some materials AND charged a higher price than MSRP. Whether Nvidia signed off on the design or not, Nvidia didn't make them do this.
 

Buildzoid's thoughts. TL;DW: He thinks it's plausible and places the blame firmly at Nvidia's feet, because Nvidia issues guidelines to the AIB partners on PCB design and component selection, approves all PCB designs in-house, and also designs the capacitor layout on the back of the GPU itself, providing AIBs with a few different options for how to populate it but not allowing them to actually change anything about its design. Which is why this is an issue across all sorts of custom cards, I suppose.

Like he said in the video, this isn't generally an issue - on some GPUs a bunch of them are even missing without any ill effect. IMO there is more to the story, even if this is part of it.
 
On the flip side, the AIBs chose not to follow Nvidia's example, cheaped out on some materials AND charged a higher price than MSRP. Whether Nvidia signed off on the design or not, Nvidia didn't make them do this.
The bad cap design is one of Nvidia's examples though - presumably the cheapest one, aimed at entry-level models. Nvidia told the AIBs that that design was fine and would work as expected. They may not have forced the AIBs to use it, but if they provided it as an option for budget-oriented models then it goes without saying that companies were going to use it. The fact that just about every single AIB has been caught out and had to scramble to try and fix their cards weeks before launch is evidence enough that Nvidia have screwed up here. If it was just one AIB cutting corners then maybe you could lay all the blame at their feet, but this is clearly a standard, Nvidia-verified capacitor layout provided to them. Blaming them for assuming the company who designed the GPU knew what they were doing is silly.

Like he said in the video, this isn't generally an issue - on some GPUs a bunch of them are even missing without any ill effect. IMO there is more to the story, even if this is part of it.
His video was posted before EVGA confirmed that this is the problem, though. He does also say he doesn't have enough information to say for sure what's going on, but presumably now he does, given EVGA's statement. An AIB coming out and directly addressing and admitting the problem (and what it took to rectify it) is 100% cast-iron proof of the situation, without any need for further speculation. Of course, there's probably still going to be some card-to-card variance based on the silicon lottery and manufacturing tolerances. It probably is the case that some individual cards work fine with the dodgy layout. But the issue is clearly inherent and widespread when using that capacitor layout, or there wouldn't have been such panic behind the scenes and expensive, last-minute scrambling to replace it.
 