
Is there going to be a dual Fermi card?

With a revised PCB and some decent third-party cooling rather than your average reference jobbie, it should be possible to pull power usage down considerably and improve cooling efficiency a great deal. A dual 416SP card would work pretty well and be very competitive.

Do you have a source for this Rroff? :)
 
Hope you can wait a long time, dude. For your own safety, please don't hold your breath :)
I think DM has summed it all up quite nicely: forget any technical or financial reason it can or can't be done, they just don't matter. It's the simple number of working cores that's the issue.
Is there any news of Nvidia doing another production run?

With word of, basically, a few more cards available in May and possibly more a few months after that (so you're looking at September): no, there isn't, and why would they?

$200-300 loss per card: for a 9k-wafer run you're looking at a $50mil outlay, and due to losses, even if they sold them all you might get $10-20mil back if very lucky. The problem is fairly obvious. They got A3 back on, let's call it, Jan the 1st. If they sent wafers off then, 6 weeks later they'd get the first wafer back, and 6 weeks after that they'd have a 6-week production run. We're beyond that now, so realistically, over 6 weeks, they should have, let's just call it, a LOT of wafers. That would be on top of the hot lots they did (wafers they ran through before they got A3 back, praying to the gods it works, otherwise the money is literally down the drain). IF it were being produced and wafers were being started every week, they'd have more now; they'd have shipments coming in every few days in larger quantities, like AMD did.
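The arithmetic above can be sketched out quickly. Note these are the rough figures quoted in the post ($50mil outlay for a 9k-wafer run, $10-20mil back), not official numbers:

```python
# Back-of-the-envelope economics of the hypothetical 9k-wafer run.
# All inputs are the poster's rough estimates, not official figures.

wafers = 9_000
outlay = 50_000_000                      # quoted cost of the whole run
cost_per_wafer = outlay / wafers         # implied per-wafer cost

revenue_low, revenue_high = 10_000_000, 20_000_000  # quoted return if all sold
best_case_loss = outlay - revenue_high
worst_case_loss = outlay - revenue_low

print(f"~${cost_per_wafer:,.0f} per wafer")
print(f"net loss: ${best_case_loss / 1e6:.0f}M to ${worst_case_loss / 1e6:.0f}M")
```

Even in the best case the run loses tens of millions, which is why another production run makes no commercial sense.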

There wouldn't be talk of "another month before a second small batch get released".

My guess is that what little they had had to go through an extremely long binning process, on top of several new heatsink designs and lots and lots of testing to find the clockspeed/SP ratio that gives the highest yield they can.

Binning takes no time at all when half the cores off a wafer simply pass fine as a top-end part; when almost none do, each individual core needs to be run through a huge range of tests to find which shaders work and which don't.

Heck, more cores and full-scale production aren't commercially viable right now simply because of the amount of testing it takes to get so few parts. If it takes a couple of months to test a couple of thousand cores, then even if it hit full-scale production at 50k a month, they wouldn't be done testing the final few thousand before Xmas anyway.
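The bottleneck argument above can be put in numbers. Using the post's rough figures (a couple of thousand cores binned in a couple of months, against 50k cores a month from full production):

```python
# Sketch of the binning-bottleneck argument, using the post's rough figures.
# These are illustrative estimates, not measured rates.

test_rate = 2_000 / 2        # cores binned per month (~2k in ~2 months)
production_rate = 50_000     # cores produced per month at full scale

# How long it takes to bin a single month's worth of production:
months_per_production_month = production_rate / test_rate
print(f"{months_per_production_month:.0f} months of binning "
      f"per month of production")
```

With testing that far behind production, the untested backlog grows without bound, so ramping production harder doesn't actually get more sellable parts out the door.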

It's almost certainly been, well, basically EOL'd before it even launched.


As for a revised PCB dropping power: no, a tiny bit maybe. There are marginal savings to be made in shared power circuitry, but not much. The 448SP part isn't a 225W part as it is; it's pretty much a 250W part, Nvidia just redefined their TDP to make it sound less bad.

If they were going to make a dual Fermi, it would be SO far out of the "recommended" specs that it just wouldn't matter. 480W load or 600W, honestly it's so far out there it doesn't matter. Break the specs by 5W and Dell won't use it, so that market's gone; you may as well go the whole way at that point.


There's not a single person on earth who wouldn't buy a 600W 495GTX with 2x480GTXs in it, yet would suddenly see the card as great at 500W as a 490GTX with 2x448s or 2x416SPs in it.

Meanwhile, there are people who wouldn't buy a 600W 495GTX but would buy a 300-350W dual Fermi part with, say, 384SPs per GPU, because it's not insane power-wise. Problem is, they can't make that, and it would get demolished by a 5970, so again it would be pointless.

If they're going beyond the 300-350W bracket, they may as well make it a 1000W triple Fermi card; the people who would buy any of the possible cards would buy whatever it was without a care in the world.


Again, the single limiting factor is that no one has the spare cores, Nvidia aren't building them, and no one involved could turn a profit on it. The rest is irrelevant.
 
The PCI-e spec has a thermal/electrical advisory for desktop systems, but it doesn't limit the entire card to 300W; it only specifies how much you can draw from the slot and from each of the official power connectors (75W, 150W, etc.). At no point does it prevent a developer from adding another power source; the above is only an advisory they should adhere to.
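The budget described above is just the sum of the slot and connector allowances. A minimal sketch, using the nominal per-connector figures (75W slot, 75W 6-pin, 150W 8-pin):

```python
# Nominal PCI-e power budget: the "300W" figure is the sum of the slot plus
# the usual auxiliary connectors, not a hard cap on the card itself.

SLOT_W = 75        # available from the PCIe x16 slot
SIX_PIN_W = 75     # per 6-pin auxiliary connector
EIGHT_PIN_W = 150  # per 8-pin auxiliary connector

def board_budget(six_pins=0, eight_pins=0):
    """Nominal power budget for a card with the given connector loadout."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_budget(six_pins=1, eight_pins=1))   # the usual 300W high-end loadout
print(board_budget(six_pins=1, eight_pins=2))   # 450W with an extra 8-pin
```

Nothing electrically stops a vendor fitting more connectors; a 2x8-pin plus 6-pin board simply falls outside the usual advisory, which is the point being made.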

A dual-GPU Fermi card is certainly possible, as DM says, even if it's slightly impractical.

With a revised PCB and some decent third-party cooling rather than your average reference jobbie, it should be possible to pull power usage down considerably and improve cooling efficiency a great deal. A dual 416SP card would work pretty well and be very competitive.


Yes, it could work very well, but competitive?
I'm not sure it can be, in the time frame they can get it out; by then the 6*** series could be here, and it will have a harder time against that.
I do think that with a respin Fermi will be a monster on 28nm, but it all depends on what ATi brings out as well.
 
It's not going to work miracles, but you can easily drop power consumption by around 5% over the typical nVidia reference PCB design. Couple that with a well-implemented design for shared components and you could probably push that up by a good few percent.
 
So it can't go over 300W on a PCIe slot? Interesting. Does this mean graphics cards are bottlenecked now, until it's possible to sort out the power draw? Will the max load ever be increased?
 
There isn't much point in Nvidia releasing one. Fermi's performance per watt is terrible, so whatever they do, ATi will be able to beat it. A GTX470 X2 would perform very similarly to a 5970 yet use more than 400 watts.
 
So it can't go over 300W on a PCIe slot? Interesting. Does this mean graphics cards are bottlenecked now, until it's possible to sort out the power draw? Will the max load ever be increased?

There is nothing to stop it exceeding 300W; it is, however, a somewhat ridiculous amount to use.
 
So what's going to happen then? Technology advancement is just going to stop because Fermi draws a lot of power? I don't think so. We will see better, faster cards as technology advances, once again, just as we always have; once you stop the evolution, it's game over and there's no point inserting another coin.
 