
Nvidia's dual chip card waits for Antilles

http://www.fudzilla.com/graphics/item/20924-nvidias-dual-chip-card-waits-for-antilles
Nvidia has a dual GF110 card that is supposed to launch very soon. However, since AMD has postponed its dual-chip card to Q1 2011, it looks like Nvidia will do the same.

Sources close to the company are saying that Nvidia's partners are more or less ready and that the company could pull off a quick launch even in 2010, but it looks like it will only happen after AMD's Antilles Radeon HD 6990 dual-chip card comes out. Nvidia badly wants the dual-chip performance crown and wants to see and study Antilles before launching. The rumoured name for the dual GF110 card is Geforce GTX 590.

We still have to see the Cayman Radeon HD 6970 and HD 6950 cards in very late 2010, and whether they can beat the Geforce GTX 580 and the soon-to-launch Geforce GTX 570. The challenge for AMD is that making a big chip and running it fast is not that easy.

The last time ATI did this was with R600 more than three years ago, and things didn't go as smoothly as planned. The chip was quite hot and didn't perform that well. After that, graphics chief Rick Bergman said that ATI was changing course to make RV770 and, a year later, RV870, two chips that chased AMD's performance-per-watt dream.

Now this dream is apparently over, as Cayman, a massive chip, is powering Antilles, the Radeon HD 6990. Let's hope it all goes well for AMD.
 
If the 6990 turns out to be dual 6970s, and a 6970 is touted as having GTX480 performance, then Nvidia will certainly need to create quite a monster to beat it.

Triple slot cooler or something ^^?
 
I'm surprised they are considering a pair of GF110s as a single-card solution; the TDP on that would be massive (I'd expect a recommended 900W PSU requirement). I could see the full-fat GF104 (or GF114, as it's rumoured to be renamed) arriving as a dual-chip card (a sort of GTX560x2).

I suppose a stripped-down and lower-clocked GF110 could make it into a dual-GPU card (a GTX570x2 sort of thing). Still, it would be quite a beast.
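
For what it's worth, a quick back-of-the-envelope supports a PSU figure in that region (every number below is an illustrative assumption, not an official spec):

```python
# Rough PSU sizing for a hypothetical dual-GF110 card.
# All figures here are assumptions for illustration, not official specs.

GPU_TDP_W = 250          # assumed board power per full GF110 GPU
NUM_GPUS = 2
REST_OF_SYSTEM_W = 200   # assumed CPU, motherboard, drives, fans under load
PSU_HEADROOM = 0.75      # run the PSU at ~75% of its rated capacity

load_w = GPU_TDP_W * NUM_GPUS + REST_OF_SYSTEM_W
recommended_psu_w = load_w / PSU_HEADROOM
print(f"Estimated load: {load_w} W -> recommended PSU: ~{recommended_psu_w:.0f} W")
# -> Estimated load: 700 W -> recommended PSU: ~933 W
```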
 
I don't doubt it. Nvidia has certainly turned a few corners since the failed Fermi toaster launch back in April, whereas AMD has dropped the ball somewhat.

570gtx is probably coming soon too.

Cayman had better be epic, because it's going to be a tough sell as it's already scheduled to miss the massive Christmas sales.
 
Nvidia has certainly turned a few corners since the failed Fermi toaster launch back in April.

The GTX580 is still a very power-hungry card though; it just has better cooling this time round, and where efficiencies were made, clocks were increased. I still think a full-fat GF110 dual-GPU card is unlikely, but I'm often wrong. ;)
 
I still think a full-fat GF110 dual-GPU card is unlikely, but I'm often wrong. ;)

I agree. A dual-GF110 card will need to reduce per-core power requirements significantly. That said, unless Cayman turns out to offer amazing power efficiency, I don't see Antilles using anywhere near "full-fat" chips either.

The dual-GPU card game is really about performance-per-watt. While Nvidia have improved on this somewhat, they are still a long way behind AMD. I'm not expecting Cayman to match Cypress in the performance-per-watt stakes, but I still expect it to be more power efficient than the GTX580.

It will be interesting to see just how much both companies are willing to push the supposed "300W max TDP" envelope in order to get out a dual-GPU card.
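
A minimal sketch of why that envelope is so tight. The 300W comes from the PCIe limits (75W slot + 75W 6-pin + 150W 8-pin); the GF110 TDP is the GTX580's published 244W, while the Cayman figure and the non-GPU overhead are pure assumptions:

```python
# How far would each GPU have to be cut back to fit two on a 300 W board?

BOARD_LIMIT_W = 300      # 75 W slot + 75 W 6-pin + 150 W 8-pin
NON_GPU_OVERHEAD_W = 40  # assumed memory, VRM losses, fan, etc.

per_gpu_budget = (BOARD_LIMIT_W - NON_GPU_OVERHEAD_W) / 2
for name, single_tdp in [("GF110", 244), ("Cayman (assumed)", 200)]:
    cut = 1 - per_gpu_budget / single_tdp
    print(f"{name}: {per_gpu_budget:.0f} W per GPU, ~{cut:.0%} below single-card TDP")
# -> GF110: 130 W per GPU, ~47% below single-card TDP
# -> Cayman (assumed): 130 W per GPU, ~35% below single-card TDP
```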
 
I don't doubt it. Nvidia has certainly turned a few corners since the failed Fermi toaster launch back in April, whereas AMD has dropped the ball somewhat.

570gtx is probably coming soon too.

Cayman had better be epic, because it's going to be a tough sell as it's already scheduled to miss the massive Christmas sales.

They have??

They released a new series of cards... Nvidia haven't :P
 
I doubt the dual card will be two GTX580s on one PCB; much more likely they will use two lower-specced cores, like the GTX570 will be.
 
Amazing, I didn't check the link, but halfway through the text I knew it had to be Fudzilla.

The last time ATI did this was with R600 more than three years ago, and things didn't go as smoothly as planned. The chip was quite hot and didn't perform that well. After that, graphics chief Rick Bergman said that ATI was changing course to make RV770 and, a year later, RV870, two chips that chased AMD's performance-per-watt dream.

Now this dream is apparently over, as Cayman, a massive chip, is powering Antilles, the Radeon HD 6990. Let's hope it all goes well for AMD.
This is just unfounded garbage. He's clearly trying to make out that Antilles is like Fermi, and that Nvidia are great because they can make 'big, hot chips'. AMD will still have superior performance-per-watt and performance-per-mm², making their chips smaller and cooler.

For that reason I find it hard to believe that Nvidia could come out with a faster dual-GPU card that didn't crap all over the 300W limit (unless they cheat of course).
 
Skyrocket, as Phoenix said, re-releasing a respin to get full yields a full year later is not turning corners; it's honestly a little pathetic. (If they'd done a full respin after the first cores came back last July/August, they could have had the "580gtx" for April; instead, their insistence on avoiding a full respin and praying that a basic spin, three times over, would fix the problem took longer anyway and resulted in a non-512sp final part.)

Nvidia, let's see: they don't have a low-end part for Fermi, and their mobile stuff is doing horribly, as the power requirements for all the mobile Fermis are ridiculous. In the meantime they've fire-saled their midrange to gain a little market share, with parts that are bigger than a 5870 in size and cost yet sell for less than a 6850.

What corner did they turn exactly?

As for Duff-Man: there's quite literally no reason to assume Cayman won't match Cypress for performance-per-watt. It's incredibly rare for a new architecture to offer worse performance per mm² and per transistor; in fact I'm not sure I could name one.

Nvidia also hasn't really improved power efficiency. A 260gtx offers worse power efficiency than a 280gtx, a 5850 worse than a 5870, a 5750 worse than a 5770, etc. A core with bits disabled offers worse power efficiency than the full part; that's simply how life works. We've seen that a 580gtx in FurMark with the power limiter turned off uses noticeably MORE power, about exactly what you'd expect in reality. In general gaming it shows improved performance, with less waste from a disabled cluster.

That's neither here nor there though: it's not a new architecture, and the marginal power-efficiency gains are 90% just down to being a fully enabled core rather than any real improvement.

If Cayman uses the same amount of power and the same number of shaders but offers 20% better performance, it would, you know, have 20% better perceived power efficiency.

Assuming it's a 2GB card, memory power should go up on its own; we'll have to see. The actual shaders/ROPs/everything else would almost certainly be improved in a new architecture; that's just how life tends to work. More memory can alter the exact power draw: double-density chips with a bigger power draw, higher speed and higher voltage could use an extra 10% power, which could drop the overall power efficiency. But compare power against a 2GB 5870 and you'd see the probable efficiency increase.
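
Sketching that arithmetic out (the performance units and the 10% memory figure are illustrative; the 188W baseline is just the 5870's published TDP used as a stand-in):

```python
# Perceived performance-per-watt when performance rises at constant power,
# and when extra memory adds to board power. Numbers are illustrative only.

base_perf, base_power = 100.0, 188.0      # arbitrary perf units; 5870-ish watts

# +20% performance at the same power -> +20% perf/W
same_power_gain = (base_perf * 1.20 / base_power) / (base_perf / base_power)
print(f"same power: {same_power_gain:.2f}x perf/W")       # 1.20x

# same +20% performance, but 2 GB of hungrier memory adds ~10% board power
more_mem_gain = (base_perf * 1.20 / (base_power * 1.10)) / (base_perf / base_power)
print(f"+10% memory power: {more_mem_gain:.2f}x perf/W")  # ~1.09x
```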
 
They have??

They released a new series of cards... Nvidia haven't :P

I wouldn't exactly call two cards (6850 and 6870) a new series, but I do agree AMD haven't dropped the ball yet. The current rumours would indicate that things are not quite going the way they would like, i.e. the possible delay of the 6900 card launch, Antilles being pushed back into 2011, etc.

AMD might still pull it all together and everything will be fine for them, but the rumours are flying around, and normally with these things there is some slim element of truth to them, so all would certainly not seem well in the AMD camp.
 
I agree, Dutch; far more likely to be two GTX570s.

In general, two full 512sp cores at, say, a 6% lower clock would almost certainly use less power than two 480sp cores with a 6% higher clock, both offering similar performance.

512sp cores will "mostly" be from the centre of the wafer, where the silicon is marginally better, and will use a few watts less than the same cores from the edge of the wafer. I.e. 512sp cores are chosen because they are the best silicon with the best power characteristics; using worse cores on a dual GF110 would be the worst way to go.

There's a reason the 5970 uses two of the best 5870 cores they could make, downclocked ;)

AMD and Nvidia probably aren't giving a monkey's about the 300W "limit" this time around. The 5970, whose performance people knock, is downclocked ONLY to get inside 300W; look at the main launch material, which talked heavily about overclocking and about how the card could handle 400W easily, etc. It was never meant to stay un-overclocked; being 300W was essentially a gimmick.

Every core and every card was capable of running at 5870 speeds, and it was basically launched as "the 5970, wink wink, nudge nudge, overclock it to full 5870 speeds if you really want".

If Nvidia/AMD both go that route again, maybe it's "here's our dual GF110/Cayman with 400MHz clocks and 250W", then in small print, "these cores will do 800MHz with no issues".
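
A toy model of that clock/voltage trade-off, assuming dynamic power scales as units × f × V² with voltage tracking clock near the stock point (all numbers are illustrative, not measured):

```python
# Back-of-envelope for the 512sp-downclocked vs 480sp-overclocked argument.

def rel_power(units, f_scale, v_scale):
    # dynamic power ~ active units * frequency * voltage^2
    return units * f_scale * v_scale ** 2

full = rel_power(512, 0.94, 0.94)   # full core, 6% lower clock and voltage
cut  = rel_power(480, 1.06, 1.06)   # cut core, 6% higher clock and voltage
print(f"power: full 512sp @ -6% = {full:.0f}, 480sp @ +6% = {cut:.0f}")
# -> 425 vs 572: the downclocked full core comes out well ahead

# Throughput ~ units * f, so performance lands in the same ballpark:
print(f"throughput: {512 * 0.94:.0f} vs {480 * 1.06:.0f}")   # 481 vs 509
```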
 
I don't doubt it. Nvidia has certainly turned a few corners since the failed Fermi toaster launch back in April, whereas AMD has dropped the ball somewhat.

570gtx is probably coming soon too.

Cayman had better be epic, because it's going to be a tough sell as it's already scheduled to miss the massive Christmas sales.

Sure. :p
 
Skyrocket, as Phoenix said, re-releasing a respin to get full yields a full year later is not turning corners; it's honestly a little pathetic. (If they'd done a full respin after the first cores came back last July/August, they could have had the "580gtx" for April; instead, their insistence on avoiding a full respin and praying that a basic spin, three times over, would fix the problem took longer anyway and resulted in a non-512sp final part.)

Nvidia, let's see: they don't have a low-end part for Fermi, and their mobile stuff is doing horribly, as the power requirements for all the mobile Fermis are ridiculous. In the meantime they've fire-saled their midrange to gain a little market share, with parts that are bigger than a 5870 in size and cost yet sell for less than a 6850.

What corner did they turn exactly?

As for Duff-Man: there's quite literally no reason to assume Cayman won't match Cypress for performance-per-watt. It's incredibly rare for a new architecture to offer worse performance per mm² and per transistor; in fact I'm not sure I could name one.

Nvidia also hasn't really improved power efficiency. A 260gtx offers worse power efficiency than a 280gtx, a 5850 worse than a 5870, a 5750 worse than a 5770, etc. A core with bits disabled offers worse power efficiency than the full part; that's simply how life works. We've seen that a 580gtx in FurMark with the power limiter turned off uses noticeably MORE power, about exactly what you'd expect in reality. In general gaming it shows improved performance, with less waste from a disabled cluster.

That's neither here nor there though: it's not a new architecture, and the marginal power-efficiency gains are 90% just down to being a fully enabled core rather than any real improvement.

If Cayman uses the same amount of power and the same number of shaders but offers 20% better performance, it would, you know, have 20% better perceived power efficiency.

Assuming it's a 2GB card, memory power should go up on its own; we'll have to see. The actual shaders/ROPs/everything else would almost certainly be improved in a new architecture; that's just how life tends to work. More memory can alter the exact power draw: double-density chips with a bigger power draw, higher speed and higher voltage could use an extra 10% power, which could drop the overall power efficiency. But compare power against a 2GB 5870 and you'd see the probable efficiency increase.

Hi DM. Let me point out that I know very little about GPU design and am basing my assumptions on my electrical knowledge (buildings). As I see it, if you increase the physical size of the GPU you would need to pump in more volts to overcome the larger resistance the current has to pass through (the GPU mm²), which in turn leads to more losses (volt drop). As we all know in the electrical world, power usage is reactive, so if any side of the formula is changed it has a direct effect on the other factors. Ergo, an increase in GPU size, and therefore the need for more volts, does not create a linear increase in power usage, as twice the die size would require more than twice the power (including losses)?

Interested in your thoughts (or anyone's, for that matter, who knows how it works within silicon).
 
Hi DM. Let me point out that I know very little about GPU design and am basing my assumptions on my electrical knowledge (buildings). As I see it, if you increase the physical size of the GPU you would need to pump in more volts to overcome the larger resistance the current has to pass through (the GPU mm²), which in turn leads to more losses (volt drop). As we all know in the electrical world, power usage is reactive, so if any side of the formula is changed it has a direct effect on the other factors. Ergo, an increase in GPU size, and therefore the need for more volts, does not create a linear increase in power usage, as twice the die size would require more than twice the power (including losses)?

Interested in your thoughts (or anyone's, for that matter, who knows how it works within silicon).

Well, until we know the specs (and GPU design) for Cayman it's difficult to make any specific comments. The best we can do is talk in general terms about GPU design. You are correct though, that these things are never linear.

I described in this post some of the reasons why a GPU redesign to improve scalability (i.e. for "future-proofing") can lead to an increase in the proportion of the GPU used for 'non-shader' components. A natural consequence of this could be a slight decrease in performance-per-watt (the price of creating an architecture that can scale over the next few generations), assuming the same manufacturing process is used. Of course, everything is described in general terms, because we don't have specific information on Cayman.

Drunkenmaster is correct that it would be unprecedented for a new GPU architecture to be poorer than a previous design in terms of performance-per-watt. But it's also unprecedented for a 'truly new' architecture to be produced on the same manufacturing process as the previous generation (40nm in this case), so we're in untested waters here. A shrink in the manufacturing process will always lead to improvements in per-watt performance. These improvements will naturally drown out any performance-per-watt losses which may occur from moving to the more complex and scalable architecture.
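
For the general shape of it, here's a crude sketch of the standard CMOS power model (the constants and the leakage exponent are arbitrary assumptions; it only illustrates why doubling die size plus a voltage bump is super-linear, not anything specific to Cayman):

```python
# Crude CMOS power model behind the "never linear" point: dynamic power
# scales with switched capacitance (~die area), clock and V^2, while
# leakage scales with area and climbs steeply with voltage.

def gpu_power(area_scale, v_scale, f_scale):
    dynamic = 1.0 * area_scale * f_scale * v_scale ** 2  # P_dyn ~ a*C*V^2*f
    leakage = 0.3 * area_scale * v_scale ** 3            # toy leakage term
    return dynamic + leakage

small = gpu_power(1.0, 1.00, 1.00)
big   = gpu_power(2.0, 1.05, 1.00)   # double the area, +5% voltage to feed it
print(f"2x area at +5% voltage -> {big / small:.2f}x the power")
# -> ~2.23x: more than double the power for double the die size
```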
 
Hi DM. Let me point out that I know very little about GPU design and am basing my assumptions on my electrical knowledge (buildings). As I see it, if you increase the physical size of the GPU you would need to pump in more volts to overcome the larger resistance the current has to pass through (the GPU mm²), which in turn leads to more losses (volt drop). As we all know in the electrical world, power usage is reactive, so if any side of the formula is changed it has a direct effect on the other factors. Ergo, an increase in GPU size, and therefore the need for more volts, does not create a linear increase in power usage, as twice the die size would require more than twice the power (including losses)?

Interested in your thoughts (or anyone's, for that matter, who knows how it works within silicon).

In the world of semiconductors it's not as cut and dried as that. You are correct that the further electricity travels the more resistance there is, but as a manufacturing process improves, so do other factors such as electrical interference and leakage, which affect where you can physically place things on a chip and how robust the design has to be.

As a process matures these aspects become less of an issue, so things can be placed closer together and optimised for current flow, improved gating, etc. This is what we are beginning to see now.
 
In general, two full 512sp cores at, say, a 6% lower clock would almost certainly use less power than two 480sp cores with a 6% higher clock, both offering similar performance.

Hadn't thought of that, too busy thinking about which GPU they will use :D

But it will be interesting to see what they do
 