
NVIDIA to Name GK110-based Consumer Graphics Card "GeForce Titan"

So this is a single-GPU card which is 85% as fast as a 690?

wow :)

I'm surprised how strict nVidia is being with its partners though - not even allowing a custom brand sticker? :rolleyes:
 
Supposedly, it will be 85% as fast as a GTX 690.

Unlikely.

It's a big chip and will have heat/clocking problems if run at the same speed as a GK104.

65-70% would be a better estimate.

I also read a rumour of only 14 out of 15 shader clusters being enabled. That means I won't be rushing out to buy one, as an updated version with all 15 shader clusters will be launched later.
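As a rough sanity check on that 65-70% estimate, here's a back-of-envelope shader-throughput comparison. All figures below are rumoured or assumed, not confirmed specs; in particular the 732 MHz clock is just the Tesla K20X figure, used as a conservative floor for what the GeForce part might run at:

```python
CORES_PER_SMX = 192                 # Kepler: 192 shaders per SMX cluster

gk110_cores = 14 * CORES_PER_SMX    # rumoured: 14 of 15 SMX enabled
gk110_mhz = 732                     # assumed: Tesla K20X clock as a floor

gtx690_cores = 2 * 1536             # two fully enabled GK104 dies
gtx690_mhz = 915                    # GTX 690 base clock

ratio = (gk110_cores * gk110_mhz) / (gtx690_cores * gtx690_mhz)
print(f"GK110 card as a fraction of GTX 690 raw throughput: {ratio:.0%}")  # → 70%
```

At Tesla clocks this lands right at the top of the 65-70% range; it also ignores SLI scaling losses on the 690, which would push the effective ratio in the GK110 card's favour.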
 

It wouldn't make much sense at all for them to release it; at those sorts of speeds it's barely any faster than a 680, but would cost nVidia significantly more to produce.

I think a chip of this kind is far more feasible in the consumer space on the next round of die shrinks.
 
I don't buy it; it'll mess up the hierarchy with the 690, the yields won't be great, and there's no way it'll be $900.

It could be a limited-run, loss-making card, so they can boast they are the best for gaming, or some marketing stuff like that.

However, perhaps these companies could actually focus on making better cards under £200:

http://tpucdn.com/reviews/Gigabyte/GTX_660_OC/images/perfrel_1920.gif

Someone with an HD 6870/GTX 560/GTX 560 Ti is not seeing massive performance progression at 1080p if they move up to an HD 7850/GTX 660/HD 7870. The £100 to £160 cards are rather underwhelming ATM.
 

A bit like it does not make sense to release a mid-range card and pretend it's a top-of-the-range GPU; that's nVidia for you.
 
Not read the article (at work), but if there's any substance in the rumours they have my interest. I will have a read when I get home, and cheers for the info Cat :)
 
Seems to say that the product group will be called 'GeForce Titan' as opposed to 'GTX'.

Has anyone uncovered what Jen-Hsun meant by 'Project Thor'? He 'accidentally' let it slip while introducing Shield, saying it was a codeword he shouldn't have used, and part of the Shield demo was the movie Thor.
 

If the rumours I mentioned a couple of months back are correct, they have a stockpile of cores that failed to make the grade for the K20s but are completely viable for GeForce parts - so while they won't be hitting the full 2,880 SPs, they are probably capable of being clocked up a bit from the Tesla specification. Though I think the 85%-of-a-690 performance claim is a bit on the high side - I think someone's based it on the supposed SP count of the GK110 plus the clock speed of the GK104, which isn't going to happen.
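Putting rough numbers on why that claim looks optimistic: using the rumoured 14-SMX configuration and the GTX 690's base clock (both assumptions, not confirmed specs), you can back out the clock the GK110 card would need to hit 85% of the 690's raw shader throughput:

```python
gtx690_throughput = 2 * 1536 * 915   # cores x MHz across both GK104 dies
gk110_cores = 14 * 192               # rumoured: 14 of 15 SMX enabled

# Clock a 14-SMX GK110 would need for 85% of the 690's raw throughput:
needed_mhz = 0.85 * gtx690_throughput / gk110_cores
print(f"required GK110 clock: {needed_mhz:.0f} MHz")  # → 889 MHz
```

That's a jump of over 150 MHz from the Tesla K20X's 732 MHz - plausible for a consumer part with a higher power budget, but a long way from a given on a die this size.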
 

Well exactly, the clock speed won't be as high, and they won't be getting the full shader count if they're using binned chips, so it makes you wonder what the whole point is?

The chips will be massive, what, 550 mm²? They know how that went when they tried it with Fermi; they're about 2x the size of GK104, so will cost at least double to produce, probably a bit more to account for wastage on a circular wafer.

It doesn't make any sense at all to try and make a consumer graphics card out of them.
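The wafer-economics point can be sketched with the standard dies-per-wafer approximation. The ~294 mm² GK104 area is the commonly quoted figure; the 550 mm² GK110 area is the rumour above; yield losses are ignored, which actually flatters the big die:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order estimate: wafer area over die area, minus an
    edge-loss term for partial dies around the circular wafer edge."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

gk104 = dies_per_wafer(294)   # GK104 at ~294 mm^2
gk110 = dies_per_wafer(550)   # GK110 rumoured at ~550 mm^2

print(f"GK104 candidates per wafer: {gk104:.0f}")   # → 202
print(f"GK110 candidates per wafer: {gk110:.0f}")   # → 100
print(f"cost per die: {gk104 / gk110:.2f}x")        # → 2.01x
```

So roughly 202 vs 100 die candidates per 300 mm wafer - a bit over 2x the cost per die before yield, and random defects hit a 550 mm² die far harder than a 294 mm² one, which is the Fermi lesson.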
 

I was thinking about the performance claim of 85% of a GTX 690.

If they choose the games wisely, it is easy to justify.

Just pick games that don't scale well in SLI.
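A toy illustration of how benchmark selection moves that number around - every score and scaling factor here is hypothetical:

```python
# Hypothetical scores: GTX 680 = 0.70, single-GPU GK110 card = 1.00.
titan = 1.00
gtx680 = 0.70

sli_scaling = {                      # how much the 2nd GK104 adds per title
    "scales well": 1.90,
    "scales poorly": 1.20,
    "no SLI profile": 1.00,
}

for title, s in sli_scaling.items():
    gtx690 = gtx680 * s              # GTX 690 ~= a 680 x its SLI scaling
    print(f"{title}: single GPU is {titan / gtx690:.0%} of a GTX 690")
```

Weight a launch benchmark suite toward the bottom two rows and an "85% of a GTX 690" headline is easy to manufacture, even if the all-games average is much lower.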
 

I think the point is - though I have no way to check the actual facts currently - they have a fair number of cores that didn't make the grade for Oak Ridge's Titan, and they had to produce over 20,000 parts for that (fully populated nodes plus readily available replacements). Given nVidia's track record for yields, that means they probably have a lot of salvaged parts they'd otherwise be writing off - or can palm off on consumers and make some more money.
 

If this is what they are doing, then once they have used up all the cores that did not make the grade, the next step is to launch a card with all 15 shader clusters enabled. That will not make anyone who buys one of these GeForce Titans very happy.
 

I understand that, but it makes no sense for a marginal speed boost over a 680.

It'd make more sense for them to just put them in compute based products, which it seems like they have done.

The Tesla cards that use these chips come in two versions, the K20 and the K20X - one has 13 of the 15 blocks enabled and the other 14, so neither ships fully enabled.

So personally, I don't really buy that they've been stockpiling them in that fashion.
 