
NVIDIA to Name GK110-based Consumer Graphics Card "GeForce Titan"

Heh... I'm not really one to blow my own trumpet, but... CALLED IT :D

If/when we see GK110, I'm fairly confident that we'll see:


a) Significantly more than "marginal" increases in performance. In fact, I'd wager that it will be the biggest performance jump we've ever seen within the same manufacturing process. 40% at the very least, and likely to be higher. Remember, the GK110 chip is twice the size of GK104! [7.1Bn transistors vs 3.54Bn for the GTX680]

b) Pricing will reach new heights for a single-GPU card. Unless AMD also go for a monster GPU (which they've not been keen to do in the past), Nvidia will have no reason to price the chip competitively. The GK104 replacement (GK114?) is likely to go up against AMD's high-end Sea Islands chip, with GK110 at a "super high-end" price point of its own. I'm fully expecting prices to be well over £500.


GK110 will be no ordinary intra-process refresh. I suspect the pricing will be brutal, but the performance may just soften the blow.


As much as I like ridiculously overpowered GPUs, I doubt I will be buying one if they're pushing £700.
 
Unlikely

It's a big chip and will have heat/clocking problems if run at the same speed as a GK104.

65-70% would be a better estimate.

I'm not sure about this: the 680 has plenty of headroom in terms of power and heat, so I think it is possible. Especially given the current '680' was really the 660/670.

Bear in mind the 7970's die is 24% bigger than the 680's, and the 680 only has a TDP of 195W. This is in stark contrast to all previous first-gen X80 GTX Nvidia products.


According to TPU, the 680 has ~60% of the performance of the 690, so the question is whether the 780 can get a 40% improvement over the 680 to hit 85%. It's possible, I think, but obviously time will tell.
 
Heh... I'm not really one to blow my own trumpet, but... CALLED IT :D




As much as I like ridiculously overpowered GPUs, I doubt I will be buying one if they're pushing £700.

This is why you are my hero Duff :) Checking out your claims over the past couple of years, you have been pretty spot on.

Do you do autographs? :p
 
Heh - thanks Gregster :p Here's your autograph: http://www.youtube.com/watch?v=I7JY2V_GEuE



Regarding the performance issue - don't confuse "85% the speed of a GTX690" with "85% faster than a GTX680"...

Let's be optimistic, and say that a GTX 690 is 90% faster than a GTX680. Then a "GeForce Beast" at 85% the speed would be just over 60% faster than a GTX680. In reality I'd adjust this number down a little to account for 'wishful thinking'. When you consider that a refreshed GK114 ("GTX780") part is likely to be 15-20% faster than a GTX680, the improvement is not quite so dramatic.
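To make that conversion concrete, here's a quick sketch of the arithmetic; the 90% figure for the GTX690 over the GTX680 is the optimistic assumption from above, not a measured number:

```python
# Converting "85% the speed of a GTX 690" into "% faster than a GTX 680".
# Assumption (optimistic, from the discussion above): a GTX 690 is ~90%
# faster than a GTX 680.
gtx690_vs_680 = 1.90          # GTX 690 relative to a GTX 680
titan_vs_690 = 0.85           # rumoured: 85% of GTX 690 speed
titan_vs_680 = titan_vs_690 * gtx690_vs_680
# 0.85 * 1.90 = 1.615, i.e. just over 60% faster than a GTX 680
print(f"{(titan_vs_680 - 1) * 100:.1f}% faster than a GTX 680")
```

So "85% of a GTX690" is a ~60% uplift over a GTX680, not 85%.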

I'd say that a 50% performance bump over the GTX680 is probably not too far off reality. It all depends on the clockspeed that the GK110 cores can reach. I can't see them hitting 1GHz, but 800MHz seems realistic. At 800MHz a 2880-core GK110 would be pushing 50% more pixels than a GTX680, and with a 384-bit interface, would have ~50% more memory bandwidth. Still, at 800MHz I'd expect a GK110 to draw a hell of a lot more power than a GTX680 - I would not be at all surprised to see this card knocking at the door of 300W.
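Those percentages fall out of simple cores x clock and bus-width scaling. A back-of-the-envelope sketch (assumed figures: GTX680 at 1536 cores / 1006MHz base with a 256-bit bus; memory clocks assumed equal; architectural changes and boost clocks ignored):

```python
# Rough scaling: shader throughput ~ cores x clock.
gtx680_cores, gtx680_mhz = 1536, 1006      # GTX 680 base spec
gk110_cores, gk110_mhz = 2880, 800         # speculative full GK110
shader_ratio = (gk110_cores * gk110_mhz) / (gtx680_cores * gtx680_mhz)

# Bandwidth scales with bus width if the memory clock stays the same.
bus_ratio = 384 / 256                      # 384-bit vs 256-bit interface

print(f"~{shader_ratio:.2f}x shader throughput, ~{bus_ratio:.2f}x bandwidth")
```

Both ratios land at roughly 1.5x, which is where the "~50% more" figures come from.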
 
Just skimming that whitepaper, it amazes me seeing all the different parts of a GPU. I'd love to go to Nvidia or AMD and have someone explain to me exactly how a GPU is made in detail; the complexity astounds me.
It's also no wonder that the actual physical product delivers slightly less performance than what was stated on paper.
 
Just skimming that whitepaper, it amazes me seeing all the different parts of a GPU. I'd love to go to Nvidia or AMD and have someone explain to me exactly how a GPU is made in detail; the complexity astounds me.

You'd be there a long time!

In terms of the number of individual interacting components, modern GPUs are among the most complex things created by man. Several billion transistors working together in "perfect" harmony.
 
You'd be there a long time!

In terms of the number of individual interacting components, modern GPUs are among the most complex things created by man. Several billion transistors working together in "perfect" harmony.

I imagine the first 100% fault-free die has yet to emerge, or it's mounted in a block of glass... somewhere :D
 
7.1bn transistors!!

That is double the 680.

The whitepaper says a full implementation can have 15 SMX units; the 680 has 8.

This will be the full fat Kepler we were all expecting months ago: big, hot, loud and fast :D
 
Just found this.....

"The GK110 has 15 SMX-units, equipped with a total of 2880 CUDA cores. In the Titan, one SMX unit is disabled, leaving 2688 cores active (almost twice the amount found on a GTX 680). This allows Nvidia to sell partly defunct chips, and keep prices at a more acceptable level. The GeForce Titan has an impressive 6 GB of GDDR5 memory, linked through a 384-bits interface. The GPU is reportedly clocked at 732 MHz, while the memory has an effective clock speed of 5.2 GHz. The card should have a 235 watts TDP.

Nvidia's GeForce Titan is expected to launch somewhere in February, bearing a pricetag of about $899 (£565)"

full link..http://uk.hardware.info/news/32978/nvidias-geforce-titan-with-gk110-gpu-launches-next-month
 
$900?
Say that relates to £700 including tax and general EU boning. Why not just get a GTX690 and have 100% the speed of a GTX690? or 2 GTX680s

Just seen the latest GTX690 prices.... holy mofo.
 
$900?
Say that relates to £700 including tax and general EU boning. Why not just get a GTX690 and have 100% the speed of a GTX690? or 2 GTX680s

Just seen the latest GTX690 prices.... holy mofo.

There have always been cheaper ways to put similar numbers in benchmark tables, but nothing compares to a single-chip card, all considered... especially if that is a big mofo single card :D
 
$900?
Say that relates to £700 including tax and general EU boning. Why not just get a GTX690 and have 100% the speed of a GTX690? or 2 GTX680s

Just seen the latest GTX690 prices.... holy mofo.

I think when it comes to performance the GTX 690 is good at overclocking; running them at GTX 680 clocks is no effort at all and there is still a lot in the tank.

When Nvidia launches the Titan they will struggle to get them to run at a high clock speed, and overclocking will also be limited.

The biggest advantage to getting a Titan is the 384-bit bus, which will make them very attractive to anyone into multi-monitor gaming (somewhere a GTX 690 can not really go).
 
"The GK110 has 15 SMX-units, equipped with a total of 2880 CUDA cores. In the Titan, one SMX unit is disabled, leaving 2688 cores active (almost twice the amount found on a GTX 680). This allows Nvidia to sell partly defunct chips, and keep prices at a more acceptable level. The GeForce Titan has an impressive 6 GB of GDDR5 memory, linked through a 384-bits interface. The GPU is reportedly clocked at 732 MHz, while the memory has an effective clock speed of 5.2 GHz. The card should have a 235 watts TDP."

Hmm... if those figures are correct, then the card would push only ~30% more pixels than the GTX680. I was expecting a slightly higher clockspeed, really. We will have to see whether GK110 includes the kind of architectural tweaks we have come to expect from a mid-process refresh - if so, it could bump the relative performance up a little - but bear in mind this will be going up alongside the GK104 refresh (GK114?), which we can probably expect to offer a ~10-20% performance increase based on past releases.

Anyway, with a 732MHz base clock and 2688 shaders, the "85% the speed of a GTX690" claim seems like a very optimistic assessment.
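The same cores x clock scaling with the rumoured Titan figures bears that out (a sketch only; it ignores boost clocks and any architectural tweaks):

```python
# GTX 680: 1536 cores at a 1006 MHz base clock.
# Rumoured Titan: 2688 cores at a 732 MHz base clock.
gtx680 = 1536 * 1006
titan = 2688 * 732
ratio = titan / gtx680
# 1967616 / 1545216 ~ 1.27, i.e. only ~27% more raw shader throughput
print(f"~{(ratio - 1) * 100:.0f}% more shader throughput than a GTX 680")
```

A long way short of "85% of a GTX690" on paper.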

$900?
Say that relates to £700 including tax and general EU boning. Why not just get a GTX690 and have 100% the speed of a GTX690? or 2 GTX680s

Just seen the latest GTX690 prices.... holy mofo.

This really isn't a card aimed at those looking for "bang for buck". It's aimed at those who want "the fastest GPU at any price". Without a 'monster-sized GPU' from AMD to compete, Nvidia can pretty much charge whatever premium they want.

Remember that GK110 is primarily designed for GPU compute, rather than gaming. A large proportion of the transistors are dedicated to fast interconnects and other GPGPU-specific functions that are of little use in gaming. GK114 will be a more gaming-oriented card (as GK104 was), and should represent much better value for money and power efficiency. Still, I wouldn't hold your breath for pricing being any better than this generation (so probably around £450 at launch).
 
Question Mr Duf-Man ;)

Way back when we got final details of the released Kepler cards ("680"), a few of us decided it was maybe not worth the jump from Fermi. I'm sure you kept your 580, but I ended up with a couple of 680s (I only ever run single-card setups).

I would describe the 680 as a good card - very power efficient etc. - but a finicky one. Throw any game at a Fermi and it would do a steady job; with the 680, performance is either very good or very bad, and there are often internal synchronisation-type issues in and around v-sync frequencies. Interestingly, all the fixes like Adaptive V-sync and frame rate limiters were introduced with this card ;) never needed all that malarkey with Fermi!

I realise it's a tough one, especially if you have not run one of the cards, but I would be interested in your take on that.
 
Interestingly, all the fixes like Adaptive V-sync and frame rate limiters were introduced with this card ;) never needed all that malarkey with Fermi!

Adaptive/dynamic vsync is something that's been coming for a while; it was first used (outside of drivers) as smart vsync in RAGE (id Tech 5).

Driver-level framerate limiting is something quite a few people have been requesting for a while too, especially developers, as people don't like it when their GPUs churn out thousands of FPS in simple scenes/cutscenes etc., making the capacitors and other components whine and sometimes getting very hot while doing not much at all.
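For illustration, the core idea behind a framerate limiter is just to sleep out whatever is left of each frame's time budget, so the GPU idles instead of spinning at thousands of FPS. A minimal sketch of the concept (not how any actual driver implements it):

```python
import time

def limit_fps(frame_start: float, cap: float = 60.0) -> None:
    """Sleep for whatever remains of this frame's 1/cap-second budget."""
    budget = 1.0 / cap
    elapsed = time.perf_counter() - frame_start
    if elapsed < budget:
        time.sleep(budget - elapsed)

# Usage: in the render loop, record the start time, draw the frame,
# then sleep out the rest of the budget before starting the next frame.
for _ in range(3):
    start = time.perf_counter()
    # ... render the frame here ...
    limit_fps(start, cap=120.0)
```

A real driver-level limiter works below the API rather than in the game loop, but the budget-and-wait principle is the same.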
 