
NVIDIA to Name GK110-based Consumer Graphics Card "GeForce Titan"

Adaptive/dynamic vsync is something that's been coming for a while; it was first used (outside of drivers) as "smart vsync" in RAGE (id Tech 5).

Driver-level framerate limiting is something quite a few people have been requesting for quite a while too, especially developers, as people don't like it when their GPUs are churning out thousands of FPS in simple scenes, cutscenes, etc., making the capacitors and other components whine and sometimes getting very hot while doing not much at all.
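For anyone wondering what a frame cap actually does, the core idea is trivial. Here's a rough application-side sketch of the general principle (nothing to do with how the driver actually implements it, and render_frame() is just a made-up placeholder):

    import time

    TARGET_FPS = 60
    FRAME_BUDGET = 1.0 / TARGET_FPS   # seconds allowed per frame at the cap

    def render_frame():
        """Placeholder for the real rendering work."""
        pass

    while True:
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        # If the frame finished early (simple scene, cutscene, menu...),
        # sleep off the rest of the budget instead of churning out
        # thousands of FPS and making the components whine.
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)

A driver-level limiter does the equivalent throttling without the game having to cooperate, which is why people keep asking for it.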

Yes, sure, but adaptive vsync was pushed in the card's release blurb. I agree max FPS limiters are needed to stop the inductors singing, but EVGA introduced frame capping at the front end of their PrecisionX and were telling people with problems to use it instead of vsync.
 
A question, Mr Duf-Man ;)

Way back when we got the final details of the released Kepler cards ("680"), a few of us decided they were maybe not worth the jump from Fermi. I'm sure you kept your 580, but I ended up with a couple of 680s (I only ever run single setups).

I would describe the 680 as a good card, very power efficient etc., but a finicky one. Throw any game at a Fermi and it would do a steady job; do the same with the 680 and performance is either very good or very bad, plus there are often internal synchronisation-type issues in and around vsync frequencies. Interesting that all the fixes like adaptive vsync and frame rate limiters etc. were introduced with this card ;) never needed all that malarkey with Fermi!

I realise it's a tough one, especially if you have not run one of the cards, but I would be interested in your take on that.

Well, actually I am still running a decrepit old GTX480 :eek:

I agree that the GTX680 is quite an impressive card on paper. I think that Nvidia took the right approach by stripping down the GPU-compute features to provide a "leaner", more power-efficient gaming card for the mass market. I really didn't agree with the pricing though - this entire generation has been overpriced in my humble opinion - especially from the green side.

My intention was to wait until prices dropped a little and then pick up either a GTX680 or a 7970, but prices have not dropped as much as I was hoping. Combine this with the fact that I rarely have time to enjoy gaming these days, and a load of unexpected other expenses (a.k.a. life... :( ), and I never got around to it. I'll probably pick up either an 8970 or a GTX780 on release I think... When I have time I want to play through Far Cry 3, Metro 2033 and Battlefield 3. My crusty old GTX480 can't really handle these games at 2560x1600 res.



Anyway, regarding your question, it's hard for me to say anything concrete as I have never used a 680, but I imagine most of the "issues" are related to drivers in some way. These days GPUs are basically just massively parallel number-crunching boxes, and the "quality" of the output (rather than the performance) is mostly determined by the instructions you feed to it. Whenever you add new features there is the possibility of them being buggy, or not quite working as hoped / as advertised.

I don't know how the next-gen will change these things - it all really depends on what new features Nvidia choose to implement, and how reliably they are implemented at driver level. The hardware really just defines the amount of raw computing power you have (and any bottlenecks that may arise), and everything else is a result of software. The only real hardware feature that I would point to in the Fermi -> Kepler transition is the memory bandwidth... Kepler has a lower "memory bandwidth to pixel-pushing power" ratio, and it's possible that this may introduce a few subtle issues, particularly at high resolutions. From what I understand the GTX780 (GK114) might be using a 384-bit interface (like the GTX580 did), which may help to alleviate some of these memory-bandwidth bottlenecking issues.
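To put some very rough numbers on that ratio (going from the reference specs as I remember them, so take the exact figures with a pinch of salt):

    def bandwidth_gbs(bus_width_bits, effective_rate_mts):
        # Peak memory bandwidth: bytes per transfer * transfers per second.
        return (bus_width_bits / 8) * effective_rate_mts / 1000

    # Reference cards, approximate effective memory rates:
    print(bandwidth_gbs(384, 4008))   # GTX 580, 384-bit -> ~192 GB/s
    print(bandwidth_gbs(256, 6008))   # GTX 680, 256-bit -> ~192 GB/s

So the GTX680's faster GDDR5 on a 256-bit bus lands at roughly the same total bandwidth as the GTX580, but it has far more shader horsepower to feed, which is exactly the lower "bandwidth to pixel-pushing power" ratio I mean.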
 
Just to add to what Jakus was asking about, I think part of the problem with the 680s compared to the previous gens is the fact that they scaled back the memory bus, which seems to have resulted in these issues you've mentioned.

The Fermi cards weren't restricted in memory bandwidth, so the GPU itself wasn't artificially limited in high-res or bandwidth-dependent situations.

I think this was a really bad move from nVidia; had they included a greater bus width, I think the Kepler chips would perform quite a bit better.
 

I agree - the GTX680 would certainly have benefited from a 384-bit interface. I'm sure that the 256-bit approach was taken in order to keep build costs down - another reason why I was unhappy with the pricing of the card.

Despite its great performance it always felt like an "upper mid-range" card that performed very well, rather than a true high-end card. Which, of course, relates back to the tired old "GTX660 re-badged" argument from last year (i.e. that Nvidia "intended" the 'full fat Kepler' to be their high-end 6-series card, but didn't release it partly because it was hideously delayed, and partly because they could compete with AMD's offerings without it).
 
Yes, sure, but adaptive vsync was pushed in the card's release blurb. I agree max FPS limiters are needed to stop the inductors singing, but EVGA introduced frame capping at the front end of their PrecisionX and were telling people with problems to use it instead of vsync.

Borderlands 2 has to have a frame limit as well to stop screen tearing, not just vsync; it has to be limited to 60fps, but I still see it though.
 
256-bit vs 384-bit @ 5760*1080 = massive win for 384-bit. 2*7950s comfortably win at that resolution against my 680s. It was a bad move from Nvidia and fools like me still got them.

In my defence, performance is acceptable and it is only the 310.xx drivers that have been a massive letdown. New drivers are due next week (so I am led to believe) and hopefully they can pull something out of the bag.

384 bit next or no thanks.
 

Must really hurt on triple displays. :( Probably fine for single resolution, or 1440p shenanigans.
 
It even affects 1920*1080 somewhat, although a 256-bit bus is still more than capable of delivering decent performance at that resolution.
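The pixel counts alone show why the wider bus matters more as the resolution climbs; a quick comparison (just raw framebuffer pixels per frame, ignoring AA and everything else):

    # Rough pixel-count comparison; more pixels per frame means proportionally
    # more framebuffer traffic, which is where a narrower bus starts to bite.
    resolutions = {
        "1920x1080 (single)": 1920 * 1080,
        "2560x1600":          2560 * 1600,
        "5760x1080 (triple)": 5760 * 1080,
    }
    base = resolutions["1920x1080 (single)"]
    for name, pixels in resolutions.items():
        print(f"{name}: {pixels / 1e6:.1f} MP "
              f"({pixels / base:.1f}x the pixels of 1080p)")

Triple-screen at 5760*1080 is pushing three times the pixels of a single 1080p panel every frame, so any bandwidth ceiling shows up there first.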
 
At the price that is being stated there's no way I intend to buy the card.

Hmm. There are debates going on at the EVGA forum concerning this card, many filled with suspicion about the Titan card.
Yeah, no, I see holes in the sweclockers link that guru3d cites (Sure, if you think there are holes, you're going to find them [:D]; if you read this stuff enough you can tell which ones need more thought).
Subtle error that most will miss:

"When NVIDIA released the GeForce GTX 680 in March 2012 it was clear that the new flagship was not based on the full version of the GPU Kepler (GK110). Instead, a cut-down version (GK104) to hold down the manufacturing costs"

GK100 was the base model that was never made, and nVidia went for the cut-down GK104, not GK110, which would be a refresh (like GF100 in the GTX480 and GF110 in the GTX580 before). The assumption that a GeForce GK110 will have similar specs to the Tesla K20 -- that has never happened before. If I were nVidia, I don't see the benefit in taking those precious GK110 dies from cards that cost $3500 and selling them for even $900, unless I was absolutely certain that the sales volume would make up for the lost profits. But high-end is never high-volume, and I don't know who is stupid enough to pay 80% more than a reference GTX680 only to gain 13% performance, which just about any GTX680 can reach overclocked to around 1140MHz base.
How do I get 13%? Simple math: (2496 cores x 702MHz base) ÷ (1536 cores x 1006MHz base) = 1.13. Because the frequency is low, the extra cores don't compensate much. GK110 would need a 1GHz base to be worth it, but the K20 uses 225W as is, so a 1GHz base would push it up to 260W; add in the extra VRAM to reach the suggested 6GB and raise the frequency, and it wouldn't be able to close the performance gap between the GTX680 and GTX690 while using the same power and costing as much as a GTX690 -- it's dead in the water.
The name Titan suggests a relation to the Cray Supercomputer that uses the GK110 Tesla cards-- what does that have to do with anything? I'm sure Cray and nVidia had their own codenames for their products before everything went public. I'm thinking those names got out and someone built this up.
Can someone explain the jump from Titan to Titanium other than spelling, Google Chrome translation errors and Greek mythology? It is far easier for me to believe that if a "GTX680 Ti" were to show up, it would reuse a GK104 die (leading to a production cost similar to a GTX680) and set the frequency higher via the BIOS (the cheapest route), a la HD7970 GHz Edition, than go for a GK110 die at a lower frequency.
In my mind, the only way this rumor could possibly have meaning is going by EK's decision to make waterblocks for the Tesla K20. What if they are thinking that if a GeForce model uses the same PCB template, then they are ready before anyone else? That is a major gamble on EK's part, especially for a $3500 compute card; unless Cray buys 18,288 waterblocks.[;)] Coincidentally, the EK blocks for the Tesla K20 are to appear in February 2013 as well.

lehpron
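His 13% figure does at least check out arithmetically if you take the rumoured core count and clocks at face value (2496 cores at 702MHz vs the GTX680's 1536 at 1006MHz; whether the real card ends up with those specs is anyone's guess):

    # Crude throughput ratio: shader cores x base clock, rumoured GK110 figures
    # vs the reference GTX 680. This ignores architecture, bandwidth and boost,
    # so treat it as an upper-bound sanity check, not a benchmark prediction.
    gk110_cores, gk110_mhz = 2496, 702    # rumoured Titan / Tesla K20 numbers
    gk104_cores, gk104_mhz = 1536, 1006   # GTX 680 reference

    ratio = (gk110_cores * gk110_mhz) / (gk104_cores * gk104_mhz)
    print(f"{ratio:.2f}x")   # ~1.13, i.e. roughly 13% more raw throughput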
 
Bah, ignore me. It isn't as bad as I am making out but it does gripe me a little with the prices and I know that 7950's will beat me, which makes it hard to swallow.

So if you could go back in time would you grab some 7970/50s?

I'm still not 100% satisfied with AMD cards atm; another game that's not 100% smooth is Torchlight 2! RadeonPro fixes this, but I thought 13.1 would have :\

I'd also like to know how well just one of your 680s does with Far Cry 3 online? I still can't go above medium settings or it's not smooth.
 

He's multi GPU and multi monitor so it's different.
 

If I went back in time and chose 7970's/50's, I would be a big hater of AMD and probably never go near their stuff again.

Not sure what your memory is like but you may remember for months and months, AMD + CF 7970/50 + Multi monitor = BSOD.

I consider myself quite handy around a PC, with some knowledge of sorting problems out, but this was a problem that couldn't be sorted out at my end; it required AMD to fix it and, sadly, it took forever for them to do so.

As a long-time reader of these forums, I see problems galore on both sides and I see the best of both sides. Recently, AMD have done well and Nvidia are dragging their heels a fair bit. As of this moment, the best purchase is 2*7950s for multi-screen gaming IMO, but it wasn't like that a short while ago.

Because the 680 / 690 just aren't fast enough to meet the demands of consumers today...

What is fast enough?
 
Anyway, regarding your question, it's hard for me to say anything concrete as I have never used a 680, but I imagine most of the "issues" are related to drivers in some way. These days GPUs are basically just massively parallel number-crunching boxes, and the "quality" of the output (rather than the performance) is mostly determined by the instructions you feed to it. Whenever you add new features there is the possibility of them being buggy, or not quite working as hoped / as advertised.

everything else is a result of software. The only real hardware feature that I would point to in the Fermi -> Kepler transition is the memory bandwidth... Kepler has a lower "memory bandwidth to pixel-pushing power" ratio, and it's possible that this may introduce a few subtle issues, particularly at high resolutions. From what I understand the GTX780 (GK114) might be using a 384-bit interface (like the GTX580 did), which may help to alleviate some of these memory-bandwidth bottlenecking issues.

Thanks Pal, sorry to drop that on you.
Re: smoothness
It's a tough one, and it's hard to differentiate between things like every tiny mouse movement now feeling "digitally" harsh with so much rendering power, compared to Fermi's "oil-damped" mouse-type feel, and the frame rates that now put you into screen-tear territory.
Then there are instances in Far Cry 3 where the 680 can render any scene at maximum settings but suffers massive FPS drops just from turning around, whereas Fermi, which sure doesn't have the same horsepower, exhibits a good "torque curve" and doesn't show the big FPS swings. The problem may well be a synchronisation issue with the Havok engine, but that's actually a good analysis; the chopped-down Kepler acts at times like a small-capacity car engine :D

In fairness, I don't believe the cut-down memory bus is the issue at normal resolutions because, although I don't play it (I know how badly it ran on my 570), the 680 ploughs smoothly through BF3, and it does a good job in Crysis and Crysis 2 with later drivers.

Strange one; for me, unless Nvidia chopped out more than they tell us, maybe it's just internal clock rates that don't play nicely with some physics and game engines?
 
Just to add to what Jakus was asking about, I think part of the problem with the 680s compared to the previous gens is the fact that they scaled back the memory bus, which seems to have resulted in these issues you've mentioned.

The Fermi cards weren't restricted in memory bandwidth, so the GPU itself wasn't artificially limited in high-res or bandwidth-dependent situations.

I think this was a really bad move from nVidia; had they included a greater bus width, I think the Kepler chips would perform quite a bit better.

Well, we might not have to wait long to find out, albeit with a closer-to-full-fat version of Kepler ;)
 