
Did NVIDIA Originally Intend to Call GTX 680 as GTX 670 Ti?

So just over 100MHz of true overclocking headroom is OK with you, then? Because the card barely passes 1200MHz before crapping out.

So your bet is off, and like many, it seems you didn't actually read the reviews very well, just skimmed over them before coming to your conclusions.

BTW, please stop bringing i7 turbo into this.

The fact is, it impedes your overclock, so you have to disable it. And if you couldn't disable it, it would be a big deal.

I'll admit I didn't even read the reviews, to be honest; I'm just following logic. If it only gets an extra 100MHz over the Nvidia pre-set turbo clocks, then that just means Nvidia did the overclocking for you, so you don't even have to bother. It also means people who are too afraid, or don't have the knowledge, to overclock their cards are getting the top level of performance out of the hardware they paid for. If it craps out barely over 1200MHz, then so what; it is what it is. But to say it's unfair to compare it to ATI's stuff straight out of the box is a bit weird, IMO. I am disappointed that the cards don't overclock any higher; it makes me wonder about long-term stability/reliability. I think that should be the main concern if it's true (if temps are good, then I don't see it being an issue). I'll go read the reviews now. :)
 
I was just thinking about stability/reliability. If it overclocks itself and runs pretty hot, then who knows what will happen when novices have them running 24/7 in cases full of gunk and dust. I guess it would take heat into consideration, and you simply wouldn't get the added performance, as it would stay on low frequencies. I'll also be interested to see what OCUK members' 680s clock at, to see whether the reviewers were given cherry-picked cards.
 
I too think Nvidia was planning to use it more as a Ti; everything points that way. But Nvidia really has no need to bring its full arsenal yet, as AMD can't compete with the 680 as it is. So it's rightfully named, in my opinion.

Thinking about it: if that was originally a mainstream part, what is in store for the next few years? :O
 
I'd hardly say it can't compete. A 7970 clocked at the maximum commonly 24/7-stable clocks is barely, if even, a double-figure percentage slower overall than a 1200MHz GTX 680.
 
Apparently, I heard elsewhere that the original GTX 680 was meant to have been based on a GK100 GPU, which was later going to be superseded by the GK110 as the GTX 780 early next year.

The specs of both the GK100 and GK110 are unknown, but the GK100 would have been at least a 384-bit part with more pins required to power it, and possibly 2048 shader cores.
 
After reading about this dynamic overclock thing, I'm really impressed. I think overclocking as we know it on GPUs has just changed for the foreseeable future. It seems it's going to be all about temps now, as the GPU will intelligently clock itself as high as it can go as long as the temps are good, and will even adjust the voltages on its own to keep the card stable.

That said, I think this is taking some of the old-school fun out of overclocking. A thing a lot of us used to take for granted was the default cooler on our GPUs. Most people bought the card with the best cooler they could afford already strapped onto it, and then fiddled with the card's brains until we got a stable clock that we liked. It was part of the game, and it was fun. Now, instead of fiddling with that stuff, our cards will do it for us, and if we want to go more extreme, then we have to buy a better cooling solution. Hmmm.

Now the focus shifts for the average overclocker from fiddling with settings to weighing up the pros and cons of different aftermarket coolers, which doesn't seem as romantic an idea to me as the good old roll-your-sleeves-up approach of cranking up the clocks/voltages/fan speed. It's taking a lot of the fun out of the "game", or art if you will, of overclocking. It also costs money to replace a cooler, and some people don't want to tamper with a £400-£500 GPU.

I'm not going to argue against a card that delivers maximum performance on demand, out of the box, without tinkering with the settings, though; that's pretty awesome, and it makes me wonder why they never thought of this before.

Maybe my perspective on this is a bit off (please correct me if I'm wrong), but if this is a trend that AMD were to follow with their cards, and Nvidia keeps it up, then it might be a good day for the cooling companies. A lot of people who used to overclock on stock coolers won't be able to push the cards as far as they used to, and will be forced to buy an aftermarket solution to get their thrill.

Hope what I wrote wasn't just a load of ****** lol.

Rant over :p
 
I think there's still room for traditional overclocking as well; it's just that the GTX 680 is a mid-range part that Nvidia have wrung the nuts off to compete with the 7970, as the slightly underwhelming performance of the 7900 series has left them the space to do that. I think the proper 670, 760, etc. cards will still see some decent overclocking headroom over the limits used by the stock adaptive overclocking settings.
 

Far from it...

As the manufacturing process shrinks, and transistor density increases, GPUs become increasingly constrained by power draw. This trend has been growing for the past decade with GPUs, and even longer with CPUs. Now that we are approaching the limits of what can be cooled comfortably with air in a dual-slot config (around 300-400W maximum realistically), the key to unlocking performance increasingly becomes efficient power management.
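The power wall described above follows from the standard dynamic-power relation, P ≈ C·V²·f. A quick sketch with entirely made-up numbers (the effective capacitance and voltages below are illustrative, not real GPU figures) shows why a modest clock bump costs disproportionate power once a voltage increase is needed to hold it:

```python
# Illustrative only: dynamic power scales with effective capacitance * V^2 * frequency.
def dynamic_power(c_eff, voltage, freq_mhz):
    """Approximate dynamic power in watts (c_eff in farads, frequency in MHz)."""
    return c_eff * voltage ** 2 * freq_mhz * 1e6

# Hypothetical figures: a ~10% clock bump plus the voltage needed to hold it.
base = dynamic_power(1.8e-7, 1.05, 1006)
oc = dynamic_power(1.8e-7, 1.15, 1106)
print(round(oc / base, 2))  # power rises ~32%, far faster than the ~10% clock gain
```

The quadratic voltage term is why staying inside a fixed power envelope increasingly dictates achievable clocks.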

This is no great secret... I've been discussing it on these forums for a couple of years now (see here for example), and no doubt the engineers at AMD and Nvidia have been investigating it for far longer.

The first foray into power management was AMD's PowerTune, introduced with Cayman, which approximates power draw based on GPU activity and places a cap on the maximum power draw. This allowed them to ensure that the GPU will fit into a particular power envelope, even in high-power 'outlier' applications (like FurMark, for example).

The "dynamic clocking" present in the 680 is just an evolution of this concept, allowing the GPU clock to be adjusted in order to maintain a more consistent power draw over a wider range of applications (again, if you check the thread I linked to, you will see discussion of the potential for such a mechanism). I have no doubt that we will see something similar from AMD in the future. AMD were first to the table with power containment, and Nvidia have improved on the concept. It's the nature of the business that the two companies continually leapfrog each other in this fashion, and I expect AMD's answer to be even more advanced when it arrives.
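A minimal sketch of how such a power-capped boost loop could work; this is not Nvidia's actual algorithm, and every number here (cap, clock range, step size, toy power model) is invented for illustration:

```python
# Hypothetical sketch of a power-capped dynamic clocking loop.
# All figures (cap, clocks, step) are made up for illustration.
POWER_CAP_W = 195
BASE_MHZ, MAX_BOOST_MHZ, STEP_MHZ = 1006, 1110, 13

def estimate_power(clock_mhz, load):
    """Toy power model: draw scales with clock speed and current GPU load."""
    return load * 190 * (clock_mhz / BASE_MHZ)

def next_clock(clock_mhz, load):
    """Step the clock up while under the cap, and back down when over it."""
    if estimate_power(clock_mhz + STEP_MHZ, load) < POWER_CAP_W:
        return min(clock_mhz + STEP_MHZ, MAX_BOOST_MHZ)
    if estimate_power(clock_mhz, load) > POWER_CAP_W:
        return max(clock_mhz - STEP_MHZ, BASE_MHZ)
    return clock_mhz

# A light load leaves power headroom, so the clock climbs toward max boost;
# a heavier load would pin it back down toward the base clock.
clock = BASE_MHZ
for _ in range(20):
    clock = next_clock(clock, load=0.9)
print(clock)  # settles at 1110 under this toy model
```

The point of the sketch is the feedback loop itself: clocks become an output of a power estimator rather than a fixed input, which is exactly why headroom "disappears" into the stock behaviour.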




All this is quite separate to the issue of overclocking headroom on the cards. The 7970 clearly has a lot more overclocking headroom than the 680, which is clocked much closer to its limit. The 7970 is something of a special case when it comes to overclocking headroom... In the past, very few high-end cards have been able to offer more than ~10% overclock comfortably.
 
No one knows what will be next; it's just impossible to say, guys.

Nvidia has GK110, which was taped out in January, but it was too powerful, while AMD only had the 7970.

That's why they told us the 7970 was weak.

They will wait for AMD's 7990, and maybe then they will release not a dual-GPU card but the real single-GPU GK110. But who knows...

As you know, the 7970 needs much more power than the 680, so AMD would have to clock the GPU down; it would be about 60% faster than the 7970.

As Nvidia already has GK110, it could be equal to the 7990.

But if not, they could just put two GK104s together and call it the 690, and by my calculation two 8-pins will perfectly handle a 1GHz version; with nForce inside, it will blow away the 7990.

I wanted a 680, but after the test results two weeks ago, it's not worth $500.

A mid-range GPU for a high-end price.
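For reference, the power-budget arithmetic behind the two-8-pin claim above: the PCIe slot supplies up to 75W, each 8-pin connector 150W, and each 6-pin connector 75W by spec. A quick check of the in-spec ceiling for a hypothetical dual-GK104 board:

```python
# PCIe spec power limits: 75W from the slot, 150W per 8-pin, 75W per 6-pin.
SLOT_W, EIGHT_PIN_W, SIX_PIN_W = 75, 150, 75

def board_power_budget(eight_pins, six_pins=0):
    """Maximum in-spec board power for a given connector layout."""
    return SLOT_W + eight_pins * EIGHT_PIN_W + six_pins * SIX_PIN_W

print(board_power_budget(eight_pins=2))  # 375 (watts)
```

So a 2x8-pin layout gives a 375W in-spec ceiling, which is the basis for the claim that it could feed two GK104s.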
 


Transistor counts for AMD:
6970 = 2,640M
7970 = 4,310M

AMD has a 1,670M increase in transistor count this generation.

Transistor counts for NVIDIA:
580 = 3,000M
680 = 3,540M

NVIDIA has a 540M increase in transistor count this generation.
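Expressed as relative growth (using the figures straight from the post above), the gap is even starker:

```python
# Transistor counts (millions) quoted in the post above.
counts = {
    "AMD": {"HD 6970": 2640, "HD 7970": 4310},
    "NVIDIA": {"GTX 580": 3000, "GTX 680": 3540},
}

for vendor, gens in counts.items():
    old, new = gens.values()
    growth = 100 * (new - old) / old
    print(f"{vendor}: +{new - old}M transistors ({growth:.0f}% increase)")
# AMD: +1670M transistors (63% increase)
# NVIDIA: +540M transistors (18% increase)
```

A ~63% generational jump versus ~18% is the numeric core of the argument that GK104 is a smaller, mid-range die.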







These figures only say to me that the current 680 is a mid-range card too, and even its power usage clearly shows this, because they have not hit any power limits yet.
 
7970 wins
7970 wins
Oh look, 7970 wins again.


Funny that you (frankly, a complete nobody) are so desperate to "prove" this, yet all the reputable hardware sites are clearly stating the 680 to be the winner. Honestly, either you are an AMD employee or you REALLY need to find a different hobby, sweetheart.
 

Oh yay, more insults!

Awesome, keep them coming.
 
To the OP: YES, the card was supposed to be mid-range, without a doubt. Nvidia played their cards right this round and did what any business would do: they compared the current product to the market and realised they could win with a cut-down card. Good for them, I say.

From a competition point of view, I just hope AMD and Nvidia are willing to cut prices so we get some good price competition.

My hunch is that both companies realise that GPU power is two to three (or even more) years ahead of software requirements, so they can afford to slow down and charge whatever they want.

We are already at the point where CPU power is basically at appliance level, and GPU power isn't that far behind.
 
If the 680 truly is a 670 Ti underneath the marketing (and there does seem to be a lot of evidence pointing to this being the case), what does this mean for a 670 Ti release? Will we still get one, and if so, what will they do to the spec?

From what I have read about the redesign of the architecture, disabling a streaming multiprocessor will now result in a much more drastic reduction in performance. I can't see that this leaves room for a 670 Ti unless it is just an underclocked 680.
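To put that in numbers: GK104 has 8 SMX units of 192 CUDA cores each, so fusing off even one SMX removes an eighth of the shaders. A rough illustration, using simple proportional scaling (which ignores clocks, memory bandwidth, and how well games use the extra shaders):

```python
# GK104: 8 SMX units x 192 CUDA cores = 1536 shaders on the full GTX 680.
SMX_UNITS, CORES_PER_SMX = 8, 192

def shader_count(disabled_smx):
    """Shaders remaining after fusing off the given number of SMX units."""
    return (SMX_UNITS - disabled_smx) * CORES_PER_SMX

full = shader_count(0)
cut = shader_count(1)
print(cut, f"{100 * (full - cut) / full:.1f}% fewer shaders")  # 1344 12.5%
```

With the old GF110-style SM of 32 cores, disabling one unit cost far less of the total, which is why salvage parts now take a bigger step down.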
 
Getting back on topic: when will the 384-bit "real" GTX 680 be released? Will it have 3GB of VRAM, and will it massively devalue the 256-bit cards, including the 7xxx series?

So where is the extra memory controller hiding?

[Die shot of the GeForce GTX 680]

(Source: AnandTech)

A 384-bit version would have to be larger, with more shaders/cache; otherwise you would have dead area around the core to fit all the external connections (which don't shrink as much over time).
 

He's already admitted to stumping up the cash for a 7970; he's clearly regretting it and trying to convince himself otherwise. ;)

If the GTX 680 is a mid-range card, then it says more about AMD's lack of competitiveness than anything: not competitive in the CPU market, and now floundering to beat Nvidia's mid-range cards in the GPU market.
 