
GTX680 Rumour

I think I'll keep an eye out for a 460 in the MM and try SLI rather than buy next-gen. As has been said, most modern games aren't pushing the graphics envelope much to justify the expense (for me).

If it's for the system in your signature then I wouldn't bother unless you want to try getting SLI to work on a motherboard which isn't SLI compatible.

There are software hacks to attempt this but I've no idea how well they work.
 
How much do you guys think the old 500 series will drop with the 600 series coming so soon? Do you think it will, or will it do a 480 on us and barely drop?

It should drop a lot in price, and we'll see 6xx low-end parts matching GTX 580 speeds. It's different with the 480, as it's still the same architecture and process, just a revision, and it's only slightly ahead of a 570 in performance.

Edit: I mean after Kepler's release. Before release I doubt there will be any significant price change.
 
It should drop a lot in price, and we'll see 6xx low-end parts matching GTX 580 speeds. It's different with the 480, as it's still the same architecture and process, just a revision, and it's only slightly ahead of a 570 in performance.

Edit: I mean after Kepler's release. Before release I doubt there will be any significant price change.

We won't see low-end parts at the same speed as a GTX 580. I would expect the mid-to-high-end part to be in that region, maybe something along the lines of the 660/Ti part.
 
Price will depend on a lot of factors, including competition from AMD and the economy (which looks like it will take quite a dump in Q1 before picking up again).
 
Then again, the 460 is around 280 speeds. The performance gain in Kepler will most likely be greater than what we saw moving from Tesla to Fermi, so cards below the 660 should match 580 speeds.
 
^^ I find the proposed nVidia roadmaps a bit odd, tbh. Unlike the AMD ones, where the staggered releases are due to three different designs, the nVidia Kepler one looks more like a typical refresh than a new-generation release. While that is true in some respects, in that Kepler is heavily based on Fermi, it still doesn't make sense: either someone has read the information wrong, or nVidia is doing something different to what people think they are doing... or TSMC is having even bigger problems than claimed.

We'll have to see; nothing about 28nm is really known. Nvidia claim it's great, then they'll blame it when everything goes wrong. TSMC claim everything they have is great at any point in time, so they're useless. AMD seem to get something working as soon as a new process has any kind of real volume, as did 2-3 other clients on 40nm, and there are supposedly more tape-outs from more clients on 28nm; the volume just isn't there yet (or probably is there now, but getting from real volume to actual release is a minimum of 6 weeks, realistically plus at least another 2-4 weeks to build up a little more stock before launch).

Judging by Nvidia's problems from the 280gtx right up to the 580gtx, Nvidia just keep trying to do what is borderline impossible and, honestly, stupid. The entire industry seems to know that 500mm2 is WELL beyond the favourable yields/cost barrier you should go past. When Intel are struggling at 435mm2 to get yields (though cost is less of an issue for them, as they have no intention of selling any parts, even cut-down ones, sub £300), it should tell you what sensible sizes are. Most companies have avoided anything over 400mm2 for the past 4 processes, except Nvidia.

I've said for 3 years now that they really need to come up with a competitive performance/mm2 architecture, because it will overcome almost all their manufacturing problems and end up with faster, lower-power chips. It's win-win for them and will hit AMD where it hurts: smaller die, lower cost.

It should drop a lot in price, and we'll see 6xx low-end parts matching GTX 580 speeds. It's different with the 480, as it's still the same architecture and process, just a revision, and it's only slightly ahead of a 570 in performance.

Edit: I mean after Kepler's release. Before release I doubt there will be any significant price change.

Low-end parts will NOT match the 580gtx, simple as that. The 560 Ti isn't your average midrange card anyway.

Then again, the 460 is around 280 speeds. The performance gain in Kepler will most likely be greater than what we saw moving from Tesla to Fermi, so cards below the 660 should match 580 speeds.

Talking nonsense, frankly. Sorry, but the performance difference from Fermi to Kepler won't be more than Tesla to Fermi, and cards below the 660 should NOT match the 580gtx, because the 560 Ti/non-Ti aren't actually midrange cards.


In almost every previous generation the tiers halved as you went down: from an 8-pipeline high end, 4-pipe midrange and 2-pipe low end over a decade ago, to the 1600-shader 5870 high end, 800-shader 5770 midrange and 400-shader low end.

With a 512-shader high end, a "normal" midrange would be 256 shaders. The 660 Ti is 384 shaders (ignoring the new, stupidly named one), and the non-Ti, at 336 shaders, is also above "midrange". A 660 non-Ti with double the shaders should comfortably beat a 580gtx.

A 650 with double the previous 192 shaders would not beat a 580gtx.

The difference, particularly with this last generation, is that the way the cards are split up isn't standard any more: Nvidia don't have a classic 256-shader midrange part, so you can't expect the usual pattern where a new-gen midrange with 512 shaders, and maybe slightly improved clocks, beats out the old high end.
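As a rough sanity check, the tier argument above can be turned into back-of-the-envelope arithmetic. This is only an illustration: performance is nowhere near linear in shader count, and the 660 shader count, clock and architecture factor below are the rumoured/assumed figures from this thread, not confirmed specs.

```python
def est_perf(shaders, clock_mhz, arch_factor=1.0):
    """Crude first-order estimate: performance ~ shaders x clock,
    times a fudge factor for per-generation architecture gains.
    Real scaling is far from linear (bandwidth, ROPs, drivers...)."""
    return shaders * clock_mhz * arch_factor

gtx580 = est_perf(512, 772)                          # GF110 at 772 MHz core
rumoured_660 = est_perf(384, 900, arch_factor=1.2)   # assumed, not confirmed

print(rumoured_660 / gtx580)  # ~1.05: barely ahead under these assumptions
```

Under these invented numbers a 384-shader part only just edges out the 580, which is exactly why the argument hinges on whether the 6xx names map onto true midrange tiers or not.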



It's not really worth speculating at the moment anyway. Right now it looks like the "high end" Kepler will be the GK104 until around next Xmas, maybe beyond. That should be somewhere around a 750-850 shader card; it will beat a 580gtx easily, but not by 80% like a 1024-shader card would. You would hazard a guess that's roughly where the GK112 will come in when it finally arrives.

The GK106 will likely be around the 350-400 shader mark and should not beat a 580gtx.

The question is really in the naming. I absolutely wouldn't be surprised to see them call the GK104 a 680gtx, the GK110 (2x GK104) a 690gtx, and then the GK112 a 780gtx... realistic naming would be GK104 = 660 Ti, GK112 = 680gtx, GK110 = 690gtx.
 

NVIDIA is stupid? And I suppose you're smarter than them? Stop talking rot. Chip sizes and designs are predicated on far more factors than your moronic rants about area, which seems to be the only thing you ever talk about because you lack the technical background to talk about anything else. Somehow, every question comes down to area for you; every question is turned into something to do with area. It reminds me of a professor at the Southampton microelectronics lab who once said: 'If the only tool you know is a hammer, then every problem starts to look like a nail. A dangerous path.'

NVIDIA, and every other company, has far more knowledgeable people than you working on their products, and they make decisions based on far more variables than you can even begin to fathom. I stopped reading your long, insipid, empty rants a long time ago, but I can't help noticing the ridiculous sentences that jump out from time to time in these walls of text. Get off your high horse, stop pretending to have all the answers, and stop pretending long-winded rants will make up for your lack of any real knowledge with substance.

There is no reason NVIDIA can't make a 500mm2 chip if it aligns with its strategy and design capability. In fact, ITRS predicted applications for 500mm2 chips as early as 2006/2007. And NVIDIA has been successful in doing exactly that, by creating and following the strategy THEY'VE developed, and NOT by listening to some random guy on a forum ranting about how NVIDIA, AMD and Intel should actually be running their companies based on speculation pulled out of the rear end.

And NVIDIA will continue to do what it is doing and be successful at it, regardless of whether or not they make 500mm2 chips, because they will make their decisions based on optimization theory, which drives virtually every single decision in every area of engineering, regardless of whether or not that decision appears counter-intuitive to an ignorant forum troll.
 

I love you
 
http://semiaccurate.com/2011/12/01/can-nvidia-supply-apple/

Some interesting points about Kepler in this article give merit to what DM was saying. Charlie was right about a lot of NV's problems with Fermi, so I would not discount his thoughts on Kepler.

There's an interesting point in there that there's no GK102, which, going by Fermi codenames, should have been the high-end single GPU like the GF102 was for the GTX 480. He thinks NV scrapped this part because they were having trouble making it, hence why there's a GK112 coming out a year late. The GF112 was the GTX 580, which could suggest the GK112 may come out as the GTX 780.
 
Xsistor makes some interesting points. However, I would point out that you would think a large corporation full of smart people would only make smart, informed decisions. Unfortunately, corporate history is littered with failures caused by the human factor; ego, greed, pride etc. can all lead to such failures. Is either GPU manufacturer exempt from making poor decisions?

I've personally worked for a number of companies full of PhDs where choices were heavily influenced by flaws in people's characters. Theory is all well and good, but the real world doesn't always follow the rules ;)
 
NVIDIA is stupid? And I suppose you're smarter than them? Stop talking rot. Chip sizes and designs are predicated on far more factors than your moronic rants about area, which seems to be the only thing you ever talk about because you lack the technical background to talk about anything else. Somehow, every question comes down to area for you; every question is turned into something to do with area. It reminds me of a professor at the Southampton microelectronics lab who once said: 'If the only tool you know is a hammer, then every problem starts to look like a nail. A dangerous path.'

NVIDIA, and every other company, has far more knowledgeable people than you working on their products, and they make decisions based on far more variables than you can even begin to fathom. I stopped reading your long, insipid, empty rants a long time ago, but I can't help noticing the ridiculous sentences that jump out from time to time in these walls of text. Get off your high horse, stop pretending to have all the answers, and stop pretending long-winded rants will make up for your lack of any real knowledge with substance.

There is no reason NVIDIA can't make a 500mm2 chip if it aligns with its strategy and design capability. In fact, ITRS predicted applications for 500mm2 chips as early as 2006/2007. And NVIDIA has been successful in doing exactly that, by creating and following the strategy THEY'VE developed, and NOT by listening to some random guy on a forum ranting about how NVIDIA, AMD and Intel should actually be running their companies based on speculation pulled out of the rear end.

And NVIDIA will continue to do what it is doing and be successful at it, regardless of whether or not they make 500mm2 chips, because they will make their decisions based on optimization theory, which drives virtually every single decision in every area of engineering, regardless of whether or not that decision appears counter-intuitive to an ignorant forum troll.

Post of the week. Right on the money...

"All you need is ignorance and confidence, and then success is sure" - Mark Twain.
 
Except Semiaccurate is tripe, and you linking it doesn't help his case one bit; rather, it hurts it. It's opinionated journalese dressed up in technical language for the masses, and only the non-technical would use it as a source. I am quite aware DM's rants sound like they were sliced off Semiaccurate, and that is part of the big problem. Of all the reviewers/tech journalists, Anandtech is an example of a few guys who actually understand engineering -- I think some of them are even electronics engineers -- and they are far more conservative. You should notice that.

Only the bottom-of-the-barrel journalists resort to opinionated rants like Semiaccurate's. You see it everywhere; Fox News comes to mind. If news is reported as a series of facts with the viewer drawing the conclusion, that is something else. Semiaccurate reads like a blog post by some AMD shill ranting about his butthurt.

At any rate, it is impossible for anyone to know enough from a few journalistic pieces (be it tech journalism or otherwise) to be able to dictate how companies should do their engineering, which is a much more complex affair. It's one thing to have an opinion, but to present that opinion overbearingly as fact is nothing short of sheer ignorance of the engineering process. Microelectronics (and nanoelectronics, which is the phase we are now entering) is a thoroughly specialised and complex affair even within electronics engineering, which is itself a very specialised field (compared to more general fields like mechanical or chemical). If NVIDIA was building 500mm2 chips, they would be doing so with real reason, dictated by all their research engineers with BEngs/MEngs and PhDs/EngDs. It is impossible even for an engineer who is an outsider to effectively determine the prudence of design decisions. Design decisions go through a complex process, and no real engineer would dare to judge them knowing what the process is like, much less an opinionated layperson and part-time forum troll.

It is experts in microelectronics and nanoscale systems who are behind ITRS, and if they predicted larger chip designs, it's because they knew they had their use and their niche. And if NVIDIA is using them (and it has, successfully enough), then that is their affair. It would be presumptuous to a tremendous degree to think some forum troll, or some idiot journalist on Semiaccurate, knows better.
 
NVIDIA is stupid? And I suppose you're smarter than them? Stop talking rot. Chip sizes and designs are predicated on far more factors than your moronic rants about area, which seems to be the only thing you ever talk about because you lack the technical background to talk about anything else. Somehow, every question comes down to area for you; every question is turned into something to do with area. It reminds me of a professor at the Southampton microelectronics lab who once said: 'If the only tool you know is a hammer, then every problem starts to look like a nail. A dangerous path.'

NVIDIA, and every other company, has far more knowledgeable people than you working on their products, and they make decisions based on far more variables than you can even begin to fathom. I stopped reading your long, insipid, empty rants a long time ago, but I can't help noticing the ridiculous sentences that jump out from time to time in these walls of text. Get off your high horse, stop pretending to have all the answers, and stop pretending long-winded rants will make up for your lack of any real knowledge with substance.

There is no reason NVIDIA can't make a 500mm2 chip if it aligns with its strategy and design capability. In fact, ITRS predicted applications for 500mm2 chips as early as 2006/2007. And NVIDIA has been successful in doing exactly that, by creating and following the strategy THEY'VE developed, and NOT by listening to some random guy on a forum ranting about how NVIDIA, AMD and Intel should actually be running their companies based on speculation pulled out of the rear end.

And NVIDIA will continue to do what it is doing and be successful at it, regardless of whether or not they make 500mm2 chips, because they will make their decisions based on optimization theory, which drives virtually every single decision in every area of engineering, regardless of whether or not that decision appears counter-intuitive to an ignorant forum troll.

Unfortunately, as with all your other ill-informed tripe, Nvidia has proved you wrong.

Where is the GK102? Cancelled. Why? Because 500mm2+ chips are borderline impossible to make. Why was the 480gtx never, ever released with every shader working? Because 500mm2+ chips are borderline impossible to make.

Do I bring everything down to area? Yes. Why? Because chips take up space, and size dictates EVERYTHING ABOUT A CHIP.

Why does the 580gtx perform as it does? Because it's a 530mm2 chip. If size wasn't an issue, why isn't it 1060mm2 and twice as fast?

Again, why does just about every single other chip maker agree that 500mm2+ is bat**** crazy?

Why are Nvidia the ones who get hit hardest, for longest, costing them tens of millions in failed chips... while also producing the biggest chips on a given process? No, none of this is linked at all; it's random; I'm just some stupid guy on the internet, that's all it is.

Or I'm wrong, and 512-shader 480gtx parts were a dime a dozen, and they didn't spend what would be 3-4 million on re-taping out the chip ALONE, ignoring the thousands and thousands of hours of dozens or hundreds of engineers working on fixing it. The 480gtx was on time, with high yields and very nice selling margins.

Oh wait, no, the opposite of all that is true.


500mm2 isn't IMPOSSIBLE to make; you could theoretically make a chip the size of an entire 300mm wafer (pi x 150^2, roughly 70,000mm2) should you really want to, but a single defect would mean the chip doesn't work, and one wafer might eventually come through working.

500mm2 simply plunges yields into territory that makes the cost per working chip, versus the price you can sell it at, incredibly unfavourable.

Every single thing about every single chip you've ever used was designed within limitations that are ALL based on size. Chips and size go hand in hand: cost and area are proportional, performance and area are proportional, power and area are proportional, and yield and area are inversely related.
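The yield argument in the last few paragraphs follows the classic Poisson defect model: if defects land randomly across the wafer, the chance a die has zero defects falls off exponentially with its area. A sketch, with an invented wafer cost and defect density (real 40nm/28nm figures were never public):

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Fraction of dies with zero defects under the Poisson model:
    Y = exp(-D * A). Defects are assumed to land randomly and
    independently across the wafer."""
    return math.exp(-defects_per_mm2 * area_mm2)

def cost_per_good_die(area_mm2, defects_per_mm2, wafer_cost,
                      wafer_area_mm2=math.pi * 150 ** 2):
    """Rough cost per working die: wafer cost spread over good dies.
    Ignores edge loss, scribe lines and partial-die (cut-down) salvage."""
    dies_per_wafer = wafer_area_mm2 / area_mm2
    good_dies = dies_per_wafer * poisson_yield(area_mm2, defects_per_mm2)
    return wafer_cost / good_dies

# Invented figures: a $5000 wafer and 0.4 defects/cm^2 (0.004 per mm^2)
for area in (100, 250, 350, 500):
    y = poisson_yield(area, 0.004)
    c = cost_per_good_die(area, 0.004, 5000)
    print(f"{area} mm^2: yield {y:.0%}, cost per good die ${c:.0f}")
```

Going from 250mm2 to 500mm2 doubles the silicon but, with these invented numbers, more than quintuples the cost per good die; that is the whole "favourable yields/cost barrier" argument in one loop.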

The discussions go back to die size not because I take them there, but because it's a fundamental part of chip production; you'd have to be moronic to think it doesn't factor into every decision about making chips.

The first thing that happens in chip design is that a bunch of eager nerds make a list of everything new they want to do in a chip. Then an engineer says "we can make a chip X size", and they cut everything they can't fit, starting with the least important, until they have something that fits. Overly simplistic, but that is how it works.
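The "cut until it fits" loop described above is essentially a greedy knapsack pass. A toy sketch; every feature name and area figure below is invented for illustration:

```python
def fit_to_budget(features, area_budget_mm2):
    """Greedy cut, as described above: walk the wish list from most
    to least important and keep whatever still fits in the budget."""
    kept, used = [], 0.0
    for name, area in features:
        if used + area <= area_budget_mm2:
            kept.append(name)
            used += area
    return kept, used

# Hypothetical wish list, most important first (all numbers invented)
wish_list = [
    ("shader_clusters", 300),
    ("memory_controllers", 60),
    ("geometry_engines", 50),
    ("video_decode_block", 30),
    ("extra_l2_cache", 40),
]

kept, used = fit_to_budget(wish_list, 450)
print(kept)   # extra_l2_cache gets cut: it no longer fits
print(used)   # 440.0 of the 450 mm^2 budget
```

Real floorplanning is vastly more involved, of course, but the shape of the decision, ranked wants against a hard area ceiling, is the same.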
 
Xsistor makes some interesting points. However, I would point out that you would think a large corporation full of smart people would only make smart, informed decisions. Unfortunately, corporate history is littered with failures caused by the human factor; ego, greed, pride etc. can all lead to such failures. Is either GPU manufacturer exempt from making poor decisions?

I've personally worked for a number of companies full of PhDs where choices were heavily influenced by flaws in people's characters. Theory is all well and good, but the real world doesn't always follow the rules ;)


This is why optimization theory is important. Engineering decisions admit that there are no perfect decisions; there are simply very good ones. The reason has to do with something called the local maxima (or minima) problem in mathematics. It's a complex topic beyond the scope of a few forum posts, but the point is that no company is trying to find the perfect solution to the perfect problem (in fact, there is a very complicated mathematical framework that underpins why searching for the perfect solution is impossible).
But far be it from a random forumer and his favourite tech journalist to think they know how best to dictate terms to the engineering design process.

Engineering design is all about compromise, about finding the best fit given various constraints, and the whole process is part heuristic and part very mathematical. Engineers don't sit around over coffee discussing things in English, with no real quantifiable data, and then make decisions; they use a very rigorous process.

And I find it ridiculous that DM comes on here, writes a long, pointless rant, and then thinks his argument has ANY merit at all. Anyone who thinks it has merit clearly does not understand the complexities of the engineering process.

Not even a top microelectronics engineer outside NVIDIA, without their data sets and greater strategy, can reliably critique their approach to design. The same is true for any engineering company.
Chances are, if you employed those hypothetical critical engineers, you would find they are forced to use similar solutions to the problem once the real weight of design constraints comes into play, or they would come up with different designs with different associated problems.

Wolpert summarised this phenomenon expertly in what is called the No Free Lunch Theorem in optimization.
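The local-maxima point can be shown with a ten-line hill climb: a greedy search on a two-peaked function settles on whichever peak is nearest its starting guess, so the "best" design it finds depends entirely on where it started. The objective function and step size here are arbitrary toys:

```python
import math

def f(x):
    """Toy objective with two peaks: a local maximum near x ~ 1.67
    and a higher, global one near x ~ 7.95."""
    return math.sin(x) + 0.1 * x

def hill_climb(x, step=0.01, iters=10_000):
    """Greedy local search: take a small step only if it improves f;
    stop when neither direction helps."""
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break
    return x

# Same algorithm, two starting points, two different "best" answers
x_a = hill_climb(1.0)   # climbs to the lower, local peak
x_b = hill_climb(7.0)   # climbs to the higher, global peak
print(x_a, f(x_a))
print(x_b, f(x_b))
```

Neither run is "wrong"; each is optimal within its neighbourhood, which is the sense in which engineering optimization yields very good solutions rather than perfect ones.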

I'm not saying NVIDIA or any company is perfect in their approach; there are many, many flaws. But the degree of authority with which DM attacks such designs is ludicrous, disingenuous, misleading and downright ridiculous.
 
The first thing that happens in chip design is that a bunch of eager nerds make a list of everything new they want to do in a chip. Then an engineer says "we can make a chip X size", and they cut everything they can't fit, starting with the least important, until they have something that fits. Overly simplistic, but that is how it works.

I'm sorry, but you clearly do not know this field. Take a step back and stop pretending you do. Try to unlearn everything you learned from Semiaccurate, because it is not even semi-accurate. Pick up a few books on circuit theory, differential equations, digital design and analogue electronics. Once you get some grasp of the topics, go and study CMOS design. This will take you a few years. Then, when you're done, try churning out designs according to engineering specs, and you will see what a complex process it is. It is not a bunch of nerds hashing things out in English-language terms you can fit in an opinionated blog or forum post.
 