
Nvidia Geforce 'Maxwell' Thread

bru

Soldato
Joined
21 Oct 2002
Posts
7,359
Location
kent
Sometimes DM posts walls of text that make sense and sometimes he posts walls of text that don't; sometimes there are even walls of text where some of it makes sense and some of it doesn't.
Which one this is, for a change, isn't entirely dependent on whether you are anti-Nvidia and pro-AMD, or vice versa.
 
Soldato
Joined
1 Apr 2010
Posts
3,034
If the 780 / Titan was ready at this point they'd have released it. It would have left them how long with a much faster chip than AMD? I think the 680 / 7970 just reflected the realities at that time.

nVidia wouldn't have released the 680 thinking it would stay faster than the 7970 forever. The specs alone show that the 7970 should have been the better card. Luckily AMD's driver team got their backsides in gear and got the hierarchy about where it should have been :).


I don't think we will ever know if GK110 was ready to tape out. All I do know is that Nvidia always like to retain the performance crown; the 680 came with a 256-bit bus and looked very small PCB-wise, and it had the air of a mid-range GPU. Plus the gossip and pics before the 680 was released, showing it as a 670, were very believable, especially as previous Nvidia top-end GPUs had more bus and RAM than the lower cards.

I can well believe that Nvidia may have had issues releasing a full-fat chip in that period, but I can't believe that the 680 was always intended to be their top-end product for that cycle.

Think about it this way: release an upper-mid-range chip as top end and surely there will be more profit to be had. Then move what was to be the top-end GPU to the next cycle and make even more cash thanks to the reduced R&D. Makes perfect business sense considering the competition didn't have anything massively faster.
 
Soldato
Joined
27 Feb 2012
Posts
6,586
Think about it this way: release an upper-mid-range chip as top end and surely there will be more profit to be had. Then move what was to be the top-end GPU to the next cycle and make even more cash thanks to the reduced R&D. Makes perfect business sense considering the competition didn't have anything massively faster.


Getting into tin foil hat territory now :p
AMD could also be accused of releasing mid-range first, as the bus is smaller than on their top-end R9 290 ;)

Think back to the GTX 2** era: the 280 was 512-bit and the 260 was 448-bit.
 
Man of Honour
Joined
13 Oct 2006
Posts
92,179
If the 780 / Titan was ready at this point they'd have released it. It would have left them how long with a much faster chip than AMD? I think the 680 / 7970 just reflected the realities at that time.

All their GK110 allocation was going to Oak Ridge, etc.

Dunno how true it actually was, but the original leaks about the Titan cards came from this angle, with claims of nVidia stockpiling the cores that didn't make the grade for use in supercomputers, to reuse in GeForce/Titan cards.
 
Soldato
Joined
30 Nov 2011
Posts
11,358
All their GK110 allocation was going to Oak Ridge, etc.

Dunno how true it actually was, but the original leaks about the Titan cards came from this angle, with claims of nVidia stockpiling the cores that didn't make the grade for use in supercomputers, to reuse in GeForce/Titan cards.

They were also going into Tesla cards, which they were selling for £3000+.
If you had limited manufacturing capability / yield issues and a choice between selling a GPU core for £3k or for £800, which would you choose?

As you say, it was only when all the other (higher-priced) avenues for selling the cores were no longer taking every chip they could make that they stockpiled cores to put into the consumer-grade cards.
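The opportunity-cost argument above is easy to put numbers on. A minimal sketch, where the £3000 and £800 prices come from the post but the die supply and the allocation split are purely invented for illustration:

```python
# Illustrative opportunity-cost sketch: with a fixed supply of good dies,
# every die sold as a ~£800 GeForce instead of a ~£3000 Tesla forgoes revenue.
TESLA_PRICE = 3000   # per-card price quoted in the post (GBP)
GEFORCE_PRICE = 800  # per-card price quoted in the post (GBP)

def revenue(dies, tesla_share):
    """Revenue from splitting a limited die supply between the two markets."""
    tesla_units = int(dies * tesla_share)
    return tesla_units * TESLA_PRICE + (dies - tesla_units) * GEFORCE_PRICE

supply = 10_000  # hypothetical number of good dies
print(revenue(supply, 1.0))  # all Tesla  -> 30000000
print(revenue(supply, 0.5))  # half/half  -> 19000000
```

While every chip can be sold at the professional price, diverting any of them to consumer cards roughly thirds the revenue per die, which matches the poster's point.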
 
Associate
Joined
8 May 2014
Posts
2,288
Location
france
A lot of people are starting to drop TSMC: AMD moved to GlobalFoundries, Qualcomm's next SoC is going to SMIC... maybe lanes are going to be freed up at TSMC lol, and we could hope Nvidia gets their 16nm earlier than expected.
 
Soldato
OP
Joined
2 Jan 2012
Posts
11,996
Location
UK.
Def think we'll see a flagship in Feb, just in time for The Witcher 3 (an Nvidia-sponsored title), and a great title to show what it's capable of.

Will be saving my pennies for this; hope I can resist the cards that come this year :p, cos you know whatever comes next year is going to be that much better.

Wonder how the Titan Black will fare against the GTX 880 :confused:
 
Soldato
Joined
27 Mar 2010
Posts
3,069
I disagree; the GTX 680 was still a decent improvement over the GTX 580 in performance http://www.anandtech.com/bench/product/772?vs=767 , and also in performance per watt. Likewise the 7970 was night and day over the 6970 http://www.anandtech.com/bench/product/509?vs=508

Then the GTX 780 / 780 Ti and R9 290 R9 290X again improved significantly over the GTX 680 and 7970.

Compare that to CPUs' progress and you're talking maybe a 5% IPC improvement per new arch, and AMD seem to be going backwards :(

GPUs have progressed well even with die shrinks not being viable.


The 680 and 7970 weren't true top-line GPUs, just flagship GPUs during the initial safety-gap period on the then-new 28nm process. They concentrated on a good yield rate over max performance, whilst working towards a way to make the real full-fat dies (Titan, 290X). The 7970, looking at the architecture and CU configuration, was a mid-range GPU which at the time was their flagship; they played it safe by keeping it small.
The layout of Hawaii shows that AMD went wider and shorter, providing 4 geometry engines, in comparison to Tahiti, which was taller but half as wide with 2 engines.


I don't think GPUs have progressed well; we should have seen the 290 and the Titan much earlier in the 28nm phase! I think the development time has been very slow, and I feel we are a good 2 years if not more behind the potential that should have been. AMD have until recently always played it safe with smaller dies, keeping them efficient so that if they needed to they could just make an x2 single card.
Nvidia always played the big-die game, but it took them a few attempts, GK110 K20s and marketing binned/die-harvested variations (Titan, 780, 780 Ti), until they could make enough for production.

The AnandTech bench only represents the pre-driver and pre-1GHz-edition 7970 performance. As we know, Tahiti was poorly optimised in both clocks and driver support. Secondly, TSMC buggered up the half node (32nm) and that led to Cayman (6970) being strangled of its true potential. Cayman should have provided a noticeable difference over the 5870 but it was pretty meh.

People keep forgetting that if Cayman had been built on 32nm then we would have had stronger-performing 7970 and 680 cards.

The 7970 wasn't THAT much faster than the 680 with the newer drivers. Sure it was faster, but not by much.

Agreed, there wasn't much in it performance-wise, but memory bandwidth, overclocking potential and then the driver support all helped the 7970/280X along during its lifetime. Thankfully I never fell for the price of the 680; Nvidia must have made a killing off that execution, fair play to them though.


That's the point though. Nvidia would never have released their top-end card slower than AMD's flagship, as Nvidia always like to hold the crown (even if it is a small lead). When they did release the 680 it was slightly ahead of the 7970, and then AMD sorted the drivers out, which put the 7970 just slightly ahead. The 680 IMO was never meant to be Nvidia's flagship, and really we all should have got a full-fat chip instead.

Believe the hype or not, I still think GPU advancement has slowed down and both companies are happy to drip-feed tech as it benefits their pockets. Why bring out new tech every twelve months with big performance jumps when they can lengthen the cycles, reduce the performance jumps, up the prices and get away with it?

It wasn't just the drivers; Tahiti was purposely clocked lower, which enabled Nvidia to market their 680 in line with the 7970 whilst both companies worked towards the full-fat die problem. Also, if you look at the timeline of events, it all magically occurred every 6 months:
1. 6 months after the release of the 7970 it received a BIOS update / the 1GHz edition was marketed.
2. 6 months later the famous drivers were developed.

Yeah, I agree too. Luckily I only buy when I need to and I do my research before I commit to buying. However, I'm more ****ed off with the stagnation and watering down of graphics and gameplay in recent games.
 
Soldato
OP
Joined
2 Jan 2012
Posts
11,996
Location
UK.
***SNIP***

Obviously we would all have liked GK110 and the 290X from the get-go; the point was that the 7970 was a big improvement over the 6970, and likewise the GTX 680 over the 580. Performance per watt was massively improved. They weren't the full-fat cards, and I was flamed for saying so at launch; funny how the majority accepts that they weren't full fat now. Hindsight is a great thing for the fickle :p

AMD / Nvidia aren't obliged to give us everything all at once; they have to make money as well. R&D on these things is insane. GK110 and the 290X came around quickly enough, and if you're not interested in more 28nm cards, just wait until next year for 20nm/16nm. Time is flying by and they will be here soon enough.


We can all be impatient and say they should do it all at once, but at the end of the day the 680 and 7970 were better than their predecessors, and likewise the 290X and 780 Ti, and so on.
 
Last edited:
Man of Honour
Joined
13 Oct 2006
Posts
92,179
^^ Some people around here seem to have very short and very selective memories.

Gonna go pat myself on the back a bit more for buying a pair of 470s for £160 each (and not all that long after release) and sticking with 'em til the 780 GHz.
 
Soldato
Joined
25 Oct 2007
Posts
6,911
Location
Los Angeles
^^ Some people around here seem to have very short and very selective memories.

Gonna go pat myself on the back a bit more for buying a pair of 470s for £160 each (and not all that long after release) and sticking with 'em til the 780 GHz.

Personally I thought the 470/480s were great cards, especially on H2O. I held out all the way to the 680.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188


People keep forgetting that if Cayman had been built on 32nm then we would have had stronger-performing 7970 and 680 cards.

How on earth do you get that if the 6970 had been built on 32nm the 7970/680 would have been faster? They wouldn't have. 32nm wouldn't make 28nm transistors smaller. For a 384-bit bus in the 7970's case, with GCN 1.0 architecture, there is a set number of ROPs/shaders that suits a 384-bit bus, and thus we would always have ended up with a ~350mm^2 part. For Nvidia, they went with a 256-bit bus and there is a set number of shaders that works well with that.

Yields on GK110 (notice GK100 got cancelled because their early attempt was as rubbish as GF100) were dire. It was sold to the professional market first because it took time to build up an inventory of parts for a consumer-level market; the cost/yield wasn't going to cut it for profit in the early stages, and releasing a full part didn't happen for a VERY long time after launch.

Nvidia moved their midrange part up at an early date because for three generations they had problems with their 500+mm^2 parts. Midrange meant a 256-bit bus and however many shaders they calculated would work with that bus. Put 3000 shaders on it and they'd have a huge power draw and nowhere near the required bandwidth to be fully utilised; put on a 384-bit bus and you increase die size and power significantly for too few shaders, giving an inefficient card that is underpowered; add the shaders to match the new bus and you have a GK110, which they couldn't make.
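The bus-width side of this trade-off is simple arithmetic: peak memory bandwidth is bus width in bytes times the per-pin data rate. A quick sketch, using the launch-era GDDR5 data rates as I remember them (GTX 680 at ~6 Gbps on 256-bit, HD 7970 at ~5.5 Gbps on 384-bit), so treat the inputs as approximate:

```python
def bandwidth_gbs(bus_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bus width / 8) bytes per transfer
    times the effective per-pin data rate in Gbps."""
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 6.0))   # GTX 680-style config -> 192.0 GB/s
print(bandwidth_gbs(384, 5.5))   # HD 7970-style config -> 264.0 GB/s
```

This is why shader count and bus width have to be sized together: a 256-bit card with far more shaders would simply starve for bandwidth.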

32nm had precisely no impact at all, in any way, on what AMD/Nvidia made at 28nm.

When a range of cores share the same architecture and are all being made at the same time, i.e. all the GK chips, they aren't reacting to the opposition; they are just doing what manufacturing allows. Sometimes you choose to go midrange first, sometimes high end. For years Nvidia went high end first, midrange 6 or so months later, and low end often up to 18 months later (or not at all... see the GT 230).

With AMD the 7970 was the high-end part. The 290X is a different architecture; it wasn't in the making at the time the 7970 launched (GK110 absolutely was in the making when the 680 GTX launched), and it has a bunch of different features. They aren't related in any way, and the time gap between them pretty much confirms that the 7970 wasn't a "bumped up from midrange" part as Rroff seems to think.

The 680 GTX very much was one, because anyone even half awake knew about GK110 6+ months before the 680 GTX launched and knew a bigger, faster core was coming when they could make it.
 
Soldato
Joined
27 Mar 2010
Posts
3,069
Whilst the architectures can't be compared, looking at AMD from Dec 2010 (Cayman, the 6970) up to the R9 290X today, the shader, TMU and ROP configs are interesting; just like ATI used to, they doubled up over the generation.

Cayman: 1536 shaders, 96 TMUs, 32 ROPs, 256-bit memory bus
Hawaii XT: 3072 shaders, 192 TMUs, 64 ROPs, 512-bit memory bus
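The doubling claim is exact across the board; a quick check using the spec numbers given in the post:

```python
# Unit counts as listed in the post above.
cayman = {"shaders": 1536, "tmus": 96, "rops": 32, "bus_bits": 256}
hawaii_xt = {"shaders": 3072, "tmus": 192, "rops": 64, "bus_bits": 512}

# Every figure on Hawaii XT is exactly twice Cayman's.
assert all(hawaii_xt[k] == 2 * cayman[k] for k in cayman)
print({k: hawaii_xt[k] / cayman[k] for k in cayman})  # every ratio is 2.0
```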
 
Soldato
Joined
27 Mar 2010
Posts
3,069
How on earth do you get that if the 6970 had been built on 32nm the 7970/680 would have been faster? They wouldn't have. 32nm wouldn't make 28nm transistors smaller. For a 384-bit bus in the 7970's case, with GCN 1.0 architecture, there is a set number of ROPs/shaders that suits a 384-bit bus, and thus we would always have ended up with a ~350mm^2 part. For Nvidia, they went with a 256-bit bus and there is a set number of shaders that works well with that.

No, but 32nm would have made 40nm transistors smaller. Do not forget Cayman was scaled down from its original design when it failed to get the half-node shrink. Now imagine it had come out with the rumoured 1920 shaders and not 1536. OK, GCN vs VLIW4 are totally different, but then does a 2048-shader GCN part over a potential 1920 look high end? Even 1536 vs 2048, despite GCN's efficiency, does not look high end; it looks like a ~33% gain and mid-range.

One thing we do know is Cayman should have been faster than it was. Why would that have made the 7970 and 680 faster? Well, effectively the 6970 would have performed better than it did, meaning the 7970 would have had to provide more performance than it did when released. As the 6970 was underperforming, the gap between the 7970 and 6970 was large, but the gap between the 7970 and the 580 wasn't so great. The 7970 could have included one more CU on each engine, making it 17 x 64 x 2 = 2176 shaders, or they could just have released the clocks at their 1050-1100MHz potential, which they can all seemingly manage.
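That CU arithmetic checks out, assuming the 2-engine Tahiti layout described upthread (2 engines x 16 CUs x 64 shaders per CU for the shipping part):

```python
SHADERS_PER_CU = 64  # GCN compute units carry 64 shaders each
ENGINES = 2          # Tahiti's engine count, per the discussion above

def tahiti_shaders(cus_per_engine):
    """Total shader count for a hypothetical Tahiti with this many CUs per engine."""
    return cus_per_engine * SHADERS_PER_CU * ENGINES

print(tahiti_shaders(16))  # 2048 -> the shipping 7970
print(tahiti_shaders(17))  # 2176 -> one extra CU per engine, as suggested
```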




Yields on GK110 (notice GK100 got cancelled because their early attempt was as rubbish as GF100) were dire. It was sold to the professional market first because it took time to build up an inventory of parts for a consumer-level market; the cost/yield wasn't going to cut it for profit in the early stages, and releasing a full part didn't happen for a VERY long time after launch.

Yes, that has already been said, both in the past and recently.


Nvidia moved their midrange part up at an early date because for three generations they had problems with their 500+mm^2 parts. Midrange meant a 256-bit bus and however many shaders they calculated would work with that bus. Put 3000 shaders on it and they'd have a huge power draw and nowhere near the required bandwidth to be fully utilised; put on a 384-bit bus and you increase die size and power significantly for too few shaders, giving an inefficient card that is underpowered; add the shaders to match the new bus and you have a GK110, which they couldn't make.

32nm had precisely no impact at all, in any way, on what AMD/Nvidia made at 28nm.

It did, in that it allowed Nvidia and AMD to market mid-range offerings at a higher price point and a higher tier.

When a range of cores share the same architecture and are all being made at the same time, i.e. all the GK chips, they aren't reacting to the opposition; they are just doing what manufacturing allows. Sometimes you choose to go midrange first, sometimes high end. For years Nvidia went high end first, midrange 6 or so months later, and low end often up to 18 months later (or not at all... see the GT 230).

With AMD the 7970 was the high-end part. The 290X is a different architecture; it wasn't in the making at the time the 7970 launched (GK110 absolutely was in the making when the 680 GTX launched), and it has a bunch of different features. They aren't related in any way, and the time gap between them pretty much confirms that the 7970 wasn't a "bumped up from midrange" part as Rroff seems to think.

The 680 GTX very much was one, because anyone even half awake knew about GK110 6+ months before the 680 GTX launched and knew a bigger, faster core was coming when they could make it.

Just because the 7970 isn't a scaled-down version of the 290X doesn't mean it can't be mid-range. Compare Barts to Cayman and yes, you can see Barts is.
 
Last edited: