
[TechSoda] Waiting on 20nm graphics cards from Nvidia and AMD? Don’t bother.

Caporegime
Joined
17 Mar 2012
Posts
47,662
Location
ARC-L1, Stanton System
Billions of dollars are spent every year on shrinking the size of transistors, for good reason.

Smaller transistors have superior performance characteristics but the main reason for the shrink is because the smaller the transistors are, the more you can squeeze into a chip. That means you can get better performance from smaller chips, allowing you to squeeze more chips on to the same wafer – and the more chips on a wafer, the more money you make per wafer.

So node shrinks bring more money and smaller, faster chips – while using less power than before…it’s just a win all round. Simple, right?
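
To put rough numbers on the wafer economics (a back-of-the-envelope sketch only; the die sizes and wafer cost below are illustrative, not real foundry figures):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross-die estimate: wafer area over die area, less an edge-loss term."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

wafer_cost = 5000  # purely illustrative, not a real price

for die_mm2 in (400, 200):  # same design before and after an ideal full shrink
    n = dies_per_wafer(die_mm2)
    print(f"{die_mm2} mm^2 die: ~{n} dies per wafer, ~${wafer_cost / n:.0f} per die")
```

Halve the die and you get roughly twice the candidate dies from the same wafer, which is exactly the "more money per wafer" argument above.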
This isn't the first time 20nm has been called into question; Nvidia and AMD themselves have publicly stated they are less than pleased with it.

Both are still developing new GPUs, and even new architectures, on 28nm.

The bottom line, IMO: we aren't going to see 20nm any time soon, or even in the medium term. They might skip it altogether and wait for 16nm, just as they did with 32nm.

Read on... http://techsoda.com/no-20nm-graphics-amd-nvidia/#.UxzZY3FeIPM.twitter
 
Soldato
Joined
5 Sep 2011
Posts
12,816
Location
Surrey
Comes as no surprise; I didn't expect any offering this year, simply because even the 750 Ti is based on 28nm. They seem to be really struggling with it.

Makes you wonder what both vendors will offer nearer the Christmas period, as there is bound to be something new.
 
Soldato
Joined
23 Apr 2009
Posts
3,473
Location
Derby
Comes as no surprise; I didn't expect any offering this year, simply because even the 750 Ti is based on 28nm. They seem to be really struggling with it.

Makes you wonder what both vendors will offer nearer the Christmas period, as there is bound to be something new.

Doubt it. That's why I took the plunge on 2x 290Xs, as I can't see them being properly outperformed for some time, just like the 7970s when they launched.
 
Associate
Joined
9 Jul 2009
Posts
1,008
That's a very interesting article. If they are having these problems at 20nm, then how is going even smaller to 16nm going to be any different? He mentioned that AMD are working on 16nm with FinFETs, which are 3D transistors similar to Intel's Tri-Gates, if I'm not mistaken? We already know from Intel's CPUs that 3D transistors have a negative impact on thermal efficiency, which isn't good for GPUs at all.
 
Caporegime
OP
Joined
17 Mar 2012
Posts
47,662
Location
ARC-L1, Stanton System
That's a very interesting article. If they are having these problems at 20nm, then how is going even smaller to 16nm going to be any different? He mentioned that AMD are working on 16nm with FinFETs, which are 3D transistors similar to Intel's Tri-Gates, if I'm not mistaken? We already know from Intel's CPUs that 3D transistors have a negative impact on thermal efficiency, which isn't good for GPUs at all.

It's not the 3D transistors that are causing the extra heat, and it's not necessarily the extra heat that's causing the lower overclocking headroom in Intel's CPUs.

It's simply that the node is so small that the metal interconnects are increasingly electrically resistant, and that causes the heat and the reduced stability.

Take a wire and put a current through it, then take a thinner wire and put the same current through it: the thinner you go, the hotter the wire gets, because the thinner wire has more resistance and less material to dissipate the heat through.
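
A quick sketch of the wire analogy (textbook R = ρL/A with illustrative numbers; real on-chip interconnect behaviour is more complicated than this):

```python
# Resistance of a wire is R = rho * L / A, and the heat dissipated at a fixed
# current is P = I^2 * R, so halving the cross-section doubles both.
rho = 1.68e-8    # resistivity of copper, ohm*m
length = 0.01    # 1 cm of wire
current = 1.0    # amps, held the same for both wires

for area in (1.0e-6, 0.5e-6):  # cross-sectional area in m^2: thick vs thin
    resistance = rho * length / area
    power = current ** 2 * resistance
    print(f"A = {area:.1e} m^2 -> R = {resistance:.2e} ohm, P = {power:.2e} W")
```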
 
Last edited:
Associate
Joined
9 Jul 2009
Posts
1,008
It's not the 3D transistors that are causing the extra heat, and it's not necessarily the extra heat that's causing the lower overclocking headroom in Intel's CPUs.

It's simply that the node is so small that the metal interconnects are increasingly electrically resistant, and that causes the heat and the reduced stability.

Take a wire and put a current through it, then take a thinner wire and put the same current through it: the thinner you go, the hotter the wire gets, because the thinner wire has more resistance and less material to dissipate the heat through.

Surely it's a combination of both though? A flat transistor has a lot more top surface area relative to its size to dissipate heat from. If you compact the transistor's footprint by making it 3D, you have a lot less top surface area per transistor. It's not just that they get hotter; it's that you physically can't get the heat away from them fast enough, as can be seen on the newer Intel chips. Once they reach a certain voltage, the temps spiral out of control regardless of how much cooling you throw at them.
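
Rough numbers for that heat-density point (the figures are invented, just to show the direction of the effect):

```python
# Pushing the same power through a smaller footprint raises the heat flux
# (W/mm^2), which is what makes the heat hard to extract however big the cooler.
chip_power = 100.0  # watts, held constant for the comparison

for die_area_mm2 in (160.0, 100.0):  # e.g. a planar layout vs a denser 3D/FinFET one
    flux = chip_power / die_area_mm2
    print(f"{die_area_mm2:.0f} mm^2 of silicon: {flux:.2f} W/mm^2")
```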
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
Been saying this for some time, it's not new.

Take a "standard" process drop and take a 200-250W chip and call it 400mm^2.

The goal for a GPU company with this chip is effectively twofold: use the new process to produce the same number of chips at half the size, with as close to half the power usage as possible, and you have your new midrange chip. Then use the pure power saving to produce a chip that has twice the number of transistors and as close to twice the performance at the same die size/power usage.

So, relatively speaking, you want a 50% drop in power per transistor and as close to 2x the transistor density as possible. One issue is that not everything is dictated by transistor size any more. In the old days the gaps between transistors were comparatively so big that every drop in transistor size and spacing was fine; now, to keep signals clean and transistors cool, shrinking the distance between transistors doesn't always work, so some parts of chips scale very badly with process drops compared to other bits. Memory controllers are said to be one of the things that scale badly.


Either way, you're going from a 50% power reduction to about 25%, and that's on the low-power process, not the high-performance one, which these processes are less and less tuned for. A 290X replacement with, let's say, only 15-20% more performance effectively means the same-power chip, at twice the transistor density, ends up going from a 450mm^2 chip to a 300mm^2, 250W TDP chip that's only 15-20% faster. A 450mm^2 chip would be effectively 70-80% faster but would most likely use 400W+, which just isn't a viable product for most companies.
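
A back-of-the-envelope version of that argument, plugging in roughly the numbers above (these are the post's approximations, not measured figures):

```python
# 28nm baseline: a 450 mm^2, 250W chip. Compare an 'ideal' shrink (2x density,
# half the power per transistor) with what 20nm reportedly offers (~1.9x
# density, only ~25% power saving per transistor).
BASE_AREA_MM2, BASE_POWER_W = 450.0, 250.0

def shrunk_chip(density_gain, relative_power_per_transistor, new_area_mm2):
    transistors = density_gain * (new_area_mm2 / BASE_AREA_MM2)  # vs the 28nm chip
    power = BASE_POWER_W * transistors * relative_power_per_transistor
    return transistors, power

for label, density, rel_power, area in [
    ("ideal shrink, 450 mm^2", 2.0, 0.50, 450.0),
    ("20nm, 300 mm^2",         1.9, 0.75, 300.0),
    ("20nm, 450 mm^2",         1.9, 0.75, 450.0),
]:
    t, p = shrunk_chip(density, rel_power, area)
    print(f"{label}: ~{t:.2f}x transistors, ~{p:.0f}W")
```

That lands at roughly 1.27x the transistors at about the same 250W for the 300mm^2 part, versus ~1.9x the transistors at well over 350W for the 450mm^2 part, which is the trade-off described above.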

16nm brings with it a 40% power reduction... I'm still unclear whether that 40% reduction is supposed to be relative to their own 20nm, which would be brilliant, or to the existing 28nm, which is much less good but would still be enough to make the difference between a meaningful new high end and a poor one.

I'm unsure whether they'll try a 7970/680 GTX replacement at 20nm and wait for 16nm for the high end, or whether they'll skip it entirely.

20nm transistor density isn't bad, a 1.9x increase is good, but it really has to be paired with a huge power drop to be worthwhile. Don't forget that 16nm isn't a fully new process; it is 16nm FinFETs on top of the existing 20nm base metal layer, so the equipment, fabs, everything is basically the same and there is no reason to expect huge delays for 16nm. It should be in volume production a year after 20nm, and we'll likely see significant benefit for high-end GPUs.

I'd prefer, MUCH prefer, them to switch to 16nm as early as possible with genuinely 70-80% faster cards in a given segment than to put out some overly expensive 20nm cards with painfully small improvements, which would effectively push back the 16nm cards. Everyone who works on the 20nm cards won't be working on the 16nm project; with limited resources, I hope Nvidia/AMD both focus on 16nm.
 
Associate
Joined
4 Oct 2007
Posts
907
Makes you wonder what both vendors will offer nearer the Christmas period, as there is bound to be something new.

Refreshes of certain cards I'd imagine.

There have been rumours of a dual 290X card as well; we might see similar things on both sides.

But Maxwell will probably launch at some point this year; it just doesn't look like it will be on 20nm. They might end up as 7xx-branded parts, like the 750 Ti has.

It's disappointing the jump to 20nm won't be made any time soon, though.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
20nm won't really offer anything: a 290X/780 + 20% performance for absurd cost? Or wait 10 months more for 80% higher performance parts? I know which I want, and making the former would delay the latter.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,168
With TSMC "claiming" 16nm is "only" a year away, I wonder if that has got nVidia and AMD holding fire on 20nm, and hence nVidia playing with the 750 Ti and seeing what else they can do with 28nm, because the 750 Ti and the "Maxwell" version they are using on it make absolutely no sense in regards to 20nm Maxwell development. Neither company has been happy with 20nm.
 
Soldato
Joined
22 Aug 2008
Posts
8,338
Looks like a dual-GPU card willy-waving contest is on the cards for the next year or so. :(

Unless Hawaii has a full-fat version waiting in the wings after all?
 
Soldato
Joined
5 Sep 2011
Posts
12,816
Location
Surrey
Yerp.

There's not a lot appealing about a 790 GTX really, unless it has 6GB per GPU. Even then the Hawaii counterpart will be preferable for e-peen pixel pushing because of its wider memory bus.

Boring just thinking about the pair of them.
 
Soldato
Joined
16 Nov 2013
Posts
2,723
Asking a silly question: why don't AMD and Nvidia simply buy 22nm wafers off Intel, since that process works fine?
And since Intel are moving to 16nm next, jump on that bandwagon and skip 20nm as well?
 
Soldato
Joined
24 Jul 2004
Posts
22,594
Location
Devon, UK
Asking a silly question: why don't AMD and Nvidia simply buy 22nm wafers off Intel, since that process works fine?
And since Intel are moving to 16nm next, jump on that bandwagon and skip 20nm as well?

I think what works for a CPU won't necessarily work for a GPU. So you can't just use Intel's node.

I have to admit, I thought we'd have 20nm cards by now; I'm sure I even said it on this forum. How wrong was I? :D
 
Man of Honour
Joined
13 Oct 2006
Posts
91,168
^^ Yeah, Intel's 22nm was designed around producing <200mm^2 CPUs/SSDs with <2bn transistors, whereas a high-end GPU can be double that or more - GK110 is in the region of 7bn transistors.
 
Associate
Joined
2 Feb 2014
Posts
4
Been saying this for some time, it's not new.

Take a "standard" process drop and take a 200-250W chip and call it 400mm^2.

The goal for a GPU company with this chip is effectively twofold: use the new process to produce the same number of chips at half the size, with as close to half the power usage as possible, and you have your new midrange chip. Then use the pure power saving to produce a chip that has twice the number of transistors and as close to twice the performance at the same die size/power usage.

So, relatively speaking, you want a 50% drop in power per transistor and as close to 2x the transistor density as possible. One issue is that not everything is dictated by transistor size any more. In the old days the gaps between transistors were comparatively so big that every drop in transistor size and spacing was fine; now, to keep signals clean and transistors cool, shrinking the distance between transistors doesn't always work, so some parts of chips scale very badly with process drops compared to other bits. Memory controllers are said to be one of the things that scale badly.


Either way, you're going from a 50% power reduction to about 25%, and that's on the low-power process, not the high-performance one, which these processes are less and less tuned for. A 290X replacement with, let's say, only 15-20% more performance effectively means the same-power chip, at twice the transistor density, ends up going from a 450mm^2 chip to a 300mm^2, 250W TDP chip that's only 15-20% faster. A 450mm^2 chip would be effectively 70-80% faster but would most likely use 400W+, which just isn't a viable product for most companies.

16nm brings with it a 40% power reduction... I'm still unclear whether that 40% reduction is supposed to be relative to their own 20nm, which would be brilliant, or to the existing 28nm, which is much less good but would still be enough to make the difference between a meaningful new high end and a poor one.

I'm unsure whether they'll try a 7970/680 GTX replacement at 20nm and wait for 16nm for the high end, or whether they'll skip it entirely.

20nm transistor density isn't bad, a 1.9x increase is good, but it really has to be paired with a huge power drop to be worthwhile. Don't forget that 16nm isn't a fully new process; it is 16nm FinFETs on top of the existing 20nm base metal layer, so the equipment, fabs, everything is basically the same and there is no reason to expect huge delays for 16nm. It should be in volume production a year after 20nm, and we'll likely see significant benefit for high-end GPUs.

I'd prefer, MUCH prefer, them to switch to 16nm as early as possible with genuinely 70-80% faster cards in a given segment than to put out some overly expensive 20nm cards with painfully small improvements, which would effectively push back the 16nm cards. Everyone who works on the 20nm cards won't be working on the 16nm project; with limited resources, I hope Nvidia/AMD both focus on 16nm.

I think the main issue for the companies is cost. Yields have been dropping with each node while the price of wafers goes up, so this new midrange chip on 20nm that you're projecting would probably cost the same as, or more than, a current high-end 28nm one.

This is particularly true because of the maturity of 28nm. A few companies said it years ago and nobody really believed it, but 28nm is a very, very long-lasting node.
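
A rough illustration of that cost argument (the wafer prices and yields here are made up, purely to show how the numbers move):

```python
# Cost per *good* die = wafer cost / (dies per wafer * yield). Rising wafer
# prices plus lower early-node yields can erase the advantage of a smaller die.
def cost_per_good_die(wafer_cost, dies_per_wafer, yield_rate):
    return wafer_cost / (dies_per_wafer * yield_rate)

# hypothetical figures: mature 28nm vs a pricier, lower-yielding early 20nm
print("28nm, 450 mm^2 die:", round(cost_per_good_die(5000, 120, 0.80), 2))
print("20nm, 300 mm^2 die:", round(cost_per_good_die(8000, 190, 0.50), 2))
```

On numbers like these the smaller 20nm die actually costs more per good chip than the bigger 28nm one, which is the "same or more" point above.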

I responded to a post on the article with something else that a lot of people missed.

From the Anandtech article (last paragraph) - http://www.anandtech.com/show/7764/the-nvidia-geforce-gtx-750-ti-and-gtx-750-review-maxwell/3

Finally there’s the lowest of low level optimizations, which is transistor level optimizations. Again NVIDIA hasn’t provided a ton of details here, but they tell us they’ve gone through at the transistor level to squeeze out additional energy efficiency as they could find it. Given that TSMC 28nm is now a very mature process with well understood abilities and quirks, NVIDIA should be able to design and build their circuits to a tighter tolerance now than they would have been able to when working on GK107 over 2 years ago.

Now can you imagine Nvidia doing the opposite with Maxwell on an immature 20nm? If you add that on top of the rest of the characteristics, I basically just can't imagine any card on 20nm beating what they already have, or could currently do, on 28nm.

How much faster could an "880 Ti" Maxwell on 28nm be compared to the 580 GTX? Massively so, I reckon - the gains on 28nm have been crazy because of the length of time on the node - it must be getting pretty close to two nodes' worth of performance gains on 28nm, surely?

Those gains won't be seen on 20nm for a very, very long time, if ever.

It's all starting to look a bit unlikely. Don't get me wrong - I could be well off here, but the evidence against 20nm looks pretty solid, I think.

I'm also with you that I'd much rather they all waited on 16nm rather than attempt this. My big worry is that both will decide to abandon ultimate performance in favour of some mobile architecture on 20nm, further leading to the demise of the desktop. I think we all know it's heading that way eventually though...
 
Last edited: