
2304 shader HD7900 series card in the works??

Soldato
Joined
19 May 2004
Posts
3,848
Well AMD have officially stated that all shaders are active on the 7970.

Also 384bit mem bus = more bandwidth at similar clocks. (it was easier/cheaper to add more lanes than put in a faster mem controller/chips)

Now as for the core clocks, yeah, we are going to see 3rd party overclocked cards actually decently overclocked.

I wonder if gigabyte will do an SOC model?
 
OcUK Staff
Joined
17 Oct 2002
Posts
38,258
Location
OcUK HQ
These cards overclock like crazy!

Mine is now stable at 1225MHz core and 7800MHz memory. :)

We shall definitely see some OC options coming from several vendors. :D
 
Soldato
Joined
19 Jan 2010
Posts
6,769
Location
South West
You're quite right, Meaker; we will see some exceptional 3rd-party cards this year.

I would love to see Gainward do one of their 'Golden Sample' versions of the 7970 :).

Gibbo, 1225MHz core and 7800MHz memory is pretty impressive. :eek:

Very tempting to sell my 5850 & splash out on one.
 
Soldato
Joined
19 May 2004
Posts
3,848
Well, I do believe the Sapphire notes as far as clocks go.

We will see them release (even if only in limited numbers) water-cooled 7970s at 1335MHz; they will be expensive and fast.

I believe those sorts of clocks will be reachable and game-stable with voltage tweaks and improved third-party coolers.
 
Associate
Joined
14 Dec 2010
Posts
495
These cards overclock like crazy!

Mine is now stable at 1225MHz core and 7800MHz memory. :)

We shall definitely see some OC options coming from several vendors. :D

What temps/volts/fan speeds are you using for this, Gibbo? I.e. are they a great card that overclocks and performs well only after you've overvolted the Christ out of them so they sound like a cat getting zapped with a cattle prod?
 
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
Transistor density doesn't scale linearly with identical architectures; due to electron tunnelling and other quantum effects, parts of the chip have to be redesigned with additional space between wires and the like.

Good design can mitigate this. I think Anand had a nice table from the Bulldozer or Llano reviews comparing chips and their densities.

This is certainly true, but if you check previous GPU generations transistor density has followed the expected trend more closely. For example, taking the last few AMD GPU die-shrinks (comparing successive high-end GPUs only) and comparing the increase in transistor density, we see:


130nm->90nm: [x800xt -> x1800xt]:
Expected: 108%
Actual: 96%

90nm->80nm: [x1900xt -> x2900xt]:
Expected: 27%
Actual: 40%

80nm->55nm: [x2900xt -> HD 3870]:
Expected: 111%
Actual: 102%

55nm->40nm: [HD 4870 -> 5870]:
Expected: 89%
Actual: 73%

40nm->28nm: [HD 6970 -> 7970]:
Expected: 104%
Actual: 74%


It's not a million miles away from the trend, but it's certainly the biggest "undershoot", and by a fair margin. You see a similar trend looking at Nvidia's chips also, though they have historically achieved lower absolute transistor densities since the 8800GTX (presumably due to the "hot clocked" shaders).
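The "expected" figures in the table above follow from ideal area scaling: shrinking the linear feature size from `old` to `new` nm shrinks area by (new/old)², so density should rise by (old/new)². A quick sketch to verify, using the node pairs and "actual" gains quoted above (expected values agree to within rounding):

```python
# Ideal density gain from a die shrink: area scales with (feature size)^2,
# so transistor density scales with (old/new)^2.

def expected_gain(old_nm: float, new_nm: float) -> float:
    """Ideal fractional density increase for a shrink from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2 - 1

# (shrink, actual gain quoted in the post)
shrinks = [
    ((130, 90), 0.96),  # X800 XT  -> X1800 XT
    ((90, 80), 0.40),   # X1900 XT -> X2900 XT
    ((80, 55), 1.02),   # X2900 XT -> HD 3870
    ((55, 40), 0.73),   # HD 4870  -> HD 5870
    ((40, 28), 0.74),   # HD 6970  -> HD 7970
]

for (old, new), actual in shrinks:
    exp = expected_gain(old, new)
    print(f"{old}nm->{new}nm: expected +{exp:.1%}, actual +{actual:.0%}")
```

The 40nm->28nm row is the only one where the actual gain undershoots the ideal by roughly 30 percentage points, which is the "undershoot" being discussed.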

I suppose the key will be to see what kind of transistor density increase Nvidia come out with. If it's in the same ballpark as AMD (~75%) then we can assume it's due to process issues. If it's significantly higher (>90%) then there may be other factors.
 
Soldato
Joined
19 May 2004
Posts
3,848
Yes, but without knowing the breakdown of transistor types and the real process dimensions, looking at what they decided to call the process is a little pointless.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
This is certainly true, but if you check previous GPU generations transistor density has followed the expected trend more closely. For example, taking the last few AMD GPU die-shrinks (comparing successive high-end GPUs only) and comparing the increase in transistor density, we see:


130nm->90nm: [x800xt -> x1800xt]:
Expected: 108%
Actual: 96%

90nm->80nm: [x1900xt -> x2900xt]:
Expected: 27%
Actual: 40%

80nm->55nm: [x2900xt -> HD 3870]:
Expected: 111%
Actual: 102%

55nm->40nm: [HD 4870 -> 5870]:
Expected: 89%
Actual: 73%

40nm->28nm: [HD 6970 -> 7970]:
Expected: 104%
Actual: 74%


It's not a million miles away from the trend, but it's certainly the biggest "undershoot", and by a fair margin. You see a similar trend looking at Nvidia's chips also, though they have historically achieved lower absolute transistor densities since the 8800GTX (presumably due to the "hot clocked" shaders).

I suppose the key will be to see what kind of transistor density increase Nvidia come out with. If it's in the same ballpark as AMD (~75%) then we can assume it's due to process issues. If it's significantly higher (>90%) then there may be other factors.

You need to factor in a heck of a lot more than that. Firstly, it makes a lot more sense to compare early, lower-yielding 40nm against early, lower-yielding 28nm, which gives you a 334mm2, 2.15-billion-transistor chip versus a 360mm2, 4.3-billion-transistor chip.
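For what it's worth, that early-vs-early comparison works out to a similar density gain; a quick check using the die sizes and transistor counts as quoted in the post:

```python
# Density comparison of the two early-process chips quoted above:
# a 334mm^2, 2.15B-transistor 40nm chip vs a 360mm^2, 4.3B-transistor 28nm chip.

def density_mtx_per_mm2(transistors_millions: float, area_mm2: float) -> float:
    """Transistor density in millions of transistors per mm^2."""
    return transistors_millions / area_mm2

d40 = density_mtx_per_mm2(2150, 334)  # ~6.4 Mtx/mm^2
d28 = density_mtx_per_mm2(4300, 360)  # ~11.9 Mtx/mm^2
print(f"40nm: {d40:.1f} Mtx/mm^2, 28nm: {d28:.1f} Mtx/mm^2, "
      f"gain: +{d28 / d40 - 1:.0%}")
```

That is a gain of roughly +86%, a bit above the ~74% figure from the table earlier in the thread, so the choice of which chips you compare does move the answer noticeably.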

Each new process is showing a seemingly bigger difference in quality between the first chips out the door and the last, and more is being done by the chip makers to combat the process problems.

It used to be, to a degree, that the process guys made their process and then the chip guys came along, followed the design rules, and made a chip without too many problems. Now, realistically, the process and GPUs are designed in tandem: TSMC gets the GPU guys in early (and CPU, really anything difficult to make), tries to tune the process as best as possible, AND has the CPU/GPU guys adjust their chips to the process.

The 5870 is said to have 10-15% of its die space pretty much dedicated to combating yield/process problems; the 6970, IIRC, is about 15% larger with roughly 25% more transistors, so transistor density increased... or the process got better to the point that the 6970 could carry less redundancy. Hard to know without TSMC or AMD telling us point blank.

Likewise, the actual process node names are rounded up or down, and HKMG is only used across part of TSMC's 28nm line-up, while HKMG is supposed to add around 10% to the size of a gate. Well, who knows; PR is PR. Is their lowest-end, highest-transistor-density, non-HKMG process close enough to 28nm, while the HP HKMG version is actually closer to 30-31nm?

Then you have the last major factor: from, say, 80nm to 65nm to 55nm, almost everything was scaling down very well, but it's being said that several parts of a chip simply aren't scaling as well any more. The "process" size is merely the smallest feature possible, not the average size of everything, nor the size everything can be shrunk to. I.e. 100% of the core used to shrink; these days that fraction is dropping, though I've never seen anyone put a number on it.


There are also other possibilities. Did AMD go for a "faster" core, essentially dropping a little IPC for higher clock speed, and did that require extra transistors? Have AMD neutered it to stay within 300W? At 1250MHz it's clearly going to be an awesome card; would it have launched at 1100MHz if the 300W PCI-E GUIDELINE had been changed years ago? Who knows.


Either way, the 7970 vs the 5870 gives a pretty good indication of process shrinking: early immature process both times, double the transistor count, performance up 70-80%, and way more than that in tessellation and certain other situations.

This 300W thing is peeing me off. It's a guideline, not a "rule" anywhere, and it's fine to have four 300W cards in quad-SLI, but one card breaking 300W is somehow "bad". That first gave us neutered dual-GPU cards; then we got dual-GPU cards that blew through 300W and no one gave a damn. But officially blowing through 300W on a single card is still taboo, and again AMD is being timid about it, or so it seems.

Either way, a lot of people were hoping process shrinks would stay "easy" for a little longer; 14-16nm is the point where people are expecting insane problems, and below that it's almost unpredictable.

I think we need some pretty severe technology breakthroughs in the near future, 3-4 years max, or we'll be seeing serious performance walls for GPUs and CPUs soon.
 
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
As I've said every time this has been mentioned (including three posts above):

The 40nm -> 28nm transition has not scaled as well as has been the case historically, in terms of transistor density. This is the observable fact.

As for the reason: we won't know whether this is a general issue with scaling down to such a small process, or simply a property of AMD's design choices, until we see Kepler. If Kepler sees a ~70-80% increase in transistor density then it's fair to assume process issues are dominant. If it's 100%+ then we may assume it's design-oriented.
 
Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
I think it's design-oriented, as they've moved to a compute-centric architecture similar to Nvidia's, and Nvidia already had a lower transistor density due to different types of circuits.

While tunnelling starts to become an issue as circuits approach the nanoscale, requiring thicker dielectrics to prevent leakage, which impacts feature size and, consequently, packing density, this can be remedied by using high-k dielectrics instead of, say, SiO2. At 28nm this is still manageable; with sub-0.011-micron technology, tunnelling is impossible to manage without a revolution in engineering materials.
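To put a rough number on why the extra physical thickness a high-k (high-kappa) dielectric allows matters so much: in a one-dimensional picture, tunnelling through a rectangular barrier falls off exponentially with thickness. This is a textbook WKB simplification with illustrative numbers (a ~3.1 eV SiO2-like barrier), not a model of any real gate stack:

```python
import math

# Rough WKB estimate of direct tunnelling through a gate dielectric:
# T ~ exp(-2*kappa*d), with kappa = sqrt(2*m*phi)/hbar for barrier height phi.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def tunnel_probability(thickness_nm: float, barrier_ev: float = 3.1) -> float:
    """Transmission through a rectangular barrier (SiO2-like, ~3.1 eV)."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

# A high-k dielectric with the same capacitance can be physically thicker
# than the equivalent SiO2 layer, suppressing leakage exponentially:
for d in (1.0, 1.5, 2.0):
    print(f"{d:.1f} nm barrier: T ~ {tunnel_probability(d):.1e}")
```

Even in this crude picture, doubling the barrier thickness from 1nm to 2nm cuts the tunnelling probability by many orders of magnitude, which is exactly the lever high-k materials provide.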

So yeah, I'd agree with you and expect roughly the same packing density for two identical CMOS designs... but I attribute the difference to requirements like more on-chip SRAM for a compute-strong architecture.
 
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
While tunnelling starts to become an issue as circuits approach the nanoscale, requiring thicker dielectrics to prevent leakage, which impacts feature size and, consequently, packing density, this can be remedied by using high-k dielectrics instead of, say, SiO2. At 28nm this is still manageable; with sub-0.011-micron technology, tunnelling is impossible to manage without a revolution in engineering materials.

Interesting stuff...

So you're saying that beyond 11nm we're going to be looking at dramatically diminishing returns with regard to transistor density? Or does it go even further, in that a sub-11nm process is simply not going to be viable with today's technology?
 
Associate
Joined
24 Jun 2009
Posts
1,545
Location
London
At sub-11nm we can't even continue to use silicon. Single-molecule organic transistors are a major open area of research, and a few labs have already constructed their own versions; they will likely become the step beyond that. So carbon should replace silicon in the near future (post-2016). As gate sizes decrease, quantum tunnelling becomes more severe and is the major cause of leakage. But tunnelling can also be exploited for faster computation, e.g. quantum annealing optimises faster than simulated annealing on many problems.


11nm and below is where microelectronics becomes nanoelectronics, since Maxwell's electromagnetics breaks down and analysis requires full quantum electrodynamics.



As for the current 7970 and 28nm, you may also be right about the active-transistor thing. (Companies always use inconsistent terminology; Intel's so-called 3D transistor is the pinnacle of ridiculousness on that front.) Worth doing a bit of arithmetic on that when I get off work.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
I can't be arsed searching for process roadmaps, as you find so many links to ridiculously old roadmaps that it's impossible to tell. Anyone have an idea when 14-16nm is due at, say, TSMC, and whether they have said anything official about 10-12nm?

I would think 22nm from TSMC is still at least two years away, 16nm two years after that, and if we're really lucky 12nm 2-3 years after that. I wouldn't be surprised if the time between processes goes up pretty damn fast from here, though.

450mm wafers "might" bring down production prices enough that we could essentially see a new generation simply from much bigger chips, or from Nvidia/AMD prices dropping to make xfire/sli setups much more affordable.
 
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
The most recent info I've seen is from about 10 weeks ago:

http://forwardthinking.pcmag.com/sh...techcon-let-the-next-great-chip-debates-begin

Toward the end of next year or in early 2013, TSMC will roll out its 20nm technologies. At that node, there will only be two choices: 20G for high performance and 20SOC for low power mobile applications. This week ARM, TSMC, and Cadence announced the first design tape-out of a Cortex-A15 multi-core test chip using a 20nm process.

One interesting debate dealt with the structure of the transistor, following Intel's decision to move to "Tri-Gate" transistors as opposed to conventional planar transistors starting with its 22nm "Ivy Bridge" chips due out early next year.

Chiang said that planar transistor structures will "run out of steam" at gate lengths of around 17-18nm. By the end of 2014, TSMC will introduce 14nm, the first node at which the foundry will use FinFETs (a more generic name for a transistor that sticks up, of which Tri-Gate is a style). That date is not "fully settled yet," said Chiang, and TSMC could pull it in or push it back depending on what it hears from customers. But once it introduces FinFETs, the new structure will work for the 14-, 10- and 7nm nodes, meaning the foundries have at least "another ten years to go" with Moore’s Law, Chiang said.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
Pull it in, lol. My bet is that early 2013 is the aimed-for date, real supply of real chips will be mid-2013, and 20nm probably stretches into 2015. Mostly going off TSMC and, to be fair, GloFo, Intel, Samsung and just about everyone else slipping years over a few processes on their roadmaps.

I'm fairly sure 22nm for Intel was originally supposed to be out in early 2011 (that is going back at least 3-4 years on the roadmap, maybe more).

We'll see what happens. At some point the cost of a new process might negate any usefulness we see out of it. I.e. maybe 10nm actually scales down well and you can fit a crapload more transistors, but the cost of doing so (the equipment, the materials, but mostly the time spent in production) makes a 10nm wafer cost well over twice as much as a 14nm wafer. You could get a GPU that performs 30-40% better in the same die area/power budget, but it would cost twice as much, at which point you're better off simply using two 14nm GPUs in xfire/SLI. That can obviously only scale so far, though, and would mean very power-hungry systems at a time when, in 5-8 years, electricity costs could potentially be through the roof.

Hehe, we might finally start getting games looking the same for years and years, and devs actually being forced to be original and write good stories :o

It is quite exciting to think what might pop up in terms of real next-gen CPU production: something completely non-silicon, and where/how it might work.
 