
More news on Kepler and Southern Islands

Genuine question here:

Why don't nVidia/ATI just move to a "tick-tock" system like Intel does? It seems to work well for them and gives them a chance to iron out any kinks in a new process by using a proven design.

Seems to make sense to me. ATI threatened to do it a few years back but that idea seemed to disappear down the pan somewhat.
 
^^^
Nvidia did employ a kind of tick-tock strategy, but the problem with it is that the tick-tock gets interrupted by every new Microsoft DX release, as neither company wants to be seen as 'old tech'; and if a company doesn't have the expertise or the relevant process resources to launch a new architecture on a new process, that can get them into trouble.
On the last gen AMD did seem to employ a 'safer' strategy, or at least a strategy to hedge their bets, by releasing mainstream cards with tried and tested architectures, with new architectures piloted/tested in the lower-volume high-end market first...
 
A tick-tock strategy wouldn't work in the slightest for AMD or Nvidia on GPUs.

Why does it work for Intel? Because Intel make their own parts: CPUs take well over a year to go from tape-out to production, Intel control the process and Intel control everything.

You cannot lock yourself into that strict a plan at all on GPUs, even less so when fabrication is done externally. AMD and Nvidia both had 32nm designs, and 32nm got cancelled; Intel wouldn't do that, TSMC would, so you have to be flexible. GPUs go from tape-out to market in six months, and while CPU drivers are pretty easy, GPU drivers, APIs and the software being made for them are FAR more complex.

Intel can say 'we'll have process X in three years, let's start making a chip for it', and even if they radically redesign it, the OS basically sends the same info and the chip itself handles it differently. GPU makers have no idea what process will be available in three years; they will think about architecture but won't remotely make final decisions on size, shader count or power envelope, because they have no idea what process it will be on.

As for power, there's nothing to suggest AMD have to sacrifice any clock speed to use a different process.

6970: 850MHz at 1.2v(ish, can't remember); 580 GTX: 1.6GHz at 0.9-1v. If a process is aimed at 1.2GHz and sub-1v, there's nothing to say HKMG would have been good for the 6970 anyway, or that a process aimed at 800MHz and a higher voltage wouldn't have worked better for them.

It could just as easily work better for their particular core than worse. The only thing we've seen is that Nvidia can't use it, because Nvidia-type clocks, which are roughly double AMD's, can't be achieved, as the process isn't designed for them. That doesn't mean 1GHz, or even 1.2GHz, won't be fine, just that Nvidia's roughly 1.5GHz and upwards isn't possible on it.
 
Who knows? As long as the atoms have enough space to operate the way they're supposed to. Some people around here ought to know, though. Then we have quantum computing on the horizon too; when we get to that stage, size won't really matter any more, it's all down to quantum superpositions. :rolleyes:

This is all Spanish to me :D
 
Sorry, what I meant was: as GPUs are moving to 28nm from 40nm, how much further can GPUs go in terms of process shrinking (i.e. transistors becoming smaller) before a new material/technique is required?

Not sure, but power consumption will become the limiting factor soon, as power consumption doesn't scale well with each process node progression, so you're left with what they call 'dark silicon': die area that can't be powered up without going over TDP limits.
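To make the dark-silicon idea concrete, here's a rough back-of-envelope sketch in Python. The scaling factors in it are purely illustrative assumptions, not measured figures for any real process:

# Back-of-envelope dark silicon estimate. The scaling factors below are
# illustrative assumptions only, not numbers for any real node.

def dark_silicon_fraction(density_gain, power_gain):
    """Fraction of a fixed-size die that must sit idle at a fixed TDP.

    density_gain: factor by which transistors-per-mm^2 grows after a shrink
                  (classically ~2x per full node).
    power_gain:   factor by which per-transistor power drops after the
                  shrink (post-Dennard, this lags well behind 2x).
    """
    # Power per mm^2 grows by density_gain / power_gain; any excess over
    # 1.0 has to be left unpowered to stay inside the same power budget.
    power_per_area = density_gain / power_gain
    return max(0.0, 1.0 - 1.0 / power_per_area)

# Hypothetical full-node shrink: transistor count doubles, but
# per-transistor power only improves by ~1.4x instead of the classical 2x.
print(f"{dark_silicon_fraction(2.0, 1.4):.0%} of the die dark")       # ~30%
# A second such shrink compounds the problem.
print(f"{dark_silicon_fraction(4.0, 1.4 ** 2):.0%} of the die dark")  # ~51%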

 
I don't think that is going to happen.

Xfire 6970s are super powerful!

Even 6950 xfire = 7970 is extremely unlikely!

If they can keep the clock rate up on the low-power process it wouldn't be that hard to get close to, if not match, 6950 xfire with a similar-size chip, as 40nm to 28nm is a decent jump, and that's without any possible architectural efficiency increases...
 
I don't think that is going to happen.

Xfire 6970s are super powerful!

Even 6950 xfire = 7970 is extremely unlikely!

28nm allows more than double the transistor density of 40nm ((40/28)^2 ~ 2.04x by area), so expecting performance to double is not at all unreasonable.

In terms of relative process size, this is one of the biggest jumps for years, partly due to TSMC abandoning the planned 32nm intermediate step. I expect the performance increase for this generation to be equally dramatic. Of course, whether you will actually need the extra power is another debate entirely!
 
If they can keep the clock rate up on the low-power process it wouldn't be that hard to get close to, if not match, 6950 xfire with a similar-size chip, as 40nm to 28nm is a decent jump, and that's without any possible architectural efficiency increases...

It's simply not true, or even close to true, for various reasons.

The biggest reason is that we've almost never had a 100% performance increase from a new generation on a new process, even on a full node drop. Also, while 40nm to 28nm is a full drop, in reality it's a very small drop compared to 90nm to 65nm, for instance; and 28nm only describes the SMALLEST feature they can make, not the average size nor the size of everything.

While this has been true for all other processes, essentially the lower we go, the more things other than the basic gate become the limit. Anyway, good scaling is expected to be 1.4x transistor density from the move from 40nm to 28nm. This can be offset slightly in various ways; 28nm at GloFo is set to be considerably smaller than 28nm at TSMC.

Either way, we're moving, roughly speaking, beyond the days when we could easily double transistors in a given area. So add together the fact that we never ever got a 100% performance increase (yes, even in the ridiculous golden era of the 8800 GTX, which couldn't beat 1950 XTs in xfire, back when xfire wasn't as good) and the fact that we aren't anywhere near twice the number of transistors, and a 100% performance increase is NOT going to happen.

On top of that, the 6xxx series brought a fairly huge boost to average xfire performance.

It's all complete guesswork frankly; the lower-power process might mean not hitting the clock speeds they want, or a smaller die, or they might go for a die-size increase over last gen.

It's a new architecture, in fact a pretty radically different one if GCN becomes the new architecture: one that might not be as efficient per mm^2 for gaming, but more efficient (massively so) for things other than gaming.

Without leaks of die size and other basic info you can't even guesstimate performance; due to the architecture change it won't be even slightly comparable to last gen, so that info alone won't give us anything more than a guess.

I'd think anything from a 40-70% performance increase over a single 6970.
 
Also, while 40nm to 28nm is a full drop, in reality it's a very small drop compared to 90nm to 65nm, for instance

90nm to 65nm is a 28% reduction in lengthscale, whereas 40nm to 28nm is 30%. So yes, it is a bigger jump.
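Quick sanity check of those percentages (trivial Python, just restating the arithmetic above):

# Relative lengthscale reduction for each jump.
print(f"90nm -> 65nm: {1 - 65 / 90:.0%} reduction")  # ~28%
print(f"40nm -> 28nm: {1 - 28 / 40:.0%} reduction")  # 30%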


Anyway, good scaling is expected to be 1.4x transistor density from the move from 40nm to 28nm.

No.

The change in lengthscale is only a factor of 1.43, but silicon chips are 2D arrangements of transistors. Reducing the size of a transistor by 1.43 will increase the number you can pack into a given area by a factor of (1.43)^2 = 2.05.

You can see this by looking at transistor counts and process sizes of past GPUs. Take, for example, the jump from GT200 to GF100:

* The GTX280 has a transistor density of 2.43 million transistors per mm^2 (1.4Bn transistors over 576mm^2), and uses a 65nm process.
* The GTX480 has a transistor density of 6.05 million per mm^2 (3.2Bn over 529mm^2), and uses a 40nm process.

A straight application of the area-density rule would predict an increase in transistor density of (65/40)^2 = 2.64. In reality we see an increase of 2.49.

The same thing applies for 8800GTX -> GTX280: 90nm->65nm predicts an increase of (90/65)^2 = 1.92x in transistor density, but in reality we saw an increase of 2.06x. It's never a 100% precise calculation, since design specifics will also affect transistor density, but it's always a good estimate.

This generation, transistor density will be roughly double that of the previous generation. Assuming that die sizes are similar to recent generations, we can expect double the number of transistors. This allows double the number of shaders, texture units, cache, etc. The only thing that may stop the next generation of cards from exceeding 580GTX SLI or 6970 x-fire levels would be power draw limitations.
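For anyone who wants to check the arithmetic, here's a short Python sketch of the area-density rule using only the figures quoted above (transistor counts in millions, die areas in mm^2):

# Area-density rule: density scales with the square of the lengthscale
# ratio, since chips are 2D arrangements of transistors. The figures
# below are the ones quoted in the post above.

gtx280 = {"transistors_m": 1400, "area_mm2": 576, "node_nm": 65}
gtx480 = {"transistors_m": 3200, "area_mm2": 529, "node_nm": 40}

def density(gpu):
    # Millions of transistors per mm^2.
    return gpu["transistors_m"] / gpu["area_mm2"]

predicted = (gtx280["node_nm"] / gtx480["node_nm"]) ** 2  # (65/40)^2 ~= 2.64
actual = density(gtx480) / density(gtx280)                # 6.05/2.43 ~= 2.49
print(f"GT200 -> GF100: predicted {predicted:.2f}x, actual {actual:.2f}x")

# The same rule applied to the coming 40nm -> 28nm jump:
print(f"40nm -> 28nm: predicted {(40 / 28) ** 2:.2f}x density")  # ~2.04x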
 