AMD HD7XXX Series

Anyone got any news on these cards yet? Last I heard they could arrive as early as September but there's been nothing here on them recently at all. Very happy with my GTX 470 Twin Frozr IIs but could definitely use an upgrade.

Also, is it just me or is there a lot less interest in new cards than there used to be?
 
Not heard much myself about the new cards, but as for less interest, it depends where you look.

Mid/high range sees a lot of interest with the likes of the 6950.

The GTX480 caused a scene up until recently too, when stock ran out.

Issue with the high end is that they just aren't quick enough compared to last gen.

When the 7800GTX stepped up a gear to the 8800GTX the jump was massive.

The same can be said for GTX 295 dual setups being matched and beaten by single cards like the 480.

The 580 though wasn't twice as fast as the 480, and the 590 didn't really clock, leaving little in the way of being wowed. The same can be said of the 5870 to 6970, whereby the gap wasn't enough to justify a two-year gap in upgrades.

Hopefully we will see something special with the new range of cards and games when the consoles get a refresh.
 
They're also based on the same manufacturing process of 40nm. The new 28nm chips should see double speeds at lower temps.
 
They're also based on the same manufacturing process of 40nm. The new 28nm chips should see double speeds at lower temps.

Lower temps maybe not. Maybe temps will be around the same as the current gen GTX5** series but with higher core clocks.
 
Phatzy said:
Lower temps maybe not. Maybe temps will be around the same as the current gen GTX5** series but with higher core clocks.

Get the feeling that they're more likely to be a small hike in performance (e.g. ~20%?) but much lower temps and power consumption. Primary benefit of the new process - which is supposed to offer double the performance PER watt.

Overslop said:
Also, is it just me or is there a lot less interest in new cards than there used to be?

Hardly any games that need them to run :(. Blame consoles.
 
Get the feeling that they're more likely to be a small hike in performance (e.g. ~20%?) but much lower temps and power consumption. Primary benefit of the new process - which is supposed to offer double the performance PER watt.

Far more likely we will see a doubling of the number of transistors, leading to a near-doubling of performance. Power consumption will, once again, be snapping at the 250W-300W single-GPU limits. In other words, the "double the performance per Watt", which is a reasonable prediction, will be used to dramatically improve performance rather than power consumption (in high-end cards at least).

The key change is the smaller size of the transistors: 28nm allows just over double the number of transistors to be packed into the same area compared to 40nm... I'll quote something related to this that I posted recently, because I'm too lazy to go into depth again :p


The change in lengthscale is only a factor of 1.43 (40nm -> 28nm), but silicon chips are 2D arrangements of transistors. Therefore, reducing the size of a transistor by 1.43x will increase the number you can pack into a given area by a factor of (1.43)^2 = 2.05.

You can see this by looking at transistor counts and process sizes of past GPUs. Take, for example, the jump from GT200 to GF100:

* The GTX280 has a transistor density of 2.43 million transistors per mm^2 (1.4Bn transistors over 576mm^2), and uses a 65nm process.
* The GTX480 has a transistor density of 6.05 million per mm^2 (3.2Bn over 529mm^2), and uses a 40nm process.

A straight application of the area-density rule would predict an increase in transistor density of (65/40)^2 = 2.64. In reality we see an increase of 2.49.

The same thing applies for 8800GTX -> GTX280: 90nm -> 65nm predicts an increase of (90/65)^2 = 1.92x in transistor density, but in reality we saw an increase of 2.06x. It's never a 100% precise calculation, since design specifics will also affect transistor density, but it's always a good estimate.

This generation, transistor density will be roughly double that of the previous generation. Assuming that die sizes are similar to recent generations, we can expect double the number of transistors. This allows double the number of shaders, texture units, cache etc. The only thing that may stop the next generation cards from exceeding 580GTX SLI or 6970 x-fire levels would be power draw limitations.
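
If you want to plug the figures in yourself, here's a quick Python sketch of the same sums (a rough check using only the GPU numbers quoted above, nothing else assumed):

Code:
# Transistor density in millions per mm^2, using the figures quoted above.
def density(transistors_millions, die_area_mm2):
    return transistors_millions / die_area_mm2

gtx280 = density(1400, 576)    # GT200, 65nm -> ~2.43 M/mm^2
gtx480 = density(3200, 529)    # GF100, 40nm -> ~6.05 M/mm^2

predicted = (65 / 40) ** 2     # area-scaling prediction: ~2.64x
actual = gtx480 / gtx280       # what actually happened:  ~2.49x

print(f"Predicted 65nm->40nm density increase: {predicted:.2f}x, actual: {actual:.2f}x")
print(f"Predicted 40nm->28nm density increase: {(40 / 28) ** 2:.2f}x")   # ~2.04x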
 
Does that cat have a top hat in his wardrobe? :D


I expect we will see significantly lower temps and power draw, with substantial performance gains. Iirc it will be new tech, not just something tweaked and refreshed.
 
So let's see if I understood.

The transistors make up the GPU and provide the power. The nm figure (e.g. 40, 28) is the size of the transistor. The transistors can only take up a certain physical area. Therefore with smaller transistors you can fit more into that physical area and therefore have more power.

If that's correct, then why don't we just have bigger cards with more power?
 
The transistors make up the GPU and provide the power. The nm figure (e.g. 40, 28) is the size of the transistor. The transistors can only take up a certain physical area. Therefore with smaller transistors you can fit more into that physical area and therefore have more power.

If that's correct, then why don't we just have bigger cards with more power?

Yep - that's right :) For purposes of illustration you can think of transistors as being like a 2D "box" of size 28nm by 28nm (or 40nm by 40nm or whatever). If you reduce the size of the boxes you increase the number of them you can pack into your GPU.


As for why we don't have bigger chips with more power: well, firstly the heat output from the GPU is directly proportional to the power draw: double the power draw and you will double your heat output, so you need a far more effective cooler.

Secondly, the bigger the GPU the more unstable it will be. In practice this means using lower clockspeeds. If I double the number of shaders but this means I can only get 2/3rds the original clockspeed, then I've only gained an extra 33% performance for a GPU that's double the size (...poor economics). You need smaller, more efficient transistors to scale up a GPU properly, so you need to wait for a new smaller process in order to make big increases in GPU size without running into diminishing returns.
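
As a toy illustration of that last point (very rough, assuming performance scales with shaders x clockspeed, which it only loosely does):

Code:
# Toy model: performance ~ (shader count) x (clock speed), all else equal.
# A big simplification, but it shows the diminishing returns described above.
def relative_performance(shader_scale, clock_scale):
    return shader_scale * clock_scale

baseline = relative_performance(1.0, 1.0)
big_die = relative_performance(2.0, 2 / 3)   # double the shaders, 2/3rds the clock

print(f"{big_die / baseline:.2f}x the performance for a die twice the size")  # ~1.33x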
 
Thanks duff man!
BTW, you don't happen to have a ridiculously long article explaining every single bit of a GPU in depth for me to read by any chance?
 
Thanks duff man!
BTW, you don't happen to have a ridiculously long article explaining every single bit of a GPU in depth for me to read by any chance?

Heh :D

Not off-hand, but over the years I've probably at least tried to explain most aspects somewhere on this forum :p
 
So let's see if I understood.

The transistors make up the GPU and provide the power. The nm figure (e.g. 40, 28) is the size of the transistor. The transistors can only take up a certain physical area. Therefore with smaller transistors you can fit more into that physical area and therefore have more power.

If that's correct, then why don't we just have bigger cards with more power?

The main reason for bigger cards with more power not being an option is simply manufacturing.

Nvidia's Fermi top end cards are 529mm2 in size, and wafers are 300mm diameter circles, so that's the manufacturing limit. Now, if you made a core stupidly big that covered almost the entire wafer, it WOULD fail; you always get at least a few defects in a wafer, sometimes a lot more. The smaller the core, the smaller the part of the wafer affected by each defect. In terms of the original Fermi wafers, you get around 100 Fermi cores on a single 300mm wafer, and because there were so many defects and because the cores were so big, there were essentially no fully working Fermis on an entire wafer.
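
As a very rough sanity check on that ~100 figure (just wafer area over die area, ignoring edge losses and scribe lines, which is why the real number comes out lower):

Code:
import math

# Crude upper bound on dies per wafer: wafer area divided by die area.
wafer_area_mm2 = math.pi * (300 / 2) ** 2     # ~70,700 mm^2 for a 300mm wafer
fermi_die_mm2 = 529

print(f"Upper bound: {wafer_area_mm2 / fermi_die_mm2:.0f} Fermi-sized dies per wafer")  # ~134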

Almost a year and several multi-million-dollar respins later, they've got working cores, though I'm not sure how good yields are to be honest.

Now if you make much smaller cores, you get, say, 150 cores per wafer, AND more of them work and fewer fail. If you paid per good core this wouldn't be an issue, but you pay TSMC for each wafer you have made.

Essentially Fermi is bordering on the absolute limit of realistic die size on 300mm wafers, while AMD were at a hugely safer, cheaper, higher-yield, lower-power and higher-profit 340mm2 or so, and 380mm2 for the newer 6970s.

Basically there isn't a chance in hell you could make anything bigger than Fermi at all, and as it is, profits are low, production is low and yields aren't great. AMD have it about spot on in terms of size vs yield vs cost per core. If they made a core twice as big with twice the power, you'd go from 150 cores per wafer with 80% yields at $5k a wafer = $42 a core, to something like 75 cores per wafer at 20% yields (if lucky) at the same $5k a wafer cost = $333 a core.

That's why two small cores are MASSIVELY cheaper than one epically sized core. It isn't linear, so one 380mm2 core isn't significantly more expensive than two 190mm2 cores; it's an exponential curve where, above a certain size, yields go to crap and cost shoots up.
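
If you want to see where those per-core costs come from, here's the same arithmetic in a few lines of Python (using the illustrative wafer cost, die counts and yields above, not real TSMC numbers):

Code:
# Illustrative numbers from the post above, not real TSMC figures.
def cost_per_good_die(dies_per_wafer, yield_fraction, wafer_cost_usd):
    good_dies = dies_per_wafer * yield_fraction   # you pay per wafer; only working dies share the cost
    return wafer_cost_usd / good_dies

print(f"Smaller core:      ${cost_per_good_die(150, 0.80, 5000):.0f} per good core")   # ~$42
print(f"Core twice as big: ${cost_per_good_die(75, 0.20, 5000):.0f} per good core")    # ~$333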

For now xfire/sli is vastly better than a stupidly big core. 450mm wafers in the future will likely change the realistic top core size, but we're quite a few years from that at the moment.

28nm will bring WAY more than 20%, but I'm not convinced this time around that it will bring circa 80% performance increases, though it's very hard to know because AMD's architecture is having such a massive change to go with it.
 
Cheers drunken master.
So what stops them from putting 30 smaller cores on a card and making better drivers to support it? I believe Nvidia were planning to do something like that. Wasn't it called re voodoo?
 