AMD on 16nm GPUs: up to 200% better power efficiency over previous gen

Caporegime · Joined: 24 Sep 2008 · Posts: 38,284 · Location: Essex innit!
We know that the shift over to 16nm is going to be an incredible one, and AMD is really aiming for some super jumps in power efficiency.

[Image: AMD 16nm GPUs, up to 200% better power efficiency over previous gen]

During an interview with Tom's Hardware, AMD said that the move to 16nm FinFET could bring a 2x energy-efficiency improvement over previous-generation GPUs. 16nm is going to be a large jump for both NVIDIA and AMD, and we are going to see HBM2 used at the same time. NVIDIA's use of HBM2 will be the first time they use High Bandwidth Memory, but it'll be AMD's second time, since Fury X will be powered by the next-generation RAM.


Read more at http://www.tweaktown.com/news/45818...power-efficiency-over-previous-gen/index.html

Some good news: 16nm with a potential 200% power-efficiency gain leaves a lot of room for raw power. I don't know about you guys, but I am sick to death of 28nm and want something new to play with.
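For anyone wondering what "2x energy efficiency" actually cashes out to, here's a quick back-of-the-envelope in perf-per-watt terms. All the numbers below are made-up placeholders for illustration, not real card specs:

```python
# Rough perf-per-watt arithmetic behind a "2x efficiency" claim.
# All figures are illustrative placeholders, not real card specs.

def perf_per_watt(perf, watts):
    """Efficiency as performance (arbitrary units, e.g. fps) per watt."""
    return perf / watts

old = perf_per_watt(perf=60, watts=250)    # hypothetical 28nm card
new = perf_per_watt(perf=90, watts=187.5)  # hypothetical 16nm card

# "2x efficiency" means the perf/watt ratio doubles:
improvement = new / old
print(improvement)  # 2.0 -> same frame rate at half the power,
                    # or double the frame rate at the same power
```

Note that "200% better" and "2x" get used interchangeably in marketing, even though strictly a 200% improvement would mean 3x the original perf/watt.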
 
So AMD need 16nm tech to match Nvidia's 28nm tech.

Nice.

I don't think AMD have taken into account what will happen when Nvidia go to 16nm tech too.

AMD need to sort the architecture out and not rely on process shrinks alone to solve their issues.
 
So AMD need 16nm tech to match Nvidia's 28nm tech.

Nice.

I don't think AMD have taken into account what will happen when Nvidia go to 16nm tech too.

AMD need to sort the architecture out and not rely on process shrinks alone to solve their issues.

True. They needed HBM to (probably) match GDDR5 Titan X / 980 Ti or thereabouts.
 
So AMD need 16nm tech to match Nvidia's 28nm tech.

Nice.

I don't think AMD have taken into account what will happen when Nvidia go to 16nm tech too.

AMD need to sort the architecture out and not rely on process shrinks alone to solve their issues.

Nvidia marketing doing its job here; people will believe anything they say.

Maxwell is not 200% more efficient than GCN 1.1, it's about 40%, and even then there are times when it's nothing: the GTX 970 can and sometimes does use similar power to the 290X.

GCN 1.1 is old tech about to be replaced. No one knows what the efficiency levels are on AMD's new GPUs; only a few days to go and we will know.
 
They need it. Having to cool their top-end cards with water is ridiculous. Shame it took 16nm for them to achieve it.

The new AMD high-end cards will have water and air cooling, so that's that dealt with.

Regarding heat, the Titan X and GTX 980 are hardly cool-running GPUs. Look in the 980 Ti thread: you can see many users talking about their cards running hot.

Nvidia need 16nm just as much as AMD, to add even more CUDA cores etc.

I've just put my TX under an AIO just to keep it quiet and cool; GM200 is a hot chip.
 
True. They needed HBM to (probably) match GDDR5 Titan X / 980 Ti or thereabouts.

Which doesn't change the fact that the standard Nvidia had to abandon was HMC, so they couldn't use HBM this generation, whereas AMD has been working with Hynix on it since at least 2011. Nvidia have to go straight to HBM2, which means AMD has more experience working with HBM-type memory.

The same stuff was said when the HD4870 had GDDR5 before Nvidia: "they need GDDR5 to do xyz" while Nvidia only needed GDDR3.

Have some of you not realised that AMD is probably looking at efficiency improvements beyond the process node improvements too?

Look at the work they did with the IGP on Carrizo.
 
Which doesn't change the fact that the standard Nvidia had to abandon was HMC, so they couldn't use HBM, whereas AMD has been working with Hynix since at least 2011. Nvidia have to go straight to HBM2, which means AMD has more experience working with HBM-type memory.

AMD have a long history of coming out first.
 
Which doesn't change the fact that the standard Nvidia had to abandon was HMC, so they couldn't use HBM this generation, whereas AMD has been working with Hynix on it since at least 2011. Nvidia have to go straight to HBM2, which means AMD has more experience working with HBM-type memory.

The same stuff was said when the HD4870 had GDDR5 before Nvidia: "they need GDDR5 to do xyz" while Nvidia only needed GDDR3.

Have some of you not realised that AMD is probably looking at efficiency improvements beyond the process node improvements too?

Look at the work they did with the IGP on Carrizo.

What they did with Carrizo is impressive: something like a 220% efficiency increase. Astonishing.
 
What they did with Carrizo is impressive: something like a 220% efficiency increase. Astonishing.

Just makes you wonder where AMD would be if they had access to decent fabrication plants. It also shows just how good ARM technology is that it takes Intel producing chips on a 14nm process just to keep up with their competitors.
 
I think what happened is Nvidia's nuclear-powered Fermi made Nvidia go "oh bugger, we need to improve the efficiency".

Kepler came out and everything was better (architecture change).
Maxwell made everything better again.

AMD had it easier in the years that followed, since when Fermi hit, AMD didn't need to do anything about efficiency/power consumption.

Now it's been a while and AMD hasn't focused on efficiency that much until recently. AMD is at the Fermi stage for efficiency; now it's their turn (albeit not as bad as Fermi). The new range (as in the genuinely new chips, not the rebranded ones) is going to be to AMD what Kepler was to Nvidia.
 
Nvidia marketing doing its job here; people will believe anything they say.

Maxwell is not 200% more efficient than GCN 1.1, it's about 40%, and even then there are times when it's nothing: the GTX 970 can and sometimes does use similar power to the 290X.

GCN 1.1 is old tech about to be replaced. No one knows what the efficiency levels are on AMD's new GPUs; only a few days to go and we will know.

+1. I did a couple of tests when I had a 970 and a 290X, and the 290X used the same power as the 970 in some workloads, drawing at most 50-60W more, and only sometimes when gaming.
 
Sounds good. They can get nice improvements from 16nm and HBM alone, not to mention if they decide to move to AIO cooling officially on the high end to further help control temps and really push for performance.

Not to mention we get PCIe 4.0 and DX12 next year. Either way, each side is still going to have what I would guess is the first single card for 4K, hopefully.
 
And this is why I plan to go easy in the next couple of weeks and grab a 970/390X instead of a 980/980 Ti/Fury. I'm fine on 1080p for now, especially as a good 1440p monitor is quite costly and I can't afford one. But most of all, we don't have a single card powerful enough to hold 60fps at 1440p without reducing game settings, or at least not at an affordable price point. I'm saving my pennies for Pascal/R9 4XX, so I'm spending £300 max on a graphics card this year. I'd not bother at all, but currently I have no graphics card.
 