Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
I've heard that like three times today, all in different contexts! lol
Guys if Leicester City can win the title anything can happen.
We actually have pretty good sources on the sizes of Polaris 10 and GP104. Unless Nvidia have seriously botched Pascal or AMD have hired literal wizards to empower Polaris, it should give a pretty good indication of 'which will be faster' at the very least.
But believe what you need to believe.
Calculating that Polaris scaled up is "slightly larger than a 390X" is absolutely no indication that's where its performance will be.
It completely ignores performance upgrades from the node, from the new architecture and from differences in clock rate abilities.
What we do know:
#Polaris scaled up to 28nm is slightly larger than a 390X
#GCN 4.0 has significant performance improvements over current GCN 1.2 (none of which are included in your analysis)
What we don't know:
#How much, if any, performance increase per transistor comes from the new node.
#How much performance improves from the GCN 1.2 architecture to GCN 4.0.
#What clock rates.
All that considered, to say "oh, it's as big as a 390X so it's the same performance" is short-sighted, to put it politely.
The 390X is only about 15% larger than a 7970. Is it only 15% faster? No, it's about 40% faster. Different architecture.
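To put a number on that, here's a quick sketch using the rough figures above (ballpark claims, not benchmark data):

```python
# Rough figures quoted above -- ballpark claims, not measured benchmark data.
size_ratio = 1.15  # 390X die area vs 7970 die area (~15% larger)
perf_ratio = 1.40  # 390X performance vs 7970 (~40% faster)

# Performance per mm^2 gained from architecture alone, on the same 28nm node.
print(f"Perf per mm^2 gain from architecture: {perf_ratio / size_ratio:.2f}x")  # ~1.22x
```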
Nvidia release a flagship card that sets new records for value?
The 8800 GT was cheap, wasn't it? And that was GTX performance.
No, it's not ignoring these things. Quite the opposite. It is specifically taking these things into direct account, clock rate abilities aside.
AMD have told us what their general efficiency improvement is with Polaris and the new process shrink: 2x the performance per watt, which is pretty typical of a new architecture plus node shrink. After that, it takes nothing but rough arithmetic to estimate where a card of a given chip size will end up.
Combined with specific talk about Polaris being aimed at the 'mainstream', I think a lot of what we've been talking about is hardly some 'out there' sort of conjecture.
A 320mm² die vs a 230mm² die, both using FinFET improvements on the general 20nm design, on new architectures. Like I said, Nvidia would have had to have botched things badly, or AMD injected some actual magic, for this not to be a foregone conclusion. I don't know what the specific gap will be, but it's very hard to imagine any way that these will turn out to be equal products, much less a win in AMD's favor. I'm not saying it's physically impossible or anything, who the hell knows, but it just seems like a very long shot at this point.
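To show what that rough arithmetic looks like, here's a back-of-envelope sketch in Python. The die sizes are the rumoured figures from this thread, not official specs, and "performance scales roughly with area at similar perf/watt" is obviously a crude assumption:

```python
# Back-of-envelope sketch of the argument above. All inputs are rumoured
# figures from this thread, not official specifications.

def estimated_relative_performance(die_a_mm2, die_b_mm2):
    """Assume performance scales roughly with die area when both chips are
    on comparable FinFET processes with similar perf/watt improvements."""
    return die_a_mm2 / die_b_mm2

gp104_mm2 = 320.0      # rumoured GP104 die size
polaris10_mm2 = 230.0  # rumoured Polaris 10 die size

ratio = estimated_relative_performance(gp104_mm2, polaris10_mm2)
print(f"GP104 vs Polaris 10, rough performance ratio: {ratio:.2f}x")
# ~1.39x -- which is why rough parity seems unlikely unless clock speeds
# or architectural efficiency differ far more than expected.
```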
A full system using an 8800 GT only used 200w *under load*.
http://www.anandtech.com/show/2365/13
GPU design today has gotten a lot more beastly since then. We have midrange cards now that use more than that on their own.
"Previously, AMD had claimed that it would deliver a 2x performance-per-watt improvement with Polaris as compared with its 28nm-class hardware. Now, the company has bumped that prediction to 2.5x improved performance-per-watt. Readers should keep in mind that like all metrics, performance-per-watt is not an absolute. Because silicon power consumption is not a simple linear curve, these figures are likely best-case estimates based on midrange hardware, not the worst-case scenario when comparing top-end parts clocked at maximum frequency."Actually, AMD have been saying 2.5x performance per watt.
By all means, show me the gaming tests that show this. And the only concrete thing we've seen from the new processes is that Nvidia's P100 on TSMC 16FF+ is less than 2x the performance per watt.
The point is both companies will be quoting the 'best case' numbers.
AMD has updated theirs to 2.5x presumably because the 14LPP process turned out better than predicted.
Nvidia have stuck with 2x, and their P100 specifications show:
- 1.295x increase in FP32 performance per watt over Maxwell
- 1.647x increase in FP32 performance per watt over Kepler
- 2.47x increase in FP64 performance per watt over Kepler, since Maxwell's FP64 was cut down
- 1.88x increase in transistor density
So it's quite a mixed bag, since they've clearly sacrificed FP32 performance for FP64 on the P100. But presumably these are 'best case' figures, and the 1.88x density certainly isn't a good sign.
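For anyone who wants to check those figures, here's a rough Python sketch that reproduces them. The inputs (boost-clock TFLOPS, TDPs, transistor counts and die sizes for Tesla P100, M40 and K40) are my own assumptions pulled from public spec sheets, so treat the output as approximate:

```python
# Rough reproduction of the perf/watt and density figures quoted above.
# The spec numbers below are assumptions (boost-clock TFLOPS, TDP,
# transistor count, die size), not figures stated in this thread.

specs = {
    # name: (FP32 TFLOPS, FP64 TFLOPS, TDP W, transistors (bn), die (mm^2))
    "P100 (Pascal)": (10.6, 5.30, 300, 15.3, 610),
    "M40 (Maxwell)": (6.84, None, 250, 8.0, 601),   # GM200: FP64 heavily cut
    "K40 (Kepler)":  (5.04, 1.68, 235, 7.1, 551),   # GK110 at boost clock
}

def per_watt(tflops, watts):
    return tflops / watts

p100 = specs["P100 (Pascal)"]
for name in ("M40 (Maxwell)", "K40 (Kepler)"):
    old = specs[name]
    fp32_gain = per_watt(p100[0], p100[2]) / per_watt(old[0], old[2])
    print(f"FP32 perf/watt vs {name}: {fp32_gain:.2f}x")
    if old[1] is not None:
        fp64_gain = per_watt(p100[1], p100[2]) / per_watt(old[1], old[2])
        print(f"FP64 perf/watt vs {name}: {fp64_gain:.2f}x")

m40 = specs["M40 (Maxwell)"]
density_gain = (p100[3] / p100[4]) / (m40[3] / m40[4])
print(f"Transistor density vs GM200: {density_gain:.2f}x")
# Prints roughly 1.29x, 1.65x, 2.47x and 1.88x -- close to the figures above.
```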
At the very least, hopefully GP104 has its FP64 cut down, otherwise it'll be disappointing, since FP64 isn't for gaming.
Those are theoretical numbers, not real world. Real-world performance can be vastly different.
As for the density, there are very good reasons to reduce transistor density, such as improving clock frequency, heat dissipation, etc. The GP100 is sold at massive margins, so the chip size is fairly irrelevant to Nvidia's profits.
For crying out loud !!!
None of the cards are out yet, please cease this pointless bickering.
I really don't see the point in all of this because it makes no sense at all.
AMD/Nvidia please release the cards so we can stop all this crazy speculating.
Anyway, if rumours are true then Nvidia's 1080 will be faster than Polaris 10, because the 1080 will be aimed at a higher market segment than Polaris 10 anyway. P11 and P10 are aimed at the low to mid end of the market, whereas the 1080 is aimed at the mid to upper-mid end. The 1070 is probably aimed at the same market as P10, or thereabouts, if I am correct in my assumptions.
The real enthusiast battle will be between Vega and the 1080/Ti, and that's not until the end of Q4 2016 (at the earliest) or Q1 2017. AMD are aiming to get back market share where the most cards are sold: laptops and the low-to-mid market.
It makes sense because they are both waiting on GDDR5X and HBM2 yields and this needs to be right for release.