
Big Polaris or Big Pascal: which do you think will win the fastest-GPU crown?

Polaris 20% faster in synthetic benchmarks.

Pascal 50% faster in actual games, due to Gameworks.

This is the main reason I'll probably be buying Pascal next generation.
 
Polaris 9% faster at stock.

Pascal 13% faster overclocked.


Personal feeling is that Polaris will be decent, but still won't be the Overclockers Dream (TM)
 
I think it will be too close to notice any difference while gaming. But I'm buying Polaris as they need the money more, lol.
 
I'm pretty sure Nvidia will have a halo product that is the fastest, at a price; they nearly always have, even if it means clocking the chip very high. There have only been a few occasions when the AMD/ATI chip was outright faster (the 9700 Pro, for example, and that was because Nvidia made a bad bet on API lifetimes and on a new, smaller node arriving on time).

When it comes to affordable cards, I'm pretty sure Polaris will be a little faster bang-for-buck wise. AMD's attempt at raising prices hasn't done them any favors, and I think they will try to go back to the budget approach.


I also think there will be more variance, since with DX12 game performance is more dependent on the quality of the developers. With DX11, Nvidia and AMD have more control over replacing poor code from the developer with code more suited to the architecture. It will be even more important for the IHVs to work with the developer to ensure the product performs well before release. This is where Nvidia has a definite edge, with a much larger developer relations program.
 
I also think there will be more variance, since with DX12 game performance is more dependent on the quality of the developers.

On this point, devs have direct feedback since they code the shaders themselves, and they have a far better toolchain and debug chain for solving performance-related problems with DX12. So while IHV support will still help, it is not as much of a requirement with low-abstraction APIs.

Take one case: a dev tests the game on one system and sees 30 fps with code A, but 20 fps on a second system.

He then alters the code and the performance reverses.

With a third alteration the performance balances out, but both systems end up lower overall, say around 26-28 fps.

The dev could decide to write two code paths that suit the different architectures, or carry on with trial and error.

For smaller devs we will more than likely see the middle-ground approach, while larger studios may check hardware IDs and separate the code paths.
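As a rough sketch of what that hardware-ID split might look like: the code below dispatches to a vendor-tuned path using PCI vendor IDs (0x10DE is Nvidia, 0x1002 is AMD; those IDs are real, but the path names and fps figures are hypothetical, taken only from the example above).

```python
# Hypothetical sketch of per-vendor code-path dispatch, as a larger
# studio might do. PCI vendor IDs: 0x10DE = Nvidia, 0x1002 = AMD.
VENDOR_NVIDIA = 0x10DE
VENDOR_AMD = 0x1002

def pick_code_path(vendor_id: int) -> str:
    """Return the name of the shader path tuned for this GPU vendor."""
    if vendor_id == VENDOR_NVIDIA:
        return "path_a"        # code tuned for architecture A
    if vendor_id == VENDOR_AMD:
        return "path_b"        # code tuned for architecture B
    # Unknown hardware: fall back to the middle-ground path that is
    # merely acceptable everywhere (the ~26-28 fps compromise above).
    return "path_generic"
```

A smaller dev would effectively ship only `path_generic`; the trial-and-error cost of maintaining the split is what makes this a big-studio option.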
 
On this point, devs have direct feedback since they code the shaders themselves, and they have a far better toolchain and debug chain for solving performance-related problems with DX12. So while IHV support will still help, it is not as much of a requirement with low-abstraction APIs.

Take one case: a dev tests the game on one system and sees 30 fps with code A, but 20 fps on a second system.

He then alters the code and the performance reverses.

With a third alteration the performance balances out, but both systems end up lower overall, say around 26-28 fps.

The dev could decide to write two code paths that suit the different architectures, or carry on with trial and error.

For smaller devs we will more than likely see the middle-ground approach, while larger studios may check hardware IDs and separate the code paths.

Good developers already do this, and those with larger budgets will write different code paths if they see an issue. The problem is that there are loads of bad developers out there, or developers constrained by resources, so they don't always optimize the code.

With DX11, a lot of the optimization was done by the driver; with DX12 the developer has more responsibility, but developers are under tighter deadlines than ever before. So what is more likely to happen is that many games will come out with poor performance overall, or poor performance for certain features on certain hardware, while big developers will be able to get better performance and more easily optimize for different hardware.
 
I know Big Polaris or Big Pascal will be the fastest.

I can't possibly predict which will be the fastest due to lack of technical details.

Could this be the most pointless thread ever?
 
Good developers already do this, and those with larger budgets will write different code paths if they see an issue. The problem is that there are loads of bad developers out there, or developers constrained by resources, so they don't always optimize the code.

Yes, they can now sort major performance-degrading issues themselves with DX12/Vulkan etc. Even on a tighter deadline they can get these out of the way quicker, since they can see down to the hardware, which you can't with DX11. Even with different code paths on DX11, they can't optimise as well as they can with low-abstraction APIs.

And a lot of devs are already used to these kinds of low-abstraction performance optimisations, since it is no different to how they work with consoles.

The main thing is that they don't have to deal with the driver getting in the way, which is what leads to drivers needing patches to work with a given game.
 
IIRC TSMC 16nm actually results in a slightly larger core, but it is also more power efficient when all else is equal, giving 16nm a net lead of about 2-3% for the same design.

(That does leave a mature 14nm with the potential to eventually take the lead.)

You're talking about TSMC 16nm FF vanilla (not '+') and Samsung 14nm FF LPE, with a sample size of one chip (the current iPhone Ax chip).

The only comparison of + and LPP you might be able to do is on Apple's next-gen Ax, IF they use + at TSMC. No other chip is likely to be produced on both in the foreseeable future.

I highly doubt + is more efficient than LPP, as it's designed to be compatible with true high-power chips. Samsung's legit HP 14nm probably won't arrive until '17 (possibly a reason for enterprise Zen not turning up for some time after desktop FX?).
 
I don't know why anyone thinks AMD will have the fastest chip once both companies have released their new gear. Unless there has been a fundamental strategic shift at AMD towards a chip built for out-and-out pure performance, AMD will do as they have done for the last few years and give us the best-value top-end chip and card.
 