
AMD Polaris architecture – GCN 4.0

I do think GCN 4.0 is a big improvement, as they have changed a lot in the GPU, and hopefully a lot of this generation's bottlenecks will be gone.

The inefficient utilisation of compute units is mostly down to the API (DX11), not necessarily the architecture; DX12 gets rid of that with async compute. Polaris will still have ACEs instead of a GigaThread engine like Maxwell, but it will have far fewer compute units (I think something like 2.3k SPs instead of the 4k SPs in Fury), so yes, it will be more efficient. By the time Vega comes out, hopefully devs will be using async compute more often, especially if Pascal adds it too.
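The benefit of async compute described above is essentially about overlapping work that would otherwise run back-to-back. A toy timing model (illustrative Python with made-up workload numbers, not real GPU code) sketches the idea:

```python
# Toy model: compare serial vs. overlapped execution of a graphics pass
# and a compute pass, the scheduling idea behind async compute.
# All numbers are hypothetical, for illustration only.
graphics_ms = 10.0   # made-up graphics workload per frame
compute_ms = 4.0     # made-up compute workload per frame
overlap = 0.75       # assumed fraction of compute hidden under graphics

serial = graphics_ms + compute_ms
overlapped = graphics_ms + compute_ms * (1 - overlap)

print(f"serial frame time:     {serial:.1f} ms")
print(f"overlapped frame time: {overlapped:.1f} ms")
```

With these assumed numbers, overlapping shaves the frame time from 14.0 ms to 11.0 ms; the real gain depends entirely on how much of the compute work the hardware can actually hide.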
 
Indeed - the results seen by various companies using TSMC 16nm FF+ give a fairly hopeful picture for performance, though granted not many of them are working with something of the scale/demands of a GPU. (On top of that, GP100 has a fairly healthy boost in clocks, which for those kinds of platforms is generally on the conservative end of what is possible.)

The clocks for GP100 are for a 300W card; in previous gens the Tesla cards were 225W versus 250W for the desktop parts. So while you might get 1.2GHz on a Titan X (I actually have zero clue what their stock clocks are), you might get 1GHz on the Tesla. This gen, a 1.3GHz Tesla at 300W probably means a 1.2GHz Titan at 250W. Either way, I wouldn't expect noticeably higher clocks from a consumer part over the Tesla part when the latter has a 300W TDP.
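The clock/TDP trade-off being argued here can be sketched with the standard dynamic-power approximation (P ∝ V²·f). The voltages below are invented purely for illustration; they are not published figures for either card:

```python
# Back-of-the-envelope dynamic power scaling (P ~ V^2 * f), using
# made-up voltages, to illustrate why a 300 W Tesla clock doesn't
# translate 1:1 into a 250 W consumer-card clock.
def rel_power(freq_ghz, volts, base_freq=1.0, base_volts=1.0):
    """Dynamic power relative to a 1.0 GHz / 1.0 V baseline."""
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

tesla = rel_power(1.3, 1.05)   # hypothetical 300 W-class operating point
titan = rel_power(1.2, 1.00)   # hypothetical 250 W-class operating point
print(f"Tesla point draws {tesla / titan:.2f}x the Titan point's dynamic power")
```

Even a modest clock and voltage bump compounds quickly (about 1.19x here), which is why a 20% TDP headroom buys far less than 20% extra clock.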
 
To me the chip needs to be 400-500mm² for 4K to become what 1440p is today. If we believe the rumours, Nvidia's GTX 1080 will have a 312mm² die, which would make it about 20% faster than a 980 Ti, and I find that pointless. What would any enthusiast gain from upgrading a Titan/Ti/Fury to the next flagship? Pretty much nothing: he would still have the same issues, running 1080p at 140fps instead of 120, 1440p at 90 instead of 70, and 4K at 43fps instead of 34.
So the new line-up would be a battle for the mainstream and market share, not the flagship, because the latter offers no real incentive beyond people who like to have the fastest card just for the sake of having it, and they are a minority compared to the rest.
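A quick way to see how small a straight 20% uplift really is: scale some example frame rates by 1.2 (the post's figures are rounded a little differently, but the point stands):

```python
# Scale some illustrative baseline frame rates by the rumoured 20% uplift.
# Baseline numbers are examples from the discussion, not benchmarks.
baseline_fps = {"1080p": 120, "1440p": 70, "2160p": 34}
for res, fps in baseline_fps.items():
    print(f"{res}: {fps} -> {fps * 1.2:.0f} fps")
```

At 4K the jump is from 34 to roughly 41 fps: still short of a 60 fps target, which is the crux of the "pointless for enthusiasts" argument.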

Well, Hawaii is a 440mm² GPU, so I suspect it might be next year or even 2018 before we get a true successor of sorts.
 
The inefficient utilisation of compute units is mostly down to the API (DX11), not necessarily the architecture; DX12 gets rid of that with async compute. Polaris will still have ACEs instead of a GigaThread engine like Maxwell, but it will have far fewer compute units (I think something like 2.3k SPs instead of the 4k SPs in Fury), so yes, it will be more efficient. By the time Vega comes out, hopefully devs will be using async compute more often, especially if Pascal adds it too.

Yes, DX12 is helping the 390/390X more than the Fury, as the latter isn't as well balanced.
 
Your problem is that 'maxed out' for you means 8xAA. I'm guessing that fewer than 1% of PC gamers bother with such a high AA setting.

The vast majority will use 2xAA and call it a day.

Not my problem at all.

I am just pointing out that you need a flagship card if you want to max out 1080p.

What settings people choose to use in practice is entirely up to them.
 
The clocks for Gp100 are for a 300W card, in previous gens the Tesla cards were 225W to a desktop 250W. So while you might get 1.2Ghz in a Titan X(I actually have zero clue what their stock clocks are) and you might get 1Ghz for the Tesla. This gen a 1.3Ghz Tesla at 300W probably means 1.2Ghz Titan at 250W situation. Either way I wouldn't expect noticeably higher clocks with a consumer part over the Tesla part when it has a 300W tdp.

The Tesla is rated at 1480MHz at 300W.

To put this into perspective, the 980 Ti is rated at 1075MHz @ 250W but actually does 1200MHz, and raising the board's power limit to 275W they'll hit 1450-1480MHz, with 8000MHz on the memory too.
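The striking part of those figures is the ratio between the clock gain and the power gain. A trivial check of the numbers quoted in the post:

```python
# Uplift math for the 980 Ti figures quoted above (rated vs. observed).
rated_mhz, rated_w = 1075, 250
oc_mhz, oc_w = 1480, 275

print(f"clock uplift: {oc_mhz / rated_mhz - 1:.0%}")
print(f"power uplift: {oc_w / rated_w - 1:.0%}")
```

Roughly 38% more clock for 10% more board power, if the quoted numbers hold, which is why the rated boost clock understates what these chips actually run at.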
 
Your problem is that 'maxed out' for you means 8xAA. I'm guessing that fewer than 1% of PC gamers bother with such a high AA setting.

The vast majority will use 2xAA and call it a day.

I still don't understand this mentality: it is either maxed out or it is not; maxed out minus a bit isn't maxed out. I do agree that different people have different ideas of what maxed out means, but really they shouldn't: maxed out is maxed out.
 
Not my problem at all.

I am just pointing out that you need a flagship card if you want to max out 1080p.

What settings people choose to use in practice is entirely up to them.

In other words, the majority of the statements you type on this forum day in, day out apply to 0.1% of the population.
 
I still don't understand this mentality: it is either maxed out or it is not; maxed out minus a bit isn't maxed out. I do agree that different people have different ideas of what maxed out means, but really they shouldn't: maxed out is maxed out.

Why is 8xAA 'maxed out', though? Why not 16xAA, or 32xAA, or 32xAA plus transparency AA and whatever else?

'Maxed' used to mean the highest main settings in the game, nothing more or less, and 8xAA isn't always an option in games. Regardless, 'maxed' has always been a worthless notion: DoF reduces IQ yet uses more power. Just because a developer puts an option in doesn't mean it's a good idea to use it, or that everyone should aspire to enable it. 'Maxed' is an irrelevant concept; the best IQ for a given game is what people should focus on.

The difference between 2xAA and 4xAA versus the difference between max and medium or low textures is night and day. Most people would struggle to see the AA change, whereas you'd usually have to be blind (rare games show almost no difference) not to see the fundamental difference in how good the game looks from the texture setting.
 
Not my problem at all.

I am just pointing out that you need a flagship card if you want to max out 1080p.

What settings people choose to use in practice is entirely up to them.

You will always have those "exception to the rule" games, like Crysis; from time to time you get a game or two that demands more than the average release.
But the point is that 2x 390 performance is more than enough to break the 4K@60 wall for the vast majority of games, and that is still doable on 14/16nm FinFET if they manage 400-500mm² chips.
I just hope AMD doesn't release just one Vega GPU. They should do a high-end one of around 300-350mm², then a second GPU to break that wall, even if they have to price it over $1k. If they do this and keep up the driver optimisation we've seen over the last few months, AMD might very well turn the tables on Nvidia, but that's wishful thinking :D
 
I still don't understand this mentality: it is either maxed out or it is not; maxed out minus a bit isn't maxed out. I do agree that different people have different ideas of what maxed out means, but really they shouldn't: maxed out is maxed out.

I think the 'mentality' arises when you reach a point where returns have diminished so much that you can no longer see the difference, especially when Kaap starts talking about wanting maximum AA at 4K, as in some other instances.

No-one would argue that no AA isn't worse than 2xAA, or that you've therefore 'maxed out' with no AA. But when you get to the levels Kaap frequently talks about, it's like someone tying lead weights to their belt and saying you need stronger leg muscles to run a half-marathon. People just don't see that it adds anything, which is why they stop accepting indiscernible extra levels as "maxing something out".
 
Not according to TPU:

https://tpucdn.com/reviews/NVIDIA/GeForce_GTX_670/images/perfrel.gif

The GTX670 was around 30% faster than a GTX570.
Went over this already. The difference is definitely more than that when talking about the modern games of the time (people who upgrade GPUs obviously tend to want to play the newest games).

A GTX 670 was also more overclockable than the 570; you could easily get it to basically stock 680 levels without much trouble.

And like I said, you're still comparing a 300mm² die card with a nearly 600mm² die card. The nomenclature changed between Fermi and Kepler to reflect rising costs and yield problems going from 40nm to 28nm. The 670 was firmly a midrange card while the 570 was basically the 780 of the Fermi line-up. It's important to keep that in mind when comparing the jump.

This is the thing: expecting a 232mm² chip to be 50% faster than a GTX 980 Ti is a lot.
Whoa now, I never said anything like that whatsoever. I was making a specific point, not supporting some overarching argument about this.
 
I think the 'mentality' arises when you reach a point where returns have diminished so much that you can no longer see the difference, especially when Kaap starts talking about wanting maximum AA at 4K, as in some other instances.

No-one would argue that no AA isn't worse than 2xAA, or that you've therefore 'maxed out' with no AA. But when you get to the levels Kaap frequently talks about, it's like someone tying lead weights to their belt and saying you need stronger leg muscles to run a half-marathon. People just don't see that it adds anything, which is why they stop accepting indiscernible extra levels as "maxing something out".

Using max AA even @2160p with 200% scaling (8K) is quite noticeable in some games.

I did post some screenshots a while ago, but I won't be posting any more as people want to believe what they want rather than what is in front of them.
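For scale, the pixel arithmetic behind "2160p with 200% scaling" is straightforward: a 200% per-axis resolution scale quadruples the number of shaded pixels.

```python
# Pixel-count math for a 2160p frame rendered at 200% resolution scale.
w, h = 3840, 2160
scale = 2.0  # 200% per-axis resolution scale

render_w, render_h = int(w * scale), int(h * scale)
render_pixels = render_w * render_h
native_pixels = w * h

print(f"render target: {render_w}x{render_h} "
      f"({render_pixels / native_pixels:.0f}x the native pixel count)")
```

That is a 7680x4320 (8K-class) render target, which is why the GPU cost of this setting dwarfs that of most other options.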
 
You will always have those "exception to the rule" games, like Crysis; from time to time you get a game or two that demands more than the average release.
But the point is that 2x 390 performance is more than enough to break the 4K@60 wall for the vast majority of games, and that is still doable on 14/16nm FinFET if they manage 400-500mm² chips.
I just hope AMD doesn't release just one Vega GPU. They should do a high-end one of around 300-350mm², then a second GPU to break that wall, even if they have to price it over $1k. If they do this and keep up the driver optimisation we've seen over the last few months, AMD might very well turn the tables on Nvidia, but that's wishful thinking :D

You can game @2160p on almost any card if you turn down enough settings.

What settings people use is entirely up to them, it is not my place to tell them what to use.
 
Using max AA even @2160p with 200% scaling (8K) is quite noticeable in some games.

I did post some screenshots a while ago, but I won't be posting any more as people want to believe what they want rather than what is in front of them.

But you aren't using max AA; you're using the level that you, and no one else, determined appropriate. I'm playing Borderlands TPS at the moment: there is no MSAA/SSAA option in the game settings, just FXAA (puke) on or off. So how is 8xAA the max setting for the game? To use it you would have to go into the Nvidia control panel and force 8xAA, and saying "8xAA" without saying what type (MSAA or SSAA) is almost meaningless. And again, why not 16x or 32x? What makes the 8xAA you choose to force the de facto max setting, when you and you alone are making that choice?
 
But you aren't using max AA; you're using the level that you, and no one else, determined appropriate. I'm playing Borderlands TPS at the moment: there is no MSAA/SSAA option in the game settings, just FXAA (puke) on or off. So how is 8xAA the max setting for the game? To use it you would have to go into the Nvidia control panel and force 8xAA, and saying "8xAA" without saying what type (MSAA or SSAA) is almost meaningless. And again, why not 16x or 32x? What makes the 8xAA you choose to force the de facto max setting, when you and you alone are making that choice?

I use the maximum settings available in the game, simple.
 
You can game @2160p on almost any card if you turn down enough settings.

What settings people use is entirely up to them, it is not my place to tell them what to use.

Yes, but there are settings that give you a very high quality image at first sight, and settings where you need a magnifying glass and 30 seconds of staring to spot a difference.
As many others have explained, you can get max image quality, or extremely close to it, without throwing performance away on useless settings.
Therefore you don't need a 300% increase in flagship performance to have playable 4K; 2x 390 performance would be plenty.
 