The 7950 beats the 670 at higher settings because of the VRAM. I've owned both.
Don't forget how good the 290/290X holds on DX12 games also.
http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/
I can only assume they want owners of older cards to move to newer GPUs and are deliberately gimping the older ones. It makes business sense, but it's a poor, underhanded move against customers.
What I would be asking, if I owned a 290X, is why it takes AMD so long to get drivers up to speed. People seem happy that it has taken a couple of years to reach this performance.
How about that for a different spin? It makes a change from the "Nvidia is gimping the 780" theories. Maybe Nvidia just got the most they could out of the card from the start?
1. The ratio of render:compute work is tilting further towards compute than it was a few years ago when Kepler was introduced.
2. The console effect is pushing developers into using more compute resources in order to extract as much performance as possible from the consoles GCN APUs.
3. Console titles are being optimized for GCN.
If a program is optimized for GCN then the work groups will be divisible in increments of 64 (matching a wavefront).
If a program is optimized for Kepler/Maxwell then the work groups will be divisible in increments of 32 (matching a warp).
Prior to the arrival of GCN-based consoles, developers would map their work groups in increments of 32. This left SIMD lanes idling in every GCN compute unit.
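The wavefront/warp point above can be sketched with a toy calculation. This is an illustration, not any vendor API: the function name, and the simplification that a SIMD unit always executes whole wavefronts/warps, are my assumptions for the example.

```python
# Toy model: hardware always executes a whole wavefront/warp, so a work
# group whose size is not a multiple of the hardware width leaves lanes idle.

def lane_utilization(group_size: int, hw_width: int) -> float:
    """Fraction of SIMD lanes doing useful work for one work group."""
    waves = -(-group_size // hw_width)   # ceiling division: whole waves issued
    return group_size / (waves * hw_width)

# A 32-thread group, as was typical before the GCN consoles:
print(lane_utilization(32, 32))  # warp width 32: 1.0, fully used
print(lane_utilization(32, 64))  # wavefront width 64: 0.5, half the lanes idle

# Sizing groups in multiples of 64 keeps both widths fully occupied:
print(lane_utilization(64, 64))  # 1.0
```

This is the "console effect" in miniature: 64-sized groups cost Kepler/Maxwell nothing (64 is still a multiple of 32) but double GCN's lane utilization versus 32-sized groups.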
Take the 2015 titles. Isolate console from non-console titles and this is what you get:
R9 290x vs GTX 780 Ti (non-console)
1080p: GTX 780 Ti is 8.3% faster
1440p: GTX 780 Ti is 1.7% faster
R9 290x vs GTX 780 Ti (console)
1080p: R9 290x is 16.2% faster
1440p: R9 290x is 20.8% faster
See a pattern?
Let's do the same thing, but comparing a GTX 980 vs the R9 290x:
R9 290x vs GTX 980 (non-console)
1080p: GTX 980 is 33.5% faster
1440p: GTX 980 is 29.3% faster
R9 290x vs GTX 980 (console)
1080p: GTX 980 is 8.1% faster
1440p: GTX 980 is 5.6% faster
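For anyone wanting to check figures like these themselves, here is a minimal sketch of how an "X% faster" average is derived from per-title frame rates. The fps numbers below are made up for illustration; they are not the actual benchmark data behind the percentages above.

```python
# Relative performance of card A vs card B, expressed as "A is X% faster".

def pct_faster(fps_a: float, fps_b: float) -> float:
    return (fps_a / fps_b - 1.0) * 100.0

# Hypothetical per-title results as (card_a_fps, card_b_fps) pairs:
console_titles = [(62.0, 54.0), (48.0, 40.0)]

deltas = [pct_faster(a, b) for a, b in console_titles]
average_delta = sum(deltas) / len(deltas)
print(f"card A is {average_delta:.1f}% faster on average")
```

Note this averages the per-title percentage deltas; some reviews instead use a geometric mean across titles, which weights outliers less.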
Now, as we move towards DX12 and async compute titles, the 10-20% boost async compute offers, plus the API overhead alleviation of DX12, should leave the R9 290x around 5-15% faster than a GTX 980. (We can ignore the Rise of the Tomb Raider DX12 patch as it is broken, but once it's fixed you'll see.)
What we're seeing is the console effect. With Microsoft pushing unity between the PC and console platforms, this will push NVIDIA towards a more GCN-like uarch, or they won't be able to compete.
Glad I've gone with Nvidia the last couple of generations.
Seems AMD still can't get performance on day one, which is when I play my games; instead they improve performance later on, when it doesn't matter to me. At least Nvidia can get good performance from the start, which is what I pay for.
Is it just me, or do those figures from the 2016 tests show the "uber OC" 290X only doing better against the stock 780 Ti? Although if you compare it with the 780 Ti OC column, it still wins most of the tests, does it not?
AMD gives the 290 an “up to” rating of 947 MHz, but our seven games average 832 MHz. In the most taxing situations, the clock rate floor, or base clock, appears to be 662 MHz.
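Quick arithmetic on the quoted clocks puts those numbers in perspective (the MHz figures are from the quote above; the percentages are my calculation):

```python
# Clock figures quoted from the review: rated "up to", measured average, floor.
rated, average, floor = 947, 832, 662  # MHz

print(f"average is {average / rated:.0%} of the rated clock")  # 88%
print(f"floor is {floor / rated:.0%} of the rated clock")      # 70%
```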
Both improved. 290x improved more. The end.
Exactly this. It's more of a failure on AMD's part that the 290x was so "slow" at launch vs a 780 Ti. Nvidia got the most out of the 780 Ti from launch, hence only adding a few percent in gains over its lifetime. AMD's 290x, with immature drivers, was held back at launch, but made massive gains over its life (and got an extended life via the 390s).
I think I'd rather have a card that performs at 98-100% on release day than a card that only does 60-70% but over its life may (or may not) get up to 100%.
Really? It was a card released to go against the 780, it performed better than that card, and it forced Nvidia to lower the pricing on the 780. So it did exactly what it was supposed to do and got great reviews.
Then Nvidia released the 780 Ti, and it isn't that much faster than a 290x. 10% on average? And you are saying it was "so slow" at launch?
I know which I would prefer: a card that, while only using 60% of its performance, nearly matches a card that's much more expensive.