
Has Nvidia Forgotten Kepler? The GTX 780 Ti vs. the 290X Revisited

Haven't been on in a long time, but it's nice to come back and see my 280X beating the Titan in a few games.

Add in the 30% overclock I got and it's probably about even lol

I think the Titan was around £850 when I got my 280X for £200.
 
There may not be a conspiracy with the desktop GTX 780/780 Ti drivers, but there appears to be one with their mobile GPU counterparts.

You only need to take a brief look at notebookreviews.com or type "Mr. Fox's GeForce 345.20 Desktop Driver Mod for Alienware and Clevo Mobile GPUs" into Google to see the issues. Nvidia are essentially forcing throttling on GTX 780M and 880M cards. I'm having to use a modified set of desktop drivers (345.20) to overcome this on my GTX 780M. Nvidia are unwilling to accept there is an issue and have not fixed this in over 18 months. I can only assume they want owners of older cards to move to newer GPUs and are deliberately gimping older cards. Makes business sense, but it's a poor, underhanded move against customers.
 
I can only assume they want owners of older cards to move to newer GPUs and are deliberately gimping older cards. Makes business sense, but it's a poor, underhanded move against customers.

Tbf, I certainly wouldn't put it past them!!! They seem overly focused on their bottom line, imo, and not so much on customer satisfaction...
 

Just a shame that, for me, Hitman was a dire game, and although you could approach each enemy and mission in a set or different way, I kinda felt let down by the graphics and content.

The game, however, is very playable on my Titan X and can on occasion look fantastic; I just feel the game isn't quite the Hitman Contracts I was hoping for.

What are your thoughts on playing this game?
 
What I would be asking if I owned a 290X is why it takes AMD so long to get drivers up to speed. People seem happy that it has taken a couple of years to get this performance.

How about that for a different spin? Makes a change from the 'Nvidia gimping the 780' theories. Maybe Nvidia just got the most they could out of it from the start?

I wouldn't say it's all drivers:
http://forums.anandtech.com/showpost.php?p=38117833&postcount=74
1. The render:compute ratio is tilting further towards compute than it was a few years ago when Kepler was introduced.
2. The console effect is pushing developers into using more compute resources in order to extract as much performance as possible from the consoles' GCN APUs.
3. Console titles are being optimized for GCN.

http://forums.anandtech.com/showpost.php?p=38117913&postcount=79
If a program is optimized for GCN then the work groups will be divisible by 64 (matching a wavefront).

If a program is optimized for Kepler/Maxwell then the work groups will be divisible by 32 (matching a warp).

Prior to the arrival of GCN-based consoles, developers would size their work groups in increments of 32. This left GCN compute units partly idle, with lanes going unused in every CU.
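To make the 64 vs 32 point concrete: a thread group that isn't a multiple of the hardware's SIMD width leaves lanes doing nothing. A minimal C++ sketch (the helper and the numbers are purely illustrative, not from the linked post):

```cpp
#include <cstdio>

// Illustrative helper: for a given thread-group size and hardware SIMD width
// (32 = an NVIDIA warp, 64 = a GCN wavefront), work out how many hardware
// waves the group occupies and what fraction of the lanes do real work.
static void report(int groupSize, int simdWidth) {
    int waves = (groupSize + simdWidth - 1) / simdWidth;          // round up
    double utilisation = 100.0 * groupSize / (waves * simdWidth); // busy lanes
    std::printf("group %3d on SIMD width %2d -> %d wave(s), %.0f%% lane utilisation\n",
                groupSize, simdWidth, waves, utilisation);
}

int main() {
    report(32, 32); // warp-sized group: one full warp on NVIDIA
    report(32, 64); // same group on GCN: half a wavefront sits idle
    report(64, 32); // 64-wide group: two full warps on NVIDIA
    report(64, 64); // and exactly one full wavefront on GCN
}
```

A 32-wide group fills a warp exactly but only half a GCN wavefront, whereas a 64-wide group keeps both architectures fully occupied.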

http://forums.anandtech.com/showpost.php?p=38137242&postcount=114

Take the 2015 titles. Isolate console vs non-console titles and this is what you get..

R9 290x vs GTX 780 Ti (non-console)
1080p: GTX 780 Ti is 8.3% faster
1440p: GTX 780 Ti is 1.7% faster

R9 290x vs GTX 780 Ti (console)
1080p: R9 290x is 16.2% faster
1440p: R9 290x is 20.8% faster

See a pattern?

Let's do the same thing but comparing a GTX 980 vs R9 290x..

R9 290x vs GTX 980 (non-console)
1080p: GTX 980 is 33.5% faster
1440p: GTX 980 is 29.3% faster

R9 290x vs GTX 980 (console)
1080p: GTX 980 is 8.1% faster
1440p: GTX 980 is 5.6% faster
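For anyone wondering how an "X% faster" figure like those above is derived from review data, it's just the ratio of the two average frame rates. A rough C++ sketch with made-up FPS numbers (not the actual benchmark data behind the linked post):

```cpp
#include <cstdio>

// Illustrative only: deriving an "X% faster" figure from two average frame
// rates. Both FPS values below are made up for the example.
int main() {
    double fpsA = 58.0;  // hypothetical average FPS for card A
    double fpsB = 50.0;  // hypothetical average FPS for card B
    double pctFaster = (fpsA / fpsB - 1.0) * 100.0;
    std::printf("card A is %.1f%% faster than card B\n", pctFaster); // 16.0%
}
```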

Now, as we move towards DX12 and async compute titles, the 10-20% boost async compute offers, as well as the API overhead alleviation of DX12, should result in the R9 290x being around 5-15% faster than a GTX 980. (We can ignore the Rise of the Tomb Raider DX12 patch as it is broken, but once it's fixed you'll see.)

What we're seeing is the console effect. With Microsoft pushing unity between the PC and console platforms, we're going to see this push NVIDIA towards a more GCN-like uarch, or they won't be able to compete.

Add the extra benefit from DX12's lower overhead and async compute and GCN Hawaii shines even brighter - http://media.bestofmicro.com/D/G/561652/gallery/AsyncCompute_On_Off_w_600.png
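For context on what "async compute" means at the API level: under DX12 an engine can create a dedicated compute queue alongside the graphics queue and submit compute work there so it can overlap with rendering. A rough C++/D3D12 sketch, assuming an already-created `device` and with error handling omitted (an illustration of the mechanism, not anyone's actual engine code):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Minimal sketch: create a dedicated compute queue next to the usual graphics
// queue. Work submitted to it can overlap with graphics work on hardware that
// supports concurrent (async) compute.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Command lists of type COMPUTE are then executed on computeQueue, with
    // ID3D12Fence objects used to synchronise the two queues wherever the
    // graphics work depends on the compute results.
}
```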

Also, the R9 290 was $400 and the R9 290X $550, while the GTX 780 went for $650 and the 780 Ti for $700. The huge price difference is what makes people go "nuts" over the matter, more so when you have basically the same chip with some overclock going toe to toe with newer, better chips from nVIDIA.

As a buyer of an R9 290/X, I would expect it to perhaps be slower than those other cards, since they were priced way lower. I don't mind that I'm only getting the extra performance now; it was a given that top efficiency wasn't going to happen on day 1, and either way they weren't far off the competition. Certainly not tens/hundreds of greens' worth off. :)
Also, if you check the peak theoretical performance, the R9 290X matches a stock 980 Ti at around 5.6-something TFLOPS, if I remember correctly. Should I be upset it isn't that in real life on a daily basis? :p
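For reference, the peak figure quoted there comes from the usual formula: shaders × 2 FLOPs per clock (FMA) × clock speed. A quick C++ sketch using the 290X's published specs (2816 stream processors at up to 1000 MHz); treat it as back-of-envelope rather than measured:

```cpp
#include <cstdio>

// Peak single-precision throughput = shaders * 2 FLOPs per clock (FMA) * clock.
// R9 290X published specs: 2816 stream processors at up to 1000 MHz.
int main() {
    double shaders  = 2816;
    double clockGHz = 1.0;
    double tflops = shaders * 2.0 * clockGHz / 1000.0;  // GFLOPS -> TFLOPS
    std::printf("R9 290X peak: %.2f TFLOPS\n", tflops);  // ~5.63 TFLOPS
}
```

The same formula lands a stock 980 Ti in a similar ballpark, which is presumably the comparison being made.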

Also, the fact that nVIDIA built their cards with just what was needed back then and not much more would, as a customer, upset me even more, considering they claim to have been working with Microsoft for so many years on the low-level API; aka milking the customers.
 
Glad I've gone with Nvidia the last couple of generations.

Seems AMD still can't get performance on day 1, which is when I play my games; instead they improve performance later on, when it doesn't matter to me. At least Nvidia can get good performance from the start, which is what I pay for :)
 
Glad I've gone with Nvidia the last couple of generations.

Seems AMD still can't get performance on day 1, which is when I play my games; instead they improve performance later on, when it doesn't matter to me. At least Nvidia can get good performance from the start, which is what I pay for :)

But looking at it another way, you get to pay less at the time :)
 
Is it just me, or if you compare those figures from the 2016 tests, does the "uber OC" 290X only do better against the stock 780 Ti? If you compare it with the 780 Ti OC column then it still wins most of the tests, does it not? :confused:
 
Console titles are being optimized for GCN.

People get a bit hung up on this, especially the hype, but the truth is it's a lot less relevant than people think.

Increasingly, with for instance the Unreal Engine, which many, many games are based on, there are separate output paths for the different platforms, and the actual fundamental game mechanics are abstracted from the underlying hardware during development.

Even with games that are developed on top of or alongside the engine, rather than using an off-the-shelf engine, there is often very little shared at the lower level between the console and PC versions of the engine.

While there are some truths to it, this image of most console developers working extensively to optimise for GCN, and that having an impact on how the game runs on the PC, is largely little more than a fabrication. It is starting to become a little more relevant when talking in terms of async compute, but it is still a long way from the hype.

Is it just me, or if you compare those figures from the 2016 tests, does the "uber OC" 290X only do better against the stock 780 Ti? If you compare it with the 780 Ti OC column then it still wins most of the tests, does it not? :confused:

Any GK110 numbers where they aren't showing the specific boost clock in action should be completely ignored IMO.
 
Is it just me, or if you compare those figures from the 2016 tests, does the "uber OC" 290X only do better against the stock 780 Ti? If you compare it with the 780 Ti OC column then it still wins most of the tests, does it not? :confused:

Uber mode means the fan runs faster so that the GPU doesn't downclock, an issue that doesn't exist on the other cards, which are not made by AMD. They managed to get a lot of bad PR because they cheaped out on the basic stuff. This is a problem with the reference cooling solutions AMD provided for both the R9 290 and R9 290X - http://www.tomshardware.com/reviews/radeon-r9-290-review-benchmark,3659.html

AMD gives the 290 an “up to” rating of 947 MHz, but our seven games average 832 MHz. In the most taxing situations, the clock rate floor, or base clock, appears to be 662 MHz.

The R9 290/X can overclock above those "uber" frequencies, which every custom card already matches or exceeds at stock. For instance, mine (a Sapphire R9 290 Tri-X) came with 1000/1300 instead of 947/1250 and can do ~1100/1425 at stock voltage with no power limit increase.
 
Both improved. 290x improved more. The end.

Exactly this. It's more of a failure on AMD's part that the 290X was so "slow" at launch vs a 780 Ti. Nvidia got the most out of the 780 Ti from launch, hence only adding a few percent of gains over its lifetime. AMD's 290X, with immature drivers, was held back at launch, but made massive gains over its life (and got an extended life thanks to the 390s).

I think I'd rather have a card that performs at 98-100% on release day than a card that only does 60-70% but over its life may (or may not) get up to 100%.
 
Exactly this. It's more of a failure on AMD's part that the 290X was so "slow" at launch vs a 780 Ti. Nvidia got the most out of the 780 Ti from launch, hence only adding a few percent of gains over its lifetime. AMD's 290X, with immature drivers, was held back at launch, but made massive gains over its life (and got an extended life thanks to the 390s).

I think I'd rather have a card that performs at 98-100% on release day than a card that only does 60-70% but over its life may (or may not) get up to 100%.

TBH that's pretty much how I look at it - Kepler has had its day, and I'm looking forward to when a decent replacement is out, not trying to drag a bit more life out of it with newer generations of games where it is going to be quickly left behind anyhow.
 
Exactly this. It's more of a failure on AMD's part that the 290X was so "slow" at launch vs a 780 Ti. Nvidia got the most out of the 780 Ti from launch, hence only adding a few percent of gains over its lifetime. AMD's 290X, with immature drivers, was held back at launch, but made massive gains over its life (and got an extended life thanks to the 390s).

I think I'd rather have a card that performs at 98-100% on release day than a card that only does 60-70% but over its life may (or may not) get up to 100%.

Really? A card released to go against the 780 that performs better than that card and forces Nvidia to lower the pricing on the 780. So it does exactly what it's supposed to do and gets great reviews.

Then Nvidia release the 780 Ti, and it isn't that much faster than a 290X. 10% on average? And you are saying it was "so slow" at launch?

I know which I would prefer: a card that, while only using 60% of its performance, nearly matches a card that's much more expensive.
 
Really? A card released to go against the 780 that performs better than that card and forces Nvidia to lower the pricing on the 780. So it does exactly what it's supposed to do and gets great reviews.

Then Nvidia release the 780 Ti, and it isn't that much faster than a 290X. 10% on average? And you are saying it was "so slow" at launch?

I know which I would prefer: a card that, while only using 60% of its performance, nearly matches a card that's much more expensive.

You miss my point really. I also said so "slow" rather than "so slow", taking slow with a pinch of salt since it was still miles faster than a 7970 etc, which is what I'm still rocking now...

I mean more "slow then, compared to what it achieves now", i.e. the 780 Ti released at 95% performance and has only gained 5% over its life, while the 290X released at 80% performance and over time has got up to the 100% mark.

I thought I made it obvious.


*Percentage figures are just made up for the sake of the example.
 