The first "proper" Kepler news Fri 17th Feb?

Max overclock 7970 vs max overclock 580gtx is no contest; it's exactly as much faster than a 580gtx as I've been saying it would be for six months, and exactly as much faster than a 6970 as I said it would be.

It turns out you were pretty much right, but at least admit it was for entirely the wrong reason:

It will be completely unsurprising when the 7970 releases with clock speeds between 950MHz and 1GHz. Even less surprising: if you overclock just about any GPU from 950MHz to 1GHz, a sub-5% overclock, you would see MAYBE 3% real FPS gain, which, besides being unnoticeable in use, pales in comparison to a 70-80% performance gain from shaders/architecture, memory clock, memory bus and everything else.

I.e. clock speed is all but irrelevant in this situation. Say you expect 1GHz clock speeds and get 900MHz: because the card won't always be clock-speed limited, this will likely make barely a 6-7% overall difference to the speed of the card, or less than 10% of the total expected improvement in card speed.

Clock speed is the LEAST important difference in the specs of a new-gen card.
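
To put rough numbers on that claim, here's a back-of-envelope sketch. The 65% clock-scaling factor (how often the card is actually clock-limited) is an assumption of mine, not a measured figure:

[code]
# Back-of-envelope sketch: what a clock-speed shortfall costs overall.
# Assumes FPS scales with core clock only ~65% of the time, because the
# card isn't always clock-limited -- that fraction is an assumption.

def overall_fps_impact(expected_mhz, actual_mhz, clock_scaling=0.65):
    clock_deficit = 1 - actual_mhz / expected_mhz   # 10% at 900 vs 1000
    return clock_deficit * clock_scaling            # felt FPS loss

loss = overall_fps_impact(1000, 900)
print(f"~{loss:.1%} overall FPS loss")                     # ~6.5%
print(f"{loss / 0.75:.0%} of a 75% generational uplift")   # ~9%
[/code]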
 
Don't forget that the GTX670 will be NVidia's mid-to-high-end mainstream card. In the past these have typically launched at £150-£200, so £240 would actually be expensive by historical standards. AMD's recent price hikes may make £240 seem like a bargain, but it isn't really. A replacement for the £160 GTX460 should not really cost 50% more; that has not happened before.

Wouldn't the GTX670, by its very naming convention, be the equivalent of the GTX470 and GTX570, which both launched at around the £300+ mark? And even now the 570 hasn't changed price much at all, whereas dropping down a level, the new 560 is actually around £240.

Maybe if it were the GTX660 I could see it at around £160-£180.

But don't blame AMD and the 7-series for high prices; that's just crazy.
 
See, it's funny, because you both try that, then I pull out a link that you either click and realise your posts were stupid, or ignore and claim you're still right either way.

http://hardocp.com/article/2012/01/09/amd_radeon_hd_7970_overclocking_performance_review/7

48-80% faster, with 60% as a rough average, not a best-case scenario... and that's up against a 580gtx with a 12% overclock, not far off its max on air.

Oh no, so a bog-standard first 7970 they tried ends up an AVERAGE of 60% faster than a 12%-overclocked 580gtx...

Oh wait, and every other review that pits an overclocked 580gtx against an overclocked 7970 shows the same thing.

Then you'll come at me, again, with "but they didn't use a 3GB card, so obviously those results aren't valid".

Before you do that, I'll ask you to link benchmarks of those games at those settings being limited by memory on a 1.5GB 580gtx. You won't, because the last time someone tried to blame it all on the memory, I provided links proving otherwise and no one could provide links proving that memory limited the performance.

Max overclock 7970 vs max overclock 580gtx is no contest; it's exactly as much faster than a 580gtx as I've been saying it would be for six months, and exactly as much faster than a 6970 as I said it would be.

Had to laugh.

And the replies are?
 
It turns out you were pretty much right, but at least admit it was for entirely the wrong reason:

Again, no: that post was in response to talk of a small clock-speed difference, i.e. assuming the clock speed was 1000MHz or 900MHz, in which case what I said holds true. That post was NOT in response to a card capable of running at 1.25GHz being sold as a 925MHz card for TDP reasons.

Ultimately, I said all along which assumptions I was making, as most people did. Roughly 80% faster is what AMD would have been aiming for all along, as Nvidia and AMD have done for as many "real" generations as I can remember. There have been screw-ups by both companies along the way, and for me, limiting its TDP this poorly is another one, not least because it can only hit 250W in Furmark with PowerTune set above stock, while gaming doesn't come remotely close to that. The entire point of PowerTune was to prevent breaching a certain TDP.

The card could be 1250MHz and STILL have a 250W max TDP: in gaming the clocks would rarely if ever drop below 1250MHz, while Furmark would end up massively limited, and so the hell what.

Everyone made assumptions, and all the rumours said it had a circa-950MHz clock speed to achieve that performance; it turns out the realistically attainable clock speed is circa 1200MHz, so what I said holds true: at 1250MHz or 1200MHz, performance would barely differ. Again, IIRC (because unlike others I don't save posts to use out of context to prove a non-point), my post was in response to someone saying something along the lines of missing the clock speed by 5% would massively change performance... it wouldn't. Underclocking a card by 30% for really, really poor TDP reasons, when you've implemented a should-be game-changing PowerTune tech, will though.
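
A toy model of what PowerTune is supposed to do, under that argument. Every wattage figure here is invented for illustration, and real power scales worse than linearly with clock, so this deliberately crude sketch only shows the mechanism:

[code]
# Toy model of a PowerTune-style cap: clocks drop only when a workload
# would push the card past its TDP. Watt figures are invented, and the
# linear clock/power relation is a deliberate simplification.

TDP_CAP = 250     # watts
BASE_MHZ = 1250   # hypothetical "really attainable" clock

def sustained_clock(watts_at_base_clock):
    """Scale the clock down just enough to stay under the cap."""
    if watts_at_base_clock <= TDP_CAP:
        return BASE_MHZ                                  # no throttle
    return BASE_MHZ * TDP_CAP / watts_at_base_clock      # throttled

print(sustained_clock(210))   # typical game load -> 1250, full speed
print(sustained_clock(320))   # Furmark-style load -> ~977, capped
[/code]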
 
See, it's funny, because you both try that, then I pull out a link that you either click and realise your posts were stupid, or ignore and claim you're still right either way.

http://hardocp.com/article/2012/01/09/amd_radeon_hd_7970_overclocking_performance_review/7

48-80% faster, with 60% as a rough average, not a best-case scenario... and that's up against a 580gtx with a 12% overclock, not far off its max on air.

Oh no, so a bog-standard first 7970 they tried ends up an AVERAGE of 60% faster than a 12%-overclocked 580gtx...

Oh wait, and every other review that pits an overclocked 580gtx against an overclocked 7970 shows the same thing.

Then you'll come at me, again, with "but they didn't use a 3GB card, so obviously those results aren't valid".

Before you do that, I'll ask you to link benchmarks of those games at those settings being limited by memory on a 1.5GB 580gtx. You won't, because the last time someone tried to blame it all on the memory, I provided links proving otherwise and no one could provide links proving that memory limited the performance.

Max overclock 7970 vs max overclock 580gtx is no contest; it's exactly as much faster than a 580gtx as I've been saying it would be for six months, and exactly as much faster than a 6970 as I said it would be.
Ahh, so by choosing selected benchmarks at very high resolutions and settings, it can be proven conclusively that a 7970 with a 3GB frame buffer can beat a GTX580 by 60%. Surely that is mostly down to frame buffer rather than GPU power or card architecture?

According to this rather in-depth review, the average difference across all resolutions and settings is closer to 10%, and even at 2560x1600 the gap is just 20%. It also shows the choking effect at 2560x1600 in Skyrim, where the GTX580 runs out of frame buffer. At all resolutions below this, the 580 is extremely competitive in Skyrim.

I am sure we can pick AMD- or NVidia-favourable benchmarks forever, but the truth is that the 7970 is neither 10% nor 60% faster than a GTX580; it will lie somewhere in between, depending on game and resolution. It would be foolish for me to make the sweeping statement that the 7970 is only 10% faster, just as it is foolish for someone else to say it is 50-60% faster. A 20-30% difference within the 1200p-1600p range is probably more accurate, with both cards at stock or overclocked.
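
For what it's worth, the sane way to summarise a spread like that is a geometric mean of the per-game ratios rather than cherry-picking either end. A sketch below; the ratios are placeholders, not numbers from any review:

[code]
# Sketch: averaging per-game performance ratios (7970 FPS / GTX580 FPS)
# with a geometric mean, so no single outlier game dominates the figure.
# The ratios below are placeholders, NOT numbers from any review.
from math import prod

ratios_1080p = [1.15, 1.22, 1.18, 1.25, 1.10]  # hypothetical per-game ratios
ratios_1600p = [1.20, 1.35, 1.28, 1.60, 1.22]  # gap widens at high res

def geomean(xs):
    return prod(xs) ** (1 / len(xs))

print(f"1920x1080: +{geomean(ratios_1080p) - 1:.0%}")  # ~+18%
print(f"2560x1600: +{geomean(ratios_1600p) - 1:.0%}")  # ~+32%
[/code]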
 
Ahh, so by choosing selected benchmarks at very high resolutions and settings, it can be proven conclusively that a 7970 with a 3GB frame buffer can beat a GTX580 by 60%. Surely that is mostly down to frame buffer rather than GPU power or card architecture?

According to this rather in-depth review, the average difference across all resolutions and settings is closer to 10%, and even at 2560x1600 the gap is just 20%. It also shows the choking effect at 2560x1600 in Skyrim, where the GTX580 runs out of frame buffer. At all resolutions below this, the 580 is extremely competitive in Skyrim.

I am sure we can pick AMD- or NVidia-favourable benchmarks forever, but the truth is that the 7970 is neither 10% nor 60% faster than a GTX580; it will lie somewhere in between, depending on game and resolution. It would be foolish for me to make the sweeping statement that the 7970 is only 10% faster, just as it is foolish for someone else to say it is 50-60% faster. A 20-30% difference within the 1200p-1600p range is probably more accurate, with both cards at stock or overclocked.

It has nothing to do with the frame buffer. A 3GB GTX 580 provides about a 5% overall performance boost, or less, over the 1.5GB version when you take everything into account, and an overclocked 1.5GB GTX 580 can easily outpace a stock 3GB GTX 580 despite "running out" of "frame buffer".
 
It has nothing to do with the frame buffer. A 3GB GTX 580 provides about a 5% overall performance boost, or less, over the 1.5GB version when you take everything into account, and an overclocked 1.5GB GTX 580 can easily outpace a stock 3GB GTX 580 despite "running out" of "frame buffer".
I disagree with you. In Skyrim, for example, the 580 is pretty much equal to a 7970 all the way up to 2560x1600, where the 580's performance falls off a cliff (Skyrim is a game that usually favours NVidia's architecture). This has to be down to frame buffer, otherwise performance would also be significantly lower at lower resolutions (most test systems are not CPU-bottlenecked at 1200p/1080p). It is similar for quite a few other games where 2560x1600 is the breaking point for the 580.

At the more common resolutions of 1200p/1080p, the performance gap averages approximately 20% across a very wide range of games.

What we need is for AMD to launch 1.5GB 7900s. That would allow a totally fair comparison of the two architectures.
 
I disagree with you. In Skyrim, for example, the 580 is pretty much equal to a 7970 all the way up to 2560x1600, where the 580's performance falls off a cliff (Skyrim is a game that usually favours NVidia's architecture). This has to be down to frame buffer, otherwise performance would also be significantly lower at lower resolutions (most test systems are not CPU-bottlenecked at 1200p/1080p). It is similar for quite a few other games where 2560x1600 is the breaking point for the 580.

At the more common resolutions of 1200p/1080p, the performance gap averages approximately 20% across a very wide range of games.

What we need is for AMD to launch 1.5GB 7900s. That would allow a totally fair comparison of the two architectures.


http://www.techpowerup.com/reviews/AMD/HD_7970/22.html

Where is this cliff one speaks of? Oh wait: at a resolution with almost twice the pixels, while still being very CPU-limited at 1920x1200, performance drops all of 25%, nowhere near the amount you'd expect when memory becomes a limit.

http://www.techpowerup.com/reviews/AMD/HD_7970/11.html

Wait, what's that? COD4, using all of about 12MB of memory, drops by a larger performance margin than Skyrim when going from 1920x1200 to 2560x1600.

Performance drops between one resolution and another, so it must have run out of memory? Wow, you've proved it fantastically well. Oh wait, no you haven't.

http://www.hardwarecanucks.com/foru...44390-evga-geforce-gtx-580-3gb-review-12.html

Highest settings, and the 3GB card provides effectively zero performance difference throughout that review.

http://www.hardwarecanucks.com/foru...lack-edition-double-dissipation-review-4.html

Here's another, more recent one. If you want to see memory limits affecting a card, look at the difference on the 570gtx: at 2560x1600 with max settings and no AA the 570gtx still isn't memory-limited, but with 4xAA performance drops well over 50%. THAT is a memory-limit problem. The 3GB 580GTX drops MARGINALLY less than the 1.5GB 580GTX at the same settings, and gives performance you would not be able to tell apart.

Should we point out the performance difference of the 3GB 580gtx going from 1920x1200 to 2560x1600 with no memory limit? Yup: a 30% performance drop from one res to the other on the 3GB card, and an all-but-identical drop on the 1.5GB card.

What's that? Oh right, the 7970 also drops around 30% in performance jumping up a resolution at the same settings.

So what have I shown? That 1.5GB and 3GB cards both drop roughly 30% in performance between those resolutions, while you are adamant that a 25% drop in Skyrim between the same resolutions PROVES a memory limit? No. What you've actually proved is that even the 1920x1200 results were still quite CPU-limited, which is why the drop to the next resolution is SMALLER than standard for any card that isn't hitting a memory limit.

Skyrim is NOT memory-limited on a 1.5GB card. At all.
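
The test being applied above boils down to a simple heuristic; a rough sketch of the logic, with thresholds that are my own guesses rather than anything from the reviews:

[code]
# Heuristic from the argument above: 1920x1200 -> 2560x1600 is ~1.78x
# the pixels, and cards that are NOT memory-limited lose roughly 25-30%
# FPS over that jump. A far bigger drop (50%+, like the 570 with 4xAA)
# points at a VRAM limit. The 0.35 threshold is my guess, not data.

PIXELS_1200P = 1920 * 1200
PIXELS_1600P = 2560 * 1600

def looks_vram_limited(fps_low_res, fps_high_res, normal_drop=0.35):
    drop = 1 - fps_high_res / fps_low_res
    return drop > normal_drop

print(PIXELS_1600P / PIXELS_1200P)   # ~1.78x the pixels
print(looks_vram_limited(60, 45))    # 25% drop -> False, normal scaling
print(looks_vram_limited(60, 25))    # 58% drop -> True, smells like VRAM
[/code]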
 
Again, no, etc.

Again, yes.

The performance gain from architectural changes isn't anywhere near the level you predicted:

It will be completely unsurprising when the 7970 releases with clock speeds between 950MHz and 1GHz. Even less surprising: if you overclock just about any GPU from 950MHz to 1GHz, a sub-5% overclock, you would see MAYBE 3% real FPS gain, which, besides being unnoticeable in use, pales in comparison to a 70-80% performance gain from shaders/architecture, memory clock, memory bus and everything else.


GPU clock speed has turned out to be a large factor in the performance increase, so you were wrong when you said this:

I.e. clock speed is all but irrelevant in this situation. Say you expect 1GHz clock speeds and get 900MHz: because the card won't always be clock-speed limited, this will likely make barely a 6-7% overall difference to the speed of the card, or less than 10% of the total expected improvement in card speed.

Clock speed is the LEAST important difference in the specs of a new-gen card.


And don't tell me that your reference to architectural changes includes the GPU clock speed; you've quite clearly differentiated between the two in the quotes above.
 
So it sounds like the high-end "7970 competitor" isn't due any time soon, unless they have a massive surprise tucked away. I guess the high-end card will arrive around October for a holiday launch, then? Shame; I really wanted a single-card solution for 1440p. A single 7970 is close but not quite there, so I guess the options are 2x 7970s, or hoping AMD release a 7990 soon.
 
So it sounds like the high-end "7970 competitor" isn't due any time soon, unless they have a massive surprise tucked away. I guess the high-end card will arrive around October for a holiday launch, then? Shame; I really wanted a single-card solution for 1440p. A single 7970 is close but not quite there, so I guess the options are 2x 7970s, or hoping AMD release a 7990 soon.

Surely the 7990 wouldn't be that much faster than Crossfired 7950s?

I mean, no doubt it will probably have the full-fat shader count, but clocks will be reduced, as ever, to keep the heat down.
 
So it sounds like the high-end "7970 competitor" isn't due any time soon, unless they have a massive surprise tucked away. I guess the high-end card will arrive around October for a holiday launch, then? Shame; I really wanted a single-card solution for 1440p. A single 7970 is close but not quite there, so I guess the options are 2x 7970s, or hoping AMD release a 7990 soon.

I'm in the same boat, looking for a single high-end card.

I can't use SLI/Crossfire, and I'd rather avoid the 7990 if possible due to PSU limitations and driver/Crossfire issues in some games.
 
So it sounds like the high-end "7970 competitor" isn't due any time soon, unless they have a massive surprise tucked away. I guess the high-end card will arrive around October for a holiday launch, then? Shame; I really wanted a single-card solution for 1440p. A single 7970 is close but not quite there, so I guess the options are 2x 7970s, or hoping AMD release a 7990 soon.

I'm in the same boat, looking for a single high-end card.

I can't use SLI/Crossfire, and I'd rather avoid the 7990 if possible due to PSU limitations and driver/Crossfire issues in some games.

Am I reading this right: a single 7970 is struggling with games at 2560x1440?
 
Am I reading this right: a single 7970 is struggling with games at 2560x1440?

It seems to do pretty well at 2560x1600 according to Guru3D:

http://www.guru3d.com/article/amd-radeon-hd-7970-review/12

It's perhaps a bit borderline for Metro 2033 (but then pretty much everything is at that resolution), and some may say BF3 at 30-odd FPS is a bit so-so.

But at 2560x1440, and after taking advantage of the generous overclocking headroom, all of those games should be playable at the highest settings.
 
It seems to do pretty well at 2560x1600 according to Guru3D:

http://www.guru3d.com/article/amd-radeon-hd-7970-review/12

It's perhaps a bit borderline for Metro 2033 (but then pretty much everything is at that resolution), and some may say BF3 at 30-odd FPS is a bit so-so.

But at 2560x1440, and after taking advantage of the generous overclocking headroom, all of those games should be playable at the highest settings.

Thanks, good to know :)
 
Am I reading this right: a single 7970 is struggling with games at 2560x1440?
It seems "ok" but doesn't look like it leaves much room for manoeuvre. OK for the short term but probably not a long term solution.

I was hoping for a major update; if I'm spending the best part of £500, I don't want to be turning down a bit of detail here and a setting there. I was hoping a next-gen high-end NVidia card would give me a no-compromise single-card solution for 1440p. Without it, it's going to have to be plan B: 2x 7950s (or 7970s), or a 7990 if it's due any time soon (as a successor to my existing 5970, which has done sterling service for two years).
 