Memory Overclocking Gains On Hawaii With A 512 Bit Bus

Using the settings:
Options: Resolution: 1920 x 1080; DirectX: DirectX 11; Quality: High; Antialiasing: AAA; Texture filtering: AF 4X; Advanced PhysX: Enabled; Tesselation: Enabled; DOF: Disabled
(because for some reason that's what I picked when I ran it on my 670s)

670 SLI:
Average Framerate: 135.00
Max. Framerate: 278.94
Min. Framerate: 17.01

290 CF (with PhysX off):
Average Framerate: 143.50
Max. Framerate: 271.74
Min. Framerate: 11.84
 
The only thing I can tell you is that both Metro 2033 and PS2 are heavily CPU bottlenecked.

My i7 930 bottlenecks the crap out of my 7870 XT in both games. In Metro 2033 the FPS goes from 12 to 200 or some silly number, while 90% of the time the GPU is strangled by the CPU, and PS2 is no better. So upgrading the GPU isn't going to do a lot for the FPS; I could run three 780 Tis and not see a difference.
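For anyone who wants to confirm that on their own rig rather than eyeballing it, something like the sketch below logs CPU and GPU load side by side while the benchmark runs; if the GPU sits well under 100% while the CPU is pegged, the CPU is the limit. This is only a rough illustration, assuming an Nvidia card with nvidia-smi on the PATH (so it applies to the 670s rather than the Radeons) and the third-party psutil package; none of it comes from this thread.

```python
# Rough bottleneck check: poll overall CPU load and GPU utilisation once a second
# while a benchmark runs. Assumes an Nvidia card with nvidia-smi on the PATH and
# the third-party psutil package installed (pip install psutil).
import subprocess
import time

import psutil


def gpu_utilisation() -> int:
    """Return GPU utilisation in percent as reported by nvidia-smi (first GPU)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.splitlines()[0].strip())


def log_utilisation(seconds: int = 60) -> None:
    """Print CPU and GPU load once a second for the given duration."""
    for _ in range(seconds):
        cpu = psutil.cpu_percent(interval=1)  # averaged over the 1 s interval
        gpu = gpu_utilisation()
        print(f"CPU {cpu:5.1f}%   GPU {gpu:3d}%")


if __name__ == "__main__":
    log_utilisation(60)
```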
 
Humbug nailed it, Googaly. Metro 2033 at 1080p with the settings you favoured is going to be CPU bottlenecked. I mean, you're only at 1080p and you don't even have 4xAA or 16xAF selected. Metro 2033 is just a crappy benchmark with terrible optimisation. Even at 1440p GPU usage drops halfway through the bench, probably to do with PhysX even though it's disabled. I can tell you one thing though: at the highest settings my stock 7950s were faster than tepic's overclocked 670s. Speaking of tepic, what happened to him? Not seen him here in ages.
 
Turning off PhysX, or unchecking "Advanced PhysX", in Metro and PS2 does not actually turn PhysX off.

In the case of PS2 it's still running on the CPU with really horrible single-threaded optimisation, or no optimisation at all as the case may be.

Metro 2033 is different: you can run Advanced PhysX and it's all on the CPU, but much better optimised across 8 threads from what I can see. However, Advanced PhysX or not, it still hammers the CPU to death, especially in the benchmark tool, which really chains the GPU down.
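A quick way to see the single-threaded vs multi-threaded difference for yourself is to watch per-core load while each game runs: a single-threaded physics path pins one core near 100% while the rest idle, whereas a multi-threaded one spreads the work out. Rough sketch only, again assuming the third-party psutil package; nothing here is specific to either game.

```python
# Per-core load snapshot: a single-threaded physics path shows up as one core
# pinned near 100% while the others idle; a multi-threaded one spreads the load.
# Assumes the third-party psutil package (pip install psutil).
import psutil


def per_core_snapshot(interval: float = 1.0) -> None:
    """Print the load of each logical core, measured over `interval` seconds."""
    loads = psutil.cpu_percent(interval=interval, percpu=True)
    for core, load in enumerate(loads):
        print(f"core {core}: {load:5.1f}%")
    print(f"max core {max(loads):.1f}%  /  average {sum(loads) / len(loads):.1f}%")


if __name__ == "__main__":
    per_core_snapshot()
```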
 
Hey Matt, do you know what? OK, I will tell you with my fantastic broken English :D
I have read that you undervolted your 290, and AFAIK you love to test everything, so I think you might be the right guy to ask this.

So, as we know, the 290 reference cards have an (unofficial) "base clock" of 662 MHz.
At those clocks they are about as fast as a 280X or GTX 770, according to this:

http://www.computerbase.de/artikel/grafikkarten/2013/amd-radeon-r9-290-im-test/9/ (see "AMD Radeon R9 290 (40 %)" there)

Here you can see what clocks you get with a fan speed of 40%: http://www.computerbase.de/artikel/grafikkarten/2013/amd-radeon-r9-290-im-test/8/ (early results; AMD later updated the driver and fan speed etc.)

Now straight to the point: would it be possible for you to post undervolting results at different clocks down to 662 MHz, for example at 700-800 MHz? You could fix the power target to -50%, -40% and so on. It should be possible to get fantastic FPS results with much, much less power consumption. I hope you can see what I am getting at: the GTX 770/R9 280X are good cards, and if you can match their performance at only 662 MHz it would be worth checking out. If someone is playing at Full HD there is no need to push the card to its limits, because in most games and scenarios you should be able to stay above 60 FPS at only 800 MHz core clock, so why not undervolt this baby to the extreme?
 
Interesting theory, and one I'm happy to test for you. Lowering PowerTune to -50% will throttle the core sooner to stay within the new, lower TDP. However, manually lowering the core clock should have the same effect, so it may not actually throttle per se, as you've lowered the core clock enough to stop the card exceeding the new, lower PowerTune TDP. If you run the core clock that low you should be able to undervolt the card even further, maybe. Happy to run some tests if you want. :)
 
That would be really nice, and thanks for your efforts.
I just want to know what is possible: how much can I lower the power consumption and still stay above 60 FPS at Full HD in most games with very good graphics settings?
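For a rough feel of why this should work: dynamic power scales roughly with frequency times voltage squared, so dropping the 290 from its 947 MHz reference boost clock to around 800 MHz with an undervolt saves disproportionately more power than performance. The voltages in the sketch below are illustrative guesses, not measured figures from anyone's card, and leakage and memory power won't scale this nicely, so treat it as a ballpark only.

```python
# Back-of-the-envelope estimate of GPU core power scaling: dynamic power is
# roughly proportional to frequency * voltage^2 (P ~ C * f * V^2).
# The voltages below are illustrative guesses, NOT measured values from this thread.

STOCK_MHZ, STOCK_V = 947, 1.15          # R9 290 reference boost clock; voltage assumed
UNDERVOLT_MHZ, UNDERVOLT_V = 800, 1.00  # hypothetical undervolted operating point


def relative_dynamic_power(mhz: float, volts: float) -> float:
    """Dynamic power relative to the stock operating point."""
    return (mhz / STOCK_MHZ) * (volts / STOCK_V) ** 2


if __name__ == "__main__":
    rel = relative_dynamic_power(UNDERVOLT_MHZ, UNDERVOLT_V)
    print(f"clock: {UNDERVOLT_MHZ / STOCK_MHZ:.0%} of stock")
    print(f"estimated core dynamic power: {rel:.0%} of stock")
    # ~84% of the clock for roughly 64% of the core dynamic power
```

Whether that theoretical saving survives contact with leakage, fan behaviour and PowerTune is exactly what Matt's tests should show.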
 
Good find on the voltage affecting memory stability. Maybe upping the core voltage will help get rid of a few black screen issues? (No idea, as my memory is good to 1550 at +100 mV and I have never seen a black screen on the 290X.)
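On the memory side (and the thread title), the theoretical bandwidth maths for Hawaii's 512-bit bus is simple enough to sketch: GDDR5 transfers data at four times the reported memory clock, so the reference 1250 MHz gives 320 GB/s and a 1550 MHz overclock like the one above gives roughly 397 GB/s, about 24% more. A quick check:

```python
# Theoretical memory bandwidth on Hawaii's 512-bit bus.
# GDDR5 moves data at 4x the reported memory clock (quad data rate),
# so 1250 MHz == 5.0 Gbps per pin.

BUS_WIDTH_BITS = 512


def bandwidth_gb_s(mem_clock_mhz: float) -> float:
    """Theoretical bandwidth in GB/s for a GDDR5 clock given in MHz."""
    data_rate_mtps = mem_clock_mhz * 4                   # MT/s per pin
    return data_rate_mtps * BUS_WIDTH_BITS / 8 / 1000    # -> GB/s


if __name__ == "__main__":
    stock = bandwidth_gb_s(1250)        # R9 290/290X reference memory clock
    overclocked = bandwidth_gb_s(1550)  # the 1550 MHz mentioned above
    print(f"stock:       {stock:.1f} GB/s")        # 320.0 GB/s
    print(f"1550 MHz OC: {overclocked:.1f} GB/s")  # 396.8 GB/s
    print(f"gain:        {overclocked / stock - 1:.0%}")  # 24%
```

How much of that extra theoretical bandwidth actually turns into frames is, of course, the whole point of the thread.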
 
Still, maybe you can now see why I'm not impressed with the 290 CF setup. Pretty much everything I do, with the exception of Heaven, runs about as well on my last-gen 670s as on my new 290s.
There may be reasons behind it, but I spent £600 on new graphics cards, and as a result Heaven runs slightly better while most other things seem to be within a few fps of what they were. Now, if you'd be pleased with that for £600 then you value money differently from me.
 
To be honest, for the Metro bench you really should have PhysX disabled, DOF enabled, 16xAF and Very High quality for a proper comparison, to set the men apart from the boys :p
 
Yeah, to be honest I'm not sure why I set the settings to that for the 670 SLI test. I'm not currently using the 670s, though, and can't be bothered switching them back in just to re-run it.
I suspect (but don't know for sure) that it was to compare with the 570 SLI I was running before that. Maybe those were the settings I used in games to make them playable (not sure, guessing), so it made sense to test the 670s at the same settings, and I never thought to re-run at higher settings when I got a new card.

If my memory serves my GPU history has gone like this...

8800GTS (Nice)
5870 (WoW!)
570 SLI (nice)
670 SLI (meh)
7950 CF (after hearing how the 12.11 drivers made 7950 CF faster than 680 SLI - meh, sidegrade at best)
290 CF (meh)
 
What monitor(s) are you using btw? (Sorry if you already posted and I missed it).
 
Currently using Samsung S23A700Ds.
Bought them a while back when Overclockers had them cheap and Samsung were doing them with cashback. I think they ended up at ~£155 each (I got 3 for Nvidia Surround (at the time), didn't like it though).
 
No. I don't actually like Metro, it's only installed for the benchmark. Which either runs like a dog on AMD setups or just runs like a dog on my setups...

Not sure why it's still installed really!
You mean you don't like the gameplay, or something else? I had a brief run of the game in TriDef 3D and it looks really good. You should try it on your 290 CF and set the graphics details to max except for AA.

As for it running like a dog... are you sure you didn't enable PhysX despite the lack of an Nvidia card?
 