I wanted to discuss the way the power setting works on the 780 cards, as I'm finding it hard to fully understand what it's doing.
Let me explain my findings and see if anyone can explain them.
Test constants:
2x 780 Lightnings in SLI
Windows 7 x64 Pro OS
CPU at 4.8GHz / System Ram at 2400MHz
MSI 300% BIOS, the Rbby258 overvoltage tool, and MSI Afterburner
+200 on voltage (1.3 V actual)
+200 on GPU RAM
Fans on full speed.
3D Test used: 3DMark11 Graphics Test 3
Data captured from Afterburner log files after each run.
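For anyone who wants to dig through their own logs the same way, here's a minimal sketch of how I pull the readings out for analysis. It assumes the Afterburner monitoring log has been exported to plain CSV; the column names used ("Core clock", "Power", "Power limit") are placeholders, as the exact headers vary between Afterburner versions, so check your own file's header row first:

```python
import pandas as pd

# Load an MSI Afterburner monitoring log exported as CSV.
# The column names referenced below ("Core clock", "Power") are
# placeholders - check the header row of your own log, as names
# and separators vary between Afterburner versions.
def load_ab_log(path):
    df = pd.read_csv(path)
    df.columns = [c.strip() for c in df.columns]  # headers are often padded
    return df

log = load_ab_log("run1.csv")
print(log[["Core clock", "Power"]].describe())
```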
Results:
The log files clearly show that the full power setting is being used at peak times, i.e. when the GPU speed is at its highest, but it makes no tangible or consistent difference to the benchmark score or the maximum GPU frequency.
I know in our benchmark threads we ask for the GPU and memory speeds used for the benchmark. My log files show that, in most cases, the actual GPU speed varies quite dramatically during the benchmark; memory remained fixed.
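To put numbers on how much the core clock wanders, something like this (building on the loader sketch above, with the same placeholder column names) shows the peak, the average, and how much of the run was actually spent at the peak:

```python
# Summarise how the core clock behaved over one benchmark run
# (uses the "log" DataFrame from the loader sketch above).
clk = log["Core clock"]
peak = clk.max()
print(f"peak clock: {peak:.0f} MHz")
print(f"mean clock: {clk.mean():.0f} MHz")
print(f"min clock:  {clk.min():.0f} MHz")

# What fraction of samples actually sat at the peak clock?
at_peak = (clk >= peak).mean() * 100
print(f"samples at peak: {at_peak:.1f}%")

# If the log records a power-limit flag (again, a placeholder name),
# compare the clock while the limiter was active vs. not.
if "Power limit" in log.columns:
    limited = log["Power limit"] > 0
    if limited.any():
        print(f"mean clock while power limited: {clk[limited].mean():.0f} MHz")
    if (~limited).any():
        print(f"mean clock while unconstrained: {clk[~limited].mean():.0f} MHz")
```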
In my case a +405 offset on the GPU core should equate to a speed of 1400 MHz, but in the benchmark +400 only peaks at 1358 and most of the time it's closer to 1330.
Now, either I'm missing something, or can I really claim to bench at 1400?
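For context on why the observed clock sits in steps below the target: Kepler's GPU Boost adjusts the core clock in roughly 13 MHz bins, shedding bins as it runs into the power or temperature limiter, so the offset sets a ceiling rather than a guaranteed speed. A quick bit of arithmetic on the numbers above:

```python
BIN_MHZ = 13  # Kepler GPU Boost moves the core clock in ~13 MHz steps

# The post's own numbers: a +405 offset is expected to give 1400 MHz
expected = 1400

# Clocks actually observed in the Afterburner log
for observed in (1358, 1330):
    bins_down = (expected - observed) / BIN_MHZ
    print(f"{observed} MHz is about {bins_down:.1f} boost bins below {expected} MHz")
```

If that reading is right, the card is giving up roughly three to five boost bins under load rather than holding its ceiling, which would be consistent with the limiter still kicking in even at the raised power target.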
This is a thought and comment provoker - so please post your thoughts.
Simon