GPU crunching setup?

I know nothing about graphics cards; the fact that my previous card was a Matrox G450 says it all.

I've got a 9600GSO running SETI. Following some advice in another thread, I've turned the shaders up using RivaTuner, but I'm seeing talk of PhysX and 3D-whatever. What's supposed to be set to what for the best crunching performance?
 
I already have it running: I downloaded the latest drivers and followed the instructions in a thread here to get it going. Since then I've been snooping through some threads, this one in particular:

http://forums.overclockers.co.uk/showthread.php?t=18043896

which made me wonder if anything (other than the sliders for overclocking the core, memory and shaders) needs adjusting from the default GPU settings to get the most from it.

I have no idea what they're talking about in that thread, but it gives me the impression they got a lot more points in 3D than 2D (whatever that is), so I thought I'd ask.
 
The difference between 3D and 2D is:

3D uses the full scale of the GPU, i.e. games.
2D is for basic stuff, i.e. desktop and internet.

3D runs the GPU at full clock speeds for memory/core/shader, whilst 2D slows them down to save power.

Hope this helps.
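
For anyone who wants to see which state the card is actually in, a script can read the live clocks. Below is a minimal sketch using NVML via the pynvml Python bindings (a newer tool than anything mentioned in this thread; device index 0 is an assumption, so adjust for your setup):

# Minimal sketch: query the card's current clocks via NVML (pynvml bindings).
# Assumes pynvml is installed (pip install pynvml) and the card is device 0.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):          # older pynvml versions return bytes
    name = name.decode()

core = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
sm = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
mem = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)

# Low clocks here mean the card is idling at its 2D speeds;
# full 3D clocks are what you want while crunching.
print(f"{name}: core {core} MHz, shader/SM {sm} MHz, memory {mem} MHz")

pynvml.nvmlShutdown()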
 
Oh, and you might want to get advice on which nvidia drivers to use.

Some of the latest ones have stability issues with the 2D/3D clocks, i.e. the card keeps dropping into 2D clock mode, which dramatically reduces your crunching power.
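
A quick way to catch that happening is to poll the clocks and shout when they drop. A rough sketch, again via pynvml; the 400 MHz threshold and 30-second poll interval are made-up values to tune for your own card:

# Sketch of a watchdog that flags when the card falls back to 2D clocks.
# Assumes pynvml is installed; the threshold and poll interval are guesses.
import time
import pynvml

SM_THRESHOLD_MHZ = 400   # assumed: anything below this is "2D" on this card
POLL_SECONDS = 30

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
try:
    while True:
        sm = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
        if sm < SM_THRESHOLD_MHZ:
            print(time.strftime("%H:%M:%S"),
                  f"WARNING: shader clock dropped to {sm} MHz (2D mode?)")
        time.sleep(POLL_SECONDS)
finally:
    pynvml.nvmlShutdown()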
 
Thanks, loudbob.

So, looking at the tab in RivaTuner where the sliders are for core/shader/memory, there's a tick box 'enable driver-level hardware overclocking' and then a drop-down whose default setting is 'Standard 2D'. Should I set this to 3D or am I losing out? Anything else while I'm in there?
 
From a personal point of view I prefer EVGA Precision over RivaTuner... I used it to control 3x GTX 260s in the one rig.

Using EVGA you can set core/memory/shader clocks as well as fan speed, and if you're using nvidia driver 186.18 you should not have any problems with the GPU dropping into 2D mode.

You can set EVGA to apply the clock speeds you specify on boot-up, and you can monitor the clock speeds from EVGA to make sure.

;)
 
Yes, but very simple to use.

Too many options in RivaTuner, and I found that it had issues running all three GTXs (may have just been me).

EVGA has run my rig with no problems :)
 
I find the GUI-rich versions tend to run like treacle when I've tried them, whereas RivaTuner is just unobtrusive. Don't get me wrong, I wanted to use the other tools, otherwise I wouldn't have bothered to try them :) I just somehow couldn't get on with them for some reason... weird.
 
I tried EVGA, and now I'm confused, as there's no mention of 2D or 3D anywhere I can find. I did try setting RivaTuner the way I thought it might work, low clocks for 2D and high for 3D, but I don't seem to be able to; whatever I set one to affects the other. It may well be user error, or maybe some cards just run one speed for both 2D and 3D?

The card is the ASUS EN9600GSO Magic; it was in that thread months back, when people were unsure if it had 96 shaders or not (it does).

edit/ I remember now that RivaTuner complained about compatibility with the drivers; I need to roll back to an older version anyway, so hopefully that solves it. So I do that, set two sets of clocks for 2D and 3D, and all's good (hopefully). I think I've got it.
 
If it does have 96 shaders it's a rebadged 8800GS (which is a good thing).
Drop the core clock, up the shaders and leave the memory as is.

You should be able to get the shaders to 1500-1600 easily; if you're lucky they'll reach 1900+.
I had mine at 1882 with a 50MHz drop on the core.
 
I set them to 1650 just as a temporary measure until I have time to look at it properly. It seems happy enough; all the WUs that have errored have also errored for my wingmen.
 
So the other day's outage saw my GPUs sat idle, and again this last 24 hours. I've tried the rebranding tool, but it did nothing. What am I missing? Why do I have lots of CPU WUs but no GPU ones?

My cache is set to 3 days I think, maybe 4, but I seem to run out of GPU WUs in hours.
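
One way to see what's actually queued is to count the tasks in BOINC's client_state.xml. A rough sketch only; the file path and the assumption that CUDA tasks carry a plan_class containing 'cuda' are guesses about the 6.x client's layout, not verified against it:

# Rough sketch: count CPU vs GPU (CUDA) tasks in BOINC's client_state.xml.
# The path and the plan_class convention are assumptions; adjust for your install.
import xml.etree.ElementTree as ET

STATE_FILE = r"C:\ProgramData\BOINC\client_state.xml"  # assumed default location

tree = ET.parse(STATE_FILE)
cpu = gpu = 0
for result in tree.getroot().iter("result"):
    plan = result.findtext("plan_class", default="")
    if "cuda" in plan:      # assumed marker for CUDA (GPU) tasks
        gpu += 1
    else:
        cpu += 1
print(f"CPU tasks: {cpu}, GPU tasks: {gpu}")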
 
The utility will not move VLARs to the GPU, and it also has a switch to prevent VHARs being moved. Post the entry from the log file where it does nothing; it should look something like this:
---------------------------
Reschedule version 1.9
Time: 29-12-2009 08:53:54
User testing for a reschedule
CPU tasks: 103 (60 VLAR, 0 VHAR)
GPU tasks: 110 (0 VLAR, 0 VHAR)
No reschedule needed
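
For anyone wondering how a tool like this tells VLARs and VHARs apart: it boils down to reading each workunit's angle range and comparing it against cutoffs. The sketch below is conceptual only, not the utility's actual code; the 0.12/1.127 cutoffs and the angle_range= header field are assumptions about the SETI workunit format:

# Conceptual sketch of VLAR/VHAR classification, NOT the utility's actual code.
# The cutoff values and the "angle_range=" header field are assumptions.
VLAR_CUTOFF = 0.12    # assumed: below this angle range = Very Low Angle Range
VHAR_CUTOFF = 1.127   # assumed: above this = Very High Angle Range ("shorty")

def classify(angle_range: float) -> str:
    """Label a workunit by its angle range."""
    if angle_range < VLAR_CUTOFF:
        return "VLAR"   # runs badly on CUDA, so kept on the CPU
    if angle_range > VHAR_CUTOFF:
        return "VHAR"   # optionally kept on the CPU too
    return "normal"     # fair game for rescheduling to the GPU

def read_angle_range(wu_path: str) -> float:
    """Pull the angle range out of a SETI workunit file header (assumed format)."""
    with open(wu_path, "r", errors="ignore") as f:
        for line in f:
            if "angle_range=" in line:
                return float(line.split("angle_range=")[1].split()[0])
    raise ValueError(f"no angle_range found in {wu_path}")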
 
---------------------------
Reschedule version 1.9
Time: 29-12-2009 14:58:10
User testing for a reschedule
CPU tasks: 122 (27 VLAR, 30 VHAR)
GPU tasks: 0 (0 VLAR, 0 VHAR)
No reschedule needed
---------------------------
Reschedule version 1.9
Time: 29-12-2009 14:58:13
User forced a reschedule
Option "Only VLar+VHar to CPU" is enabled
Stopping BOINC application
BOINC application is stopped
Boinc applications
setiathome_enhanced 603 windows_x86_64
setiathome_enhanced 608 windows_x86_64 cuda
CPU tasks: 122 (27 VLAR, 30 VHAR)
GPU tasks: 0 (0 VLAR, 0 VHAR)
After reschedule:
CPU tasks: 122 (27 VLAR 30 VHAR)
GPU tasks: 0 (0 VLAR 0 VHAR)
Starting BOINC application
BOINC application started

edit/ Fixed. There's an option on the Settings tab, 'Only VLar+VHar to CPU'; I deselected that and it worked.
 
Uncheck the 'Only VLAR+VHAR to CPU' option (Settings tab), then go back to the Main tab. The slider above the Run and Test buttons should now be enabled. Set that to the percentage of WUs you want to convert to GPU WUs, then click Run. Bear in mind that all the VLAR and VHAR WUs will be retained for the CPU.
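
To make the slider's behaviour concrete, the rule described above amounts to something like this sketch (an illustration of the described rule, not the utility's source; the task representation is made up):

# Sketch of the slider's rule as described above (not the utility's source).
# A "task" here is just (name, kind) where kind is "VLAR", "VHAR" or "normal".

def pick_for_gpu(cpu_tasks, percent):
    """Choose which CPU tasks to convert to GPU tasks.

    Moves up to `percent` of the eligible (non-VLAR/VHAR) tasks;
    all VLAR and VHAR tasks are always retained for the CPU.
    """
    eligible = [t for t in cpu_tasks if t[1] == "normal"]
    count = len(eligible) * percent // 100
    return eligible[:count]

tasks = [("wu1", "VLAR"), ("wu2", "normal"), ("wu3", "VHAR"),
         ("wu4", "normal"), ("wu5", "normal"), ("wu6", "normal")]
print(pick_for_gpu(tasks, 50))   # -> [('wu2', 'normal'), ('wu4', 'normal')]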

Edit: You found it then :D
 
I read that as meaning it would only move VLARs and VHARs from the GPU to the CPU, which isn't what it actually means.

It's been a learning curve, but I think I'm there now. (I hope so anyway; I haven't got time for this.)
 