Odd PPD

As the title suggests, I'm getting some odd PPD since I installed my third graphics card.

At present I have two 8800 GTS 320s and a 9800 GX2 installed in my system, as well as a new quad-core CPU.
If I only run three GPU clients, my GX2 gets roughly twice the PPD of the 8800s;
however, if I turn on the fourth GPU client, the GX2's PPD seems to drop to around half what it usually is.

Help!
 
Have you set the environment variable allowing your GPU folding processes to permanently use all four cores?

It can be done like this:
http://www.hardforum.com/showpost.php?p=1032821446&postcount=2
Setting the core affinity with NVIDIA GPU2 client (original by Sunin)

If you run an NVIDIA client with core version 1.07 or above (this can be found by looking at fahlog.txt, before a work unit starts), you can configure the affinity precisely. This can often boost performance, especially when you are trying to run multiple clients or you want to avoid using core 0 to reduce the graphics slowdown.

With core 1.07 or later, they introduced a new environment variable called NV_FAH_CPU_AFFINITY, and it takes a binary mask. A binary mask is just a programming trick to set bits, but it may be a bit confusing for most people, so here is a full list to help you get the correct value:

Code: 0 = all cores
1 = core 0
2 = core 1
3 = core 0 + 1
4 = core 2
5 = core 0 + 2
6 = core 1 + 2
7 = core 0 + 1 + 2
8 = core 3
9 = core 0 + 3
10 = core 1 + 3
11 = core 0 + 1 + 3
12 = core 2 + 3
13 = core 0 + 2 + 3
14 = core 1 + 2 + 3
15 = core 0 + 1 + 2 + 3

Alternatively, there is also a simple way to determine the value:

Code: Core 0 = 1
Core 1 = 2
Core 2 = 4
Core 3 = 8

If you want cores 1 and 3 only, add 2 and 8 to get 10.
If you want cores 0, 2 and 3, add 1, 4 and 8 to get 13.

For your information, if you have a quad-core, setting a value of 0 or 15 sets the same affinity (all cores). If you have more than four cores, each additional core takes a 2^n value, where n is the core number, starting from 0.
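The tables above boil down to one rule: core n contributes 2^n to the mask. Here's a throwaway Python sketch (not part of the client, just illustrating the arithmetic) that reproduces the list:

```python
def affinity_mask(cores):
    """Return the NV_FAH_CPU_AFFINITY value for a collection of core numbers.

    Each core n contributes 2**n to the mask, so core 1 + core 3
    gives 2 + 8 = 10, matching the table above.
    """
    return sum(1 << n for n in cores)

print(affinity_mask([1, 3]))      # cores 1 and 3 -> 10
print(affinity_mask([0, 2, 3]))   # cores 0, 2 and 3 -> 13
print(affinity_mask(range(4)))    # all four cores on a quad -> 15
```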

Now that you know how to determine the value to enter, we need to create that environment variable. Here are the simple steps (on Windows):

-Go to Start > Run and type in: sysdm.cpl; the System Properties panel should pop up.
-Under the Advanced tab, click the Environment Variables button.
-Click New and under Variable name put NV_FAH_CPU_AFFINITY.
-Under Variable value put the number you want; look above to determine the desired value.
-Voilà, you have now set up the environment variable; hit OK and close the window.
-Stop and restart the GPU2 client to make use of the new environment variable.

Your GPU clients should now utilise whichever cores you specified! I generally use 15 to allow all cores. Then just check the box as indicated above in Xil's guide so the GPU isn't locked to a single core, and you're ready to roll!
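If you prefer the command line, the same variable can be set persistently with Windows' built-in setx command (this assumes a standard Windows install; restart the GPU2 client afterwards so it picks the variable up):

```
:: Set the affinity mask persistently for the current user.
:: 15 = cores 0 + 1 + 2 + 3 on a quad-core (see the table above).
setx NV_FAH_CPU_AFFINITY 15
```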
 
What PCI-E speeds are the lanes running at? Has the board defaulted to 8x or 4x (or something else) after inserting the extra card, limiting the bandwidth to the GX2? I know that huge bandwidth is not required for F@H, but the GX2 needs a bit more than normal, as its two cards communicate through the same bus.
 
I've set them so that PCI-E slot 1 gets 16x and the other three get 8x; GPU-Z shows it this way as well.

FHScreen.jpg


GPUs 1 and 3 are the GX2.
 
4x is more than enough for the GX2 to fold properly. When it was cooler, both my GX2s got the same PPD output, although one card runs 15 degrees hotter than the other. Now that the temperature has risen, one of the four cards thermal-throttles, so I'm seeing a massive drop in performance from the one running at 112°C.

Does forcing the GX2 to use the G80 core flag help at all?
 
It's -forcegpu_g80 if I'm not mistaken. You can use it to trick Quadro cards, or cards that don't behave, into thinking they are G80-series cards, so the client works even if the card is 'unsupported'.
 
Stanford added flags a while ago to force the client into thinking a different GPU is used. Nvidia cards can be forced as G80 and G92. The flag you're looking for is -forcegpu nvidia_g80

EDIT: Just noticed the clocks on your GX2, how the hell did you get 720 core 1800 shaders stable? :eek: Most I can get out of both of mine is 658 core 1703 shaders.
 
Through trial and error I worked out that if I went 1 MHz higher on any of them I would get EUEs.
By using that flag, won't it force the GPU to run at G80 speeds?

@Senture, you might want to remove the metal shroud from your GX2, as that seems to help drop the temps by about 10°C.

EDIT: Think I've solved it, bit of a n00b moment: I forgot to add the -gpu 3 flag to the fourth GPU client.
 
Done that, and taken both cards apart and replaced the thermal paste and pads; removing the casing took off about 15 degrees, and the thermal paste another couple.

Oh and whoops at the PEBKAC :D
 
Quick question about my PSU voltages.
Using Asus Probe, it's showing them as:
12 V @ 11.78
5 V @ 5.03
3.3 V @ 2.96

It's the last one I'm worried about, as my PC has shut down for no reason a couple of times this afternoon, not long after I set the -gpu 3 switch.
 
That's a little low; however, try using Everest to verify the PSU rails, since Asus Probe is pretty rubbish at reporting consistent results. The key rail is the 12 V rail, which looks OK. What size PSU have you got?
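For reference, the ATX spec allows roughly ±5% on those rails, which is why the 3.3 V reading stands out. A throwaway Python sketch of the check (the 5% tolerance is the standard ATX figure, not something from Asus Probe):

```python
TOLERANCE = 0.05  # ATX spec allows about +/-5% on the main rails

def rail_ok(nominal, measured, tol=TOLERANCE):
    """True if the measured voltage is within tolerance of nominal."""
    return abs(measured - nominal) <= nominal * tol

# The Asus Probe readings from the post above.
readings = {12.0: 11.78, 5.0: 5.03, 3.3: 2.96}
for nominal, measured in readings.items():
    status = "OK" if rail_ok(nominal, measured) else "OUT OF SPEC"
    print(f"{nominal} V rail reading {measured}: {status}")
```

Run against the readings above, the 12 V and 5 V rails pass but the 3.3 V rail falls outside the 5% window, which matches the instability described.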
 
I've had to remove one of the video cards to restore stability.
The only thing I can think of causing it is the PSU, as it's fine until I start stressing the GPUs.
Looks like I need a larger PSU with more amps available on the 12 V rail/rails.
 
It might be PSU related.

I had four 9800 GTs on a 550 W Coolermaster PSU.

Currently running three GTX 260s on an 850 W Corsair TX.
 
I think you're right, because if I'm right my two 8800 GTS 320s will use about the same power as three of your old 9800 GTs, and that's without adding in the power-hungry GX2.
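As a rough sanity check on that comparison, here's the arithmetic as a throwaway Python sketch; the TDP figures are approximate numbers assumed for illustration, not datasheet values:

```python
# Rough board-power figures in watts - assumed approximations, not datasheet values.
tdp = {"8800 GTS 320": 146, "9800 GT": 105, "9800 GX2": 197}

two_gts = 2 * tdp["8800 GTS 320"]   # the two 8800 GTS 320s
three_gt = 3 * tdp["9800 GT"]       # three 9800 GTs for comparison
print(f"Two 8800 GTS 320s: ~{two_gts} W vs three 9800 GTs: ~{three_gt} W")
print(f"Adding the GX2: ~{two_gts + tdp['9800 GX2']} W of GPUs alone")
```

Under those assumed figures, the two GTS cards land in the same ballpark as three 9800 GTs, and the GX2 pushes the GPU load alone near 500 W, which supports the case for a beefier 12 V rail.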
 