I posted on Stanford's F@H forums, but didn't get a lot of response. So I'll post it here as well and see if any of you have any input:
I've installed several 9800GTs using Hyperlife's GPU2 Linux/Wine Headless Install guide. All went according to plan, no issues. The only packages I've installed apart from what's needed per the guide are xorg-dev, dtach, and samba, and then I compiled NVClock 0.8 Beta-4. NVClock required GTK2+ to compile; that's the only reason for the xorg-dev install, and it only added about 25 MB to the build.
NVClock has a lot of useful information, but it takes several queries to get it, and it only seems to do one card at a time. So I hacked together a little script, and I do mean hacked: bits and bobs from all sorts of scripts I found on the web.
I'm not sure how this would all work with multi-GPU cards, but I think NVClock counts cores, not physical cards, so I may have to rethink the output for multi-GPU cards.
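If a multi-GPU card really does show up as extra "Card number:" entries, one quick sanity check might be to just count them before doing anything else. A rough sketch; the sample text here stands in for the real `nvclock -s` call, so the exact output format is an assumption from my build:

```shell
#!/bin/bash
# Count "Card number:" lines instead of parsing each number out.
# "sample" stands in for: nvclock -s
sample='Card number: 1
Card number: 2
Card number: 3'
cards=$(printf '%s\n' "$sample" | grep -c "Card number:")
echo "NVClock reports $cards card(s)/core(s)"
for n in $(seq 1 "$cards"); do
    echo "would query card $n here"
done
```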
Code:
#!/bin/bash
# -- Requires NVClock 0.8 Beta-4 compiled from source
# -- Determines the number of installed graphics cards
# -- Chops out the core temp, board temp, clock speeds, etc.
clear
for n in $(nvclock -s | grep "Card number:" | cut -d ' ' -f3 | sed 's/^[ \t]*//;s/[ \t]*$//')
do
    temp=$(nvclock -T -c "$n")      # temperatures
    speed=$(nvclock -s -c "$n")     # card model and clock speeds
    shaders=$(nvclock -i -c "$n")   # shader clock and fan info
    fanspeed=$(printf '%s\n' "$shaders" | grep "Fanspeed:")
    fanduty=$(printf '%s\n' "$shaders" | grep "PWM")
    echo
    echo "GPU Number: $n"
    echo $speed   | cut -d ' ' -f3,4            # card model
    echo $temp    | cut -d ' ' -f5-12           # GPU / board temperatures
    echo $speed   | cut -d ' ' -f8-15           # memory / GPU clocks
    echo $shaders | cut -d ' ' -f22,25,26,27    # shader clock
    echo "$fanspeed"
    echo "$fanduty"
done
echo
Here's the output from the script:
Code:
GPU Number: 1
Geforce 9800GT
GPU temperature: 64C => Board temperature: 55C
Memory clock: 950.400 MHz GPU clock: 648.000 MHz
Shader Clock: 1620.000 MHz
Fanspeed: 1880 RPM
PWM duty cycle: 94.9%
GPU Number: 2
Geforce 9800GT
GPU temperature: 57C => Board temperature: 50C
Memory clock: 950.400 MHz GPU clock: 648.000 MHz
Shader Clock: 1620.000 MHz
Fanspeed: 1840 RPM
PWM duty cycle: 94.9%
GPU Number: 3
Geforce 9800GT
GPU temperature: 51C => Board temperature: 44C
Memory clock: 950.400 MHz GPU clock: 648.000 MHz
Shader Clock: 1620.000 MHz
Fanspeed: 1863 RPM
PWM duty cycle: 94.9%
It works, but I was looking for a better way to do this and to learn a bit of scripting along the way. If anyone has any input or suggestions, it would be much appreciated.
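One idea I've been toying with, in case it helps anyone else: matching on the field labels instead of fixed `cut` word positions, so the script survives spacing changes in NVClock's output. A rough sketch; the sample text stands in for the combined nvclock output shown above, so treat the exact labels as an assumption from my build:

```shell
#!/bin/bash
# Pull values out by label with sed rather than by word position.
# "info" stands in for something like: nvclock -T -s -i -c "$n"
info='GPU temperature: 64C => Board temperature: 55C
Memory clock: 950.400 MHz GPU clock: 648.000 MHz
Shader Clock: 1620.000 MHz
Fanspeed: 1880 RPM
PWM duty cycle: 94.9%'
gputemp=$(printf '%s\n' "$info" | sed -n 's/^GPU temperature: \([^ ]*\).*/\1/p')
shader=$(printf '%s\n' "$info" | sed -n 's/^Shader Clock: \(.*\)$/\1/p')
fanrpm=$(printf '%s\n' "$info" | sed -n 's/^Fanspeed: \(.*\)$/\1/p')
echo "GPU temp:     $gputemp"
echo "Shader clock: $shader"
echo "Fan speed:    $fanrpm"
```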