SETI@home News Vol. 112 (08/08/2010)

What's the issue with this?

I have had the same issue - all CPU work keeps getting computation errors (so I just aborted all of it). I'm using WinXP with 4GB RAM (3GB usable due to the 32-bit OS).

Any solutions?

First things first - are you running Einstein, as that is a complete memory hog :eek:

You have more RAM than me, so you should be alright. Since stopping all unused processes, turning off Aero and increasing the disk cache, things have been a bit better...
 
Get more RAM or switch to a 64-bit OS? Perhaps lower your cache a little bit. Has anyone analysed whether it is actually BOINC that's eating all the pies?

This issue has only just started (for me) over the past few days - always worked fine before.

Until the units upload I can't say what the problem is - but they all errored at the same point in each unit (and at the same time - within 2 to 3 seconds).

So, it's anyone's guess at the mo...
 
Now isn't that sweet - I've just got an invite to join the GPU Users Group.

Will now be accepting bribes :p

No invite for me :( .......but it would have been a waste of time anyway :D

These problems some of you are experiencing... well, for once it's not affecting me - so far. But same setup on 2 machines, XP and 4GB installed :confused:
 
Now isn't that sweet - I've just got an invite to join the GPU Users Group.

Will now be accepting bribes :p

heh, not a bribe, but, don't you want to stomp all over me before going anywhere first? :p
(and I will then be accepting "donations" to keep ahead of reaver lol)

Edit: welcome back to Biffa - looking forward to a good race! (one-sided as it may be)
 
Has anyone done a definitive chart of RAC for different cards/hardware etc?

Shall we start doing something like this?

I've made it open to view, and can add ppl in the team who feel up to adding stats to it/editing it as we go along.
 
I am not aware of anything like that for SETI, so yes, it's a good idea :)
There are so many different configurations, though - I wish one system could try out all the GPUs for a definitive answer.
Then there are people like me, who run different GPUs in the same box, making it even more complicated.
If only reviewers would add a SETI benchmark to their articles - I'm sure they used to do just that.
 
It's very difficult to standardise with no canonical work unit for, let's say, turnaround time. Or claimed credit. Maybe what we could do is take the granted credit - where we know which card/CPU crunched on which configuration - and take an average over 100 work units?
Although, for my system of mixed GTX 260/285/295, I am not sure I will be able to tell which goes with which.
 
I am not aware of anything like that for SETI, so yes, it's a good idea :)
There are so many different configurations, though - I wish one system could try out all the GPUs for a definitive answer.
Then there are people like me, who run different GPUs in the same box, making it even more complicated.
If only reviewers would add a SETI benchmark to their articles - I'm sure they used to do just that.

Somebody has started a thread over on the S@H forum along similar lines.....
 
It's very difficult to standardise with no canonical work unit for, let's say, turnaround time. Or claimed credit. Maybe what we could do is take the granted credit - where we know which card/CPU crunched on which configuration - and take an average over 100 work units?
Although, for my system of mixed GTX 260/285/295, I am not sure I will be able to tell which goes with which.


Been thinking about this. We may actually be able to cobble together a 'standard' task. Basically, from my rummaging around client_state.xml (during my error-cleaning script writing), we could pick any real task and manually inject it into any client_state.xml file. PROVIDED the task wasn't reported to Berkeley, we would then have a benchmark. I've thought this through, and can provide a Perl script to insert a task, and another to ensure it is removed from client_state.xml after a user has processed it. Thoughts?
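The post proposes Perl scripts; purely as an illustration of the insert/remove idea, here is a Python sketch. The marker tag, the shape of the task block, and where it belongs inside client_state.xml are all assumptions - anyone attempting this would need to check them against a real file, with BOINC stopped so the client doesn't overwrite the edit.

```python
def inject_task(state_path, task_block, marker="</client_state>"):
    """Insert a saved task block just before `marker` (assumed to be the
    closing root tag - the real insertion point must match a live file).
    `task_block` is raw XML copied earlier from a real client_state.xml.
    Returns False if the block is already present."""
    with open(state_path) as f:
        state = f.read()
    if task_block in state:
        return False  # already injected, don't duplicate
    state = state.replace(marker, task_block + "\n" + marker, 1)
    with open(state_path, "w") as f:
        f.write(state)
    return True

def remove_task(state_path, task_block):
    """Strip the benchmark task back out after the run, so it can
    never be reported to Berkeley."""
    with open(state_path) as f:
        state = f.read()
    with open(state_path, "w") as f:
        f.write(state.replace(task_block, ""))
```

The duplicate check is the important part: injecting the same benchmark task twice into one state file would confuse the client.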
 
This issue has only just started (for me) over the past few days - always worked fine before.

Until the units upload I can't say what the problem is - but they all errored at the same point in each unit (and at the same time - within 2 to 3 seconds).

So, it's anyone's guess at the mo...

Very similar to my problem - all my CPU WUs would error around 1:14:xx.
It's been a bit better since I suspended E@H; now one in 10 errors, so better, but still something wrong :mad:

Now isn't that sweet - I've just got an invite to join the GPU Users Group.

Will now be accepting bribes :p

You're not the first - Toxic was headhunted, probably due to his previous traitor tendencies when he crunched 100K for them as thanks for the help.

Did they offer you a bribe around the latest version of the Fermi app, by any chance? Toxic was mad at that, and so was I, as we both contributed to the Fermi card bought for one of the Lunatics developers.

Very cheeky - but at least it shows we are going places as a team.


As for the benchmarking - it's a great idea, but how practical is it at the moment, with the outages and the 3,000+ WU queues, finding a single unit that has been inserted for benchmarking?
 
Some sort of database would be very handy as long as it was done well.

Using one WU as the benchmark would provide a direct comparison from one piece of hardware to another. However, the 'problem' with this is that only one angle range would be benchmarked - there would be no data for the extremes of angle range, and as we know, lower angle ranges generally take longer to process (at least on GPUs), so any figure obtained would be very limited.

The other method - taking averages of x number of WUs - also has issues, as WUs that ended early (due to noisy data in the WU, or otherwise) would have to be extracted from the data, and GPU and CPU tasks differentiated and separated. It would also mean users having to sift through their tasks to obtain the data, although importing completed valid tasks into, say, Excel and sorting as required would avoid that problem.


EDIT: As a test I just imported 20 completed valid WUs into Excel and got an average time of 871 seconds for my GTS250, but the shortest time was 19.69 seconds (noisy WU) and the longest 1,272, so obtaining an average figure would have to be done carefully for it to have any meaning.
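The filtering described above - throwing out obvious noisy short-runners like that 19.69-second WU before averaging - can be sketched like this. The 10%-of-median cutoff is an arbitrary assumption for illustration, not anything the project defines:

```python
def filtered_average(times_s, noise_cutoff=0.1):
    """Average WU runtimes (seconds), discarding any WU shorter than
    `noise_cutoff` times the median - likely a noisy/early-exit unit.
    The cutoff fraction is an assumed value, tune to taste."""
    ordered = sorted(times_s)
    median = ordered[len(ordered) // 2]
    kept = [t for t in times_s if t >= noise_cutoff * median]
    return sum(kept) / len(kept)
```

Using a median-relative cutoff rather than a fixed one means the same filter works for a GTS 250 averaging ~871 seconds and for a much faster card with shorter normal runtimes.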
 
As for the benchmarking - it's a great idea, but how practical is it at the moment, with the outages and the 3,000+ WU queues, finding a single unit that has been inserted for benchmarking?

It depends on how clever Berkeley have been. Each task's deadline is represented numerically in client_state.xml, in a format I can't figure out (it's not a Julian date - or if it is, I have no idea what epoch they have used). Now, providing that deadline isn't represented in the task file (and from checking a task file it doesn't appear to be - it certainly isn't in the task header), we can work around this: we simply find a task with a deadline x weeks into the future, use that deadline for our test task(s), and use the intervening period to set the whole test up.

I will do a test tomorrow afternoon, and see if I can change the deadline on a processed task and then force my client into high-priority mode. If it runs again, it proves that the deadline is only represented in client_state.xml, and we will then have a method of fooling the deadline date on the test task(s), which we can use to ensure the test task(s) are processed virtually as soon as the test is initiated. All we need then is the test task(s), scripts and some willing peeps.
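One easy hypothesis worth testing before the deadline-fudging experiment: the numeric value might simply be a Unix timestamp (seconds since 1970-01-01 UTC). This sketch pulls every deadline out of client_state.xml and decodes it under that assumption, so comparing against a known deadline from the website would confirm or kill the theory. Both the epoch guess and the `<report_deadline>` tag name are assumptions here:

```python
import re
from datetime import datetime, timezone

def decode_deadlines(xml_text):
    """Find each <report_deadline> value (tag name assumed) and decode it
    as a Unix timestamp. Returns [(raw_value, utc_string), ...] so the
    decoded dates can be eyeballed against known deadlines."""
    out = []
    pattern = r"<report_deadline>([\d.]+)</report_deadline>"
    for m in re.finditer(pattern, xml_text):
        raw = float(m.group(1))
        as_utc = datetime.fromtimestamp(raw, tz=timezone.utc)
        out.append((raw, as_utc.strftime("%Y-%m-%d %H:%M:%S UTC")))
    return out
```

If the decoded dates land a sensible couple of weeks ahead of the download dates, the format is cracked and rewriting a deadline becomes a trivial string substitution.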

I take your point, TT, about one task not being representative. There are two ways of looking at this. The first is to assume that a GPU that's faster on one type of task is likely to be faster on all - which makes sense to me (I've not heard that GPU x is better at mid-range tasks than GPU y, but GPU y is better at VHARs than GPU x, for example). The other would be to use 3 test tasks: a low angle range, a high angle range and one in the middle. This second approach would take a lot more effort in terms of finding the datum tasks, though (noise etc.), but would allow us to provide a simple average (as suggested by Biffa)...
 
Did they offer you a bribe around the latest version of the Fermi app, by any chance? Toxic was mad at that, and so was I, as we both contributed to the Fermi card bought for one of the Lunatics developers.

As did I, and I don't even run Fermi hardware (and I've just blown my anonymity). I would be a bit ****sed if they were doing that........:eek:
 
Did they offer you a bribe around the latest version of the Fermi app, by any chance? Toxic was mad at that, and so was I, as we both contributed to the Fermi card bought for one of the Lunatics developers.

Very cheeky - but at least it shows we are going places as a team.

No bribes were offered, just a polite "I know you're in a team, but..."
It will take a lot more than an invite from a random team to get me to leave you lot, especially as they are our next target in the RAC ranking ;) Sorry Grim, but I'm still after you ;)

The only way i'm leaving OCUK SETI Team is.........









my death :D or the team being abolished :mad:
 