8th Annual BOINC Pentathlon: 5th May - 19th May 2017

PDW

PDW

Associate
Joined
21 Feb 2014
Posts
1,867
I had thought about it, but I think they got lucky this year: for some of the projects chosen they have actual Raccoon Lovers working on them.
I don't think anyone turned up last year, and for very little effort I got them at least 1 point. I can't find last year's stats as the link takes me to this year's.
Also, I don't think I can remember the password for the email account/ID I used, but here's a link to their POGS team list still showing me as top contributor by RAC...

https://pogs.theskynet.org/pogs/team_members.php?teamid=10522&offset=0&sort_by=expavg_credit
 
Soldato
Joined
23 Jan 2010
Posts
4,053
Location
St Helens
Right now all my systems are running POGS. I added Cosmology to BOINC and all the tasks this morning got halfway through and then failed. I added LHC and got no work for it at all.

I downloaded the BOINC + VirtualBox package from the BOINC website but I'm not sure how that's supposed to work.
 

PDW

PDW

Associate
Joined
21 Feb 2014
Posts
1,867
Right now all my systems are running POGS. I added Cosmology to BOINC and all the tasks this morning got halfway through and then failed. I added LHC and got no work for it at all.

I downloaded the BOINC + VirtualBox package from the BOINC website but I'm not sure how that's supposed to work.
Were the failed tasks on your big machines or on all machines?
What were the error messages reported for those tasks in your account on the website?
Did you choose just one type of work (which?) or all of them?

For LHC you probably just want to try ATLAS, so de-select the other applications in your preferences and create an app_config.xml file to limit how many tasks run at once and how many cores the multi-threaded app uses.
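Something along these lines should be close for ATLAS. I'm going from memory here: the app name ('ATLAS') and plan class ('vbox64_mt_mcore') are what I think LHC uses, so check the names shown in your client_state.xml or on the task pages before trusting them, and the actual core count inside the ATLAS VM may be set by your web preferences rather than this file.

<app_config>
    <app>
        <name>ATLAS</name>
        <max_concurrent>1</max_concurrent> <!-- only 1 ATLAS task at a time -->
    </app>
    <app_version>
        <app_name>ATLAS</app_name>
        <plan_class>vbox64_mt_mcore</plan_class> <!-- plan class from memory, check yours -->
        <avg_ncpus>4</avg_ncpus> <!-- tell BOINC to budget 4 cores per task -->
    </app_version>
</app_config>

The file goes in the LHC project folder under your BOINC data directory, same as for any other project.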

You just need VirtualBox downloaded and installed; BOINC will use it if it sees it (I have one box it doesn't work on at the moment because BOINC says VirtualBox isn't installed when it is!).
 
Soldato
Joined
23 Jan 2010
Posts
4,053
Location
St Helens
camb_boinc2docker is what is ticked in my Cosmology account.

Going to go onto the LHC website now and try ATLAS.

edit - it seems Cosmology is now downloading on my i5 PC, which I'm using to test it with! It's using 3 CPUs for 1 unit, and in the application box it shows (vbox64_mt).
 
Last edited:

PDW

PDW

Associate
Joined
21 Feb 2014
Posts
1,867
They have just announced there will be no more Docker work...

Cosmology@Home: New camb_boinc2docker workunits temporarily disabled
Hi all, in an effort to keep the server functioning smoothly (it's operating pretty close to capacity with all the extra traffic from the Pentathlon) I've disabled creation of new camb_boinc2docker jobs until the contest is over on May 19th. All your in-progress camb_boinc2docker work will finish and be validated; this only means no *new* work for this application will be sent out by the server. If you wish to continue crunching VirtualBox applications (like camb_boinc2docker was), there should be enough planck_param_sims jobs, just make sure you haven't explicitly disabled this application in your preferences. Good luck and happy crunching!

So you need to have the Legacy and/or Planck options ticked in your preferences!
I found out because I'd run out of work and was sitting idle :mad:
 
Soldato
Joined
23 Jan 2010
Posts
4,053
Location
St Helens
Can somebody tell me what an app_config would look like for a 32-core system running planck_param_sims? One has 28GB RAM and the other 16GB.

It seems my i5 has completed some units on this, so I'll run with it for now.
 

PDW

PDW

Associate
Joined
21 Feb 2014
Posts
1,867
Something like this...

<app_config>
    <app>
        <name>camb</name>
        <max_concurrent>1</max_concurrent>
    </app>
    <app>
        <name>lsplitsims</name>
        <max_concurrent>1</max_concurrent>
    </app>
    <app_version>
        <app_name>lsplitsims</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>4</avg_ncpus>
    </app_version>
    <project_max_concurrent>1</project_max_concurrent>
</app_config>

I removed the Docker stuff as you can't get that now.
'camb' is the legacy non-VirtualBox work.

'lsplitsims' is the Planck work.
Each multi-threaded Planck job uses 4 threads (avg_ncpus).

max_concurrent says only 1 job of each type, and project_max_concurrent says only allow 1 job from the whole project to run at a time.

You need to tune these settings (how many jobs you run, and how many threads each) to your hardware.

Even running 8 Plancks (on 4 threads each) may be too much for the amount of memory you have.
You need to monitor memory usage to see how many you can do at a time, then either pad out the remaining cores with other low-memory work or leave them idle.
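As a rough starting point for your 32-core / 28GB box you could try something like this. The numbers are only a guess at what the memory will stand, so keep an eye on it and adjust:

<app_config>
    <app>
        <name>camb</name>
        <max_concurrent>8</max_concurrent> <!-- pad some spare cores with the low-memory legacy work -->
    </app>
    <app>
        <name>lsplitsims</name>
        <max_concurrent>4</max_concurrent> <!-- 4 Planck jobs at once -->
    </app>
    <app_version>
        <app_name>lsplitsims</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>4</avg_ncpus> <!-- 4 threads per Planck job, so 16 cores on Planck -->
    </app_version>
    <project_max_concurrent>12</project_max_concurrent> <!-- 4 Planck + 8 camb jobs max -->
</app_config>

That would put 24 of your 32 threads on Cosmology and leave the rest free for whatever else you're running; if memory looks fine, bump lsplitsims up to 5 or 6.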
 
Soldato
Joined
23 Jan 2010
Posts
4,053
Location
St Helens
OK, I'll see how it goes with the CPUs set at 4; if it's too much, should I knock it down to 2?
If I change the app_config, will it adjust the WUs that I already have, or will I need to remove them and download some more?
 

PDW

PDW

Associate
Joined
21 Feb 2014
Posts
1,867
OK, I'll see how it goes with the CPUs set at 4; if it's too much, should I knock it down to 2?
If I change the app_config, will it adjust the WUs that I already have, or will I need to remove them and download some more?
No, you will be able to run 4 threads at a time per job, but you may not be able to run 8 jobs at the same time on the big machine (28GB).

Try 4 jobs (using 4 threads each) and see what memory is like; if memory is OK, try running 5 or 6 jobs (4 threads each).
You may not get 4 jobs (4 threads each) on the 16GB machine, not sure. I run out of cores/threads before I run out of memory!
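For the 16GB machine I'd start lower, something like this, and work up if the memory holds out:

<app_config>
    <app>
        <name>lsplitsims</name>
        <max_concurrent>2</max_concurrent> <!-- start with 2 Planck jobs, raise it if memory is OK -->
    </app>
    <app_version>
        <app_name>lsplitsims</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>4</avg_ncpus>
    </app_version>
</app_config>

And to answer the other question: after editing the file, Options > Read config files in the BOINC Manager (Advanced view) picks up the changes without dumping any work, although I think tasks that are already running may keep the settings they started with until they restart.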
 

Xez

Xez

Associate
Joined
24 Jun 2005
Posts
2,021
Location
Lincolnshire
Sorry guys, I was admitted to hospital in the early hours of this morning, so I've had to turn off my main computer that has the GPU. My microserver is still using its CPU to continue, just slowly...
 

PDW

PDW

Associate
Joined
21 Feb 2014
Posts
1,867
Sorry guys, I was admitted to hospital in the early hours of this morning, so I've had to turn off my main computer that has the GPU. My microserver is still using its CPU to continue, just slowly...
No worries, some things are more important than a fun yearly challenge. Hope everything goes okay and you get back home soon.
 