8th Annual BOINC Pentathlon: 5th May - 19th May 2017

PDW

Associate
Joined
21 Feb 2014
Posts
1,867
Well you could let each task get close to 100% and then suspend it to let another one run.

Might be a bit hands-on though :D
 
Associate
Joined
14 Apr 2017
Posts
85
From our friends at OCN:

@10esseeTony

Code:
C:\Windows\System32\drivers\etc


Find the hosts file. Open with notepad.
Code:
127.0.0.1 localhost
::1 localhost
127.0.0.1 www.cosmologyathome.org


Make sure it looks like this. To undo the block and allow uploads again, put a # before the address.
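For example, to re-enable uploads afterwards the Cosmology line would become:
Code:
# 127.0.0.1 www.cosmologyathome.org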
Edited by emoga - Today at 2:53 pm

EDIT: And... I can verify that it works. Well, I can verify that it stops the upload, but I'm not about to let my bunker of ONE go so soon, so I can't verify the undoing of the mod.
 

MGP

Soldato
Joined
24 Oct 2004
Posts
2,584
Location
Surrey
This Cosmology stuff that needs the virtual box thingy is a real pain in the derrière.

I only had the standard Boinc client, without virtual box, but have downloaded the current version direct from the Oracle site (it's more up to date than the version embedded with Boinc). Boinc says their version is the tried and tested one, but when I last used this a year ago it sucked, taking over my PC and making it unusable for anything else, as the virtual box thing ran at too high a priority rather than in the background. Restart Boinc once the virtual box thing is added.

It seems that if you block the network communication for either of the virtual apps - camb docker or planck sims - then those will stop running. And, if you hold for too long, perhaps 24 hours, they risk timing out. So there is no point in bunkering these as far as I can see. I'd be pleased to be corrected.

You can decide what you get to do from the Cosmology website preferences. As far as I can tell from their site, they want us to do planck > camb docker > legacy, with legacy being least useful and probably the least valuable in terms of points.

The app_config info on Cosmology only refers to the camb docker thing, but there is also the planck stuff. I've no idea what the best ratio of cores to a WU is, but working on the idea that most computing stuff seems to come in multiples of 2 and 4, this seems to at least work (planck seems to be lsplitsims):
app_config.xml
Code:
<app_config>
    <app>
        <name>camb_boinc2docker</name>
        <max_concurrent>2</max_concurrent>
    </app>
    <app>
        <name>lsplitsims</name>
        <max_concurrent>2</max_concurrent>
    </app>
    <app_version>
        <app_name>camb_boinc2docker</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>4</avg_ncpus>
    </app_version>
    <app_version>
        <app_name>lsplitsims</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>4</avg_ncpus>
    </app_version>
</app_config>

But I am also seeing that with the virtual box running, my CPU is not being fully utilised. On an i7 5820K (12 threads), with 4 on Cosmology and the remaining 8 on something else, the overall system says it's using only 68% of the CPU :confused:
Edit: Looking in the VM, one of the Planck tasks suggested that it had all 12 threads available, but only 4 were allocated by Boinc's app_config. No doubt the app_config was applied after the PC had started the WU at full tilt. Aborting that has restored the CPU to full 100% use. Trouble is, now with 3 Cosmology tasks running (2 planck, one camb thingy) the PC is unresponsive. I can't even type this edit properly.
 
Soldato
OP
Joined
22 Oct 2010
Posts
2,961
Location
Ratae Corieltauvorum
I don't know what the something else is, but that app_config you posted is set to run 2 work units simultaneously with 4 cores on each, so 8 threads in total.

Try 2 x 6 and see if it uses less memory.
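i.e. take the app_config above and bump avg_ncpus to 6 for both apps, something like this (a sketch only, same app names as earlier, untested here):
Code:
<app_config>
    <app>
        <name>camb_boinc2docker</name>
        <max_concurrent>2</max_concurrent>
    </app>
    <app>
        <name>lsplitsims</name>
        <max_concurrent>2</max_concurrent>
    </app>
    <app_version>
        <app_name>camb_boinc2docker</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>6</avg_ncpus>
    </app_version>
    <app_version>
        <app_name>lsplitsims</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>6</avg_ncpus>
    </app_version>
</app_config>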

Planck credit is 50 pts flat if I remember right.
Camb_docker has a mind of its own when it comes to credit.
 

MGP

Soldato
Joined
24 Oct 2004
Posts
2,584
Location
Surrey
I'm experimenting. That app_config only ran 2 WUs. It seemed the number of threads was the limiting factor, and max_concurrent only comes into play if there are sufficient spare threads to start up a second WU. It doesn't seem to force it to run multiple WUs. Perhaps a "0.5" in max_concurrent would do that?

Ideally I'd like it to run 2 apps max, either planck or cambs, letting the PC choose what it has to work with. Changing max_concurrent to 2 for each of the planck and cambs meant three WUs ran, and killed the PC. Setting max_concurrent to 1 on each has reduced it to 2 WUs in progress, one planck, one camb, and eight threads. The PC is a bit more responsive, but I can tell it's not happy. Less than 40% RAM in use, so it's entirely a CPU thing, with VirtualBox at too high a priority. Hate to think what will happen if I try to play Battlefield later.

Edit: Seems like there is a project_max_concurrent option, so my app_config.xml for Cosmology becomes:
Code:
<app_config>
    <project_max_concurrent>2</project_max_concurrent>
    <app_version>
        <app_name>camb_boinc2docker</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>4</avg_ncpus>
    </app_version>
    <app_version>
        <app_name>lsplitsims</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>4</avg_ncpus>
    </app_version>
</app_config>

Now have two planck WUs on the go, using 8 threads in total; Boinc allocates another 4 threads to other non-Cosmology stuff, and the CPU is flat out.

All I have to do now is get the network to prevent uploads so I can bunker .....

Another Edit: Looks like the hosts file thing is successfully blocking the upload, but the VMs keep going. And as I can't just turn networking off in Boinc, I'm hoping that a similar hosts entry is going to stop the WCG Zika WUs uploading ...
 

MGP

Soldato
Joined
24 Oct 2004
Posts
2,584
Location
Surrey
Reading around on this vbox <beep>, it seems important not to try and run more stuff than you have cores (not threads) available. So on hyperthreaded Intel chips, if you have 8 threads, don't use more than 4 cores for the VMs. Make sure you have virtualisation enabled in the BIOS.

I messed around quite a lot, with a fair few aborted WUs as things went weird. You must completely shut down Boinc and the vbox thing before any revised app_config can take effect. Even then, opening the Oracle vbox app can show some weirdness, and once a WU has started on one config, say 2 cores, it won't use 4 cores just by changing the app_config and restarting everything, hence the aborts.

Seems like the camb_docker WUs can be run with only ncpus=2 and may be OK. The planck ones do need more cores, so allow ncpus=4 for those, and if running a mix of both then ncpus=4 seems OK.
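If you wanted to write that split down, a sketch based on the earlier app_config might look like this (same app names as above; only a sketch, I haven't verified the ratios beyond what's described here):
Code:
<app_config>
    <project_max_concurrent>2</project_max_concurrent>
    <app_version>
        <app_name>camb_boinc2docker</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>2</avg_ncpus>
    </app_version>
    <app_version>
        <app_name>lsplitsims</app_name>
        <plan_class>vbox64_mt</plan_class>
        <avg_ncpus>4</avg_ncpus>
    </app_version>
</app_config>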

I'm running Windows 10 on an i7 5820K (not overclocked), which does allow 12 threads at once. The PC becomes unresponsive if I allow vbox all 12 threads, so the config of 2 WUs concurrent with 4 cpus per WU is so far the best balance. The remainder will go on other, more normal Boinc CPU stuff like WCG Zika.

I'm reluctant to break the laptop and see how this might work there.

Edit: Meanwhile the camb_legacy stuff running on the linux box is really really bad. Too many WUs that have simply crashed with "computation error", and those that remain want something like 17 hours, and I very much doubt that will be worthwhile in points.
 
Associate
Joined
14 Apr 2017
Posts
85
I have had no luck running more than 3 threads per VM, and our most technical member on AT read that after 8 threads per VM, performance doesn't increase much. I THINK I have everything ready for VMs once the real race begins, but I guess I'd better do either the legacy tasks or some OpenZika in the meantime.

Good luck fellas! Science wins no matter the outcome!
 
Soldato
OP
Joined
22 Oct 2010
Posts
2,961
Location
Ratae Corieltauvorum
I got it working: uninstalled the VB version that came with Boinc and installed the latest version. It still didn't work after that, but it is now. I probably haven't got app_config balanced right, but I don't care, it's working. What with me hosing a day's worth of bunkered OpenZika work units trying to get multiple clients on Linux, I just want a rest from it now, I've been pratting around on it most of the day.

..& good luck to you and your team too Tony.
 
Soldato
OP
Joined
22 Oct 2010
Posts
2,961
Location
Ratae Corieltauvorum
We seem a bit thin on the ground here

From what I can tell, those in the pentathlon are:

PDW
Andy Taximan
ozaudio
MGP
Senture
& myself

Acme said he would help with GPU stuff

Anyone else helping Team OcUK in the pentathlon?

Have you sent any reminders out, Oz?
 
Soldato
Joined
23 Jan 2010
Posts
4,053
Location
St helens
From what I can tell, those in the pentathlon are:

PDW
Andy Taximan
ozaudio
MGP
Senture
& myself

Acme said he would help with GPU stuff

Anyone else helping Team OcUK in the pentathlon?

Have you sent any reminders out, Oz?
I have, but not everybody responds these days. I did get this from ba though:
"120 core rig is back online now after lots of messing about. I'll do what I can, just let me know which projects."
COSMOLOGY & WCG ZIKA are the 2 which kick off
 
Associate
Joined
11 Jul 2010
Posts
843
Location
Derbyshire
Got just over 200 threads to help out and maybe a little GPU if it will push us over the top :)
I do pop by every now and then just to see what's up, but I'm really busy with work so can't put in too much time, sorry.
 

MGP

Soldato
Joined
24 Oct 2004
Posts
2,584
Location
Surrey
I'm happy to help. We're signed up as a team, right? I don't have to do so individually?

Also, was it decided which projects?

Yes, it's a team thing, so anyone who has set their Boinc profile for each project to join OcUK is in. If you haven't crunched a project before, you will need to join that project as a newbie and then select the team in that project. The boinc stats sites will then do the rest.

As for the projects, they are announced on the pentathlon site. There will be five events, some CPU and some GPU based, each run over a different length of time, from a fortnight for the marathon to only a day or two for the sprints. They often overlap, which is where the fun begins of how best to spread resources. They get announced a few days ahead of the competition, which allows some bunkering (don't return WUs outside of the time period as they don't count). So far we know two projects: the marathon is Cosmology (see the posts above for discussion of the virtualisation issues for best results) and the shorter City Run is World Community Grid's OpenZika (don't run any other WCG projects).
 
Soldato
OP
Joined
22 Oct 2010
Posts
2,961
Location
Ratae Corieltauvorum
Be careful, anyone trying to bunker OpenZika on Linux the way I tried.

As I already said, I lost a day's bunker when the client became detached while trying to run multiple clients.
So this time, after I bunkered up again, I pulled the SSD out and installed an HDD. I made 3 separate partitions on this disk and installed 3 separate instances of Ubuntu, one on each partition, each with a different computer name.

Booted into the first, downloaded Zika work, then shut down & booted into my 2nd instance and downloaded Zika work, but the server didn't recognise my second instance as a different machine and subsequently it detached and aborted all my work units from the first instance.

I panicked a little and thought the SSD I pulled from it might have been hosed too, but thankfully it wasn't. I can only guess that it was because they were on the same disk? I don't know, but I'm done bunkering WCG now.
 
Associate
Joined
29 Nov 2004
Posts
303
Location
Omnipresent
Anyone tried using Windows firewall to block the outbound Boinc traffic? I've just set up a rule to do so in the hope I can still get new tasks and bunker the rest.
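For reference, a rule along those lines can be created from an admin command prompt (a sketch only, assuming the default BOINC install path; the rule name is arbitrary):
Code:
rem Block outbound traffic from the BOINC client
netsh advfirewall firewall add rule name="Bunker BOINC" dir=out action=block program="C:\Program Files\BOINC\boinc.exe" enable=yes

rem Delete the rule again when it's time to report the bunker
netsh advfirewall firewall delete rule name="Bunker BOINC"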

Looks like the deadline is 10 days for Zika.
 

MGP

Soldato
Joined
24 Oct 2004
Posts
2,584
Location
Surrey
I think most of us have had more success blocking stuff with the hosts file than messing with the firewalls. It's the Cosmology Virtual stuff that is the problem as that needs to think the network is alive. If you are just doing zika then you can download and then suspend boinc's network activity.
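From the command line the same can be done with boinccmd (a sketch, assuming boinccmd is on your PATH): the first command suspends all network activity, the second restores it when you're ready to report.
Code:
boinccmd --set_network_mode never
boinccmd --set_network_mode auto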

The 10 days to return zika is irrelevant, that challenge will be over by then.

Edit your hosts file (in a text editor with admin rights; I used Notepad++, which, when it couldn't write normally, simply prompted me to restart in admin mode).
The Windows 10 hosts file is at C:\Windows\System32\drivers\etc\hosts

"Borrowed" from the guys over on anandtech :D
Code:
127.0.0.1 localhost
::1 localhost
127.0.0.1 www.cosmologyathome.org
127.0.0.1 www.worldcommunitygrid.org
127.0.0.1 scheduler.worldcommunitygrid.org
127.0.0.1 swift.worldcommunitygrid.org
127.0.0.1 grid.worldcommunitygrid.org

It's certainly worked for me, but I did first download my WUs, then reboot Windows so that it could see the change, and it's queuing stuff up nicely.
 

MGP

Soldato
Joined
24 Oct 2004
Posts
2,584
Location
Surrey
Be careful, anyone trying to bunker OpenZika on Linux the way I tried.

As I already said, I lost a day's bunker when the client became detached while trying to run multiple clients.
So this time, after I bunkered up again, I pulled the SSD out and installed an HDD. I made 3 separate partitions on this disk and installed 3 separate instances of Ubuntu, one on each partition, each with a different computer name.

Booted into the first, downloaded Zika work, then shut down & booted into my 2nd instance and downloaded Zika work, but the server didn't recognise my second instance as a different machine and subsequently it detached and aborted all my work units from the first instance.

Did you start Boinc each time as a completely new client machine, or did you try and retain any part of its identity so that your client would be seen as trusted? If you kept the Boinc identity, others have reported that something is buried in the client and/or server that tells it you already have WUs downloaded, but of course it can't then find them, gets confused, and you are in a world of hurt.
 
Associate
Joined
29 Nov 2004
Posts
303
Location
Omnipresent
I think most of us have had more success blocking stuff with the hosts file than messing with the firewalls. It's the Cosmology Virtual stuff that is the problem as that needs to think the network is alive. If you are just doing zika then you can download and then suspend boinc's network activity.

The 10 days to return zika is irrelevant, that challenge will be over by then.

Edit your hosts file (in a text editor with admin rights; I used Notepad++, which, when it couldn't write normally, simply prompted me to restart in admin mode).
The Windows 10 hosts file is at C:\Windows\System32\drivers\etc\hosts

"Borrowed" from the guys over on anandtech :D
Code:
127.0.0.1 localhost
::1 localhost
127.0.0.1 www.cosmologyathome.org
127.0.0.1 www.worldcommunitygrid.org
127.0.0.1 scheduler.worldcommunitygrid.org
127.0.0.1 swift.worldcommunitygrid.org
127.0.0.1 grid.worldcommunitygrid.org

It's certainly worked for me, but I did first download my WUs, then reboot Windows so that it could see the change, and it's queuing stuff up nicely.
But that will stop you getting new work as well ahead of the competitions, once you have finished your allocation, would it not? If you can just block the outbound traffic, you can bunker and still download new tasks?

Windows 10 firewall rules are very easy to set up.
 