DC Vault

I googled your error message "process exited with code 193 (0xc1, -63)" and it's down to the Boinc client in the Ubuntu repo; versions 7.0.24 and 7.0.25 are broken. Try reloading the repo and installing a later version if you can, or revert to v6 if it's there; I'm using v6.10.58, which is the latest version for my old 10.10 distro. Failing that, you could skip the repo version and download it directly from Berkeley, though you'll probably have to compile it.
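If it helps, here's a rough way to check whether you've got one of the broken repo builds. This is only a sketch: it assumes an apt-based Ubuntu and the standard `boinc-client` package name, and the version-stripping is approximate.

```shell
# Quick check: is the installed repo client one of the broken builds?
is_broken_boinc() {
  case "$1" in
    7.0.24|7.0.25) return 0 ;;   # the broken Ubuntu repo versions
    *)             return 1 ;;
  esac
}

# Strip any Debian packaging suffix (e.g. "7.0.24+dfsg-1" -> "7.0.24")
ver=$(dpkg-query -W -f='${Version}' boinc-client 2>/dev/null)
if is_broken_boinc "${ver%%[+-]*}"; then
  echo "Broken client $ver - try: sudo apt-get update && apt-cache policy boinc-client"
fi
```

`apt-cache policy boinc-client` will list what the repo currently offers, so you can see whether a fixed version has landed before reinstalling.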
 
We hit the top 20 for RAC and we're going up quickly. Top 10 is likely in a day or two.

We're in the top one hundred now, and should have over 8,000 vault points on the next update. I had to update my Ubuntu OS and Boinc client as the WUs started running slow; all's fine now. I've got both rigs on it at the moment, but will shut down at the weekend. My new IB board has lost its sound, so RMA for that, and I'll be giving the other one a rest until its big brother comes back.
 
The pirate fleet has been boarded and stomped; we're up to 17th now :) A great effort by all, considering we were in 28th just two months ago.
Next target is a moving one, EVGA, and they are concentrating on the DC Vault, so this one will be a bit trickier.
 
I've requested a new OcUK team for PSP. I think that is the ace up our sleeve: we are 2700pts behind EVGA and they have nearly 6000pts on that project alone. Perhaps when it's up and running we can all jump on it for a short while to make the going easier. I've worked out that as soon as we return any result we'll get 333pts, and the same again for every position change.
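To show the working on that estimate (the 333 pts per position and the 2700-pt gap are just the numbers above, so treat this as back-of-envelope only):

```shell
# The post's estimate: 333 pts for returning a first result, plus the
# same again for every position climbed in the team stats.
pts_per_position=333
positions_gained=8
echo $(( pts_per_position * (1 + positions_gained) ))   # 2997
```

So if that estimate holds, climbing 8 places would be worth 2997 pts, which is more than the 2700-pt gap to EVGA.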
 
We now have a PSP team, I'm testing it at the moment, there's a couple of things I'm unsure of, so I've PM'd one of the mods there. If all goes well I'll make a mini guide as it's slightly tricky to set up.

If there are any OcUK team members wishing to help in the DC Vault, there are two Boinc projects that could do with a push: RNA World and Docking@home. I'm doing a bit on RNA World now. There are two types of WU, short and long; the short ones generally crunch for a few hours, the long ones for days, sometimes weeks. Preference for which ones you get can be set in your RNA World site preferences. I'm running both with hyperthreading turned off to make the runtime as short as possible.

EDIT: I should also say that there are NO checkpoints with these WUs; if the Boinc client closes, all progress is lost and you have to start from the beginning again.

Docking@home WUs run for a couple of hours or so.

For non-Boinc projects, GIMPS and Seventeen or Bust are good targets. I keep saying it, but I will do GIMPS soon. What's putting me off, apart from my attention being distracted by other projects, is that it really works the GPU hard and creates a lot of screen lag, and the card I use for GIMPS is in my day-in, day-out PC.

I'm hoping we can have a big push soon; we need it, as we're losing ground to EVGA.

But let's keep going!
Crunch on!
 
PSP is now up and running. Here is a little guide if anyone wishes to participate; this is what I did for my Windows 64-bit machine.
--------------------------------------------------------------------------------------------
1. Register an account here and log on.

2. In the forum section "Prime Sierpinski Project" there is a thread called "Teams:- Join a Team"; post in that and request to join OcUK - Overclockers UK with your desired username, and also PM 'ltd' with your request. Your username must not contain spaces (use an underscore if need be); you can check the stats to see if your username is already in use.

3. Once you've been added, download the latest prpnet client from here (current windows version is prpclient-5.0.7-windows.7z)

4. Extract, and replace the "master_prpclient.ini" text with this one in the spoiler, but CHANGE "email=[email protected]" & "userid=PG_username" appropriately.
// email= is a REQUIRED field. The server will use this address
// to send you an e-mail when your client discovers a prime.
email=[email protected]

// userid= is a REQUIRED field that will be used by the server
// to report on stats, etc. without having to reveal the user's
// e-mail address. DO NOT USE spaces. Instead use underscore _.
userid=PG_username

// This value differentiates clients using the same e-mail ID
// DO NOT USE spaces. Instead use underscore _.
clientid=clientID

// Tests completed by this "team" will be rolled-up as part of team stats. This
// will be recorded on tests that are pending and then updated when tests are
// completed. Since it is stored on the server per test, it is possible for a
// single user to be a member of multiple teams. If no value is specified for
// this field, then completed tests and primes will not be awarded to any teams.
// DO NOT USE spaces. Instead use underscore _.
teamid=

// server= configures the mix of work to perform across one or more
// servers. It is parsed as follows:
// <suffix>:<pct>:<workunits>:<server IP>:<port>
//
// <suffix> - a unique suffix for the server. This is used to distinguish
// file names that are created for each configured server.
// <pct> - the percentage of PRP tests to do from the server.
// <workunits> - the number of PRP tests to get from the server. The
// server also has a limit, so the server will never return
// more than its limit.
// <server IP> - the IP address or name for the server
// <port> - the port of the PRPNet server, normally 7101
//
// Setting pct to 0 means that the client will only get work from the
// server if it cannot connect to one of the other configured servers.
// Please read the prpnet_servers.txt in this directory for information
// on the latest PRPNet servers.

// The following servers are from the Prime Sierpinski Project
// These servers are external to PrimeGrid.
server=PSPfp:100:1:www.psp-project.de:8100
server=PSPdc:0:1:www.psp-project.de:8101

// This is the name of LLR executable. On Windows, this needs to be
// the LLR console application, not the GUI application. The GUI
// application does not terminate when the PRP test is done.
// On some systems you will need to put a "./" in front of the executable
// name so that it looks in the current directory for it rather than
// in the system path.
// LLR can be downloaded from http://jpenne.free.fr/index2.html
llrexe=llr.exe

// This is the name of the PFGW executable. On Windows, this needs to
// be the PFGW console application, not the GUI application.
// PFGW can be downloaded from http://tech.groups.yahoo.com/group/openpfgw/
// If you are running a 64 bit OS, comment out the pfgw32 line
// and uncomment the pfgw64 line.
//pfgwexe=pfgw32.exe
pfgwexe=pfgw64.exe

// This is the name of the genefer executables used for GFN searches. Up
// to four different Genefer programs can be specified. The client will
// attempt a test with genefercuda first if available...otherwise, genefx64
// will be first. If a round off error occurs in either, it will try genefer.
// If a round off occurs in genefer, it will try genefer80. If
// genefer80 fails, then the number cannot be tested with the Genefers. It will
// then be tested with pfgw if available. The order they are specified here
// is not important. (NOTE: Linux and MacIntel only have genefer available for CPU)
// Uncomment the line (genefx64) if you are running on a 64 bit machine.
//geneferexe=genefercuda.exe
geneferexe=genefx64.exe
geneferexe=genefer.exe
geneferexe=genefer80.exe

// This sets the CPU affinity for LLR on multi-CPU machines. It defaults to
// -1, which means that LLR can run on any CPU.
cpuaffinity=

// This sets the GPU affinity for CUDA apps on multi-GPU machines. It defaults to
// -1, which means that the CUDA app can run on any GPU.
gpuaffinity=

// Set to 1 to tell PFGW to run in NORMAL priority. It defaults to 0, which means
// that PFGW will run in IDLE priority, the same priority used by LLR, phrot,
// and genefer.
normalpriority=0

// This option is used to default the startup option if the PREVIOUS
// SHUTDOWN LEFT UNCOMPLETED WORKUNITS. If no previous work was left
// this will act like option 9.
// 0 - prompt
// 1 - Return completed work units, abandon the rest, then get more work
// 2 - Return completed work units, abandon the rest, then shut down
// 3 - Return completed, then continue
// 4 - Complete in-progress work units, abandon the rest, then get more work
// 5 - Complete in-progress work units, abandon the rest, then shut down
// 6 - Complete all work units, report them, then shut down
// 9 - Continue from where client left off when it was shut down
startoption=3

// stopoption= tells the client what to do when it is stopped with CTRL-C and there is
// work that has not been completed and returned to the server. Options 2, 5, and 6 will
// return all workunits. This will override stopasapoption. The accepted values are:
// 0 - prompt
// 2 - Return completed work units, abandon the rest, then shut down
// 3 - Return completed work units (keep the rest), then shut down
// 5 - Complete in-progress work units, abandon the rest, report them, then shut down
// 6 - Complete all work units, report them, then shut down
// 9 - Do nothing and shut down (presumes you will restart with startoption=9)
stopoption=3

// stopasapoption= tells the client that it needs to be shutdown automatically, i.e. without
// a CTRL-C. It is evaluated after each test is completed. It should be 0 upon startup.
// The accepted values are:
// 0 - Continue processing work units
// 2 - Return completed work units and abandon the rest
// 3 - Return completed work units (keep the rest)
// 6 - Complete all work units and return them
stopasapoption=0

// Timeout on communications errors
// (default is 60 minutes, minimum is 1 minute if not specified here...)
// Note that the actual value used in the client is anywhere from 90% to 110% of this value
errortimeout=3

// Size limit in megabytes for the prpclient.log file...
// 0 means no limit.
// -1 means no log.
loglimit=1

// Set the debug level for the client
// 0 - no debug messages
// 1 - all debug messages
// 2 - output debug messages from socket communication
debuglevel=0

// Whether or not to echo "INFO" messages from server to console for accepted tests
// 0 - no echo
// 1 - echo (default)
echotest=1
5. You'll see lots of .bat files in the extracted folder. Use the group of .bats matching the number of cores you have (or want to run). If you have a quad-core machine, for example, you would run them in this order:

a) 4-quad-install-prpclient.bat (this creates 4 new work folders, one for each core)

b) 4-quad-update-prpclient-ini.bat (this updates the prpclient.ini file in each of the new folders from the master_prpclient.ini file)

c) 4-quad-start-prpclient.bat (starts all the clients at once)
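For anyone curious what the install and update .bats actually do (or anyone running the client on Linux), here's a rough sketch as a shell function; the folder names are illustrative, not necessarily the exact ones the .bats create:

```shell
# Rough sketch of the N-core install/update steps: one work folder per
# core, each seeded with prpclient.ini copied from master_prpclient.ini.
install_prpclients() {
  cores=$1
  for i in $(seq 1 "$cores"); do
    mkdir -p "prpclient-$i"                               # one work folder per core
    cp master_prpclient.ini "prpclient-$i/prpclient.ini"  # seed each from the master
  done
}
# e.g. install_prpclients 4
```

After that you'd start one client per folder, which is all the start .bat does.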
 
Is it just me or is the Docking website REALLY bloody slow?

Trying to figure out why I'm not getting any credit reported, but it takes me about 5 mins just to get to my account page.

I just tried it and got to my account straight away. Have you signed up as Johnathon? It says 'client detached' on all your failed work units.
 
Docking, which you're already doing a grand job on, Biffa.
Phil had a recent long run on yoyo, so stomps on that are a bit tougher now, but go for whatever you like. GPUgrid could do with some TLC, but I've no idea how your cards would perform on that.

EDIT: Looking at GPUGrid, it wouldn't be worth it; team scores are fairly spaced apart, so you'd be slogging away for very little gain.
 
Have a look at the message boards on the Docking site, Phil; it seems to be a long-standing problem with no solution (that I could see), and a mod's advice was to abort them. IIRC the work units took around 2-3 hrs on my 3770K, and I never had the 0% progress problem. The only possible solution I can think of is running it in Linux; I've found so far that eon, NFS & correlizer all run quicker in Linux.
 
Just finished my first set of PRP workunits. I think they took about 3 days. I am running 4 at a time on my 3770K, but weirdly two of them seem to be running faster than the other two :confused:

Yeah I noticed that too when I ran it, no idea why.
I've been slacking as of late, sunning myself abroad, back now so I'll fire the machines up at the weekend.
 
There are a few PSP sub-projects: Proth, doublecheck and sieving. You and I have been doing Proth; DC Vault goes by these stats, the combined values.
I have a couple of cores on PSP now, and after all my procrastinating I've started on GIMPS. I've been all over the place on Boinc, settled on Docking for now.
Don't know if you've noticed, but EVGA are catching us quickly, and unfortunately we seem to have lost Remos, hopefully just temporarily.
 
Ah I see :) I'm now having a go at Seventeen or Bust. The WUs seem to be massive. It's saying about 10 days to complete :eek:

Are you PG? If so, it looks like the credit is coming through already, though I read somewhere that the credit is only borrowed until you complete the whole WU. Don't know how true that is; nevertheless, a nice little bump in the stats :)
 
Think I'll run 16 cores on GIMPS and a couple on e0n2 just to keep it ticking over.

I'm doing GIMPS myself on my card, and I'm going for it big time; it's much more productive than CPU.

Instead of GIMPS, how about Seventeen or Bust? Phil's been doing it, and I switched a rig over yesterday; it would be cool if we all attacked it. EDIT: I should say though, don't let me stop you doing whatever you want to do, it was just an idea :)
 
The WUs seem to be massive. It's saying about 10 days to complete :eek:

The ones I've picked up are going to complete in less than 4 days, and this is on a 2700K. Are you running 4 WUs, and do you have HT enabled? I did notice that with HT on it was only using 50% CPU, despite the client saying it was using cores 1-2 for one WU, 3-4 for another, etc., so I turned it off and it's now maxed out. I did try running 2 instances, but it just seemed to take twice as long; maybe there's a slight percentage advantage, but I couldn't see it.
 