DC Vault

It's going to take some creative crunching to get near EVGA, that's for sure.

There's probably a few hundred more points to squeeze out of eon2, but I'll need to move on soon and I'm not sure what works well on CPU only.
 
Yeah, we're starting to run out of projects where there are easy points to be had. It's probably only GIMPS and PRP (the only project we have no team for) where we can still gain a lot of points for not much effort.
 
I've requested a new OcUK team for PSP. I think that's the ace up our sleeve: we're 2700pts behind EVGA and they have nearly 6000pts on that project alone. Perhaps when it's up and running we can all jump on it for a short while to make the going easier. I've worked out that as soon as we return any result we'll get 333pts, and the same again for every position change.
 
Well, PSP sounds like a good target then. I can go back to GIMPS at some point, but at over a week per WU on each core I get terrible results. Best left to the GPU people.

There are points to be had on Muon1 and Seventeen or Bust, but I do hate moving away from BOINC; staying within it makes switching projects a lot less hassle.
 
We now have a PSP team. I'm testing it at the moment; there are a couple of things I'm unsure of, so I've PM'd one of the mods there. If all goes well I'll put together a mini guide, as it's slightly tricky to set up.

If there are any OcUK team members wishing to help in the DC Vault, there are two BOINC projects that could do with a push: RNA World and Docking@home. I'm doing a bit on RNA World now. There are two types of WU, short and long; the short ones generally crunch for a few hours, the long ones for days, sometimes weeks. Which type you get can be set in your RNA World site preferences. I'm running both, with hyperthreading turned off to keep the runtimes as short as possible.

EDIT: I should also say that there are NO checkpoints with these WUs; if the BOINC client closes, all progress is lost and you have to start again from the beginning.

Docking@home WUs run for a couple to a few hours.

For non-BOINC projects, GIMPS and Seventeen or Bust are good targets. I keep saying it, but I will do GIMPS soon; what's putting me off, apart from my attention being pulled to other projects, is that it really works the GPU hard and creates a lot of screen lag, and the card I use for GIMPS is in my day-to-day PC.

I'm hoping we can have a big push soon; we need it, as we're losing ground to EVGA.

But let's keep going!
Crunch on!
 
PSP is now up and running. Here is a little guide if anyone wishes to participate; this is what I did on my 64-bit Windows machine.
--------------------------------------------------------------------------------------------
1. Register an account here and log on.

2. In the forum section "Prime Sierpinski Project" there is a thread called "Teams:- Join a Team". Post in it requesting to join OcUK - Overclockers UK with your desired username, and also PM 'ltd' with your request. Your username must not contain spaces (use underscores if need be); you can check the stats to see if your username is already in use.

3. Once you've been added, download the latest PRPNet client from here (the current Windows version is prpclient-5.0.7-windows.7z).
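
(If anyone prefers the command line, 7-Zip will unpack the .7z with something like the line below; this assumes 7-Zip is installed in its default location, otherwise just right-click and extract as usual:)

"C:\Program Files\7-Zip\7z.exe" x prpclient-5.0.7-windows.7z -oprpclient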

4. Extract it, then replace the "master_prpclient.ini" text with the one in the spoiler below, but CHANGE "email=[email protected]" and "userid=PG_username" appropriately.
// email= is a REQUIRED field. The server will use this address
// to send you an e-mail when your client discovers a prime.
email=[email protected]

// userid= is a REQUIRED field that will be used by the server
// to report on stats, etc. without having to reveal the user's
// e-mail address. DO NOT USE spaces. Instead use underscore _.
userid=PG_username

// This value differentiates clients using the same e-mail ID
// DO NOT USE spaces. Instead use underscore _.
clientid=clientID

// Tests completed by this "team" will be rolled-up as part of team stats. This
// will be recorded on tests that are pending and then updated when tests are
// completed. Since it is stored on the server per test, it is possible for a
// single user to be a member of multiple teams. If no value is specified for
// this field, then completed tests and primes will not be awarded to any teams.
// DO NOT USE spaces. Instead use underscore _.
teamid=

// server= configures the mix of work to perform across one or more
// servers. It is parsed as follows:
// <suffix>:<pct>:<workunits>:<server IP>:<port>
//
// <suffix> - a unique suffix for the server. This is used to distinguish
// file names that are created for each configured server.
// <pct> - the percentage of PRP tests to do from the server.
// <workunits> - the number of PRP tests to get from the server. The
// server also has a limit, so the server will never return
// more than its limit.
// <server IP> - the IP address or name for the server
// <port> - the port of the PRPNet server, normally 7101
//
// Setting pct to 0 means that the client will only get work from the
// server if it cannot connect to one of the other configured servers.
// Please read the prpnet_servers.txt in this directory for information
// on the latest PRPNet servers.

// The following servers are from the Prime Sierpinski Project
// These servers are external to PrimeGrid.
server=PSPfp:100:1:www.psp-project.de:8100
server=PSPdc:0:1:www.psp-project.de:8101

// This is the name of LLR executable. On Windows, this needs to be
// the LLR console application, not the GUI application. The GUI
// application does not terminate when the PRP test is done.
// On some systems you will need to put a "./" in front of the executable
// name so that it looks in the current directory for it rather than
// in the system path.
// LLR can be downloaded from http://jpenne.free.fr/index2.html
llrexe=llr.exe

// This is the name of the PFGW executable. On Windows, this needs to
// be the PFGW console application, not the GUI application.
// PFGW can be downloaded from http://tech.groups.yahoo.com/group/openpfgw/
// If you are running a 64 bit OS, comment out the pfgw32 line
// and uncomment the pfgw64 line.
//pfgwexe=pfgw32.exe
pfgwexe=pfgw64.exe

// This is the name of the genefer executables used for GFN searches. Up
// to four different Genefer programs can be specified. The client will
// attempt a test with genefercuda first if available...otherwise, genefx64
// will be first. If a round off error occurs in either, it will try genefer.
// If a round off occurs in genefer, it will try genefer80. If
// genefer80 fails, then the number cannot be tested with the Genefers. It will
// then be tested with pfgw if available. The order they are specified here
// is not important. (NOTE: Linux and MacIntel only have genefer available for CPU)
// Uncomment the line (genefx64) if you are running on a 64 bit machine.
//geneferexe=genefercuda.exe
geneferexe=genefx64.exe
geneferexe=genefer.exe
geneferexe=genefer80.exe

// This sets the CPU affinity for LLR on multi-CPU machines. It defaults to
// -1, which means that LLR can run on any CPU.
cpuaffinity=

// This sets the GPU affinity for CUDA apps on multi-GPU machines. It defaults to
// -1, which means that the CUDA app can run on any GPU.
gpuaffinity=

// Set to 1 to tell PFGW to run in NORMAL priority. It defaults to 0, which means
// that PFGW will run in IDLE priority, the same priority used by LLR, phrot,
// and genefer.
normalpriority=0

// This option is used to default the startup option if the PREVIOUS
// SHUTDOWN LEFT UNCOMPLETED WORKUNITS. If no previous work was left
// this will act like option 9.
// 0 - prompt
// 1 - Return completed work units, abandon the rest, then get more work
// 2 - Return completed work units, abandon the rest, then shut down
// 3 - Return completed, then continue
// 4 - Complete in-progress work units, abandon the rest, then get more work
// 5 - Complete in-progress work units, abandon the rest, then shut down
// 6 - Complete all work units, report them, then shut down
// 9 - Continue from where client left off when it was shut down
startoption=3

// stopoption= tells the client what to do when it is stopped with CTRL-C and there is
// work that has not been completed and returned to the server. Options 2, 5, and 6 will
// return all workunits. This will override stopasapoption. The accepted values are:
// 0 - prompt
// 2 - Return completed work units, abandon the rest, then shut down
// 3 - Return completed work units (keep the rest), then shut down
// 5 - Complete in-progress work units, abandon the rest, report them, then shut down
// 6 - Complete all work units, report them, then shut down
// 9 - Do nothing and shut down (presumes you will restart with startoption=9)
stopoption=3

// stopasapoption= tells the client that it needs to be shutdown automatically, i.e. without
// a CTRL-C. It is evaluated after each test is completed. It should be 0 upon startup.
// The accepted values are:
// 0 - Continue processing work units
// 2 - Return completed work units and abandon the rest
// 3 - Return completed work units (keep the rest)
// 6 - Complete all work units and return them
stopasapoption=0

// Timeout on communications errors
// (default is 60 minutes, minimum is 1 minute if not specified here...)
// Note that the actual value used in the client is anywhere from 90% to 110% of this value
errortimeout=3

// Size limit in megabytes for the prpclient.log file...
// 0 means no limit.
// -1 means no log.
loglimit=1

// Set the debug level for the client
// 0 - no debug messages
// 1 - all debug messages
// 2 - output debug messages from socket communication
debuglevel=0

// Whether or not to echo "INFO" messages from server to console for accepted tests
// 0 - no echo
// 1 - echo (default)
echotest=1

5. You'll see lots of .bat files in the extracted folder. Use the group of .bats that matches the number of cores you have (or want to run). If you have a quad-core machine, for example, you would run them in this order (there's a rough sketch of what these bats boil down to after the list):

a) 4-quad-install-prpclient.bat (this creates 4 new work folders, one for each core)

b) 4-quad-update-prpclient-ini.bat (this updates the prpclient.ini file in each of the new folders from the master_prpclient.ini file)

c) 4-quad-start-prpclient.bat (this starts all the clients at once)
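
For anyone curious, or running a different number of cores, here's roughly what I think those three bats boil down to for a single client folder. The folder and exe names are my best guess from poking around the download, so treat it as a sketch rather than gospel; the numbered bats just repeat this once per core:

rem sketch only - roughly what install/update/start do for one client folder
mkdir prpclient-1
rem copy the executables and the filled-in master ini into the new folder
copy *.exe prpclient-1\
copy master_prpclient.ini prpclient-1\prpclient.ini
rem launch the client in its own window
cd prpclient-1
start "PRPNet client 1" prpclient.exe
cd ..

When you want to stop a client, use CTRL-C in its window rather than just closing it, so the stopoption setting above gets a chance to return your completed work.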
 
Excellent work :) The three teams above us are now quite close together, so hopefully that should mean three fairly easy stomps :)

I will run down the Correlizer WUs on my 3770K, then give PSP a go on there.
 
Is it just me or is the Docking website really bloody slow?

Trying to figure out why I'm not getting any credit reported, but it takes me about 5 minutes just to get to my account page.
 
I just tried it and got to my account straight away. Have you signed up as Johnathon? It says 'client detached' on all your failed work units.
 
OK, I did a bit of a rejig.

Here's what I've got going: BOINC is limited to GPU only, and CPUs are reserved for F@H except on the laptop.

Seti: 4 x GTX470, 1 x HD7870, 1 x 9800GT
Milkyway: 1 x 5870
Rosetta/Docking/Cosmology: all sharing the CPU (i7-2720, 4 cores with HT) on my laptop

Anything else need a push? I've got an NVS 4200M in the laptop, which has 48 CUDA cores, and I could move the 9800GT to something else.

Both would need reasonable returns on a lower-spec card to be worthwhile for the team :D

Looking at the list, the following BOINC projects seem to support CUDA (other than what I'm already doing):

Collatz
DistrRTgen
Einstein
GPUgrid.net
Moo!
PrimeGrid
 
Docking, which you're already doing a grand job on, Biffa.
Phil had a recent long run on yoyo, so stomps on that are a bit tougher now, but go for whatever you like. GPUGrid could do with some TLC, but I've no idea how your cards would perform on it.

EDIT: Looking at GPUGrid, it wouldn't be worth it; the team scores there are fairly spaced apart, so you'd be slogging away for very little gain.
 