compiling cores > clockspeed ?

What's going to be better for compiling a large source tree, say a kernel?


4 core 2500k oc to 4.4
vs
6 core 1090T oc to 3.8

Opinions please
 
I don't think you'd see any majorly worthwhile difference between the two, but my pick would be the 6-core, purely because you can do more jobs at once, and things like kernel compiling make very good use of multiprocessing.

I have a 6-core with a 4GHz OC, I'll post some kernel compile times up in a bit.
 
Just tested the latest 3.0-rc2 kernel from source. Compiling with 12 threads on my 6-core @ 4GHz (hyper-threading), it took 58 seconds with the stock kernel config and 5 minutes 6 seconds with the Arch Linux config (majority of stuff enabled).

Would be interesting to see some comparisons with a quad core, but I don't think you will see a big enough difference to be very meaningful.
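For anyone wanting to reproduce the timing above, the workflow is roughly the following (a sketch only: the exact config step the poster used isn't stated, so `defconfig` stands in for the "stock kernel config"):

```shell
# Hedged sketch of timing a kernel build; assumes the 3.0-rc2
# tarball has already been unpacked into linux-3.0-rc2.
cd linux-3.0-rc2
make defconfig      # "stock" config; drop in the Arch .config instead for the other test
time make -j12      # 12 jobs = 6 cores x 2 threads with hyper-threading
```

The "real" line of `time`'s output is the wall-clock figure being compared in this thread.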
 
Grabbed the 3.0-rc2 sources, ran make gconfig and saved the defaults; time make is running now, will report back when it's done (results may skew a little because I'm crunching hashes on my GPU). Q6600 @ 3.8
 
Yeah I did, it's probably the inefficiencies in my chip compared to yours showing me up here more than anything. I expected your chip to thrash mine, but not that badly, haha
 
I know a lot about building distros, it's my job; and I can tell you that most of the time is wasted running single-threaded tasks like autotools, and that the compilation itself is usually quite fast and not very significant.
Only when you compile big C++ code bases like WebKit, or toolchains, do all your cores actually run at 100%, and THERE the frequency of the cores matters a lot.
 
Gonna try mine out in a min. i5 760 @ 4GHz.

Btw, I read it's quicker to pass make one more job than the number of CPUs you have. So -j5 for a quad-core CPU, and presumably -j13 for a six-core with hyper-threading.
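That "CPUs + 1" rule doesn't need hard-coding; it falls straight out of `nproc` (the `JOBS` variable name here is made up for the example):

```shell
# Thread's rule of thumb: one more make job than logical CPUs, so there is
# always a compile queued up when another job blocks on disk I/O.
JOBS=$(( $(nproc) + 1 ))   # nproc counts logical CPUs (cores x SMT threads)
echo "make -j${JOBS}"
```

On a quad-core this prints `make -j5`, matching the suggestion above.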
 
Actually, in my experience you can add quite a few more than that. I use -j8 on a quad, -j12 on a quad with HT.

But if you build distros, the REAL gain is being able not just to use make -j, but also to run make jobs (of components) in parallel, because in that case the dreaded "autocrap" parts also run in parallel.
In which case, you can reduce the -j amount a bit (-j5 and -j9) to make a little more room for the concurrent jobs.
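The component-level parallelism described here can be sketched with plain shell background jobs. The directories and Makefiles below are toy stand-ins, not a real distro build; the point is just that two independent make invocations overlap, including their single-threaded configure-style steps:

```shell
#!/bin/sh
# Toy demo: build two independent "components" concurrently so their
# serial phases overlap. Everything here is made up for the demo.
set -e
work=$(mktemp -d)
for c in comp-a comp-b; do
  mkdir -p "$work/$c"
  # minimal Makefile: the recipe line needs a real tab, which printf's \t gives us
  printf 'all:\n\techo built-%s > out.txt\n' "$c" > "$work/$c/Makefile"
done
( cd "$work/comp-a" && make -s ) &   # component build 1, in the background
( cd "$work/comp-b" && make -s ) &   # component build 2, in the background
wait                                  # block until both builds finish
cat "$work"/comp-*/out.txt
```

In a real distro build each background job would itself run with a reduced -j, as suggested above.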
 
It was slower on my box with a -j8
real 1m42.320s
user 5m32.268s

What are you building exactly? In most cases, if it's stuff like big C++ bits, you can run out of memory, and /that/ will kill your performance dead. WebKit is a good example of that: even with -j8 and 8GB, sometimes you see the RAM usage go over 4GB!
Otherwise, the bottleneck might be the disk, but I've never had that problem on a workstation...
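One mitigation worth noting for the thrashing problem (this is GNU make's own feature, not something from this thread): the -l flag stops make launching new jobs while the system load average is above a limit, which softens the worst of the memory/CPU contention:

```shell
# Cap parallelism by load average as well as job count: GNU make won't
# start additional jobs while the load average exceeds 8.
make -j8 -l 8
```

It won't stop a single huge link step from eating RAM, but it does keep a pile of jobs from piling up at once.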

But really, on a quad, -j8 should be close to optimal

PS: Whoops sorry lost track of the beginning of the thread, kernel building :-)
 
Phoronix seems to say the performance of .39 was a bit lower than .38; right now I like .38, it's nice and stable.
I use .39 and the rcXs on ARM, because board support gradually gets better, but on the workstations I'm sticking with .38 for now. One thing I keep an eye on is btrfs, because I'd like to convert to mainly btrfs soonish
 
On that note, is anyone using the 3.0 RC? I'm struggling to find reasons to go from 2.6.38 to .39, let alone fool around with 3.0

2.6.38 has the nice scheduler improvements that are probably a worthy upgrade IMO.

I'm tempted to try out the Xen integration in 3.0, but other than that I don't think it's anything special, Linus just went to 3.0 "because he could" :p
 