
1090T or 950 for Scientific Computing

Hi there,

I'm quite new to this forum and thinking about building my first DIY system via OcUK.

I have seen many discussions comparing the AMD 1090T and the i7 950, either on OcUK or elsewhere. I am just wondering how I should choose between these two?

I will mainly use this computer for scientific computing (very intensive), and occasionally some games ;). Can someone give me some ideas?

Thanks!
 
I would go with the Intel i7 950.

1. Beats AMD at single/dual-threaded apps.
2. Has triple-channel memory.
3. Hardly anything uses 6 cores atm.

I also believe Intel CPUs are better at crunching numbers?
 
If your scientific computing software will make full use of all 6 cores then go for the AMD 1090T. From the comparisons that have been done on these forums between the quad-core i7s and the 6-core Phenoms, the Phenoms provide the same or fractionally better performance in programs that make full use of all 6 cores (Cinebench, HandBrake) at the same clock speed as the i7s. As the 1090T has a higher default clock speed than the i7 950, it will provide better performance. If you will be overclocking the system then you will probably get to around 4 - 4.2GHz with either chip (depending on cooling), so they will both perform about the same.

If your software will not make full use of the 6 cores then you will probably be better off with the i7 950, as it has better single-threaded performance than the 6-core AMD Phenom due to hyperthreading.
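
To put rough numbers on that trade-off, here is a quick Amdahl's-law sketch in Python; the single-core speeds and parallel fractions below are made-up illustrative assumptions, not benchmarks of either chip:

Code:
# Rough Amdahl's-law comparison of a 4-core i7 950 vs a 6-core 1090T.
# All numbers are illustrative assumptions, not measurements.

def amdahl_speedup(parallel_fraction, cores):
    """Speedup over one core when this fraction of the run time parallelises."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assumed relative single-core speed at stock clocks (i7 950 set to 1.0).
single_core_speed = {"i7 950 (4 cores)": 1.00, "1090T (6 cores)": 0.90}
core_count = {"i7 950 (4 cores)": 4, "1090T (6 cores)": 6}

for frac in (0.0, 0.5, 0.9, 1.0):   # fraction of the run time that parallelises
    print(f"parallel fraction = {frac:.0%}")
    for cpu in single_core_speed:
        throughput = single_core_speed[cpu] * amdahl_speedup(frac, core_count[cpu])
        print(f"  {cpu}: relative throughput {throughput:.2f}")

The point is only that the 6-core chip pulls ahead once a large fraction of the run time genuinely parallelises; otherwise the faster individual cores win.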
 
It would all depend on what you are looking to crunch.

If you are looking at distributed computing like SETI, MilkyWay@Home or Folding@Home then it all depends on the type of units you intend to process.

For instance, with Folding@Home there are 3 types of CPU unit:

1. Standard single-threaded unit, which will only use one core of the CPU at a time.

2. SMP A3 unit, which will use all the cores of your CPU, including the 4 extra HT threads on the Intel i7.

3. SMP BigAdvantage unit, which really can only be completed correctly with an i7 running 8/12 threads and clocked to a minimum of 4GHz.

Out of the two CPUs you have chosen, the i7 is capable of producing more science when overclocked to 4GHz+ and running the SMP BigAdvantage units.
If running them at stock then, from the info I've seen around the various forums, they both seem to be evenly matched and produce roughly the same amount of science.

Also, from a price point of view the CPUs are roughly the same price. However, for an i7 setup you will need to add another £275 minimum to get the system running, whereas the AMD Phenom setup can be had for only an extra £150 going by current OcUK prices.
 
What type of computing? Highly parallel, i.e. multiple pairwise comparisons? If so, consider a GPU-based solution...
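
To give a feel for what "highly parallel" means here, below is a rough NumPy sketch of a pairwise-comparison workload; every entry of the output matrix is independent of the others, which is exactly the shape of problem that ports well to a GPU. The array sizes are arbitrary:

Code:
# Pairwise squared Euclidean distances between two sets of vectors.
# Every output element is independent, so the work maps cleanly onto a GPU.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((1000, 64))    # 1000 samples, 64 features (arbitrary sizes)
b = rng.standard_normal((800, 64))

# ||a_i - b_j||^2 = ||a_i||^2 + ||b_j||^2 - 2 * a_i . b_j, computed for all pairs at once
sq_a = (a * a).sum(axis=1)[:, None]    # shape (1000, 1)
sq_b = (b * b).sum(axis=1)[None, :]    # shape (1, 800)
dists = sq_a + sq_b - 2.0 * (a @ b.T)  # shape (1000, 800)

print(dists.shape)

The same broadcasting pattern moves to a GPU array library more or less unchanged, which is where the big speedups come from.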
 
If your software will not make full use of the 6 cores then you will probably be better off with the i7 950, as it has better single-threaded performance than the 6-core AMD Phenom due to hyperthreading.

You make no sense.

An i7 950 would be superior in both instances.

Hyperthreading is: 1 core/2 threads

So a 950 is 4 cores/8 threads

And an AMD X6 is 6 cores/6 threads
 
I think he's trying to point out that in multi-core-aware programs that will use all of the AMD's 6 cores, the AMD system will have the edge over an i7. Hyperthreading is all well and good, but it is no substitute for real cores.
 
It all depends on what the OP means by "scientific computing (very intensive)". What software are you using? For example, Markov chain Monte Carlo analysis used in Bayesian statistics doesn't respond well to parallel hardware, but compilers can make use of it, and things like FFTs and image processing can take advantage of it too.

What calculations are you doing?
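
For anyone wondering why MCMC is awkward to parallelise, here is a minimal random-walk Metropolis sketch in Python, targeting a standard normal purely as a toy example; each draw depends on the previous state, so the main loop of a single chain cannot simply be split across cores:

Code:
# Minimal random-walk Metropolis sampler for a standard normal target (toy example).
# Each iteration depends on the previous state, so a single chain is inherently sequential.
import math
import random

def log_target(x):
    return -0.5 * x * x            # log-density of N(0, 1), up to a constant

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # accept with probability min(1, target(proposal) / target(x))
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

chain = metropolis(10_000)
print(sum(chain) / len(chain))     # should be near 0 for this toy target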
 
It's mainly about how much you want to spend on the system. A 6-core AMD can be built for less and still give very good performance. An i7 with triple-channel memory will be more expensive but potentially faster overall. Then there is cooling and overclocking if performance is that critical. Extra power requirements will require extra care with the PSU and cooling, and will push up the cost.

All in all, the AMD may have the edge if you have a use for all 6 cores. You also have to consider your storage and whether there is a potential bottleneck with your applications, hard drive setups (RAID configs, backups), RAM size and the benefits of triple-channel.

Then there are also GPU technologies like NVIDIA CUDA and SLI, and how relevant they are to you.

Work out what the ideal system would be as both an AMD build and an Intel build, and compare them on cost vs performance.

Bottom line, the choice comes down to your budget, and whether you have the potential to max out all 6 cores.
 
Just for interest's sake: http://www.csiro.au/resources/GPU-cluster.html#2

"Compared to the latest quad-core CPUs, the Tesla C2050s inside the CSIRO GPU cluster deliver the equivalent supercomputing performance at 1/10th the cost and 1/20th the power consumption.
The GPU technology can be accessed using the CUDA parallel computing architecture or using new compiler technology released by the Portland Group. CSIRO science applications have already seen 6-200x speedups on NVIDIA GPUs."

computational biology
climate and weather
multi-scale modelling
computational fluid dynamics
computational chemistry
astronomy and astrophysics
computational imaging and visualisation
advanced materials modelling
computational geosciences.
 
It all depends on what the OP means by "scientific computing (very intensive)". What software are you using?

What calculations are you doing?

This is the crux of the matter.

Be warned that this thread will attract replies from people who don't know what they're talking about. Good luck.
 
Originally Posted by Someone
If your software will not make full use of the 6 cores then you will probably be better off with the i7 950, as it has better single-threaded performance than the 6-core AMD Phenom due to hyperthreading.

Originally Posted by NathWraith
You make no sense.

An i7 950 would be superior in both instances.

Hyperthreading is: 1 core/2 threads

So a 950 is 4 cores/8 threads

And an AMD X6 is 6 cores/6 threads

Why would the fact that the i7 is 4 cores/8 threads instantly mean that it will provide better performance?

Hyperthreading does not provide a 100% improvement in performance for the i7 cores. If you disable hyperthreading on the 950 it will not perform at 50% of what it does with it enabled. The i7 appears in Windows Task Manager as having 8 cores, but these are 8 virtual cores; the processor only actually has 4 true cores, and hyperthreading allows each core to carry out more instructions per cycle. That is why the performance of each core is increased, and also why the software only has to be able to scale to 4 cores for the i7 to operate at full load rather than 8. If the i7 had 8 true cores without hyperthreading, the software would have to be able to scale to 8 cores for the processor to operate at full load.

From the comparisons we did on this forum, at the same clock speed a single i7 core with hyperthreading had higher performance than a single core from the Phenom X6 in single-threaded applications, because hyperthreading allowed the i7 core to carry out more instructions per clock. If the software was able to make full use of 6 cores (e.g. Cinebench, HandBrake) then at the same clock speed the Phenom X6 would perform the same or fractionally better than the i7.

In the case of the 1090T and 950, at default they do not have the same clock speed. The 1090T has a higher clock speed, so it will provide higher performance if the software can make full use of the 6 cores. If the software can only scale to 4 or fewer cores then the i7 will probably provide better performance than the 1090T even with its lower default clock speed, and if both processors are overclocked to the same speed they will perform about the same.
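
If the OP wants to know how far their own code actually scales before spending money, a crude test along these lines (Python multiprocessing with a stand-in CPU-bound task; swap in a representative slice of the real workload) shows whether throughput keeps improving past 4 workers, i.e. whether the X6's extra real cores would even get used:

Code:
# Crude scaling test: time a CPU-bound toy task with 1..N worker processes.
# busy_work() is only a placeholder; substitute a slice of the real workload.
import time
from multiprocessing import Pool, cpu_count

def busy_work(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 24          # 24 equal chunks of work to spread around
    for workers in range(1, cpu_count() + 1):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(busy_work, chunks)
        print(f"{workers} worker(s): {time.perf_counter() - start:.2f} s")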
 
It all depends on what the OP means by "scientific computing (very intensive)". What software are you using? For example, Markov chain Monte Carlo analysis used in Bayesian statistics doesn't respond well to parallel hardware, but compilers can make use of it, and things like FFTs and image processing can take advantage of it too.

What calculations are you doing?


Hi, thank you very much for your response!

I am indeed currently working on producing a new algorithm based on Markov chain Monte Carlo, and I will think about optimising it, maybe next year, by involving some parallel computing. So I have to take all those uncertain factors into consideration when I build this computer.

Any more suggestions? How about the effect of triple-channel vs dual-channel memory in my case?
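
Just as a pointer for the "parallelise it next year" plan: the usual cheap win with MCMC is to run several independent chains at once, one per core, and pool the draws afterwards. A rough Python sketch of the idea (the sampler inside is the same toy random-walk Metropolis as earlier in the thread, not your algorithm):

Code:
# Running several independent MCMC chains in parallel, one process per chain.
# Within a chain the loop stays sequential, but the chains never need to talk.
import math
import random
from multiprocessing import Pool

def run_chain(seed, n_samples=10_000, step=1.0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # log acceptance ratio for a standard normal toy target
        if math.log(rng.random()) < 0.5 * (x * x - proposal * proposal):
            x = proposal
        samples.append(x)
    return samples

if __name__ == "__main__":
    with Pool(4) as pool:                      # e.g. 4 chains on a quad-core
        chains = pool.map(run_chain, [1, 2, 3, 4])
    pooled = [draw for chain in chains for draw in chain]
    print(len(chains), "chains,", len(pooled), "pooled draws")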
 
What sort of demands will you be placing on the memory / how much do you need?

i.e. 6 or 12GB, or 4 or 8GB...

Will your work respond better to lower latencies or higher bandwidth?
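
A crude way to tell which of those matters for your code: a straight streaming pass over a large array is limited mainly by memory bandwidth, while visiting the same data in random order is dominated by latency and cache misses. A rough NumPy sketch, with an arbitrary array size chosen to be much larger than the CPU cache:

Code:
# Same data, two access patterns: a streaming pass (bandwidth-bound) versus
# visiting the elements in random order (dominated by latency / cache misses).
import time
import numpy as np

n = 20_000_000                                # ~160 MB of float64, well beyond cache
data = np.ones(n)
order = np.random.permutation(n)              # random visiting order

start = time.perf_counter()
streaming_sum = data.sum()                    # streaming pass
t_stream = time.perf_counter() - start

start = time.perf_counter()
shuffled_sum = data[order].sum()              # random-order gather
t_random = time.perf_counter() - start

print(f"streaming: {t_stream:.3f} s, random order: {t_random:.3f} s")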
 
What sort of demands will you be placing on the memory / how much do you need?

i.e. 6 or 12GB, or 4 or 8GB...

Will your work respond better to lower latencies or higher bandwidth?

Well, that's why I am indeed a fresher here; I don't even know the difference between lower latencies and higher bandwidth.
 
Just for interest's sake: http://www.csiro.au/resources/GPU-cluster.html#2

"Compared to the latest quad-core CPUs, the Tesla C2050s inside the CSIRO GPU cluster deliver the equivalent supercomputing performance at 1/10th the cost and 1/20th the power consumption.
The GPU technology can be accessed using the CUDA parallel computing architecture or using new compiler technology released by the Portland Group. CSIRO science applications have already seen 6-200x speedups on NVIDIA GPUs."

computational biology
climate and weather
multi-scale modelling
computational fluid dynamics
computational chemistry
astronomy and astrophysics
computational imaging and visualisation
advanced materials modelling
computational geosciences.

Well, from what my supervisor suggests, he believes that GPUs are a technology with many low-efficiency cores that will not improve the computation significantly.
 
Hi, thank you very much for your response!

I am indeed currently working on producing a new algorithm based on Markov chain Monte Carlo, and I will think about optimising it, maybe next year, by involving some parallel computing. So I have to take all those uncertain factors into consideration when I build this computer.

Any more suggestions? How about the effect of triple-channel vs dual-channel memory in my case?

I don't know how you'll speed up MCMC. Have you considered integrated nested Laplace approximations (INLA)? I haven't used them, but they're meant to be faster, though less general.
 