I am a computational physicist working with numerically fragile codes, and I now have access to clusters to run my work (some simulations require upward of 300,000 CPU hours to complete). I was wondering whether overclocking a processor would change the outcome of these simulations, or whether CPU design has now reached a stage where it would make no difference.
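
To make "numerically fragile" concrete, here is a minimal sketch (a generic illustration, not taken from my actual codes) of how the same floating-point sum gives different answers depending purely on the order of operations, which is why I worry about anything that might perturb the arithmetic:

```c
/* Illustrative only: the same three terms summed in two orders
 * give different results in IEEE 754 double precision, so anything
 * that reorders floating-point operations can change the answer. */
#include <stdio.h>

int main(void) {
    double big = 1.0e16, small = 1.0;

    /* Forward order: each small term is absorbed by the big one. */
    double forward = (big + small) + small;   /* 1.0e16 */

    /* Reordered: the small terms combine before meeting the big one. */
    double reordered = big + (small + small); /* 1.0e16 + 2 */

    printf("forward    = %.1f\n", forward);
    printf("reordered  = %.1f\n", reordered);
    printf("difference = %.1f\n", reordered - forward);
    return 0;
}
```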
As an aside, I have spoken to people who think that increasing the number of cores on the processor will not have any tangible effect on the speedup of some of the quantum codes I use, because the on-die cache will be too small for the instructions, and this will hamper the program's speed. Are there any computational scientists who know about all this and whether these people are correct? (I have put a rough way of testing the claim below.)
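
Here is a rough sketch of how I imagine one could test that claim on a given node, using a STREAM-like triad as a stand-in for a memory-bound kernel (the array size, kernel, and compiler invocation are illustrative assumptions, not anything specific to my codes):

```c
/* A bandwidth-bound triad: if the claim is right for a kernel like
 * this, the runtime stops shrinking well before the core count runs
 * out, because memory bandwidth saturates first.
 * Compile with e.g.: gcc -O2 -fopenmp triad.c -o triad */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 50000000  /* ~400 MB per array, far larger than any cache */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    /* Time the same kernel at 1, 2, 4, ... threads. */
    for (int threads = 1; threads <= omp_get_max_threads(); threads *= 2) {
        omp_set_num_threads(threads);
        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];
        double t1 = omp_get_wtime();
        printf("%2d threads: %.3f s\n", threads, t1 - t0);
    }

    free(a); free(b); free(c);
    return 0;
}
```

Whether the quantum codes I use actually behave like this kernel is exactly the part I am unsure about, hence the question.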
Don't worry, before anyone asks: I am not the cluster manager, and I won't be overclocking it.
