Server CPUs are not overclocked. On top of this, trying to look at desktop workloads and predict how well a chip will do in a server environment does not make sense. It depends on what sort of applications you are running, what OS you are running, and the infrastructure you are using.
This is very true. You'd have to do some statistical analysis of your typical loads.
Simple Option
However, let's consider throughput as a simple function of work / cores:
i = time taken to run the job on 1x i5 core
p = time taken to run the job on 1x 1100T core
And assuming perfect scaling to n threads...
IF ( i / 4 < p / 6 ) buy i5
ELSE buy 1100T
And there is genuinely nothing I can suggest other than testing your specific app on each processor and seeing whether the above is true.
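For what it's worth, here's that check as a few lines of Python; the two timings are made-up placeholders you'd swap for your own measurements:

```python
# Crude throughput check: work per job / cores, assuming perfect scaling.
# The two timings below are placeholders - measure your real app on each chip.
i5_job_time = 30.0       # seconds for one job on a single i5 core (made up)
x1100t_job_time = 36.0   # seconds for one job on a single 1100T core (made up)

if i5_job_time / 4 < x1100t_job_time / 6:
    print("buy i5")
else:
    print("buy 1100T")
```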
Really Hard Stuff
This is of course the more realistic option, but bugger me it's a headpeck!
u is the number of simultaneous users that your server can support
h is the number of actual threads used per user request
c is the number of execution cores available
tE is the total amount of execution time required (if run on a single core)
tI is the idle time (per user) between requests
Assumption: perfect thread scaling, i.e. with 4 threads the job takes 1/4 as much time as with 1 thread. This is a lie, but it's a starting point.
A single user will, on average, chew up a portion of the available computing resources equal to:
1 / (((tE + tI) / tE) / (h / c))
E.g. if a task takes 20 seconds on 1 core, with 60 seconds between requests, and we allow 2 threads per user on a quad core...
1 / (((20 + 60) / 20) / (2 / 4))
= 1 / (4 / 0.5)
= 1/8 th of all computing resources are going to one user
(This is the common sense maths: 20 seconds per 80 seconds uses half the CPU cores, i.e. one quarter of one half is one eighth.)
To find your maximum number of concurrent users, take away the 1/ at the start:
Max users = (((tE + tI) / tE) / (h / c))
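Or, the same sum as a throwaway bit of Python (the function name is just my own shorthand for the symbols above):

```python
def max_users(t_exec, t_idle, threads_per_user, cores):
    # Max concurrent users = ((tE + tI) / tE) / (h / c),
    # assuming perfect thread scaling (which is a lie, see above).
    return ((t_exec + t_idle) / t_exec) / (threads_per_user / cores)

# The worked example: 20 s of work, 60 s idle, 2 threads per user, 4 cores
print(max_users(20, 60, 2, 4))      # 8.0 concurrent users
print(1 / max_users(20, 60, 2, 4))  # 0.125, i.e. 1/8 of the box per user
```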
Guess #1
I'm going to go with blind guesswork now...
tI (user idle time) is a flat 60 seconds between queries.
tE (total execution time) can be 20 seconds for an i5, 25 seconds for an 1100T
h (allocated threads per user) is 2
c is 4 for an i5, or 6 for an 1100T
So for an i5, u = 8, as described above
For an 1100T: u = ((25 + 60) / 25) / (2 / 6) = 10.2 users
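In plain arithmetic (same formula, same guessed numbers):

```python
# Max users under the guessed numbers, same formula as above
i5_users     = ((20 + 60) / 20) / (2 / 4)   # 8.0
x1100t_users = ((25 + 60) / 25) / (2 / 6)   # ~10.2
print(i5_users, x1100t_users)
```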
More Fancy
Now for the clever bit: let's find how much faster an i5 has to be to deliver better throughput. An i prefix means Intel, an a prefix means AMD! An i5 is better when:
(((itE + tI) / itE) / (h / 4)) > (((atE + tI) / atE) / (h / 6))
Multiply across...
(h/6) * ((itE + tI) / itE) > ((atE + tI) / atE) * (h/4)
Both sides are now being multiplied by h, so we can in fact remove it...
(1/6) * (itE + tI) / itE > (1/4) * (atE + tI) / atE
Simplifying, the i5 is better when...
(itE + tI) / itE > 1.5 * (atE + tI) / atE
(Intel single thread run time + user idle time) / (Intel single thread run time) > 1.5 * (AMD single thread run time + user idle time) / (AMD single thread run time)
Unfortunately at this point there's nothing to be done without some real numbers. Plugging in the Guess #1 figures (atE = 25, tI = 60), the right-hand side comes out at 5.1, and the inequality boils down to the i5's processing time needing to be under roughly 14.6 seconds, which the 20 seconds guessed above doesn't come close to.
So what you need is your per-request run time, running single-threaded, on each type of CPU, and the average time each user will allow between requests.
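Here's that rearrangement as a couple of lines of Python, still using the guessed 1100T figures (which are pure guesswork on my part):

```python
# Break-even i5 run time, from (itE + tI) / itE > 1.5 * (atE + tI) / atE,
# using the guessed 1100T figures: atE = 25 s, tI = 60 s
atE, tI = 25.0, 60.0
rhs = 1.5 * (atE + tI) / atE     # right-hand side of the inequality, = 5.1
break_even = tI / (rhs - 1)      # i5 wins on throughput only if itE is below this
print(break_even)                # ~14.6 seconds
```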
Or my maths could be wrong
*edit* Bugger me I must be bored...