AMD Bulldozer Finally!

If those benchmarks are based on an engineering sample (which I presume means it isn't representative of the final product?), then surely the reviewer would have been aware of this and reflected it in his final comments?
 
If those benchmarks are based on an engineering sample (which I presume means it isn't representative of the final product?), then surely the reviewer would have been aware of this and reflected it in his final comments?

My thoughts exactly.
 
If those benchmarks are based on an engineering sample (which I presume means it isn't representative of the final product?), then surely the reviewer would have been aware of this and reflected it in his final comments?

+1up

I think people are disappointed with BD. Oh well. I guess Intel are better after all.
 
You knew my point.

And it is easier to hit a TDP and stability target at a lower clock than at a higher clock for the same silicon.

Heat and stability are the most important factors for servers.

Most home users and the average consumer don't have the luxury of multi-socket servers, so the reliance is on a single chip; more is required from that single chip, and a higher clock may be required to get satisfactory results.

This was my point, you even said it: heat and TDP are a BIGGER issue for servers, not for home, and it's NOT easier to get the TDP right with lower clocks. Sure, it's easier to hit a 95W maximum at 2.3GHz than at 4.2GHz; the problem is that the server parts are aiming for two entire chips at what, 125-130W, so at 2.3GHz they aren't still aiming for 95W, but closer to 60W.

Because various parts of the CPU (memory controller, etc.) draw similar power regardless of clock, dropping clocks doesn't scale very well with power usage. Gating parts and turning them off at idle is awesome for power saving; dropping clocks does smeg all. On a Phenom II, I save about 10W going from 3.8GHz at 1.4V down to 3.5GHz at 1.35V. For the server parts the clocks are lower, but the target TDP is also lower, and they HAVE to hit that target. AMD and Intel have had no issue with higher-wattage parts, and releasing them higher than expected, even at 140W, wouldn't have hurt all that much, especially on the desktop, where no one really cares or pays attention to power bills or real power usage.

If you have chips working over their TDP then servers aren't the place for them; they're the last place to put them. One chip at 125W, which is how the highest parts are rated, is far easier than two at a similar TDP.
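Just to put rough numbers on that budget point, here's a quick back-of-the-envelope sketch; the package TDP, die count and uncore figure are illustrative guesses, not AMD's actual specs:

[code]
# Rough per-die power budget for a dual-die server package.
# All numbers are illustrative guesses, not real AMD figures.

PACKAGE_TDP = 125.0    # W, the whole 16-core package (guess)
DIES = 2               # two 8-core dies per package
UNCORE_PER_DIE = 15.0  # W for memory controller, links, etc. (guess)

per_die_budget = PACKAGE_TDP / DIES             # 62.5 W per die
cores_budget = per_die_budget - UNCORE_PER_DIE  # ~47.5 W for the 8 cores

print(f"per die: {per_die_budget:.1f} W, cores only: {cores_budget:.1f} W")
# A desktop die gets the whole 95-125 W to itself, so the server die has to
# hit roughly half the power of the desktop part, not just "the same 95 W
# at a lower clock".
[/code]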
 
Obviously, at the moment they have a place.
But we're seeing the gap increase. My point is: in a few years, if the current trend continues, will AMD be able to offer the performance needed?


But Llano is newer than Phenom II and has the same IPC as Athlon II; Llano is also newer than SB.
SB-E is out this year, and Ivy is out in a few months.

The gap will only increase. That's what I'm getting at.

But do you believe they are doing it on purpose, or that they just CAN'T? I highly doubt they can't; maybe they're aiming for another market?
 
Llano is old though; it was just waiting on the process, which was late. It will be updated with Bulldozer cores, and either way Intel has nothing that can compete with the GPU on Llano. It has the same IPC as Athlon II instead of Phenom II due to the cache, nothing more or less.

SB-E is out this year, but it's irrelevant: it's expensive, and it's a market AMD haven't gone after successfully at any stage, nor should they. Ivy isn't out in a few months; at this stage it's six months away, bare minimum. SB-E was originally due late Q1 2011 and is now barely going to make Q4, if indeed it does. Ivy was supposed to be very early Q1 and now looks like it might be lucky to make Q2, and it's still far enough away that it could slip further. Bulldozer sounds like three quarters late, but it was always going to be VERY late Q2 at the earliest and will now launch very early Q4, which is realistically only just over a quarter late; a smaller slip than SB-E, which is based on an existing architecture on a two-year-old process.

Process problems are affecting everyone.

As for the latest numbers, if real: it's being suggested that the ESs had prefetching disabled. Prefetching cut latency in HALF for Sandy Bridge and is responsible for most of Sandy Bridge's increase in IPC, i.e. a Bulldozer with prefetching turned on would have vastly different performance.
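To see why prefetching can swing IPC that much, here's a toy average-CPI model; the base CPI, miss rate and penalties are made up purely for illustration, with the only assumption being that prefetching roughly halves the effective miss penalty:

[code]
# Toy model: CPI = base CPI + (misses per instruction * miss penalty).
# Made-up numbers, just to show how halving effective latency lifts IPC.

base_cpi = 0.8           # CPI if memory were free (guess)
misses_per_instr = 0.01  # 1 miss per 100 instructions (guess)

for label, penalty in [("prefetch off", 200), ("prefetch on", 100)]:
    cpi = base_cpi + misses_per_instr * penalty
    print(f"{label}: CPI {cpi:.2f}, IPC {1 / cpi:.2f}")
# prefetch off: CPI 2.80, IPC 0.36
# prefetch on:  CPI 1.80, IPC 0.56  -> ~55% more IPC from latency alone
[/code]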

In one of the benchies (I forget which) it gets 54-ish, a quad-core Phenom II gets 50, and a hex-core Phenom gets 70, so 54 is just WAY off. There's nothing at all to suggest that's final performance; if the IPC improves quite a bit and it gets a bump in clock speed, there's no telling where it will be vs Sandy.

Likewise, Ivy isn't going to offer a significant IPC increase; Intel won't until Haswell, and that was due Q4 2012 back when SB-E was still due Q1 2011. Basically, this early there's no way to tell how far out Haswell is at all. It's possible it won't be delayed, but in reality I'd expect a slip around the same as Ivy's, so 5-6 months, which means Intel's next big IPC-increasing product is due mid-2013, and AMD have two new Bulldozers to come before then.

When you look at things without the rose-tinted glasses, Intel don't have anything dramatically improved for the best part of at least 18 months, maybe more.
 
This was my point, you even said it: heat and TDP are a BIGGER issue for servers, not for home, and it's NOT easier to get the TDP right with lower clocks. Sure, it's easier to hit a 95W maximum at 2.3GHz than at 4.2GHz; the problem is that the server parts are aiming for two entire chips at what, 125-130W, so at 2.3GHz they aren't still aiming for 95W, but closer to 60W.

Because various parts of the CPU (memory controller, etc.) draw similar power regardless of clock, dropping clocks doesn't scale very well with power usage. Gating parts and turning them off at idle is awesome for power saving; dropping clocks does smeg all. On a Phenom II, I save about 10W going from 3.8GHz at 1.4V down to 3.5GHz at 1.35V. For the server parts the clocks are lower, but the target TDP is also lower, and they HAVE to hit that target. AMD and Intel have had no issue with higher-wattage parts, and releasing them higher than expected, even at 140W, wouldn't have hurt all that much, especially on the desktop, where no one really cares or pays attention to power bills or real power usage.

If you have chips working over their TDP then servers aren't the place for them; they're the last place to put them. One chip at 125W, which is how the highest parts are rated, is far easier than two at a similar TDP.

I'm not going to split hairs with you.

Which again is easier with lower clocks and requires a lower vcore, which in turn produces less heat.

Example: stability at 2.6GHz at 95W is easier to hit than at 3.2GHz at 95W. More of the chips from a batch will be capable of 2.6GHz at 95W than 3.2GHz at 95W, because more will require a vcore increase for stability at 3.2GHz than the few that do not; but 2.6GHz is enough for the server market initially, so most go out as 2.6GHz.
And I'm not going to split hairs over amps.
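The binning point is easy to sketch with a toy model: give every die off the line a random maximum stable clock at the 95W limit and count how many make each bin. The distribution below is invented, just to show the shape of the argument:

[code]
# Toy binning model: each die gets a random max stable clock at a fixed
# 95 W / stock-vcore limit. Distribution parameters are invented.
import random

random.seed(1)
batch = [random.gauss(3.0, 0.35) for _ in range(10_000)]  # GHz

for target in (2.6, 3.2):
    good = sum(clk >= target for clk in batch)
    print(f"{target} GHz bin: {good / len(batch):.0%} of the batch")
# 2.6 GHz bin: ~87% of the batch
# 3.2 GHz bin: ~28% of the batch  -> far fewer dies make the faster bin
[/code]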

Even though heat is less of an issue for home users, that does not guarantee you can hit the clocks you would like, and the clocks may be a bigger issue for the design and process. But they need to be at a given clock for home use, even at 125W-140W, as they may have hit a clock wall which they are finding hard to get around.

AMD Ships World's First 16-Core Server CPUs, New Fusion Chips

Company, however, is reportedly struggling with clock speeds for quad-, octa-core Bulldozers
http://www.dailytech.com/AMD+Ships+Worlds+First+16Core+Server+CPUs+New+Fusion+Chips/article22658.htm

So basically you're arguing because I said 4+4 instead of 1+2+5, which is detail that isn't necessary to go into in this thread, because people get the gist already, and I'm totally not interested in trying to give out the long-winded exacts.
I'm not saying I don't enjoy reading yours, but it's not something I can be bothered to do unless I think it's really necessary.
 
I'm not going to split hairs with you.

Which again is easier with lower clocks and requires a lower vcore, which in turn produces less heat.

Example: stability at 2.6GHz at 95W is easier to hit than at 3.2GHz at 95W. More of the chips from a batch will be capable of 2.6GHz at 95W than 3.2GHz at 95W, because more will require a vcore increase for stability at 3.2GHz than the few that do not; but 2.6GHz is enough for the server market initially, so most go out as 2.6GHz.
And I'm not going to split hairs over amps.

I'm not splitting hairs; you're missing what I'm saying completely.

A chip finds it easier to get under 95W at 2.6GHz than at 3.2GHz. The point you're missing is that the 2.6GHz server chip actually has to be at 55-60W, because they stick two dies on one package and that entire 16-core part has to use not much more than a single chip.

Clock speed hasn't got anywhere near as much to do with heat as voltage. While you can run a lower voltage at 2.6GHz than at 3.2GHz, it's not "that" much less. Take a Phenom II as an example: the voltage required to run at 1.5GHz isn't much less than at 3.2GHz, and not that much more is needed to hit 3.8GHz, while hitting 4-4.2GHz takes a huge increase.
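That's just the usual dynamic-power relation, P ≈ C·V²·f. Plugging in some made-up Phenom II style voltage/clock pairs (illustrative, not measured) shows why the last few hundred MHz cost so much:

[code]
# Dynamic power scales roughly as C * V^2 * f. The V/f pairs below are
# made up, but shaped like a typical Phenom II voltage curve.
C = 10.0  # arbitrary scale factor so the watts look plausible

points = [(1.5, 1.20), (2.6, 1.25), (3.2, 1.30), (3.8, 1.40), (4.2, 1.55)]
for f, v in points:
    print(f"{f} GHz @ {v:.2f} V -> ~{C * v * v * f:.0f} W dynamic")
# 1.5 GHz @ 1.20 V -> ~22 W
# 2.6 GHz @ 1.25 V -> ~41 W
# 3.2 GHz @ 1.30 V -> ~54 W
# 3.8 GHz @ 1.40 V -> ~75 W
# 4.2 GHz @ 1.55 V -> ~101 W  <- the voltage jump, not the clock, does the damage
[/code]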

If the TDP target were the same then yes, you'd be right, but the TDP target is much lower than for two individual chips; it's not a 190-250W part (with Bulldozer listed at 95-125W). And that's not including the quad-channel memory controller, which will increase power usage noticeably.

The idea that sticking two 8-core CPUs with quad-channel memory controllers on a single package, with a TDP not much more than a single Bulldozer's, is "easier" is just plain wrong.

Of course, we just ran into an issue, didn't we: AMD server is quad channel, AMD desktop is dual channel. It's very unlikely they are in fact the same dies, binned for one use or the other, as an extra dual-channel memory controller would be hugely wasteful on desktop Bulldozers.

Which again puts us back to: they have limited capacity thanks to GloFo's 32nm not being great, and with production limited and the ability to easily sell everything you can make, it's £1500+ chips or £100-250 chips. Pretty simple.


Don't forget that Llano is also heavily capacity constrained, yet it's got a lower TDP and is 33% smaller than the desktop Bulldozer, and even more so than Interlagos (if you look at each individual 8-core die inside it). We've seen, AFAIK, earlier Bulldozers clocked up pretty easily, we've seen the targeted turbo speeds of 4.2GHz, and we're STILL seeing a 95W version available for launch. None of that adds up to clock speed or TDP problems, just yields.

There are chips that work but at lower speeds, and chips that just don't work. When it's the former, you release, just a touch slower, and add a couple of lower-end parts to recoup as much money as possible; when it's the latter, you get delayed launches, supply problems, etc. Llano hit its TDP targets; it's a fairly large chip with a pretty huge GPU for a CPU, and they just can't make enough of them. Again, every single thing I can see suggests yield problems, not actual problems with the chips that come out working.
 
I'm not splitting hairs; you're missing what I'm saying completely.

A chip finds it easier to get under 95W at 2.6GHz than at 3.2GHz. The point you're missing is that the 2.6GHz server chip actually has to be at 55-60W, because they stick two dies on one package and that entire 16-core part has to use not much more than a single chip.

Well, you are splitting hairs, because you're pointing out 55-60W and I already told you the actual specifics were not my point.

My point had nothing to do with what the server chips actually use, but that on average they use lower-clocked parts. Pointing out the specifics of why was not my point, and the fact is they are shipping and the consumer chip is not.

They are having more problems with the consumer chip than with the server chip, so clearly in this case the server chip was easier to meet, IMO. Not needing the same high clock as a consumer chip helps a lot, because otherwise the power and heat restrictions would have made it really hard, and it would likely be shipping after the consumer chip.
It's easier to add more cores than it is to keep adding GHz, but just adding more cores doesn't cut it for the consumer, because of the software the average consumer uses, so they still need to push the GHz. Even if both consumer and server parts had exactly the same power and heat restrictions, the consumer chip would still need higher GHz to get the job done, as there will be fewer cores in a consumer product, and that would be harder to produce.

I do both complex and simple work, the simple not too often, and a job may be intrinsically more complex and mentally challenging yet actually easier to produce in larger quantities, faster, than some of the simpler jobs, because of different aspects and requirements.
 
So, it's well overdue and slower than Intel. Nice one, AMD.

Slower than SB clock for clock, but it should be a monster in the apps that can make use of 8 cores.
But there aren't many of those yet, are there?
Sure, if a set of benchmarks showed 10 programs that can utilise 8 cores, BD would be massively in the lead; make it 4 v 4, like the FX-4100 vs SB, and SB would come out ahead.

HOWEVER, it all depends on price. You never know, the 8150 might cost only a few quid more than a 2500K..
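A toy model of that cores-versus-clocks trade; the "20% slower per core, twice the cores" figures are invented for illustration, and perfect scaling up to the core count is assumed:

[code]
# Toy throughput model for "8 slow cores vs 4 fast cores".
# Per-core speeds are invented; the point is the crossover, not the values.

def throughput(per_core, cores, threads):
    return per_core * min(cores, threads)  # perfect scaling up to core count

SB_CORE, BD_CORE = 1.0, 0.8  # assume BD ~20% slower per core

for threads in (1, 2, 4, 8):
    sb = throughput(SB_CORE, 4, threads)
    bd = throughput(BD_CORE, 8, threads)
    print(f"{threads} threads: SB {sb:.1f} vs BD {bd:.1f}")
# 1-4 threads: SB ahead; at 8 threads: BD 6.4 vs SB 4.0
[/code]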
 
Quite a few of the brainy chaps on SemiAccurate think the results from SiSandra were very questionable; apparently the same benchmarks were floated around a week ago and were rather dubious in validity. The search continues, haha.

How long do you think it will be before someone gets a server Bulldozer and slaps up some benchmarks with a PCIe card in the system? :D
 