Official Bulldozer Reviews

Since the Windows 7 kernel isn't taking full advantage of the way Bulldozer works, once Windows 8 comes out, would Bulldozer with its 8 cores end up being faster than a locked Sandy Bridge 2400?
 
Going by the layout below, the L3 cache is cut into 4 x 2MB slices, so it's not really a full 8MB block. Also, surely if you disable a module it will also disable one L3 slice?

[Attached image: bulldozercore.jpg (Bulldozer module and cache layout)]
 
I get you now.
Erm, do we know if that's the case or not?
If it is the case (probably is, AMD likes oversights) then it's just bad design; I was banging on about how its module design approach is what cripples it.
 
Since the Windows 7 kernel isn't taking full advantage of the way Bulldozer works, once Windows 8 comes out, would Bulldozer with its 8 cores end up being faster than a locked Sandy Bridge 2400?

We will have to wait and see when somebody gets hold of one and tries Windows 8 on it; I am sure it has the updated scheduler etc.
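To illustrate what the scheduler change is actually about: a module-aware scheduler tries to keep two busy threads on separate modules (so each gets its own FPU and L2) instead of packing them into one module. The snippet below is only a rough sketch of how you could test that effect yourself on Linux by forcing the placement with manual CPU affinity; the assumption that cores 0 and 1 share a module while 0 and 2 don't is mine, not something from the reviews, so check your own core numbering first.

[CODE]
/* Rough sketch only (this is not AMD's or Microsoft's scheduler code):
 * on Bulldozer, two threads packed into the same module share one FPU and
 * one 2MB L2, while threads spread across modules each get a module to
 * themselves. A module-aware scheduler (what Windows 8 is expected to add)
 * makes that placement decision for you; here we force it by hand with CPU
 * affinity so the difference can be timed.
 * Assumption: the OS numbers the cores so that 0 and 1 sit in the same
 * module while 0 and 2 sit in different modules - check your own topology.
 * Build with: gcc -O2 module_test.c -o module_test -lpthread -lrt */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>

static void *spin(void *arg)
{
    /* FPU-heavy busy loop, so the shared floating-point unit becomes the
     * bottleneck when both threads land in the same module. */
    volatile double x = 1.0;
    for (long i = 0; i < 200000000L; i++)
        x = x * 1.0000001 + 0.5;
    return NULL;
}

static double run_on_cpus(int cpu_a, int cpu_b)
{
    struct timespec t0, t1;
    pthread_t a, b;
    cpu_set_t set;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&a, NULL, spin, NULL);
    pthread_create(&b, NULL, spin, NULL);

    /* Pin each worker to one logical CPU. */
    CPU_ZERO(&set); CPU_SET(cpu_a, &set);
    pthread_setaffinity_np(a, sizeof(set), &set);
    CPU_ZERO(&set); CPU_SET(cpu_b, &set);
    pthread_setaffinity_np(b, sizeof(set), &set);

    pthread_join(a, NULL);
    pthread_join(b, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    printf("same module (cores 0+1):      %.2f s\n", run_on_cpus(0, 1));
    printf("separate modules (cores 0+2): %.2f s\n", run_on_cpus(0, 2));
    return 0;
}
[/CODE]

If the module sharing really does hurt, the same-module run should come out noticeably slower than the separate-module run.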
 
Genuinely angry about this debacle....:mad:

The only winners today are Intel, everyone else will lose, prices will go up, and the move to 6-core mainstream Intel CPUs has probably just been put back 18 months.

Something tells me that after the inevitable sackings and the dust settles, some real horror stories about this CPU's development will emerge.

Yep! I am in total agreement with this:(
 
The thing is, whether games are using 'up to date' engines or not is a bit of a moot point when it comes to buying a CPU for gaming; you want good performance regardless of the reason for it. Bashing game engines is pretty common on forums ("you can't count GTA4, that's just badly coded" or whatever), but as a gamer all that matters to me is which hardware is giving the best results, regardless of how 'fair' the game may be on given hardware.

Also, looking at the benchmarks, it seems that not all games are running at high FPS; for example, Anand's SC2 bench clocks the 8150 at under 50fps average (meaning the minimum fps is probably significantly lower). Likewise DoW2: an average of 51.5fps, compared to the Sandy Bridge chips averaging over 30fps more. The worry is that it seems to be the more traditionally CPU-intensive games/genres (RTS) where BD is suffering the most, whereas for the more GPU-intensive games everything tends to bunch together at 'proper' resolutions.

I understand where you are coming from in terms of the suggestion that BD's biggest deficiencies arise in scenarios that shouldn't be that common, but sadly it just doesn't have the redeeming features to make up for it. If BD had 12 cores and/or was churning out truly stellar multithreaded performance, we might forgive lacklustre performance in lightly threaded applications, assuming the power consumption wasn't any higher than it already is. But it simply isn't there; aside from maybe the odd scenario, it certainly isn't crushing the 2600K even on its 'home turf'.

I don't disagree with your point, but I'd like to counter with the fact that poor performance in tasks such as RTS games is probably a bit less relevant simply because they're RTS games. While there are indeed some use cases where the lack of ST performance will matter to a user, the average user like me will probably be multi-tasking a bunch of ST apps which don't require any real CPU grunt, gaming, using compression, encoding, and possibly doing some compiling.

In the case of the 2500K, the majority of your advantage in the former tasks will be spent idling, waiting, or gaining additional FPS that doesn't add anything to your gaming experience, while you'll be losing out in all the latter tasks mentioned above. On the other hand, the 2600K will be about equal in the latter tasks, but will simply cost more. I'm one of those hardcore competitive gamer types who demands a minimum FPS over 100, but I'm not confident that the Intel CPUs would offer much real-world value beyond what I'd get from AMD today, never mind when software is optimised in the future.

We all seem to have different opinions based on anecdotal evidence, though. At the end of the day I'm not overly enthusiastic about the new chips; I just don't understand everyone's insistence that they're oh so bad, and I do believe they can serve their purpose in the market.
 
Given that the L3 cache is built from 4 x 2MB chunks, effectively 2MB per module, does this imply that the 6-core and 4-core chips (both having 8MB L3) are actually crippled 8-core chips?

Edit: And are there actually any reviews yet of the proper 4- and 6-core chips?
 
Given that the L3 cache is built from 4 x 2MB chunks, effectively 2MB per module, does this imply that the 6-core and 4-core chips (both having 8MB L3) are actually crippled 8-core chips?

Edit: And are there actually any reviews yet of the proper 4- and 6-core chips?
That's what I'm getting at: if the L3 cache is 2MB per module, then basically each core only has 2MB to use or share with the other core.

On a Phenom II the cores can have access to the full 6MB, or share it between the other cores.
 
Given that the L3 cache is built from 4 x 2MB chunks, effectively 2MB per module, does this imply that the 6-core and 4-core chips (both having 8MB L3) are actually crippled 8-core chips?

Edit: And are there actually any reviews yet of the proper 4- and 6-core chips?

I believe the implication is that they'll only make one chip, but will disable modules based on manufacturing defects. The end result is that your 4-core CPU will be an 8-core CPU with modules disabled, which may open the market to module unlocking. On the other hand, they could go to the effort of permanently disabling the additional cores, but AMD haven't done that in a while as far as I know.
 
That's what I'm getting at: if the L3 cache is 2MB per module, then basically each core only has 2MB to use or share with the other core.

On a Phenom II the cores can have access to the full 6MB, or share it between the other cores.

The L3 cache is shared across all modules. If you have only one thread running, the core it is running on has access to its own level 1 cache, the module's shared 2MB L2 cache, and all 8MB of the level 3 cache.
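For anyone who wants to check that rather than take it on faith: on Linux the kernel exposes the cache hierarchy under /sys/devices/system/cpu, and a quick dump for cpu0 should show a private L1, a 2MB L2 whose sharing list names the two cores of the module, and the 8MB L3 shared by all eight cores. The following is only a rough sketch that reads those standard sysfs files; nothing Bulldozer-specific is assumed beyond the sizes mentioned in this thread.

[CODE]
/* Rough sketch: print the cache hierarchy the Linux kernel reports for
 * cpu0. On an FX-8150 the expectation (per this thread) is a private L1,
 * a 2MB L2 whose shared_cpu_list names the two cores of one module, and
 * an 8MB L3 whose shared_cpu_list names all eight cores.
 * Assumes the standard /sys/devices/system/cpu sysfs layout. */
#include <stdio.h>

static void print_entry(int idx, const char *name)
{
    char path[256], buf[128];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/devices/system/cpu/cpu0/cache/index%d/%s", idx, name);
    f = fopen(path, "r");
    if (!f)
        return;
    if (fgets(buf, sizeof(buf), f))
        printf("  %-16s %s", name, buf);   /* sysfs values end with '\n' */
    fclose(f);
}

int main(void)
{
    char path[256];

    for (int idx = 0; ; idx++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cache/index%d/level", idx);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                         /* no more cache levels */
        fclose(f);

        printf("cache index%d:\n", idx);
        print_entry(idx, "level");
        print_entry(idx, "type");
        print_entry(idx, "size");
        print_entry(idx, "shared_cpu_list");
    }
    return 0;
}
[/CODE]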
 
The L3 cache is shared across all modules. If you have only one thread running, the core it is running on has access to its own level 1 cache, the module's shared 2MB L2 cache, and all 8MB of the level 3 cache.
So why is the L3 cache cut into 4 x 2MB slices? Surely it would be better as 1 x 8MB?
 
So why is the L3 cache cut into 4 x 2MB slices? Surely it would be better as 1 x 8MB?

Probably so that they can disable part of the L3 cache in cheaper 4- and 6-core variants. That way they can still sell chips that were manufactured with faulty L3 cache; if it were one big 8MB block they would have to sell all of them with the full 8MB L3, and any faulty chips would have to be discarded instead of being sold as a cheaper variant.

Poorly manufactured Phenom X4s, with their one big chunk of L3, had to be sold as Athlons with the L3 cache disabled completely; that was probably quite wasteful once yields improved.
 