You're missing the point.
Why would you disable a module, unless you mean emulating other CPUs?
I don't know 100%, but going by that pic it looks that way.
I get you now.
Erm, do we know if that's the case or not?
Since the Windows 7 kernel isn't taking full advantage of the way Bulldozer works, once Windows 8 comes out, would Bulldozer with its 8 cores end up being faster than a locked Sandy Bridge 2400?
Genuinely angry about this debacle....
The only winner today is Intel; everyone else loses. Prices will go up, and the move to 6-core mainstream Intel CPUs has probably just been put back 18 months.
Something tells me that after the inevitable sackings and the dust settles, some real horror stories about this CPU's development will emerge.
Because why have 4 x 2MB? Why not have 1 x 8MB, like the Phenom II has 1 x 6MB?
The thing is, whether games are using up-to-date engines or not is a bit of a moot point when it comes to buying a CPU for gaming; you want good performance regardless of the reason for it. Bashing game engines is pretty common on forums... "you can't count GTA4, that is just badly coded" or whatever, but as a gamer all that matters to me is which hardware is giving the best results, regardless of how 'fair' the game may be on given hardware.
Also, looking at the benchmarks, it seems that not all games are running at high FPS. For example, Anand's SC2 bench clocks the 8150 at under 50fps average (meaning min fps is probably significantly lower). Likewise DoW2: an average of 51.5fps, compared to the Sandy Bridge series averaging over 30fps more. The worry is that it seems to be the more traditionally CPU-intensive games/genres (RTS) where BD is suffering the most, whereas for the more GPU-intensive games everything tends to bunch together at 'proper' resolutions.
I understand where you are coming from with the suggestion that BD's biggest deficiencies arise in scenarios that shouldn't be that common, but sadly it just doesn't have the redeeming features to make up for it. If BD had 12 cores and/or was churning out truly stellar multithreaded performance, we might forgive lacklustre performance in lightly threaded applications, assuming the power consumption wasn't any higher than it already is. But it simply isn't there; aside from maybe the odd scenario, it certainly isn't crushing the 2600K even on its 'home turf'.
That's what I'm getting at: if the L3 cache is 2MB per module, then basically each core only has 2MB to use, or to share with the other core.
The way the L3 cache is built, with 4 x 2MB chunks, effectively 2MB per module, does this imply that the 6-core and 4-core chips (both having 8MB L3) are actually crippled 8-core chips?
Edit: And are there actually any reviews yet of the proper 4- and 6-core chips?
We will have to wait and see when somebody gets hold of one and tries Windows 8 on it; I am sure it has the updated scheduler etc.
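For what it's worth, the scheduler change people are hoping for boils down to being "module-aware": spread threads one per module first, so two threads only share a module's front end and FPU once every module already has work. This is just a toy sketch of that idea (the 4-module/8-core layout matches Bulldozer, but the function and core numbering are my own assumptions, not actual Windows scheduler code):

```python
# Toy sketch of module-aware thread placement on a Bulldozer-style CPU:
# 4 modules, 2 cores each (8 cores total). Cores 0-1 sit in module 0,
# cores 2-3 in module 1, and so on. NOT real scheduler code.

MODULES = 4
CORES_PER_MODULE = 2

def place_threads(n_threads):
    """Assign up to 8 threads to cores, filling one core per module
    first, so threads only share a module once all modules are busy."""
    placement = []
    for t in range(n_threads):
        if t < MODULES:
            # First pass: one thread per module (first core of each module).
            core = t * CORES_PER_MODULE
        else:
            # Second pass: fill the second core of each module.
            core = (t - MODULES) * CORES_PER_MODULE + 1
        placement.append(core)
    return placement

print(place_threads(4))  # [0, 2, 4, 6] - one core per module, no sharing
print(place_threads(6))  # [0, 2, 4, 6, 1, 3] - modules 0 and 1 now shared
```

A Windows 7-style scheduler that treats all 8 cores as equal might instead put 4 threads on cores 0-3 (two busy modules, two idle), which is exactly the contention the Windows 8 scheduler is supposed to avoid.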
That's what I'm getting at: if the L3 cache is 2MB per module, then basically each core only has 2MB to use, or to share with the other core.
On a Phenom II the cores have access to the full 6MB, or share it between the other cores.
So why is the L3 cache cut into 4 x 2MB chunks? Surely it would be better as 1 x 8MB.
The L3 cache is shared across all modules. If you have only one thread running, the core it is running on has access to its own level 1 cache, the shared 2MB L2 cache, and all 8MB of the level 3 cache.
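To make the distinction concrete: the "4 x 2MB" in the die shot is physical layout, not a hard partition. Here's a trivial toy model (names and numbers are mine, just for illustration) comparing what one thread could use under a shared L3 versus the hypothetical hard per-module split being worried about:

```python
# Toy model of the two L3 readings being discussed (sizes in MB).
# Shared L3 (what Bulldozer actually does): one thread can use all 8MB.
# Hard per-module partition (the "4 x 2MB" misreading): one thread
# would be stuck with its own module's 2MB slice.

L3_TOTAL_MB = 8
MODULES = 4

def l3_visible_to_one_thread(shared):
    """Return how much L3 a single running thread can use in each model."""
    if shared:
        return L3_TOTAL_MB            # shared across all modules
    return L3_TOTAL_MB // MODULES     # hypothetical hard 2MB-per-module split

print(l3_visible_to_one_thread(shared=True))   # 8
print(l3_visible_to_one_thread(shared=False))  # 2
```

Same logic applies to the Phenom II comparison: its 6MB L3 is likewise one shared pool, so being drawn as chunks on the die doesn't mean the cores are fenced off from each other.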