Detailed Intel Haswell specs revealed

Not sure how accurate this is but there was a rumor that B2 (i.e. 8150/8120) was only released because any more delays would be a PR disaster, and B3 (supposedly 8170) is what Bulldozer was supposed to be from the start.

That can't be accurate. How can a CPU with a higher clock speed all of a sudden draw less power, run cooler and still beat SB?
Seems like more AMD lies to me
 

A new stepping and improved process technology could easily mean CPUs are faster, cooler and use less power. Just look at the GTX 480 (GF100) vs GTX 580 (GF110); yes, I know this isn't a perfect comparison, but it gives you an idea of how things can be made faster and more efficient. In fact the rumour isn't really about whether B3 will be better all round (that's almost a given), but that AMD let out a chip they knew wasn't ready (B2) because they couldn't handle another delay.
 

That last part does make sense, actually. Judging by how bad their delays were and how impatient people were getting and buying SB (I almost did because I was fed up of waiting), it sounds like good marketing to buy themselves time, although they may have ruined some credibility with consumers over it. Maybe they knew it was going to happen and it was damage limitation?
 
To be fair, I expected a new socket - whenever any half-significant change is made to the architecture or the memory systems, a new socket is necessary.
 

I agree, I think a new socket is needed, but they will have to wait ages for that now. People already have AM3+ boards and will be pretty annoyed if they're told "oh, that motherboard you just splashed out on for our awful CPU - yeah, you need a different one for it to actually work".
 
Something else to blame AMD for: if they hadn't dropped the ball with Bulldozer, Intel would be forced to up the core count.

By then Bulldozer will be mature and AMD will be adding cores at will. Windows 8, which will love more cores, will help. (Windows 8 is ARM-ready, and there'll be numerous cores on a die.)

AMD may not be as daft as they look.
 

The OS might love more cores, but unless programs can make use of them (much as they don't make full use of the FX8 now), it nets the same result.
 

Possibly, but you only have to look as far as the latest DirectX and video card drivers to see that the future is currently intended to be multi-threaded.

I think BF3 is the first game to make use of these new enhancements, but I may be wrong.
 

BF3 is a GPU-limited game, so it's a bad comparison; it could use all 8 threads, but it wouldn't make a difference.

Everything's pretty much multithreaded these days, but how heavily threaded a program is, among other things, contributes to end performance.

So in BF3, for example, you'd probably notice nothing between an i5 2300 and an FX8.
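The point about how heavily threaded a program is can be made concrete with Amdahl's law: the speedup from extra cores is capped by the serial fraction of the workload. A quick sketch (the parallel fractions here are illustrative guesses, not measurements of any real game):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only `parallel_fraction`
    of the work can be spread across extra cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A workload that is 60% parallelisable barely gains from 4 -> 8 cores...
print(round(amdahl_speedup(0.60, 4), 2))  # 1.82
print(round(amdahl_speedup(0.60, 8), 2))  # 2.11
# ...while a 95% parallel workload keeps scaling.
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
```

Which is why an FX8's extra threads net nothing in a game whose engine leaves most of the frame on one or two threads.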
 
BD B3 won't change the IPC; what it might do is tune the existing design to reduce gate leakage and decrease signal propagation delay enough to increase the clock rate and reduce power consumption.
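The apparent paradox earlier in the thread (higher clock yet lower power) isn't one: dynamic CMOS power scales roughly as C·V²·f, so if a stepping lets the voltage drop, the V² term can more than pay for a clock bump. A back-of-the-envelope sketch with purely illustrative voltages and clocks:

```python
def dynamic_power(c, v, f):
    """Approximate CMOS dynamic power: P = C * V^2 * f."""
    return c * v * v * f

# Hypothetical B2: 1.4 V at 3.6 GHz (normalised capacitance of 1.0)
p_b2 = dynamic_power(1.0, 1.4, 3.6e9)
# Hypothetical B3: leakage/timing fixes allow 1.25 V at 4.0 GHz
p_b3 = dynamic_power(1.0, 1.25, 4.0e9)

# ~11% less dynamic power despite an ~11% higher clock
print(round(p_b3 / p_b2, 2))  # 0.89
```

Static (leakage) power is a separate term on top of this, which is exactly what a stepping like B3 would also target.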
 

I was merely making the point that devs are just starting to play with the tech that's provided to them. I suppose the biggest driver of this aspect will be the hardware that goes into the next gen of consoles.
 
The hardware that goes into the next-gen consoles will probably only be equivalent to maybe an i5 and a 6850, so I don't think that will affect things too much. PC will always be the top dog in terms of performance.


It also wouldn't surprise me if the next-gen consoles feature a Trinity APU or something similar
 

That's the first thing they need to get under control, as they can then start to get more liberal with transistor placement.
 

The PS3 already runs 8 threads; it's a given the PS4 will be more than that.
 
Possibly, we don't know what's in those cores yet ;)

If Intel added a second integer core and had 4 modules, but decided to still call them cores, it would be a fundamentally different chip...

Core count means nothing; core composition is everything.

I was under the impression for a while now that Haswell will be the move to 6 or 8 cores in the mainstream, and it makes sense when you consider the size of the die and, frankly, how long they've been on quad cores. Was the Q6600 out in 2006-2007, maybe?

Honestly, right now SB is tiny and that's with a GPU; Ivy will have a bigger GPU but the chip will still be significantly smaller. An octo-core without a GPU would be, or should be, smaller than a current Sandy Bridge.

Die size realistically dictates cost, so 22nm should have seen an increase in core count, if not an increase in individual core size, which would happen if each core got much stronger. I said before Bulldozer, and after, that what you call it is irrelevant. Hopefully Haswell will offer a big increase in performance; if not, meh :(


Why should they bring out anything more than a quad when they can release a quad and then double it when AMD gets their act together?
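"Die size dictates cost" can be put in rough numbers with the standard dies-per-wafer estimate plus a Poisson yield model. The 300 mm wafer and the defect density below are illustrative assumptions; the die areas are in the range quoted in this thread (162 mm² Ivy, ~185 mm² Haswell):

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Rough gross-die count: wafer area / die area,
    minus the usual edge-loss correction term."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def yield_fraction(die_area_mm2, defects_per_mm2):
    """Simple Poisson yield model: bigger dies catch more defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Good dies per 300 mm wafer at an assumed 0.002 defects/mm^2
for area in (162, 185):
    good = dies_per_wafer(300, area) * yield_fraction(area, 0.002)
    print(area, "mm^2 ->", round(good), "good dies")
```

Both terms move against you as the die grows: fewer candidates per wafer, and a smaller fraction of them come out clean, which is why a bigger iGPU or more cores isn't free.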
 
Since images of a pre-production Haswell chip have been leaked I figured I may as well give this thread a bump rather than creating a new one...

[Image: haswell2.png]


While the Sandy Bridge architecture is currently still the latest from Intel and we have just had a taste of the 22 nm shrink Ivy Bridge, pictures have been leaked of the next-generation architecture Haswell processor. The GPU area of the die is said to be twice the size of Sandy Bridge's iGPU, and this shows in a larger overall die size. While Ivy Bridge will have a die size of 162 mm², Haswell weighs in at a slightly heftier ~185 mm².
 
I really don't get this current trend/push for better integrated GPUs in CPUs.

Simple really: most PC users aren't gamers, so if they can remove the need for a separate video card for a lot of users, why not? Also, with OpenCL and the like, the onboard GPU can help out the CPU in tasks that a video card is better at.
 
Better integration of CPU and GPU will benefit us all, not just users who use integrated graphics. AMD are making big leaps in this area too:
AMD's CTO Mark Papermaster just put up this slide that shows its HSA (Heterogeneous Systems Architecture) roadmap through 2014. This year we got Graphics Core Next, but next year we'll see a unified address space that both AMD CPUs and GPUs can access (today CPUs and GPUs mostly store separate copies of data in separate memory spaces). In 2014 AMD plans to deliver HSA compatible GPUs that allow for true heterogeneous computing where workloads will run, seamlessly, on both CPUs and GPUs in parallel. The latter is something we've been waiting on for years now but AMD seems committed to delivering it in a major way in just two years.
 