Indeed, but maybe we'll see it at the driver level without any need for messing around.
Microsoft might have something to say about that.

Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
I can't see how this thing will ever compete with a current-day bespoke GPU. Anyone who's ever coded a software renderer knows how much code is required to do filtered texture lookups - anything higher than bilinear filtering (on one texture) is difficult, and anisotropic filtering is seriously painful. A lack of discrete hardware to do texture sampling will completely clobber fill rate.
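For anyone who hasn't written one: below is a minimal sketch of what a single bilinear fetch costs in plain C++. This is my own illustration, not anything from the article - the row-major RGBA8 layout and repeat wrapping are assumptions.

```cpp
#include <cmath>
#include <cstdint>

// Minimal sketch of one bilinear texture fetch in software.
// Assumes a row-major RGBA8 texture with repeat wrapping; a real
// renderer also handles mip selection, borders, sRGB, and more.
struct Texture {
    const uint32_t* texels; // RGBA8 packed, row-major
    int width, height;
};

// Extract channel c (0..3) of a packed texel as a 0..1 float.
static float channel(uint32_t texel, int c) {
    return float((texel >> (c * 8)) & 0xFF) / 255.0f;
}

// Sample channel c at normalized coordinates (u, v).
float sample_bilinear(const Texture& t, float u, float v, int c) {
    // Map to texel space, centered on texel midpoints.
    float x = u * t.width  - 0.5f;
    float y = v * t.height - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;                 // blend weights

    auto wrap = [](int i, int n) { return ((i % n) + n) % n; };
    int xa = wrap(x0, t.width),  xb = wrap(x0 + 1, t.width);
    int ya = wrap(y0, t.height), yb = wrap(y0 + 1, t.height);

    // Four memory fetches plus three lerps - per channel, per lookup.
    float c00 = channel(t.texels[ya * t.width + xa], c);
    float c10 = channel(t.texels[ya * t.width + xb], c);
    float c01 = channel(t.texels[yb * t.width + xa], c);
    float c11 = channel(t.texels[yb * t.width + xb], c);
    float top = c00 + (c10 - c00) * fx;
    float bot = c01 + (c11 - c01) * fx;
    return top + (bot - top) * fy;
}
```

And that's the easy case: four fetches and three lerps per channel, no mipmaps, no anisotropy. A dedicated texture sampler does all of that in fixed-function silicon.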
There's got to be more hardware/specialist instructions that haven't been disclosed! Otherwise this is gonna compete with the low end...
Intel's graphics drivers are a sham at the moment - nothing works properly, and it will take some major overhauling.
Heh, yeah, but a Core 2 at 4GHz needs what sort of cooling? Even at stock, you can't really compare video card cooling to CPU cooling in sheer size alone! Imagine what you'd need to cool a 3GHz, 12-core shader monster like Larrabee!
The article in the OP states that Intel's driver team aren't working on Larrabee.
I hope a few peeps here at OcUK are able to fully understand that AnandTech article and break it down into pea-brain-sized chunks for us mortals!
The flexibility of Larrabee allows it to best fit any game running on it. But keep in mind that just because software has a greater potential to better utilize the hardware, we won't necessarily see better performance than what is currently out there. The burden is still on Intel to build a part that offers real-world performance that matches or exceeds what is currently out there. Efficiency and adaptability are irrelevant if real performance isn't there to back it up.
ATI has 160 SPs, each capable of 5 operations, whereas NV has 240 SPs plus 64 SFUs (special function units); each SP can do 2 operations and each SFU can do 4. Intel has 32 cores, each with 16 SPs capable of 4 operations each.
ATI: 160 × 5 = 800
NV: 240 × 2 + 64 × 4 = 736
Intel: 32 × 16 × 4 = 2048
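If anyone wants to sanity-check the arithmetic, here it is spelled out - just the figures quoted above, nothing more:

```cpp
#include <cstdio>

// Back-of-envelope peak ops per clock from the figures above.
int main() {
    int ati   = 160 * 5;          // 160 SPs x 5 ops            = 800
    int nv    = 240 * 2 + 64 * 4; // 240 SPs x 2 + 64 SFUs x 4  = 736
    int intel = 32 * 16 * 4;      // 32 cores x 16 SPs x 4 ops  = 2048
    std::printf("ATI %d, NV %d, Intel %d\n", ati, nv, intel);
    return 0;
}
```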
There's also no mention of special AA hardware - we all saw what happened to the 3800 series cards where AA was done with shaders (rather than dedicated hardware).
That's not quite right. You are vastly overstating what each Larrabee core is capable of.
From the AnandTech article: "Each core is a dual-issue, in-order architecture loosely derived from the original Pentium microprocessor. The Pentium core was modified to include support for 64-bit operations, the updates to the x86 instruction set, larger caches, 4-way SMT/Hyper Threading and a 16-wide vector ALU."
I came up with 2048 instead of 1024 because the Hyper-Threading they are going to be using will be 4-way, not 2-way.
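To make "16-wide vector ALU" concrete, here's a rough C++ illustration of what one 16-lane multiply-add does. This is just the idea, not Larrabee's actual instruction set:

```cpp
// Rough illustration of a 16-wide SIMD operation: each "vector op"
// touches 16 lanes at once. Not Larrabee's real ISA - just the idea.
constexpr int kLanes = 16;

struct Vec16 { float lane[kLanes]; };

// One vector multiply-add: on 16-wide hardware this whole loop is a
// single instruction doing 16 lanes' worth of scalar work at once.
Vec16 madd(const Vec16& a, const Vec16& b, const Vec16& c) {
    Vec16 r;
    for (int i = 0; i < kLanes; ++i)
        r.lane[i] = a.lane[i] * b.lane[i] + c.lane[i];
    return r;
}
```

Multiply that per-instruction width by the core count (and, on my reading, the 4-way threading) and you get the 2048 figure above.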