It's much more interesting than you think, and the answer is that it's not remotely for the average user.
Semi has a good editor's note with some details for comparison. It's got over 50 cores, which logically points at a 64-core design, since powers of two simply work best and make a sensible design goal... the fact they're claiming "50+" probably suggests that, like Fermi, yields are questionable at this stage: it's a 64-core design but could launch with as few as 50 cores working.
As for the cores, in terms of the power of the various cards: Intel is assuming 64 cores each with a 512-bit vector unit, while AMD/Nvidia quote single-precision shaders and pair them up to get double precision. Roughly speaking, a single 512-bit vector core for Intel could be worth 16 single-precision shaders in terms of power.
So if you want to compare it in 32-bit instructions (single precision, the shader count AMD/Nvidia quote), it's 64x16, or 1024 shaders, which is likely what Nvidia WANT for their 28nm cards (though if they've learnt their lesson on 500mm2+ cores, they might drop that; going by previous generations, 128-256-512-1024 would be the logical progression for Nvidia).
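The back-of-the-envelope maths above can be sketched as follows (a sketch, not official specs; the 16-lane figure is the assumption that 512-bit vectors are packed with 32-bit single-precision floats):

```python
# Rough shader-equivalent arithmetic for a hypothetical 512-bit vector core.
# Assumption: SP "shader" equivalents per core = vector width / 32-bit element size.
VECTOR_BITS = 512
SP_BITS = 32

lanes_per_core = VECTOR_BITS // SP_BITS   # 16 SP lanes per core

full_die = 64 * lanes_per_core            # all 64 cores enabled
salvage = 50 * lanes_per_core             # a "50+" salvage part

print(lanes_per_core, full_die, salvage)  # 16 1024 800
```

On those assumptions, a fully enabled 64-core part lines up with 1024 SP shader-equivalents, while a 50-core salvage part would land around 800.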
DP/SP rates are essentially worthless; they're mostly quoting peak rates anyway. With the kinds of work being done, different architectures will win on different software. When something can use VLIW5/4 brilliantly it can spank Nvidia; when it can't, it often won't win at anything but graphics.
The question would be how efficient Intel's card is, in terms of real throughput compared to peak. It's also x86, so it will be almost completely natively compatible with a crapload of software... will that help? Most HPC-type customers are fine with writing their own software to take advantage of whatever hardware, sure. But it does open things up to more customers who haven't wanted to take on the expense and time of specialising for GPUs that change yearly.
If they do stack memory on the core as well, it could have a monumental memory bandwidth advantage.
It's also an entire design dedicated to one thing, while a huge chunk of the Nvidia die is for GPU-related functions.
Intel could actually have made something pretty good this time around. The one thing they do best is x86 cores, and a LOT of code is already written for x86; the one thing Intel have sucked at for a long time is graphics... which was pretty much Larrabee's really weak point.
For the life of me I can't work out why one of the most profitable companies around can't just buy up the best driver writers/designers from Nvidia/AMD/whoever else and get it done. They don't seem to want to; it's very odd.