Intel and their 1 teraflop chip

Apologies if this is old hat, but:

http://www.bbc.co.uk/news/technology-15758057

What interests me most is the comment from Mr Reynolds of Nvidia at the bottom:

"Intel has a technology advantage because its manufacturing processes can make transistors half the size and more efficient but Nvidia will catch up," he said.

His PR people must be spinning in their graves...

How long before this technology hits the shelves, do people reckon?
 
If it came out of a lab using non-standard production equipment: decades.

If it came off the end of a production line: a few years.

If it's x times more powerful than a Core 2, it will be £10,000 a pop and high-end server only for years...
 
Looks like Intel has been canvassing the media successfully!! :p

It's a co-processor like the Tesla cards and their AMD equivalents. It is based on Larrabee IIRC. The chip AFAIK even makes the 40nm Nvidia Fermi-based GPUs look small despite being 22nm. On top of this, the APIs it uses will determine its success, and will also determine its real-world performance (not theoretical). Intel has only shown a prototype, which is probably running an optimised benchmark, without TDP figures or any specifications being released. ATM the performance of production cards is unknown.

The chip is still a prototype, and older chips like the Nvidia GF110 GPU (GTX 580) used in the Tesla M2090 can already produce 665 GFLOPS DP. The RV870 GPU (HD 5870) used in the AMD FirePro 3D V9800 can already do 544 GFLOPS DP, and that card was released last year. These are based on the older TSMC 40nm process too.
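For anyone wanting to sanity-check those figures, here's a rough back-of-the-envelope sketch (Python; the shader counts and clocks are assumptions taken from the consumer parts, with FMA counted as two FLOPs per lane per cycle):

```python
# Peak GFLOPS = lanes x clock (GHz) x FLOPs per lane per cycle x DP rate.
def peak_gflops(lanes, clock_ghz, flops_per_lane=2, dp_ratio=1.0):
    return lanes * clock_ghz * flops_per_lane * dp_ratio

# GF110 (GTX 580 class): 512 CUDA cores at ~1.3 GHz shader clock,
# with DP at 1/2 the SP rate on the Tesla parts.
print(peak_gflops(512, 1.3, dp_ratio=0.5))    # ~665 GFLOPS DP

# RV870 (HD 5870 class): 1600 stream processors at 0.85 GHz,
# with DP at 1/5 the SP rate.
print(peak_gflops(1600, 0.85, dp_ratio=0.2))  # 544 GFLOPS DP
```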

The next-generation 28nm Nvidia GPUs and even the next AMD GPUs will probably exceed 1TF. These are both based on new architectures which will improve DP performance significantly.

However, I have been following Knights Ferry and Knights Corner for a while. ATM, despite Intel's bragging that it will be the best thing since sliced bread, it seems not a huge number of companies are interested in it even now. AFAIK, there is one design win for it (summer 2011), and even then the statement is vague, i.e., "it will be added when available".

The machine which will be using it will be operational in 2013:

http://www.physorg.com/news/2011-09-texas-stampede-extreme-digital-xd.html

This makes sense if the cards will be launched sometime in 2012. Last year, 2011 was meant to be the actual launch year for Knights Corner. If the cards had already been delivered to customers this year it would be a bigger deal, but they haven't been.
 
It's the start of the next generation of computing. A few conventional cores backed by multiple lightweight cores.

Intel are doing it by scaling down x86 cores and stacking them. AMD/ATI are doing it by re-engineering GPUs to use the x86 instruction set and address space. I'm buggered if I know what nVidia are doing.

People don't understand what Llano represents. It isn't just integrated graphics; it is the first step along the way to this new approach.

Eventually GPUs will cease to exist as purely graphics cards, and the gfx functionality will be an effect of the massively parallel processing offered by the mix of cores.

We are still looking at about another 5 years. Operating Systems, compilers and development strategies will need to change to take advantage.
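As a purely illustrative sketch of that "few conventional cores plus many lightweight cores" model (everything here is made up for illustration; plain Python standing in for heavy cores farming data-parallel work out to lightweight ones):

```python
from concurrent.futures import ProcessPoolExecutor

def lightweight_kernel(chunk):
    # The simple, data-parallel work the many small cores would grind through.
    return sum(x * x for x in chunk)

def run_on_heavy_core(data, workers=32):
    # The conventional core splits the problem, dispatches it, gathers results.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lightweight_kernel, chunks))

if __name__ == "__main__":
    print(run_on_heavy_core(list(range(1_000_000))))
```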

Looking forward to it, as it will be an arms race, and enthusiasts are going to have loads of fun toys to play with.
 
It's much more interesting than you think, and "not remotely for the average user" would be the answer.

Semi has a good editor's note with some details for comparison. It's got over 50 cores, which would logically point at 64 cores: for maths reasons 64 simply works best and would be a good design goal. The fact they are claiming 50+ probably suggests that, like Fermi, yields are questionable at this stage; it's a 64-core design but could launch with as few as 50 cores working.

As for the cores, in terms of the power of the various cards: Intel has, we're assuming, 64 512-bit vector cores, while AMD/Nvidia go with single-precision units and double them up to get double precision. Basically it adds up, roughly speaking, to a single 512-bit vector core for Intel being worth 16 single-precision shaders in terms of power.

So if you want to compare it in 32-bit instructions (single precision, the shader count AMD/Nvidia quote) then it's 64x16, or 1024 shaders, which is likely what Nvidia WANT for their 28nm cards (though if they've learnt their lesson on 500mm2+ cores they might drop that; going by previous gens, 128-256-512-1024 would be the logical progression for Nvidia).
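A minimal sketch of that lane arithmetic (the 512-bit vector width is the Larrabee-style assumption above, and the 64-core count is a guess, not a confirmed spec):

```python
# Each 512-bit vector register holds 512 / 32 = 16 single-precision lanes,
# so an assumed 64-core part lines up against 64 x 16 = 1024 SP "shaders".
VECTOR_WIDTH_BITS = 512   # assumed Larrabee-style vector unit
SP_LANE_BITS = 32         # one single-precision float
CORES = 64                # guessed full-die core count (50+ claimed)

lanes_per_core = VECTOR_WIDTH_BITS // SP_LANE_BITS   # 16
shader_equivalent = CORES * lanes_per_core           # 1024
print(lanes_per_core, shader_equivalent)
```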

DP/SP rates are essentially worthless; they are mostly quoting only peak rates anyway. With the kinds of work being done, various architectures will win on various software. When something can use VLIW5/4 brilliantly it can spank Nvidia; when it can't, it often won't win in anything but graphics.

The question would be how efficient Intel's card is in terms of real throughput compared to peak. It also has x86, so it will be almost completely natively compatible with a crapload of software... will that help? Are most HPC-type customers fine with writing their own software to take advantage of any hardware? Sure. But it does open things up to more customers who haven't wanted to take the expense and time to specialise for GPUs which change yearly.

If they do memory stacked on the core as well it could have a monumental memory bandwidth advantage.

It's also an entire design dedicated to one thing, while a huge chunk of the Nvidia die is for graphics-related functions.

Intel could have actually made something pretty good this time around. One thing they do best is x86 cores, and a LOT of code is written for x86 already. The one thing Intel have sucked at for a long time is graphics... that was pretty much Larrabee's really weak point.

For the life of me I can't work out why one of the most profitable companies around can't just buy all the best driver writers/designers from Nvidia/AMD/whoever else and just get it done. They don't seem to want to; it's very odd.
 