
Fermi delayed till May

The original spec was... by the time it was actually working hardware, it was barely competitive with the 7800 GTX... unfortunately, no matter how good the hardware or how much money they chuck at it, Intel simply won't succeed in this area without a massive shift in approach and mindset.

The chip was to be released in 2010 as the core of a consumer 3D graphics card, but these plans were cancelled due to delays and disappointing early performance figures.
 
I'd love to see another player... completely "out there", but I'd love to see Intel put the resources behind turning ARM Mali into a high-end desktop GPU :D
 

I'd love to see 3DFX reincarnated in some form or another. Thing is, they were bought by NVIDIA, and the only other independent chip maker that interested me a while back (BitBoys Oy) was bought by ATi ages ago.
 

Just thinking... the power usage/heat with that is absolutely tiny, and the design very scalable - you could just tack a whole load of those cores together and it would probably eventually outperform Fermi while using less power lol. OK, they are a LONG way from a desktop gaming GPU.
 
The original spec was... by the time it was actually working hardware, it was barely competitive with the 7800 GTX... unfortunately, no matter how good the hardware or how much money they chuck at it, Intel simply won't succeed in this area without a massive shift in approach and mindset.

Unfortunately the mindset was the same as Nvidia's, just with vast differences: both designs aren't parallel enough, with dies that are too big and not efficient enough. AMD is only held back by software and programmability. If games were all geared to use a 5870 completely, or a 4870 for that matter, Nvidia would be in serious trouble. Even without perfect programming, AMD are massively ahead on profitability, margins, manufacturability and really everything that makes a GPU competitive in a business sense. If software could use the full shader complement on every clock, it would quite literally wipe the floor with anything Nvidia have, by miles. The problem, of course, is that a hardware design that is almost impossible to get full efficiency from is a huge risk in itself, while Nvidia's design is pretty easy to leverage the full power from.

AMD's approach, however, has clearly been the winner for the past 18 months: performance, price, manufacturability, yields, the move to new processes, etc. I've said for over a year that Nvidia HAVE TO, without question, head for an AMD-style GPU core: hugely smaller, properly manufacturable, designed for lower clock speeds, less leakage, better yields and higher efficiency. The upside of Nvidia doing that is that all devs would then have a reason to spend more time coding in a way that gets better efficiency from the hardware; the downside is that for a while it would be harder to get that extra performance. Nvidia simply can't compete in a business sense, and seemingly not in a performance sense either, without doing so.

Larrabee isn't far off Fermi in reality, except that because of its design it needs far higher clocks with fewer cores, versus Fermi with more cores and lower clocks. Intel thought they could hit higher clocks on better processes of their own, which is highly likely, but it seemingly ran into a somewhat NetBurst-style reality... it just can't hit the clock speeds required to be truly competitive.

Same goes for Fermi though: it needs clock speed X to be competitive, but it has only hit X minus 25-30%.

Frankly, its design will only worsen its problems at 28nm. Yes, Nvidia will add vias to account for TSMC's awful process and allow for variable transistor size, but both will enlarge an already 60% bigger die, on a process that's likely to lose out by anywhere from 20-40% in die size against an identical transistor count on GlobalFoundries' process.

If AMD can even reduce the extra die size it used to avoid TSMC's 40nm problems, and if both the GF100 and the 5870 roughly doubled in transistor count for the next generation, the new Nvidia core could be up to twice the size of AMD's new core, yet offer similar or maybe even less power.
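As a rough sanity check, the compounding behind that "up to twice the size" claim can be sketched out; all the percentages here are the figures claimed in this thread, not verified numbers:

```python
# Die-size scaling sketch using the figures claimed in this thread:
# a die that is already ~60% larger, plus a claimed 20-40% density
# disadvantage at 28nm. These are forum estimates, not measured data.

def relative_die_size(base_ratio: float, density_penalty: float) -> float:
    """Relative die size after stacking a process density penalty
    on top of an existing size gap."""
    return base_ratio * (1.0 + density_penalty)

current_gap = 1.60  # Nvidia die ~60% larger at the same generation
for penalty in (0.20, 0.40):
    ratio = relative_die_size(current_gap, penalty)
    print(f"{penalty:.0%} density penalty -> ~{ratio:.2f}x the competing die")
```

With the pessimistic 40% figure that lands at roughly 2.2x, which is where the "up to twice the size" estimate comes from.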

Nvidia made exactly the mistakes Intel made with Larrabee, which is hilarious given that Dear Leader has laughed at everything Larrabee was doing wrong for the past two years.

Larrabee has efficiency in instructions, better processes to call on, more money to be invested and no shrinking market where the R&D will disappear. Intel also have a track record of learning from their mistakes, unlike Nvidia.

Intel also only started buying up small firms and design teams likely to help with future GPU designs last year; obviously that wouldn't feed into a Larrabee mark 2009/10 that quickly, but they are getting the right people and teams to take Larrabee forward into something good in the future. Nvidia just show no signs of adjusting to the industry and manufacturing around them, which could ultimately be their end.
 
The future is closer to a hybrid of the 2 designs tbh... if both go for the most optimal design progression they will eventually converge on something fairly similar - and not like either of the current designs.
 
Larrabee isn't far off Fermi in reality, except that because of its design it needs far higher clocks with fewer cores, versus Fermi with more cores and lower clocks. Intel thought they could hit higher clocks on better processes of their own, which is highly likely, but it seemingly ran into a somewhat NetBurst-style reality... it just can't hit the clock speeds required to be truly competitive.

Same goes for Fermi though: it needs clock speed X to be competitive, but it has only hit X minus 25-30%.

Where do you get this information from, dude? Are you an Intel executive?
 
The future is closer to a hybrid of the 2 designs tbh... if both go for the most optimal design progression they will eventually converge on something fairly similar - and not like either of the current designs.

I would have thought the best design would be to build your basic GPU at a normal size, and then add a CPU-like chip to the PCB to pass instructions back and forth.
That way yields shouldn't be so appalling, and you don't waste money on die space for gamers who won't take advantage of GPGPU.

Of course it wouldn't make sense to do so if it's gunna kill off your higher-margin CPU sales.
 
It's fairly public info - I expect it's even on Wikipedia :S

Actually, on Wikipedia I read this:

The second demo was given at the SC09 conference in Portland on November 17, 2009, during a keynote by Intel CTO Justin Rattner. A Larrabee card was able to achieve 1006 GFLOPS in the SGEMM 4Kx4K calculation.

Indicating its maximum theoretical throughput should have been somewhere above 1 TFLOP. In fact, Wikipedia gives a (as far as I can tell) correct formula for deriving Larrabee's performance, and by that its clock should have been above approximately 1GHz, which isn't exactly bad. Actually, I haven't heard any of these 7800 GTX claims, only numerous ones about it performing close to the GTX 280, which the "somewhere above 1 TFLOP" would coincide with...

It's all there: http://en.wikipedia.org/wiki/Larrabee_(microarchitecture)
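For what it's worth, the peak-throughput arithmetic behind that reading can be reproduced; the 16-wide FMA vector unit (2 FLOPs per lane per clock) matches the Wikipedia description, while the 32-core count for the demo part is an assumption:

```python
# Larrabee peak single-precision throughput sketch.
# Assumes a 16-wide vector unit per core doing fused multiply-add
# (2 FLOPs per lane per clock); the 32-core count for the SC09 demo
# part is an assumption, not a confirmed spec.

def peak_gflops(cores: int, clock_ghz: float) -> float:
    lanes, flops_per_lane = 16, 2
    return cores * lanes * flops_per_lane * clock_ghz

print(peak_gflops(32, 1.0))  # 1024.0 - just above the 1006 GFLOPS SGEMM result

# SGEMM never reaches 100% of peak, so the demo's real clock must
# have been at least this lower bound:
print(1006 / (32 * 16 * 2))  # 0.982421875 GHz
```

Which is why the 1006 GFLOPS figure reads as implying a clock somewhere above roughly 1GHz.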

I do believe the actual reason for them stopping production of Larrabee wasn't so much a performance problem as a production problem: there were reports of its die being anywhere up to 971mm^2 in area (although most claimed it was around 700mm^2, which is still huge).
 
I just want to say that I am impressed with the speed at which this thread has gone to its 8th page, even though most of it was outcry against Roff's comment.
 

You know theoretical peak TFLOPS is no indication of in-game rendering performance :D

Information on there does seem to have been updated since I last looked into it, but the last I heard, the real-world performance of the most optimal manufacturable part (which I think was 24 cores at 1.6GHz, but it's been a while so I could be wrong) was similar to that of a 7800-series card, and the rest was just theoretical extrapolation of the numbers.
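Plugging those rumoured specs into the usual peak-throughput arithmetic (the 16-wide FMA vector unit per core is an assumption, and the 24-core/1.6GHz figures are hearsay from this thread) shows how far paper numbers can sit from in-game results:

```python
# Theoretical peak for the rumoured 24-core, 1.6GHz Larrabee part.
# Assumes 16 vector lanes per core at 2 FLOPs per lane per clock;
# both the core count and the clock are unverified forum figures.

cores, clock_ghz = 24, 1.6
peak = cores * 16 * 2 * clock_ghz
print(f"{peak:.1f} GFLOPS")  # ~1228.8 GFLOPS on paper
# ...yet the part was reportedly only 7800-class in real rendering.
```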
 