Caporegime · Joined: 18 Oct 2002 · Posts: 33,188
Hmm, well the buzzword is GPGPU, hence they need to stick with "GPU". It's likely that the Fermi-based parts will still exist as graphics cards, since the chips do have the hardware required to accelerate graphics, but the ratio of resources is significantly biased towards GPGPU.
As Fermi is targeting computing rather than graphics, I would expect its performance priority to be computing first. Is this bad? Not if games support using the GPU's spare computing capacity to offload computations from the CPU. The majority of the time the GPU is being held back by the CPU; most games show this, where it's only the anti-aliasing that differentiates cards and the FPS is bottlenecked.
nV are right that if you can reduce bottlenecking by offloading the maths tasks to the GPU, then you can have a better game. Where they have gone wrong is in attempting to use this as a tool to increase their sales without bringing the industry forward. Gamers/developers are going to be wary of a situation where they're reducing their sales segments and increasing their costs by defining multiple code paths.
Could anyone make a post that's more wrong? Most games are NOT held back by CPUs; very, very few games are CPU limited. It's VERY normal, and not a new situation, for a new card to come out that's fast enough to be "less" CPU limited for a short time, until harder games come out. Get over it.
They did NOT make a number-crunching card with the intent of offloading normal CPU tasks to reduce this CPU limit in most games; that's just crap. There's no "Nvidia were right" there, because they didn't do that. As for "gamers/developers are going to be wary": really? Firstly, gamers are going to be wary that their sales segments will decrease and their costs will increase through extra coding... really? Likewise, developers will see a reduced segment by having different graphics cards to code for, and they'll have to define multiple code paths, really? Sounds to me like someone's trying to sound like he knows what he's talking about, but doesn't.
Fermi WILL be all but identical to the GPU card they put out. There is a slim chance they release a cheaper, smaller-bandwidth/scaled-down part, but that would be more akin to a 5770 version of the 5870 architecture than a radical difference. In all likelihood, as with every previous generation, what's being touted as the GPGPU Fermi is identical in every way to the GPU-aimed version, except in name.
The likely reason they aren't showcasing its gaming performance, or leaking numbers, is that on non-final silicon the graphics acceleration might be buggy, maybe their DX11 drivers are terrible, or hell, their own DX11 benchmarks aren't finished, or a mix of all of that. Basic number crunching is fairly simple; likewise, those who are building number-crunching boxes go for max performance, even if it's only 20% higher. A gamer is unlikely to upgrade to a £400 card for 20% more performance, though. So non-final silicon not at final clock speeds makes more of a difference in benchmarks to gamers than to the GPGPU market, meaning you're more likely to want to hide the gaming numbers until you have final silicon at higher clock speeds.
My personal belief is that TSMC's utterly crap process is making Nvidia's significantly higher clock speeds a massive stumbling block, and the cause of a lot of the delays. Remember, ATi's whole core runs at the core clock speed, 850MHz or so, up to 1GHz. The largest part of Nvidia's core, the shaders, ran on the last generation at up to 1.5GHz, massively higher. Power/signal leakage is the biggest stumbling block on TSMC's 40nm process, and leakage gets worse at the higher voltages you need to hit higher clock speeds.
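For what it's worth, the clocks-versus-power point follows from the standard CMOS switching-power relation (dynamic power scales roughly with C·V²·f, and hitting a higher clock on the same process usually needs a higher supply voltage, which also drives leakage up, roughly exponentially). A back-of-envelope sketch, where the clock and voltage numbers are purely illustrative assumptions, not NVIDIA or TSMC figures:

```python
# Rough sketch of CMOS dynamic switching power: P ~ C * V^2 * f.
# Voltages/clocks below are made-up illustrative values, not vendor data.

def dynamic_power_ratio(f1, v1, f2, v2):
    """Relative dynamic power of (f2, v2) vs (f1, v1), same capacitance."""
    return (v2 ** 2 * f2) / (v1 ** 2 * f1)

# Hypothetical comparison: an 850 MHz core at 1.0 V vs 1.5 GHz shaders
# needing a bump to 1.1 V to make timing.
ratio = dynamic_power_ratio(0.85, 1.0, 1.5, 1.1)
print(round(ratio, 2))  # prints 2.14
```

So even before counting leakage (which the V² term doesn't capture, and which grows with voltage and temperature), running the shader domain at nearly double the clock costs disproportionately in power on a leaky process.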
TSMC have also announced another delay on their 32nm process, yippee. Apparently it's in testing, at awful yields, and utter crap: another screw-up by TSMC.

We're in the laughable situation that Nvidia are seriously considering GlobalFoundries (basically AMD's manufacturing arm) for future GPUs over TSMC; that's just how bad TSMC have been for so long.
Nobody knows how good or bad it's going to be yet, unless you know something pretty much the rest of the world doesn't? Please enlighten us all with some benchmarks.
