• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Fermi mid-life kicker comes in 40nm

Just some friendly advice though: you probably shouldn't have commented on the other thread or by DM if you wanted to avoid another debate.

Indeed. I should have kept quiet. 99% of the time I will bite my tongue on things like this, but sometimes when I'm in the wrong mood I will let rip. Everyone is human I guess...
 
Anyway...

Still sounds like a lot of hot air from Nvidia though. Even if the parts are slower than ATI's, they won't be 'that' much slower.
 
He must be ill or something.

I'm away in Manchester at the moment, avoiding using the laptop as it sucks balls.

There is no mid-life kicker; GF104 is it.

Get this straight: look up the conference material Nvidia presented a week ago. They mentioned a potential refresh between the announced products at 28nm and 22-20nm; they didn't confirm it. I'm 100% sure they meant they might do a 28nm refresh IF their 28nm part comes in on time, and assuming the process/design works better for them at 28nm. Some bright spark (go look at the news from the day of, and the day after, his speeches) decided this meant there could potentially be a refresh on 40nm. There isn't and there won't be; get over that idea. They have neither the time nor the inclination to completely change their architecture.

The ONLY way forward for Nvidia, with a 530mm² core, is to increase efficiency dramatically while maintaining the core size, and that means a complete, fundamental redesign. They don't have time to build a new architecture from scratch. A refresh, sure: higher clocks, more of the same clusters; that's "easy" compared to a whole new architecture.


The entire source for "mid-life 40nm Nvidia parts" is Nvidia mentioning potential mid-life parts, and they were largely talking about products over the next three years.
 
Thankfully he's stayed away since he was called out in a previous thread.

But, if you really want to, you can construct your own drunkenmaster post (if you have the stamina to write a little essay composed of two-line paragraphs). After all, you know exactly what he would say...



But, on-topic, I'll be interested to see what the Fermi refresh brings to the table. I doubt it will be able to compete with AMD in the efficiency stakes, and frankly, I will be (pleasantly) surprised if we see it this year. My completely subjective feeling is that AMD has things in the bag for the rest of this generation, but will have a mountain to climb to match Nvidia in the next, given that Nvidia has already been through the first iteration of its "completely new" architecture with Fermi. A switch to Global Foundries could potentially work to AMD's advantage in the next generation though.

Oh dear, I responded to a post on the first page (with my settings), then realised you're making a complete prat of yourself. I went on holiday(ish); "called out"? I've never seen you make a post that makes sense, ever. As for biased, I'm not; I'm typing here on an Nvidia GPU, that's life.

Fermi, as I've REPEATEDLY said, is a nice design if you ignore all aspects of manufacturing. Unfortunately, 98% of a product's success is its profitability, which stems 98% from its manufacturability. That is where Nvidia failed, MISERABLY. Have you seen a 512-shader GF100, or even a 384-shader GF104? How many spins did it take before Nvidia admitted defeat?

As for these not being predictable problems: I assume you'll know the 280GTX is a 256-shader card. Oh wait, they ran into production problems and cut it down; it was late, used a lot of power and had lower yields than they'd have liked, and most of the series was sold for a loss, which largely all stemmed from the core being too big.

The 285GTX, still only a 240-shader part, dropped from 65nm to 55nm, took an incredibly long time to do the shrink, was still a huge core, and still couldn't hit the targeted clock frequencies. And it was later than the 280, despite being vastly smaller, on a good process, being essentially a copied design, and having little else to do (mid/low-end 280GTX derivatives didn't get in the way).

So that's three years of products that gradually led up to the exact problems Fermi had. People all over the internet predicted EXACTLY what Fermi's architecture, and Nvidia's ignoring of manufacturing, would bring for Nvidia; half the industry predicted it and publicly stated it, and Nvidia waded into it.

This is the SAME problem AMD had with R600, something everyone saw happen. AMD adjusted and haven't fallen into the same trap again; Nvidia saw it and walked right into it, across three separate architectures/processes in a row over three years, and STILL haven't learnt.


I'd be happy for you to link to the thread I was "called out in", so I can rip whatever rubbish you wrote completely apart, again; and it will be because of the mood you put others in when you spout unsubstantiated nonsense in an incredibly biased manner while calling everyone else an idiot for using logical reasoning in the way they post, using facts, common knowledge, common sense and a little educated estimation. (By the way, my last estimate was a 6770 with 1280 shaders, about three weeks ago.)
 
I've never seen you make a post that makes sense, ever.

How nice for you.

As for biased, I'm not

Sure... In just one post:

That is where Nvidia failed, MISERABLY

how many spins did it take before Nvidia admitted defeat

it was late, used a lot of power and had lower yields than they'd have liked, and most of the series was sold for a loss, which largely all stemmed from the core being too big.

...quite apart from having no evidence it was sold at a loss

being essentially a copied design, and having little else to do

If you can't see the above as being examples of a biased and over-simplified viewpoint, then there is no hope for you. Try writing without emotive language - it will add considerable weight to the points you are trying to make.



As for your point about R600: R600 was the starting point for the technology which evolved into R700 and R800 (i.e. the handling of floating-point arithmetic through the clustering of 'stream processing units' into shader cores and SIMD cores). With architectural tweaking, refinement of the memory interface and, most importantly, a smaller manufacturing process, the technology evolved into a highly efficient product which has scaled well, offering roughly a factor-of-four increase in performance throughput (three 'true' generations so far, ending with Evergreen). The fact that you utterly dismiss the possibility that Fermi is a similar case is unfortunate, and unfounded. Outside of Nvidia's GPU design team, no-one understands at a micro level what leads to Fermi's power inefficiency. There is nothing to suggest that we will not see dramatic improvements on a smaller manufacturing process (provided it is implemented without serious manufacturing issues).
 
All this is true, but how come so many games run better on Nvidia cards????

Well, as blanket statements go, that's one of the worst I have seen. As a whole they don't; at the same price points they are pretty similar and trade blows performance-wise. That is all.
 

There are games that play better, but generally they tend to be the crappy games that I don't like, e.g. Far Cry 2, Mafia 2, Metro 2033, Mirror's Edge. The only decent game I've played that's meant to work better is the Batman game, although I think it's just the advantage of PhysX that makes it a "better" game to play on Nvidia hardware.

Personally the whole 'ATI Game' and 'TWIMTBP' thing means little to me, and frankly it's bad for the industry. Going around studios handing them cash and blocks of code so certain features work on your hardware and not your rival's just ticks me off. Normally the code is **** poor, runs like a three-legged dog, damages the game and the developer's reputation, and is just plain bad for the industry as a whole. The bottom line is that hardware companies should stick to producing hardware, and the only software development they should be concentrating on is their drivers. You only have to look at the shoddy implementation of PhysX in the majority of titles to realise Nvidia and ATI do not know better than game developers when it comes to building physics engines.
 
If you can't see the above as being examples of a biased and over-simplified viewpoint, then there is no hope for you. Try writing without emotive language - it will add considerable weight to the points you are trying to make.

Biased? Do you think something is biased if it's not positive? Instead of biased, maybe you mean "negative" as that'd make far more sense. If they've messed up, and people speak about it, it's not being biased. :confused:
 
Biased? Do you think something is biased if it's not positive? Instead of biased, maybe you mean "negative" as that'd make far more sense. If they've messed up, and people speak about it, it's not being biased. :confused:

Bias, noun: "a partiality that prevents objective consideration of an issue or situation"

An unwillingness to consider both sides of an argument, and weigh each on its merits, is the hallmark of bias. Emotive language is its tell-tale. Here we have both. Negative comments are not in themselves biased, but when conspicuously presented without the corresponding positives or mitigations, they are.

In my post (the rest of it that you did not quote...), I attempted to provide a counter-balance to the one-sided argument put forward originally. No-one is arguing that the Fermi architecture is perfect (at least no-one without bias :p), but to dismiss it outright as a "miserable failure" is short-sighted. As I have repeatedly highlighted, GPU architectures are designed to scale over multiple generations. Dramatic reformulations are often unsuccessful in their first iteration (FX5800 / R600, for example), yet deliver excellent results with minor tweaks on more refined manufacturing processes (NV40/G70, R700/R800). Only time will tell whether the architecture succeeds, and to declare it a failure at the first iteration is premature, particularly when the reasons for its apparent shortcomings are not available to anyone outside Nvidia's design team.
 
Everyone on the internet is biased like everyone is a bloke ;) For all I know half you lot work in the industry and this is all just propaganda. If we end up with some competition and cheaper hardware I'm all for it :)
 