In terms of gaming, Fermi is more like a space shuttle, lol.
Loaded with outdated tech?!
If you're talking about the material side of the process/design, then you could say Evergreen was a more accomplished, better-thought-out design. If you're talking about technical accomplishments in terms of how the design actually works at runtime when processing graphics, then Fermi is a generation ahead of Evergreen.
I'm withholding judgement in that regard until I see how the design does on a smaller process. It doesn't fit well on 40nm.
They both have their pros and cons. At this moment in time ATI (sorry, AMD) have their finger on the pulse of public opinion. Both architectures are solid performers and handle gaming equally well.
So your 'preliminary' judgement is that it sucks on 40nm because, in your opinion, it's too big for the process, yes?
In that case, can you explain to me why the much smaller GF106 architecture sucks so badly compared to AMD's Juniper, despite having a large die-size advantage?
In fact, wouldn't it be a better, and more likely, argument to say that, as it stands, the GFXXX architecture sucks on 40nm, and that it is also likely to suck on 28nm unless some fundamental changes are made to the Fermi architecture? And if we get more of the same, eventually Nvidia won't be able to keep its head above water...
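For reference, here is a rough back-of-envelope sketch (Python, not from the thread) of the die-area comparison behind that question; the figures are approximate values as published around 2010, so treat them as ballpark only.

```python
# Ballpark die-area comparison (approximate published figures circa 2010,
# not from the thread); the point is what "die-size advantage" means above.
gf106_mm2 = 238    # nVidia GF106 (e.g. GTS 450), 40nm, approx.
juniper_mm2 = 166  # AMD Juniper (e.g. HD 5770), 40nm, approx.

ratio = gf106_mm2 / juniper_mm2
print(f"GF106 uses roughly {ratio:.2f}x the die area of Juniper,")
print("so the question above is really about performance per mm^2.")
```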
AMD are using the same leaky process. You have to bear in mind that the changes nVidia made because of things like leakage are artificially gimping performance on 40nm; in theory, on 28nm those won't be an issue, which would automatically boost performance considerably before you made any other changes...
a) Do you expect 28nm to be any less leaky?
b) Do you expect Nvidia to keep the die size the same when competing against 28nm Southern Islands?
in practice they might not be able to, who knows.
Miracles happen...
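As a rough aside on the 28nm exchange above, here is a minimal back-of-envelope sketch (Python, not from the thread) of ideal geometric area scaling from 40nm to 28nm. Real processes never shrink this cleanly and it says nothing about leakage, so read it as an upper bound on what a node change alone buys.

```python
# Back-of-envelope only: ideal geometric scaling between process nodes.
# Real shrinks fall short of this, and it says nothing about leakage,
# which is what questions a) and b) above are getting at.

def ideal_area_scale(old_nm: float, new_nm: float) -> float:
    """Area ratio if every feature shrank linearly with the node name."""
    return (new_nm / old_nm) ** 2

scale = ideal_area_scale(40.0, 28.0)  # (28/40)^2 ~= 0.49
print(f"Ideal 40nm -> 28nm area scale: {scale:.2f}")
print("So the same design would ideally take about half the area, or the")
print("same die budget could hold roughly twice the transistors; that is")
print("why the die-size question b) matters as much as the leakage one a).")
```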
Also, can you please explain to me what you think went wrong with the GF106 core compared to Juniper?
Are you suggesting that Nvidia had to sacrifice performance somehow to make it more power efficient, even on a comparatively small die?
AMD aren't using the same design though; they didn't have to gimp things the same way to get a functional chip. But as the above quoted post shows, you have absolutely no technical understanding of the design, so I won't waste my time trying to explain it.
How did they gimp GF106?
It seems to me you are using false assertions and reasoning to justify why you are unwilling to engage in 'Socratic Debate', for fear of being shown to be wrong.
The one common theme I've noticed is that you are simply unwilling to concede that you are wrong on any point, even when it's clearly obvious, and will simply ignore the fact or continue to wriggle...
If you had any concept of what I'm talking about, you would not have said "AMD are using the same leaky process".
As you're apparently not interested in "Socratic Debate", as you put it (http://forums.overclockers.co.uk/showpost.php?p=17537611&postcount=194), I don't see why you feel you merit an exception.