I've never pretended to have anything other than a preference towards nVidia... I've also been pretty clear that it's because I find them the easier to live with, not because I particularly like them.
My stance/background is something like:
-In the past ATI drivers were dire - I literally had to swap between different driver versions for each game I wanted to play with my Rage Pro and Rage Fury... at that point I swapped to an nVidia TNT2 and it just worked... flawlessly. I will point out, though, that all through 2009 ATI's drivers have for the most part been very good.
-I've never liked the ATI control panel(s), whereas nVidia's is plain but functional.
How utterly bizarre, considering so many places say the opposite to that.
-ATI's AA optimizations don't work for me personally - i.e. with 4x AA on ATI I still notice distracting jaggies unless I turn it up to 8x, whereas on nVidia I'm perfectly happy with 4x.
So you keep mentioning, yet you're never able to back up what you say about your "games development" adventures.
-I've personally had a poor experience with ATI's video game development support, whereas nVidia were more than happy to help.
Ah, of course, what nVidia does is most relevant. How about CUDA/PhysX etc.? You act like they're worthwhile technologies that add to a card's value, yet they do very little for games themselves.
-I've found ATI's fascination with pushing technology like TruForm/tessellation, 3Dc, etc. at the expense of useful features like Shader Model 3 frustrating. On the positive side, AMD seem to have turned this around somewhat, with attention on things more relevant to the market.
Ah, so now it's other people's fault that you post what you do?
Despite the delusions of a small number on this board I'm not as anti-ATI as they make out; I come on a little stronger on these forums than elsewhere because the militant ATI fan crowd unbalances things a bit.
You aren't aware of the goings-on within nVidia?
I'm not aware of an official statement from nVidia - only the nVidia lapdogs mentioned above - but even so it doesn't change my main point.
Nvidia implied the card would be ready and plentiful for Christmas.
Nvidia is a pure and utter speed demon, there's no efficiency [...]
[...] Remember, Nvidia is at peak efficiency [...]
It's almost unbelievable you could be that naive as to quite literally get it completely and unquestionably backwards [...]
[...] Fermi simply cannot and will not succeed in the future.
laughing my bottom off
oh I'm so pleased you used the word 'NAIVE' in that lovely post.
You really can't get that any more backwards. Nvidia is a pure and utter speed demon: there's no efficiency, it's brute force. It uses 50% more transistors and almost double the clock speed on the shaders, yet comes out at about the same performance. AMD are getting similar performance from half the clocks and half the transistor space (rough numbers are sketched after this post).
Sorry, but consider the P4: it was bigger, with a shedload of cache and a massive transistor count, relying purely on clock speed to compete with a complete lack of efficiency - it even had a double-pumped FPU (or was it the integer part, I can't remember that far back), which is even closer to Nvidia's essentially double-pumped shader clock relative to its core clock.
Nvidia HAVE ALREADY hit their ceiling. They've already had issues increasing clocks, and they've got so many transistors at such a high speed that they can't produce the thing. Yet AMD push forwards, with on-time releases of cards significantly faster than the previous generation, with very few large problems, and with an incredibly efficient design that will continue to improve in efficiency. Remember, Nvidia is at peak efficiency - it's got zero headroom. Take a GTX 285 and optimise the drivers: there's nothing left to gain, because it can already keep largely all of its 240 shaders busy at any given moment. AMD, by contrast, has the ability to keep increasing performance dramatically as game devs program in a different way.
It's almost unbelievable you could be that naive as to quite literally get it completely and unquestionably backwards. Remember, the GTX 280 and GTX 285 both missed their target clocks - just with less of a problem than Fermi has had, though with a far smaller clock increase on the GTX 285 than planned - because their architecture, which most certainly does not lend itself to smaller processes, gets worse the lower they go.
Fermi simply cannot and will not succeed in the future. If they take a derivative of Fermi to 28nm with the same design and try the 'double all the features' route, it will simply not release. AMD would have zero issue releasing a part with double the numbers on everything at the next process node. Why? Because they've been watching manufacturing and designing their architecture around the problems being faced, while Nvidia have ignored what almost literally every single other GPU/CPU/chip maker in the industry is finding: that leakage is a major problem. Everyone else in the entire industry has moved away from raw clock speed and brute force towards efficiency and parallelism. Every single last one but Nvidia.
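To put rough numbers on the "similar performance from half the clocks and half the transistor space" claim above, here is a minimal back-of-the-envelope sketch in Python. The transistor counts and clocks are approximate public figures for a GTX 285 and an HD 4890, and the equal relative-performance figure is an assumption taken from the post above, not a benchmark result.

```python
# Back-of-the-envelope efficiency comparison, using approximate public
# specs for the GTX 285 and HD 4890 and assuming roughly equal game
# performance, as claimed in the post above. Illustrative only.

cards = {
    "GTX 285": {"transistors_m": 1400, "shader_clock_mhz": 1476, "rel_perf": 1.00},
    "HD 4890": {"transistors_m": 959,  "shader_clock_mhz": 850,  "rel_perf": 1.00},
}

for name, c in cards.items():
    perf_per_mtrans = c["rel_perf"] / c["transistors_m"]   # performance per million transistors
    perf_per_mhz = c["rel_perf"] / c["shader_clock_mhz"]    # performance per MHz of shader clock
    print(f"{name}: perf/Mtransistor = {perf_per_mtrans:.5f}, "
          f"perf/MHz = {perf_per_mhz:.5f}")
```

On those assumptions the per-transistor and per-MHz ratios come out roughly 1.5x and 1.7x in AMD's favour, which is the gap the post is pointing at - a real efficiency comparison would of course need measured frame rates and power draw rather than assumed parity.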
What about the people who have had that experience with nVidia? Does that make them right to dislike nVidia for those reasons?
How utterly bizarre, considering so many places say the opposite to that.
Ah, of course, what nVidia does is most relevant. How about CUDA/PhysX etc? You act like they're worthwhile technologies that add to a card's value, yet they do very little for games themselves.
Fermi is all speculation; nothing official's out about how it performs.
Anyhow... it doesn't change the fact that Fermi is on schedule as long as it's released by the end of Q1 2010, however much you don't like me.
YOU think I have a problem with people disliking nVidia - that doesn't make it true.
I know screenshots often make ATI's AA look nicer in a static image - but personally I don't find it masks jaggies as well in motion... horses for courses... I did say PERSONALLY, not that it's necessarily an ATI failing.
Hardware physics is very relevant to gaming... if everyone got the ball rolling on that we'd see some nice advances... CUDA itself isn't as relevant to gaming, but it is at least a relevant technology in at least one part of the GPU market... most of the similar things ATI dreamed up had no relevance in any market they were selling in.
OpenCL will replace PhysX lol.
I don't think you get what I meant...
Hardware physics IS relevant; PhysX, however, ISN'T. The sooner it dies the better, and I'm looking forward to a physics standard that isn't owned by anyone making the hardware it runs on (a rough sketch of what that could look like is below).
As for CUDA: again, CUDA as a brand (which is what the issue really is) is not relevant. Nor are Brook+ and any other proprietary "technologies" ATi and nVidia come up with.
You're inadvertently demonstrating that nVidia have a complete lack of innovation time and time again.
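For what it's worth, here's a minimal sketch of the kind of vendor-neutral GPU physics being hoped for above: a trivial Euler integration step written as an OpenCL kernel and driven from Python via pyopencl. It's purely illustrative - it assumes pyopencl is installed and that some OpenCL-capable device (from any vendor) is available, and it isn't meant to represent how PhysX or any shipping physics middleware actually works.

```python
# Vendor-neutral GPU "physics" sketch: one Euler integration step in OpenCL.
# Assumes pyopencl and any OpenCL device (ATI, nVidia, or even a CPU driver).
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void integrate(__global float *pos,
                        __global float *vel,
                        const float dt,
                        const float gravity)
{
    int i = get_global_id(0);
    vel[i] += gravity * dt;   // accelerate each particle
    pos[i] += vel[i] * dt;    // move it
}
"""

n = 1024
pos = np.zeros(n, dtype=np.float32)   # 1D positions, for simplicity
vel = np.zeros(n, dtype=np.float32)

ctx = cl.create_some_context()        # picks whatever OpenCL device is available
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
pos_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=pos)
vel_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=vel)

prg = cl.Program(ctx, KERNEL_SRC).build()
prg.integrate(queue, (n,), None, pos_buf, vel_buf,
              np.float32(1.0 / 60.0), np.float32(-9.81))
cl.enqueue_copy(queue, pos, pos_buf)  # read the updated positions back
print(pos[:4])
```

The point of the sketch is simply that the same kernel source runs unchanged on ATI, nVidia or CPU OpenCL implementations, which is what a hardware-agnostic physics standard would buy over a vendor-owned one.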
Awesome thread all. Pity it is so pointless- we all know Matrox are the best.
laughing my bottom off
oh I'm so pleased you used the word 'NAIVE' in that lovely post.
You have made claims, now back them up, otherwise you've got no business laughing at other people's claims.
Most people know nVidia aren't about efficiency; why you'd pretend otherwise, I don't know.
Just because something isn't positive, doesn't mean it's wrong.
nVidia's constant "make it bigger, shove more stuff in, power requirements and die size don't matter" approach has been discussed since the 8800 GTX days.
If nVidia weren't inefficient and aiming for speed only, they wouldn't be having the issues they currently have supplying GT200 GPUs.
Unfortunately they've lost interest in the gaming market... Intel should really buy them up - instead of making half-baked attempts like Larrabee, which are doomed to failure thanks to Intel's mentality - and give them the financial backing and research resources to make a proper gaming GPU again.
Problem is, Nvidia's mentality of late hasn't been much better - although Intel's experience with fabrication would probably help Nvidia with their ambitious monolithic cores.