Nvidia Is Happy With The Performance Of Fermi

I've never pretended to be anything other than partial towards nVidia... I've also been pretty clear that I find them the easier to live with, even if I don't particularly like them as a company.

My stance/background is something like:

-In the past ATI drivers were dire - I literally had to swap between different driver versions for each game I wanted to play with my Rage Pro and Rage Fury... at that point I swapped to an nVidia TNT2 and it just worked... flawlessly. I will point out, though, that all through 2009 ATI's drivers have been for the most part very good.

Nonsense. Whilst I'm not going to say that didn't happen, to use it as a reason to dislike a company is plain wrong.

What about the people who have had that experience with nVidia? Does that make their disliking nVidia for those reasons right?

-I've never liked the ATI control panel(s), whereas nVidia's is plain but functional.
:rolleyes:

-ATI's AA optimizations don't work for me personally - i.e. with 4x AA on ATI I still notice distracting jaggies unless I turn it up to 8x. On nVidia I'm perfectly happy with 4x.
How utterly bizarre, considering so many places say the opposite to that.

-I've had a poor experience personally with ATI in regard to video game development support, whereas nVidia were more than happy to help.
So you keep mentioning, yet you're never able to back up what you say about your "games development" adventures.

-I've found ATI's fascination with pushing technology like TruForm/tessellation, 3Dc, etc. at the expense of useful features like Shader Model 3 frustrating. On the positive side, AMD seem to have turned this around somewhat, with attention on things more relevant to the market.
Ah, of course, what nVidia does is most relevant. How about CUDA/PhysX etc? You act like they're worthwhile technologies that add to a card's value, yet they do very little for games themselves.

Despite the delusions of a small number on this board, I'm not as anti-ATI as they make out. I come across a little stronger on these forums than elsewhere due to the militant ATI fan crowd unbalancing things a bit.
Ah, so now it's other people's fault that you post what you do?



I'm not aware of an official statement from nVidia - only nVidia lapdogs as mentioned above - but even so it doesn't change my main point.
You aren't aware of the goings-on within nVidia? :eek:

What's changed?
 
Rroff sounds like the biggest Nvidia fanboy I have ever heard. Why? Just pick whichever gives you the best performance for your £, as everyone else does. As for ATI drivers, I can't say I have ever had any issues with them; certainly no more than I have had with Nvidia ones.
 
Nvidia is a pure and utter speed demon; there's no efficiency...

...Remember, Nvidia is at peak efficiency...

Laughing my bottom off.

It's almost unbelievable you could be that naive as to quite literally get it completely and unquestionably backwards...

...Fermi simply cannot and will not succeed in the future.

Oh, I'm so pleased you used the word 'NAIVE' in that lovely post. :D:D:D
 
Laughing my bottom off.



Oh, I'm so pleased you used the word 'NAIVE' in that lovely post. :D:D:D

You have made claims; now back them up. Otherwise you've got no business laughing at other people's claims.

Most people know nVidia aren't about efficiency; why you'd pretend otherwise, I don't know.

Just because something isn't positive, doesn't mean it's wrong.

nVidia's constant "make it bigger, shove more stuff in, power requirements and die size don't matter" approach has been discussed since the 8800 GTX days.

If nVidia weren't inefficient and aiming for speed only, they wouldn't be having the issues they currently have supplying GT200 GPUs.
 
You really can't get that any more backwards. Nvidia is a pure and utter speed demon; there's no efficiency, it's brute force. It uses 50% more transistors and almost double the clock speed in the shaders, yet comes out at about the same performance. AMD are getting similar performance from half the clocks and half the transistor space.
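To put rough numbers on that, here's a minimal back-of-the-envelope sketch in C, using approximate published specs for the GTX 280 (GT200) and HD 4870 (RV770) - the figures are purely illustrative, not gospel:

    #include <stdio.h>

    int main(void) {
        /* Approximate published specs (illustrative only):
           GTX 280 (GT200): ~1.40bn transistors, shader clock ~1296 MHz
           HD 4870 (RV770): ~0.96bn transistors, shader clock   750 MHz */
        const double nv_transistors  = 1.40e9, nv_shader_mhz  = 1296.0;
        const double amd_transistors = 0.96e9, amd_shader_mhz = 750.0;

        /* Treating overall gaming performance as roughly equal, these
           ratios are a crude proxy for performance per transistor and
           per clock. */
        printf("Transistor ratio (NV/AMD):   %.2f\n",
               nv_transistors / amd_transistors);   /* ~1.46 */
        printf("Shader clock ratio (NV/AMD): %.2f\n",
               nv_shader_mhz / amd_shader_mhz);     /* ~1.73 */
        return 0;
    }

That's where the "50% more transistors, almost double the shader clock" figures come from, give or take.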

Sorry, but consider the P4: it was bigger, with a shedload of cache and a massive transistor count, relying purely on clock speed to compete, with a complete lack of efficiency. It even had a double-pumped FPU (or was it an integer part, I can't remember that far back), which is even closer to Nvidia's setup, where the shader clock is essentially double-pumped relative to the core clock.

Nvidia HAVE ALREADY hit their ceiling. They've already had issues increasing clocks; they've got so many transistors at such a high speed that they can't produce the thing. Yet AMD push forwards with on-time releases of significantly faster cards compared to the previous generation, with very few large problems and an incredibly efficient design that will continue to improve in efficiency. Remember, Nvidia is at peak efficiency: it's got zero headroom. Take a GTX 285 and optimise the drivers; there's no headroom, as it can already keep largely all of its 240 shaders busy at any given moment. AMD, by contrast, has the ability to increase performance almost exponentially as game devs program in a different way.

It's almost unbelievable you could be that naive as to quite literally get it completely and unquestionably backwards. Remember, the GTX 280 and the GTX 285 both missed their target clocks, just with less of a problem than Fermi has had; the clock increase on the GTX 285 was far smaller than planned, as their architecture, which most certainly does not lend itself to smaller processes, gets worse the lower they go.

Fermi simply cannot and will not succeed in the future. If they make a derivative of Fermi at 28nm with the same design and try the "double all the features" route, it will simply never release. AMD would have zero issue releasing a "double the numbers on everything" part on the next process node. Why? Because they've been watching manufacturing and designing their architecture around the problems being faced, while Nvidia have ignored what almost literally every other GPU/CPU/chip maker in the industry is finding: that leakage is a major problem. Everyone else in the entire industry has moved away from raw clock speed and brute force towards efficiency and parallelism; every single last one but Nvidia.

I wasn't literally comparing architecture for architecture... but rather how the configuration worked out in the long run for scalability. The P4 relied on a long pipeline being loaded up just right to get performance, and as the core scaled the returns diminished; the Athlon was less susceptible to this... it was a counterpoint to my main argument...

Nice rant, btw; sucks that you contradicted your own points a few times.
 
What about the people who have had that experience with nVidia? Does that make their disliking nVidia for those reasons right?

:rolleyes:

YOU think I have a problem with people disliking nVidia - doesn't make it true.

How utterly bizarre, considering so many places say the opposite to that.

I know screenshots often show ATI's AA in a static image to look nicer - but personally I don't find they mask jaggies as well on the fly... horses for courses... I did say PERSONALLY, and not that it's necessarily an ATI failing.

Ah, of course, what nVidia does is most relevant. How about CUDA/PhysX etc? You act like they're worthwhile technologies that add to a card's value, yet they do very little for games themselves.

Hardware physics is very relevant to gaming... if everyone got moving on that ball we'd see some nice advances... CUDA itself isn't as relevant to gaming, but it is at least a relevant technology to at least one part of the GPU market... most of the similar things ATI dreamed up had no relevance in any market they were selling in.
 
Anyhow... doesn't change the fact that Fermi is on schedule as long as it releases by the end of Q1 2010, however much you don't like me.
 
Hardware physics is needed, but Nvidia's PhysX flopped because they were dictatorial about it; OpenCL won't have this problem.
Anyhow... doesn't change the fact that Fermi is on schedule as long as it releases by the end of Q1 2010, however much you don't like me.
Fermi is all speculation; nothing official is out about how it performs.
 
YOU think I have a problem with people disliking nVidia - doesn't make it true.

I don't think you get what I meant...

I know screenshots often show ATI's AA in a static image to look nicer - but personally I don't find they mask jaggies as well on the fly... horses for courses... I did say PERSONALLY, and not that it's necessarily an ATI failing.

:confused:


Hardware physics is very relevant to gaming... if everyone got moving on that ball we'd see some nice advances... CUDA itself isn't as relevant to gaming, but it is at least a relevant technology to at least one part of the GPU market... most of the similar things ATI dreamed up had no relevance in any market they were selling in.


Hardware physics IS; PhysX, however, ISN'T. The sooner it dies the better, and I'm looking forward to a physics standard that isn't owned by anyone making the hardware it runs on.

As for CUDA, again, CUDA as a brand (which is what the issue really is) is not relevant. Nor are Brook+ and any other proprietary "technologies" ATI and nVidia come up with.

You're inadvertently demonstrating that nVidia have a complete lack of innovation time and time again.
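To make the open-standard point concrete, here's a minimal sketch of a vendor-neutral physics step in OpenCL - a trivial Euler integration kernel that any conformant driver, ATI's or nVidia's, can compile at runtime. The kernel and variable names are my own illustration, error handling is omitted, and it assumes an OpenCL SDK and driver are installed:

    #include <stdio.h>
    #include <CL/cl.h>

    /* A trivial physics kernel: one Euler integration step under gravity.
       The point is that this source runs on any vendor's OpenCL driver. */
    static const char *kSrc =
        "__kernel void euler_step(__global float *pos_y,\n"
        "                         __global float *vel_y,\n"
        "                         const float dt) {\n"
        "    size_t i = get_global_id(0);\n"
        "    vel_y[i] -= 9.81f * dt;      /* apply gravity */\n"
        "    pos_y[i] += vel_y[i] * dt;   /* integrate position */\n"
        "}\n";

    int main(void) {
        enum { N = 1024 };
        float pos[N] = {0}, vel[N] = {0};

        /* Minimal setup: first platform, first GPU device; no error checks. */
        cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
        cl_device_id   dev;   clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context     ctx  = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q  = clCreateCommandQueue(ctx, dev, 0, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "euler_step", NULL);

        cl_mem dpos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                     sizeof pos, pos, NULL);
        cl_mem dvel = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                     sizeof vel, vel, NULL);
        float dt = 1.0f / 60.0f;  /* one 60 Hz frame */
        clSetKernelArg(k, 0, sizeof dpos, &dpos);
        clSetKernelArg(k, 1, sizeof dvel, &dvel);
        clSetKernelArg(k, 2, sizeof dt, &dt);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dpos, CL_TRUE, 0, sizeof pos, pos, 0, NULL, NULL);

        printf("particle 0 height after one step: %f\n", pos[0]);
        return 0;
    }

The same source runs unmodified on either vendor's hardware - which is exactly the property PhysX, as an nVidia-proprietary library, can't offer.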
 
I don't think you get what I meant...



:confused:

Yes, I did, but I don't see what difference it makes when discussing my opinion... if people have a bad time with nVidia drivers and want to dislike nVidia for it, fair enough; I don't really care if it's right or not.


Hardware physics IS; PhysX, however, ISN'T. The sooner it dies the better, and I'm looking forward to a physics standard that isn't owned by anyone making the hardware it runs on.

As for CUDA, again, CUDA as a brand (which is what the issue really is) is not relevant. Nor are Brook+ and any other proprietary "technologies" ATI and nVidia come up with.

You're inadvertently demonstrating that nVidia have a complete lack of innovation time and time again.

At least with PhysX they were attempting to address something relevant... and it wasn't at the expense of something else that was needed... unless you wanna spin the whole DX10.1 thing...

Wishing for the demise of PhysX is naive... you should be wishing for it to become an open standard if you really cared about what was good for gaming... it's already a successful, stable, fully featured implementation with good documentation, well ahead of any alternative... we are literally years from seeing any other hardware physics library reach the same level of maturity... I'd almost rather see nVidia die off than PhysX... hence I absolutely detest what nVidia have done in making it proprietary to nVidia.
 
Awesome thread, all. Pity it is so pointless - we all know Matrox are the best.

Unfortunately they've lost interest in the gaming market... Intel should really buy them up, instead of half-baked attempts with Larrabee and the like, which are doomed to failure due to Intel's mentality - and give them the financial backing and research resources to make a proper gaming GPU again.
 
Laughing my bottom off.



Oh, I'm so pleased you used the word 'NAIVE' in that lovely post. :D:D:D

You have made claims; now back them up. Otherwise you've got no business laughing at other people's claims.

Most people know nVidia aren't about efficiency; why you'd pretend otherwise, I don't know.

Just because something isn't positive, doesn't mean it's wrong.

nVidia's constant "make it bigger, shove more stuff in, power requirements and die size don't matter" approach has been discussed since the 8800 GTX days.

If nVidia weren't inefficient and aiming for speed only, they wouldn't be having the issues they currently have supplying GT200 GPUs.

Kyle, what is your problem? I laughed at drunkenmaster contradicting himself in his own post, and then calling someone else naive, only to go on and categorically state that...

Fermi simply cannot and will not succeed in the future.

Now understand: he didn't say that he thinks Fermi will fail, or that he's hoping it will fail. He's stating that it will not succeed.
 
Unfortunately they've lost interest in the gaming market... Intel should really buy them up, instead of half-baked attempts with Larrabee and the like, which are doomed to failure due to Intel's mentality - and give them the financial backing and research resources to make a proper gaming GPU again.

Problem is Nvidia's mentality of late hasn't been much better - although Intel's experience with fabrication would probably help Nvidia with their ambitious monolithic cores.
 
Problem is Nvidia's mentality of late hasn't been much better - although Intel's experience with fabrication would probably help Nvidia with their ambitious monolithic cores.

nVidia's mentality of late reminds me of Hitler's last hours in the bunker... Intel just don't "get" the gaming market... no matter how much money they throw at it, they will not succeed unless they are prepared to buy up a company that does - like Matrox or nVidia - and let them have a reasonable amount of say, backed up by Intel's experience.

I wouldn't even put it past Intel to weaken nVidia by proxy and then swoop in when they are down.
 