Why do people buy Nvidia cards?

I'm not an AMD fanboy or anything like that, though I do prefer them due to the price. But it's always bugged me why on earth anyone would buy Nvidia cards when they cost loads more and perform less. I understand paying more for an Intel CPU because the performance increase is better than its AMD counterpart.

But a GTX 580 3GB for £480? When a 7950 can outperform it for like £130 less. I've noticed this through the years... Why do people buy Nvidia cards?
 
In the 7900 series thread, Gibbo specifically said that while the new 79x0 cards were selling reasonably well, the top-end Nvidia cards weren't selling at all.

It's always been about price/performance since 3D accelerators started to become mainstream. When Nvidia brings out competitive products, we'll see a shift in pricing again. Simples!
 
I frequently alternate between AMD/ATI and Nvidia cards. I simply buy whichever card gives me the best performance in my price range. Last time round it was the GTX570.
 
In my experience so far I have found they perform better, particularly when using anti-aliasing, though I do tend to notice that they are somewhat less robust than AMD hardware (AMD drivers on the whole have been quite good for me). I bought my overclocked GTX 480 when it was around fifty percent off; I simply couldn't say no to what had been around the £400-£450 mark only a few months beforehand. I'd definitely purchase an AMD card again, though I see no reason or incentive to change for the time being.
 
I think the GTX 580 3GB is a bit of a special case, since it is an extremely low-volume, non-standard configuration card aimed at people running games at extremely high resolutions. Hence it commands a price premium. That isn't to say that this price is justified or better than the AMD alternative, but it is a special case.

If you look at the other, more popular cards like the GTX 570, GTX 560 Ti and GTX 460 1GB, each of these is currently price- and performance-competitive with the AMD options (HD 6970, HD 6950 1GB/2GB and HD 6850 respectively), and there aren't currently any HD 7000 series cards which compete directly with them (I don't count the HD 7770). So it is understandable why people will go for these cards, especially when you factor in that some games tend to perform better on AMD cards, while others perform better on Nvidia cards. Also, prices are very fluid, so sometimes the Nvidia card is the better deal and sometimes it's the other way around.
 
I choose ATI for raw double precision computational grunt... Nvidia has never been close.

But for gaming it seems that the games I like are more suited to Nvidia cards, so I too alternate.

So in my instance ATI best for maths, Nvidia for games.
 
At this second in time I may be on the Nvidia bandwagon, but my first PC had an ATI Rage™ PRO with a beastly 8MB of RAM. It's difficult to contend with nostalgia like that.
 
The HD7970 is only going to hit that throughput though if you fill it totally with perfect, GCN-optimised code.

The nVidia card will hit 1.5 TFLOPS far easier. You really can't compare them like that.
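
For rough context (my arithmetic, not figures from the thread, and assuming I have the shader counts and clocks right): paper peak is roughly shader count × clock × 2 FLOPs per cycle for a fused multiply-add.

HD 7970: 2048 × 0.925 GHz × 2 ≈ 3.8 TFLOPS single precision, with double precision at a quarter rate, so roughly 0.95 TFLOPS.
GTX 580: 512 × 1.544 GHz × 2 ≈ 1.58 TFLOPS single precision, with double precision capped to about an eighth of that on GeForce parts, so roughly 0.2 TFLOPS.

Those are paper peaks only; as said above, real code on either card lands well short of them.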
 
I didn't think the GTX 580 was even considered a real counterpart to the 7970 (despite the fact that the 580 is horrendously priced).
 
It depends on the price point. I think the 570/560 Ti are at a good price point, although at the lower end AMD seems to dominate. At the top end it gets a bit fuzzy... I think the new AMD prices for the 7000 series are a bit high compared to their past offerings. Then again, the 580 is also expensive for what it's worth... hmm...
 

Do you actually use GPUs for compute? Do you work on algorithms with high computational complexity?

Because if you do, then I'm surprised at your claim. ATI's two-year-old GPU may have impressive figures on the spec sheet, but on any real-world algorithm it will reach only a small fraction of that performance. The reason is the difficulty an optimizing compiler has in parallelizing any real-world algorithm effectively on it. VLIW may sound impressive on paper, but it is a poor architecture for compute. It's an area NVIDIA GPUs are so much better at that it prompted AMD to come out with GCN to improve its real-world performance.

And to the OP, you do realize you're comparing the 7950/7970 to old GPUs, don't you? A year or two ago, while the GTX 580s commanded a premium, they were priced relative to the second-tier 570/6970 etc. It's just old stock that has not dropped in price because it is EOL. It certainly makes no sense to buy a GTX 580 today when the 7950 is cheaper, but you need to understand things in context. The 580s are old stock, and while they should've dropped in price, perhaps NVIDIA isn't really bothered about the few that sellers still have left.
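
To make the VLIW point above a bit more concrete, here's a minimal, purely illustrative C sketch (mine, not anything from this thread): a VLIW4/VLIW5 shader only stays busy if the compiler can pack several independent operations into each bundle, so a serial dependency chain leaves most slots empty while independent work packs easily.

```c
/* Purely illustrative sketch (not from this thread): the instruction-level
 * parallelism problem that makes VLIW hard to fill from real-world code. */
#include <stdio.h>

/* Every step needs the previous result: no independent work to pack together. */
static double dependent_chain(const double *x, int n) {
    double acc = 0.0;
    for (int i = 0; i < n; ++i)
        acc = acc * 1.0000001 + x[i];   /* serial dependency on acc */
    return acc;
}

/* Four independent accumulators: these operations can be issued side by side. */
static double independent_sums(const double *x, int n) {
    double a = 0.0, b = 0.0, c = 0.0, d = 0.0;
    for (int i = 0; i + 3 < n; i += 4) {
        a += x[i];     b += x[i + 1];
        c += x[i + 2]; d += x[i + 3];
    }
    return a + b + c + d;
}

int main(void) {
    double x[1024];
    for (int i = 0; i < 1024; ++i)
        x[i] = i * 0.001;
    printf("%f %f\n", dependent_chain(x, 1024), independent_sums(x, 1024));
    return 0;
}
```

Real kernels sit somewhere between those two extremes, which is roughly why the paper peak is so hard to reach on a VLIW part without a lot of hand-tuning.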
 
Vastly more PC gamers buy mid-range cards than 580s, and AMD has yet to release anything in the gamer sweet spot. The 7950 is too expensive for many, the 7770 is rubbish; the 7800 series will be a welcome addition imo.

As for pricing, the 580 in particular: I'd guess they produced far more 560/570 GPUs than 580s to meet demand for those parts. If they drop the 580 price because AMD outperforms it, they could cut into potential 570 sales and would have to lower that price too, cutting into the very successful 560 range.

If the 7850/70s perform really well, I'd guess that all Nvidia 500s will drop in price tbh.

/guesswork.

As for your original question: before the 7970 was released, AMD was not cheaper than the 580 for the same performance; it was cheaper for worse performance.

590/6990s aside... but I don't care about them, they grate on me.
 
Do you actually use GPUs for compute? Do you work on algorithms with high computational complexity?

Because if you do, then I'm surprised at your claim. ATI's two-year-old GPU may have impressive figures on the spec sheet, but on any real-world algorithm it will reach only a small fraction of that performance. The reason is the difficulty an optimizing compiler has in parallelizing any real-world algorithm effectively on it. VLIW may sound impressive on paper, but it is a poor architecture for compute. It's an area NVIDIA GPUs are so much better at that it prompted AMD to come out with GCN to improve its real-world performance.

Perfect, you're completely correct which is why complex algos are pieced together externally whilst the modules themselves are computed 'efficiently' on the GPU. GCN is improving but is still not efficient enough to take the complete payload.

Currently the only benefit is to the developer and not to the overall performance of the cluster. Maybe in a year or two it'll improve, but for now it's a complete waste of money to move our architecture over to it.

PS: What projects, if any, are you involved in?
 
My research is in the application of algebraic topology - cohomology groups in particular - to self-assembling structures. I'm not using GPUs for compute at this precise moment as I'm working on mathematical proofs, but I should be simulating some results later this year. And last year most of my use of GPUs was to accelerate the mapping of topological forms.
 

You are comparing two different generations, mind... also, two-year-old ATI cards can NOT do better than that, though the 7970 probably will with the updated architecture. Except in some highly specific applications, you will struggle to get even 60% of the peak theoretical GFLOPS on a 4870, 5870, etc.

Back to the OP - I've always found nVidia to have the more robust drivers of the two, with more timely updates for the games I play and a better focus on what's actually relevant to gaming. I've always managed to get very good value for money for performance too - I've not paid more than £160 for any GPU in the last few years and always got very close to bleeding-edge performance.

My reasons for not buying AMD/ATI are a bit more deep-rooted than some, though, going back to the days when I was involved in video game development and comparing the different approaches to support from the two companies.
 
Damn you, that sounds very interesting, but I left my interest in string theory back in uni - having a read now :D

So how did you attack it, if you don't mind me asking? Is there a paper I can read somewhere?

On last year's work? I'm afraid not. I've not had the time to finish my paper on it. It's a little overdue, but what I'm working on now is a different approach altogether, so I've been dragging my feet on getting back to that topic. At any rate, there won't be anything about GPU algorithms in it. I needed to use them to compute some Iterated Function Systems very rapidly and do Lebesgue integration on them.
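
For anyone wondering what "computing an Iterated Function System" actually involves, here's a tiny, purely illustrative C sketch of the usual chaos-game approach (my toy example, not the poster's GPU code): repeatedly pick one of a small set of contraction maps at random and apply it, and the generated points settle onto the attractor. The three maps below produce the Sierpinski triangle.

```c
/* Toy chaos-game IFS (illustrative only, not the poster's code):
 * three affine contractions whose attractor is the Sierpinski triangle. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Vertices towards which each contraction map pulls the current point. */
    const double vx[3] = {0.0, 1.0, 0.5};
    const double vy[3] = {0.0, 0.0, 1.0};
    double x = 0.0, y = 0.0;

    for (int i = 0; i < 100000; ++i) {
        int k = rand() % 3;        /* pick one of the three maps at random     */
        x = 0.5 * (x + vx[k]);     /* each map halves the distance to a vertex */
        y = 0.5 * (y + vy[k]);
        if (i > 20 && i % 10000 == 0)
            printf("%.4f %.4f\n", x, y);   /* sample points after burn-in */
    }
    return 0;
}
```

On a GPU you would presumably run vast numbers of these iterations in parallel, which is where the speed-up for that kind of workload comes from.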
 