• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

ATI or NVIDIA? Which is Better?

Could not agree more; pathetic that people feel the need to spam.

From what I have seen/read, I don't see the new ATI being faster than the 470, never mind the 480.

I have used both brands and will continue to do so; I normally go for the best bang-for-buck card, but one thing that has not changed over the years is that Nvidia drivers are still quite a bit better than ATI drivers.

If you need a card for now, I would recommend buying the 1GB 460, and when the new ones come out, if you fancy them, you can always sell the 460 on, as they tend to hold their value quite well.

The cards you speak of are the mid-to-high end cards. The top variant of this is faster than a 5850 if the rumours are true, so probably on par with a GTX470. The real high end is meant to be coming later and should be a good bit faster than a GTX480; a 30-40% advantage over the GTX480 has been touted around for the 6970. Then there is the rumoured 6990 dual card, which should be somewhere in the region of double the performance of a GTX480 if scaling is good.
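Just to put those rumoured percentages into rough numbers, here's a quick sketch - the GTX480 baseline fps and the CrossFire scaling factor are made up purely for illustration:

```python
# Back-of-envelope for the rumoured figures above.
# The GTX480 baseline fps is a made-up placeholder, not a benchmark.
gtx480_fps = 60.0                      # hypothetical baseline

hd6970_fps = gtx480_fps * 1.35         # rumoured ~30-40% faster, taking the midpoint
hd6990_fps = 2 * hd6970_fps * 0.75     # two 6970-class chips at an assumed ~75% scaling

print(f"GTX480 baseline:           ~{gtx480_fps:.0f} fps")
print(f"6970 (rumoured +30-40%):   ~{hd6970_fps:.0f} fps")
print(f"6990 (if scaling is good): ~{hd6990_fps:.0f} fps "
      f"(~{hd6990_fps / gtx480_fps:.1f}x a GTX480)")
```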

Until performance figures are out we really know very little, but my money would be on AMD having a card faster than NV at every price point; otherwise there was really no need for a refresh in the first place, as AMD sales were still going well.
 
Sympathies to the op for having to trawl through this sorry troll-fest of a thread.

I agree with the op, he/she should buy a card right away. By the time the 6000 series arrives, Nvidia will announce a new refresh and the waiting game will start all over again. This wait-for-the-next-gen merry-go-round has been going on for a decade. Gaming is about gaming today, not in 3 months' time. You can replace your hardware any time you like.

Don't worry too much about the architecture under the hood. It's not important if you're more of a gamer than a technophile. All of the latest cards support the latest APIs, so happy days :)

There are minor pros and cons for going with either Nvidia or ATI, but for the most part, the overwhelming concern is to get the best speed for the ££ you're willing to spend. Anyone looking at this thread can see that Nvidia vs ATI is a moot point. They both offer the best solutions at different price points - the trick is to choose the right one at yours.
In this case, given that the op already has a GTX295, which is no slouch, I'd recommend one of the following:

Single GTX480
Single 5970
Xfire 5870 / SLI GTX470 (not sure which is quickest here)
SLI GTX480 :)
Xfire 5970 :) (pretty pricey / possible power supply issues)

All would be considered enthusiast solutions. I've left out the single 5870 as I don't think that would offer much over your GTX295.

In terms of dual card options, Nvidia's Sli scales a little better, but on the flip side, ATI cards tend to run a bit cooler, use less power, and are generally quieter.

Drivers are another moot point, especially in single card scenarios, but Nvidia tend to release updates a little faster than ATI do. There may be the odd issue if you play the very latest games on release date. However there's no guarantee that Nvidia will always be the first across the line with a fix.

Other features like PhysX, Eyefinity etc. are all a matter of personal preference and utility.

imo, raw speed/£ is the clincher.
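Something like this back-of-envelope is all I mean by speed/£ - a rough sketch where the prices and fps are placeholders, not real benchmark results; plug in figures from reviews you trust:

```python
# Toy "speed per £" comparison - all numbers below are placeholders.
cards = {
    "GTX 480":        {"price_gbp": 420, "avg_fps": 100},
    "HD 5970":        {"price_gbp": 500, "avg_fps": 115},
    "5870 CrossFire": {"price_gbp": 640, "avg_fps": 150},
}

# Rank by fps per pound spent, best value first.
ranked = sorted(cards.items(),
                key=lambda kv: kv[1]["avg_fps"] / kv[1]["price_gbp"],
                reverse=True)
for name, c in ranked:
    print(f"{name}: {c['avg_fps'] / c['price_gbp']:.3f} fps per £")
```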
 
Drivers are another moot point, especially in single card scenarios, but Nvidia tend to release updates a little faster than ATI do. There may be the odd issue if you play the very latest games on release date. However there's no guarantee that Nvidia will always be the first across the line with a fix.

I think you mean to say they are more timely releasing updates for new releases, because ATI generally pump out new drivers every month, give or take... it generally still takes them longer than nVidia to actually support new games, etc. though.
 
Performance can make or break the relevance of a design - it's no good having a design that's technically 10 years ahead of the current one if it only performs at 1/25th of the current performance level, but if there's a smaller variation in performance it doesn't really have much impact on the relevance of a design.

Compute is just one aspect.

But it's not 10 years ahead, so your point is irrelevant to current reality, & its gaming performance is the most important point in this forum.
 
Where did I say it was 10 years ahead? I was illustrating the impact of performance on relevance of a design generation.
 
Where did I say it was 10 years ahead? I was illustrating the impact of performance on relevance of a design generation.

The point is that its gaming performance is inefficient for its size, no matter how far ahead you think it is in design.

Being ahead in theory but worse in performance in reality is worse than being behind in design but with better performance for its size.
Newer design does not always mean better design.
 
^^^
The truth of the matter is that Fermi is no further ahead in terms of architecture advancement; they simply decided to take a different path to accommodate GPGPU at the expense of gaming performance. It is, in a sense, a 'jack of all trades and master of none'.
Intel will surpass them in the GPGPU market with a highly programmable Larrabee-based implementation, and AMD will be (is) more efficient at gaming with its fixed-function-based hardware strategy.
 
^^^
The truth of the matter is that Fermi is no further ahead in terms of architecture advancement; they simply decided to take a different path to accommodate GPGPU at the expense of gaming performance. It is, in a sense, a 'jack of all trades and master of none', because Intel will surpass them in the GPGPU market with a highly programmable Larrabee-based implementation.

I can't believe you actually said that :( I hope you don't actually believe it too.

From the way you keep banging on about GPGPU, I think you don't really understand the Fermi architecture and how it is different. We can take the whole compute issue entirely off the table anyhow, and my comments still stand about the technical aspects of the core.
 
One word regarding the Fermi architecture: "tessellation", the main DX11 feature. In this area AMD currently gets owned big time; Nvidia have designed a core that rocks in DX11 ("DX11 done right") and it will only get a lot better with Kepler. As for gaming performance, I went from a 5870 to a 480 and the 480 is much faster and smoother, more so when overclocked.
 
One word regarding the Fermi architecture: "tessellation", the main DX11 feature. In this area AMD currently gets owned big time; Nvidia have designed a core that rocks in DX11 ("DX11 done right") and it will only get a lot better with Kepler. As for gaming performance, I went from a 5870 to a 480 and the 480 is much faster and smoother, more so when overclocked.

Well it should do, seeing as the chip is twice as big ;)
 
The truth is Fermi is at least a step ahead of Evergreen in terms of architecture advancement and the current trend for GPUs. I'm not sure what this "other path" thing is, as you seem to be implying things like tessellation, setup and transformation are processed on the "CUDA" cores. Granted, the core has been redesigned somewhat over traditional designs for better compute performance, with some trade-off against potential gaming performance due to the scheduling for the different types of processing, but it still does pretty well in regards to gaming performance too, managing to stay ahead of Evergreen despite the extra compute stuff.
 
The truth is Fermi is at least a step ahead of Evergreen in terms of architecture advancement and the current trend for GPUs. I'm not sure what this "other path" thing is, as you seem to be implying things like tessellation, setup and transformation are processed on the "CUDA" cores. Granted, the core has been redesigned somewhat over traditional designs for better compute performance, with some trade-off against potential gaming performance due to the scheduling for the different types of processing, but it still does pretty well in regards to gaming performance too, managing to stay ahead of Evergreen despite the extra compute stuff.

Now try comparing apples to apples, like if ATI doubled up on everything in the core to fit the same footprint as Fermi.
 
Evergreen scaled to the footprint of Fermi would still be basically the 4000 series with DX11 tacked on, and we can only guess at the performance - for all we know there could be issues that mean it performs no better than the current Evergreen parts.

Fermi, architecture-wise, is another step along the current trend of GPU advancement - like it or not, the SM and uncore have been redesigned with forward thinking on Fermi, compared to Evergreen, which is basically hacked up to get DX11 support onto an already established architecture - nothing wrong with that, it works and it's cost-effective - but it's still technically a generation behind.


This architectural difference is the origin of the "DX11 done properly" thing.
 
The truth is Fermi is at least a step ahead of Evergreen in terms of architecture advancement and the current trend for GPUs. I'm not sure what this "other path" thing is, as you seem to be implying things like tessellation, setup and transformation are processed on the "CUDA" cores. Granted, the core has been redesigned somewhat over traditional designs for better compute performance, with some trade-off against potential gaming performance due to the scheduling for the different types of processing, but it still does pretty well in regards to gaming performance too, managing to stay ahead of Evergreen despite the extra compute stuff.

You keep spreading FUD. As an architecture it doesn't stay ahead of Evergreen at all; just look at GF104: the die is larger than Cypress and it had all the unnecessary cache etc. removed, yet it is slower than Cypress.
Same with Juniper and GF106 (which is a lot bigger than Juniper), and it will no doubt be the same with Cedar and GF108; the bottom line is the architecture is flawed when it comes to gaming.
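For anyone who wants to put numbers on the perf-per-die-area argument, here's a rough sketch - both the die sizes and the relative performance figures are assumptions, not measured results, so swap in real numbers before drawing any conclusions:

```python
# Toy "performance per die area" comparison for the argument above.
# All values are assumed placeholders, not measurements.
chips = {
    "GF104 (GTX 460)":   {"die_mm2": 367, "rel_perf": 0.90},  # assumed values
    "Cypress (HD 5870)": {"die_mm2": 334, "rel_perf": 1.00},  # baseline = 1.0
}

for name, c in chips.items():
    per_area = c["rel_perf"] / c["die_mm2"]
    print(f"{name}: {per_area * 100:.2f} relative perf per 100 mm^2")
```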

And that isn't even the worst of it. Fermi now has to do battle with an even faster and more efficient AMD NI-based architecture that will most probably push Nvidia's cards into negative margins. That's not a huge issue for GF100, but small losses per unit on HIGH VOLUME chips could have catastrophic effects on Nvidia's cash reserves.
If this happens then I think we will see a repeat of last year, i.e. GFXXX EOL'd with the few that remain priced out of the market (like GT200) to make it look like Nvidia is still selling product as normal.

Does that sound like Fermi is a good architecture?
 
Evergreen scaled to the footprint of Fermi would still be basically the 4000 series with DX11 tacked on, and we can only guess at the performance - for all we know there could be issues that mean it performs no better than the current Evergreen parts.

Fermi, architecture-wise, is another step along the current trend of GPU advancement - like it or not, the SM and uncore have been redesigned with forward thinking on Fermi, compared to Evergreen, which is basically hacked up to get DX11 support onto an already established architecture - nothing wrong with that, it works and it's cost-effective - but it's still technically a generation behind.

And that point, that it's a generation ahead, is irrelevant to its performance in the case of Fermi, because it uses a hell of a lot more space & resources for very little performance gain over the ATI architecture.

All that effort for little gain is a waste = a brute-force, bloated approach versus ATI's finesse.
 
You keep spreading FUD. As an architecture it doesn't stay ahead of Evergreen at all; just look at GF104: the die is larger than Cypress and it had all the unnecessary cache etc. removed, yet it is slower than Cypress.
Same with Juniper and GF106 (which is a lot bigger than Juniper), and it will no doubt be the same with Cedar and GF108; the bottom line is the architecture is flawed when it comes to gaming.

And that isn't even the worst of it. Fermi now has to do battle with an even faster and more efficient AMD NI-based architecture that will most probably push Nvidia's cards into negative margins. That's not a huge issue for GF100, but small losses per unit on HIGH VOLUME chips could have catastrophic effects on Nvidia's cash reserves.
If this happens then I think we will see a repeat of last year, i.e. GFXXX EOL'd with the few that remain priced out of the market (like GT200) to make it look like Nvidia is still selling product as normal.

Does that sound like Fermi is a good architecture?

Who said anything about a good or bad architecture?

The GTX460 cards do pretty well against the 5850, which on paper should be quite a lot ahead, so I think my point stands fairly well performance-wise.

And that point, that it's a generation ahead, is irrelevant to its performance in the case of Fermi, because it uses a hell of a lot more space & resources for very little performance gain over the ATI architecture.

All that effort for little gain is a waste = a brute-force, bloated approach versus ATI's finesse.

Maybe, maybe not, but what does that have to do with the technical capabilities of the architecture?
 
Evergreen scaled to the footprint of Fermi would still be basically the 4000 series with DX11 tacked on, and we can only guess at the performance - for all we know there could be issues that mean it performs no better than the current Evergreen parts.

Just admit that, like with the rest of the entire range, Evergreen would be faster.

like it or not, the SM and uncore have been redesigned with forward thinking on Fermi, compared to Evergreen, which is basically hacked up to get DX11 support onto an already established architecture.

Below are the facts with regards to gaming...

Evergreen = Evolution
Fermi = Devolution
 