ATI or NVIDIA? Which is Better?

He is ignoring the obvious flaw with the size of the core.

I'm not ignoring them - I just don't think they are very relevant to the original point. By and large Fermi does the job so I'm not really too concerned about the flaws.

When the graphics functions in games are dictated by Microsoft's DirectX, it's pretty much the whole picture, especially now that we are hitting hard TDP walls.

Sure, and for all its problems the 400 series still outperforms the ATI 5 series at those graphics functions for the most part.... tessellation anyone?

What hard TDP wall?
 
Not with the same design they wouldn't.

ATI are able to build a better 40nm core than Nvidia. If ATI built a Fermi core it would be better than Nvidia's version. If they increased the size of their "last generation Evergreen architecture" to the size of Nvidia's current generation (your own words) they would be twice as fast even running a generation behind.


ATI are better/end.
 
I'm not ignoring them - I just don't think they are very relevant to the original point. By and large Fermi does the job so I'm not really too concerned about the flaws.

Then only reply to the original point & not to others that are not about the original point, because otherwise your point becomes irrelevant.
 
ATI are able to build a better 40nm core than Nvidia. If ATI built a Fermi core it would be better than Nvidia's version. If they increased the size of their "last generation Evergreen architecture" to Nvidia's next generation (your own words) they would be twice as fast even running a generation behind.


ATI are better/end.

I can't really agree or disagree on that point unless they actually went out and built one; they are certainly capable of doing so.

Scaling the current Evergreen architecture directly up to the die size of Fermi would not double the actual performance, nor would the TDP be that amazing.

As far as performance goes, great - but that's beside my original point, which is that Fermi is not a generation behind Evergreen as a previous poster suggested. Fermi is the same generation DX-feature-wise and a generation ahead in technical features, even if not in manufacturing process.
 
If ATI built a core the size of Fermi they would be twice as fast and have a lower TDP.

ATI are better/end

Not with the same design they wouldn't.



Rroff is right in what he said; what he didn't go on to say was that while double the performance would be highly unlikely, it would be highly likely that AMD's performance would be greater than Nvidia's by a large margin.

ATI are able to build a better 40nm core than Nvidia. If ATI built a Fermi core it would be better than Nvidia's version. If they increased the size of their "last generation Evergreen architecture" to Nvidia's next generation (your own words) they would be twice as fast even running a generation behind.


ATI are better/end.

Sorry, but it wouldn't be twice as fast, considering that GF100 is only 50% larger than Cypress, so the maths don't add up - and that's ignoring the other issues that come with trying to make a die that big. It would still be faster, though.
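
For anyone wanting to check the "50% larger" point, here is a rough back-of-the-envelope sketch. The die areas are approximate published figures (roughly 334 mm² for Cypress and 529 mm² for GF100), and linear scaling of performance with area is a deliberately generous assumption:

# Rough illustration of the die-size argument above. Die areas are
# approximate public figures; "performance scales linearly with area"
# is a best case -- real chips lose efficiency as they grow.

cypress_mm2 = 334.0   # ATI Cypress (HD 5870), approx.
gf100_mm2 = 529.0     # Nvidia GF100 (GTX 480), approx.

area_ratio = gf100_mm2 / cypress_mm2
print(f"GF100 is about {area_ratio:.2f}x the size of Cypress "
      f"({(area_ratio - 1) * 100:.0f}% larger)")

# Even if performance scaled perfectly with area (it doesn't), a
# Fermi-sized Cypress would top out around 1.6x, not 2x:
print(f"Best-case speedup for a scaled-up Cypress: ~{area_ratio:.1f}x")

Even under that generous assumption the scaled-up part tops out around 1.6x, which is the gap between "50% larger" and "twice as fast".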
 
If AMD went out with the intention of designing a GPU core the die size of Fermi, I do believe they would most likely produce a faster and cooler running core. That's completely beside my original point, but no one seems to care about that.
 
I can't really agree or disagree on that point unless they actually went out and built one; they are certainly capable of doing so.

Scaling the current Evergreen architecture directly up to the die size of Fermi would not double the actual performance, nor would the TDP be that amazing.

As far as performance goes, great - but that's beside my original point, which is that Fermi is not a generation behind Evergreen as a previous poster suggested. Fermi is the same generation DX-feature-wise and a generation ahead in technical features, even if not in manufacturing process.

Scaling would at least get close, maybe even better, considering ATI would have extra room to work with - no ECC memory support and so on.

The TDP would be better - look at what cutting the fat off the GF104 did. ATI don't have that problem to start with.
 
That gets a bit difficult sometimes :(

It doesn't, & if you need to reply then make it relevant to their point.

I care about facts & I don't care what people think is better ATM. That's why I only posted about CCC not needing to be installed & ignored all the other talk that is going on in this thread, but the other fact that some others have pointed out about the core size is also true & is a good point, thus I posted again.
 
What would be a possible step to avoid the TDP restrictions? (Both makes.) Could they make a smaller die with a wider memory interface? For example the 5970 being two downclocked 5870s. Been wondering how they might next tackle the issue for a dual GPU card?
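
One way to see why the dual-GPU approach works is the usual dynamic-power rule of thumb, P ≈ C·f·V²: dropping clock and voltage a little cuts power disproportionately. A rough illustrative sketch follows - the 5870/5970 clocks and TDP figures are approximate, and the undervolt ratio is an assumption:

# Illustrative only: dynamic power scales roughly as P ~ f * V^2.
# Figures are approximate (HD 5870 ~188W TDP at 850MHz; the HD 5970 runs
# its two GPUs at ~725MHz and a slightly lower voltage, ~294W board TDP).

def scaled_power(base_power_w, f_ratio, v_ratio):
    """Estimate power after scaling clock (f_ratio) and voltage (v_ratio)."""
    return base_power_w * f_ratio * v_ratio ** 2

hd5870_tdp = 188.0            # approx. single-GPU TDP in watts
f_ratio = 725.0 / 850.0       # 5970-style downclock
v_ratio = 0.95                # assumed modest undervolt

per_gpu = scaled_power(hd5870_tdp, f_ratio, v_ratio)
print(f"Estimated per-GPU power after downclock: ~{per_gpu:.0f}W")
print(f"Two such GPUs: ~{2 * per_gpu:.0f}W, versus ~{2 * hd5870_tdp:.0f}W "
      f"for two full-speed 5870s")

On those rough numbers, two downclocked chips land close to the 5970's actual board TDP, while two full-speed chips would blow well past it.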
 
I'm not ignoring them - I just don't think they are very relevant to the original point. By and large Fermi does the job so I'm not really too concerned about the flaws.

Not concerned about its flaws... Why not? You sound awfully impartial here, and it also sounds to me like you've got your head in the sand with regards to any notion that Fermi isn't the greatest thing ever.


Sure, and for all its problems the 400 series still outperforms the ATI 5 series at those graphics functions for the most part.... tessellation anyone?

Again you are avoiding the issue with regards to die space, and also the fact that Nvidia's implementation isn't as impressive in games as it is in Unigine.
The truth is that AMD's hardware-based method took up very little die space, which gives AMD the architectural flexibility to greatly increase this function to out-perform Nvidia's implementation without costing as much die space as Nvidia's does.
It all comes down to hardware being more efficient than software; you see, the Fermi architecture will always have an inherent disadvantage outside of the compute arena, hence why it's not suitable as a gaming architecture...

What hard TDP wall?

Are you suggesting that cards can just keep getting hotter and hotter?
There comes a point when they just become a fire hazard, or the power supply in your home can't cope with it.

 
The problem with Fermi is it's not a pure gaming card. I think that is primarily where the problem in the architecture is. Nvidia know their market is going to shrink and are trying to branch out into other areas. If this was not the case I don't think the Fermi architecture would be as it is now.

My point is all AMD really need to concentrate on is bringing out a good gaming card with features that suit gamers, but on the other hand Nvidia have to have the all-round package that does just about every job a GPU can do to keep them in business in the long term. AMD have this side covered with their new CPUs.

All this combined has led Nvidia to a hot, power-hungry card that costs more to produce.
 
What would be a possible step to avoid the TDP restrictions? (Both makes.) Could they make a smaller die with a wider memory interface? For example the 5970 being two downclocked 5870s. Been wondering how they might next tackle the issue for a dual GPU card?

Interesting point - the best method is to use the most efficient architecture for the purpose at hand.

Gaming = fixed-function hardware
GPGPU = programmable/software-based hardware

The solution would be to develop two different architectures for two different purposes. The problem, however, is that it costs huge sums of money and resources to develop new architectures, and the GPGPU market is too small to sustain these additional development costs.
 
The truth is that AMD's hardware-based method took up very little die space, which gives AMD the architectural flexibility to greatly increase this function to out-perform Nvidia's implementation without costing as much die space as Nvidia's does.
It all comes down to hardware being more efficient than software; you see, the Fermi architecture will always have an inherent disadvantage outside of the compute arena, hence why it's not suitable as a gaming architecture...

I think you're misunderstanding the Fermi architecture somewhat - the PolyMorph engines are a far more efficient implementation in that regard. Also, Fermi is not as software-oriented as you're implying - very little is "emulated" via compute hardware rather than implemented as a hardware function.

Are you suggesting that cards can just keep getting hotter and hotter?
There comes a point when they just become a fire hazard, or the power supply in your home can't cope with it.

Contrary to the info commonly bandied around about the PCI-e spec, there are no actual hard limits specified (last time I checked) for the overall device TDP - there is a thermal and electrical advisory for desktop systems, which is the minimum requirement for ratification, but it imposes no hard limits other than how much power you can draw from the individual PCI-e spec power connections. It doesn't limit how many connections or how much heat you can pump out, as long as you can show you've taken the guidelines into account. Obviously most developers will try to stick within the limits of what current PSUs/desktop chassis on the market can handle, as that makes life easier for everyone.
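
For reference, a quick sketch of the per-connector limits being referred to - the commonly quoted figures are 75W for the x16 slot, 75W per 6-pin and 150W per 8-pin connector, and the spec constrains each of those individually rather than the card's total TDP:

# Commonly quoted PCI-e power delivery limits (watts):
#   x16 slot: 75W, 6-pin connector: 75W, 8-pin connector: 150W
# Each source is constrained individually; the card's total TDP is not.

LIMITS = {"slot": 75, "6-pin": 75, "8-pin": 150}

def board_power_budget(connectors):
    """Slot power plus whatever auxiliary connectors the card carries."""
    return LIMITS["slot"] + sum(LIMITS[c] for c in connectors)

# e.g. a GTX 480-style card with one 6-pin and one 8-pin connector:
print(board_power_budget(["6-pin", "8-pin"]))   # 300W ceiling
# a dual-GPU card with two 8-pin connectors:
print(board_power_budget(["8-pin", "8-pin"]))   # 375W ceiling

So a GTX 480-style board with one 6-pin and one 8-pin connector has a nominal 300W ceiling, and adding connectors raises that ceiling rather than running into a single hard TDP number.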
 
I think you're misunderstanding the Fermi architecture somewhat - the PolyMorph engines are a far more efficient implementation in that regard. Also, Fermi is not as software-oriented as you're implying - very little is "emulated" via compute hardware rather than implemented as a hardware function.

That is an even scarier prospect, then...

Contrary to the info commonly bandied around about the PCI-e spec, there are no actual hard limits specified (last time I checked) for the overall device TDP - there is a thermal and electrical advisory for desktop systems, which is the minimum requirement for ratification, but it imposes no hard limits other than how much power you can draw from the individual PCI-e spec power connections. It doesn't limit how many connections or how much heat you can pump out, as long as you can show you've taken the guidelines into account. Obviously most developers will try to stick within the limits of what current PSUs/desktop chassis on the market can handle, as that makes life easier for everyone.

I didn't have the PCI-e spec in mind, rather the hard limits imposed by physics (not PhysX), economics and safety.
Do you really propose graphics cards can get much hotter than the GTX 480 and still remain safe and an economically viable product?
I mean, you understand that most of the market doesn't want hot fire-breathing dragons as GPUs, right?
Hence why Nvidia's GF100 market share is almost non-existent despite being better priced than its competition for the performance it offers - there was a very simple reason for that...
 
If they can effectively exhaust the heat from the case it's not an issue... granted, no one really wants that, but that's a whole different story.

The reason why nVidia's GF100 market share is relatively small is due for the most part to being late to market, when many people had already bought 5 series GPUs, and not being cost effective to the consumer in comparison to the 5 series. With the 460 cards things have kicked off because they offer competitive price/performance; the number of people put off by the heat and noise is fairly small - not insignificantly small, but not a huge proportion either.
 
If they can effectively exhaust the heat from the case it's not an issue... granted, no one really wants that, but that's a whole different story.

Yeh, like it's not economically viable, like I described earlier.
Did you even watch the video I posted, Rroff?
The guy had quad SLI 480s and had to get an electrician to re-wire his computer room with new breakers to prevent tripping the electrics...

By the below chart folks would have to call in the electrician if they wanted to run SLI GTX 480 512SP cards? Is this not a HARD TDP LIMIT, Rroff?

"As we had expected, the stand-by power consumption of 512SP GTX 480 was 17W higher.
Under full load the GPU voltage of reference GTX 480 was 1.0V, while 512SP edition was 1.056V. Surprisingly, the full spec’ed GTX 480 sucked 644W power, which was 204W higher than 480SP GTX 480!"

[Power consumption chart from the expreview review linked below]


http://en.expreview.com/2010/08/09/world-exclusive-review-512sp-geforce-gtx-480/9070.html/6
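
Just to put those quoted figures in context, here is the arithmetic they imply (system power under load, derived purely from the numbers quoted above, not an independent measurement):

# Arithmetic on the expreview figures quoted above (total system power
# under load, not card-only power).
full_512sp_load_w = 644          # full-spec 512SP GTX 480 system draw
delta_w = 204                    # quoted difference vs. the standard card
standard_load_w = full_512sp_load_w - delta_w

print(f"Implied standard GTX 480 system draw: {standard_load_w}W")   # 440W
print(f"Increase for the 512SP card: {delta_w}W "
      f"({delta_w / standard_load_w * 100:.0f}%)")                   # ~46%

# Two such cards in SLI would add roughly another card's worth of load on
# top of this, which is the basis of the breaker/PSU concern above.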

The reason why nVidia's GF100 market share is relatively small is due for the most part to being late to market, when many people had already bought 5 series GPUs, and not being cost effective to the consumer in comparison to the 5 series. With the 460 cards things have kicked off because they offer competitive price/performance; the number of people put off by the heat and noise is fairly small - not insignificantly small, but not a huge proportion either.


That's not true, Rroff. GF100, particularly the GTX 470, offer(ed) A LOT of performance for its money, yet its market share is terrible; even the 480 offers great value, but its DX11 market share % is decreasing.
Even with the delay its market share should be much better than it is.
While the 460 market share did look positive when it first released, it has slowed down considerably looking at Steam - maybe it's a blip, who knows?
 