ATI or NVIDIA? Which is Better?

ATI cards provide equivalent gaming ability (a little above or below, depending on the specific games and cards) for a smaller die size, lower power requirements and lower heat output. Isn't that more advanced?
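If you want to put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The die sizes and board power figures are the commonly quoted ones for Cypress (HD 5870), GF100 (GTX 480), Juniper (HD 5770) and GF106 (GTS 450), recalled from memory, so treat them as approximate rather than gospel.

# Rough efficiency comparison using commonly quoted die sizes (mm^2) and
# board power / TDP figures (W). All numbers are approximate, from memory.
cards = {
    "HD 5870 (Cypress)": {"die_mm2": 334, "tdp_w": 188},
    "GTX 480 (GF100)":   {"die_mm2": 529, "tdp_w": 250},
    "HD 5770 (Juniper)": {"die_mm2": 166, "tdp_w": 108},
    "GTS 450 (GF106)":   {"die_mm2": 238, "tdp_w": 106},
}

def compare(small, big):
    """Show how much more silicon and power 'big' needs than 'small'."""
    die_ratio = cards[big]["die_mm2"] / cards[small]["die_mm2"]
    tdp_ratio = cards[big]["tdp_w"] / cards[small]["tdp_w"]
    print(f"{big} vs {small}: {die_ratio:.0%} of the die area, "
          f"{tdp_ratio:.0%} of the board power")

compare("HD 5870 (Cypress)", "GTX 480 (GF100)")   # ~158% die area, ~133% power
compare("HD 5770 (Juniper)", "GTS 450 (GF106)")   # ~143% die area, ~98% power

On those (approximate) numbers, the GTX 480 needs roughly half as much silicon again and a third more power than the HD 5870 for broadly similar gaming performance, which is exactly the trade-off being argued about here.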

Nvidia aren't next-gen compared to ATI - they're the same generation. But ATI chose to concentrate on performance efficiency, and no doubt the technical improvements needed to achieve that are every bit as impressive as the technical improvements in Fermi - they just don't have the same wow factor.
 
If you're talking about the material side of the process/design, then you could say Evergreen was a more accomplished, better thought-out design. If you're talking about technical accomplishments in terms of the way the design actually works at runtime when processing graphics, then Fermi is a generation ahead of Evergreen.
 
OK, sure, it's a generation ahead if that makes you feel better, but that doesn't change the fact that it sucks compared to AMD's architecture and could end up doing a 3dfx to Nvidia.

I think you need to stand back a little and look at the situation from a different perspective, else you'll never be able to see the wood for the trees.
 
I'm withholding judgement in that regard until I see how the design does on a smaller process. It doesn't fit well on 40nm.
 
"Here comes a new challenger!"

In other words, you're waiting for it to be good so you can defend it some more? :D
 
I'm withholding judgement in that regard until I see how the design does on a smaller process. It doesn't fit well on 40nm.

So your 'preliminary' judgement is that it sucks on 40nm because, in your opinion, it's too big for the process, yes?

In that case, can you explain to me why the much smaller GF106 sucks so badly compared to AMD's Juniper, despite having a large die size advantage over it?

In fact, wouldn't it be a better and more likely argument to say that, as it stands, the GFXXX architecture sucks on 40nm, and that it is also likely to suck on 28nm unless some fundamental changes are made to the Fermi architecture, and that if we get more of the same, eventually Nvidia won't be able to keep its head above water...
 
They both have their pros and cons. At this moment in time ATI (sorry, AMD) have their finger on the pulse of public opinion. Both architectures are solid performers and handle gaming equally well, with Nvidia edging ahead - but having come to market six months late, if they weren't edging ahead on performance then those months were wasted. Right now, if you're an enthusiast for specialty applications then follow the brand that shines there (usually Nvidia), but if you're just a Joe Bloggs gamer, then buy the brand you're loyal to, or buy to the limits of your wallet - or, if you're running a lowly-specced PC, buy to the limits of your power supply, as the high-end cards eat juice.

I'm using a 5770 in my SFF, but I lean towards Nvidia-powered laptops. Fanboyism falls under 'buy the brand you're loyal to' - and sure, if that's all you care about, then what the hell does a 10% performance difference matter?
 
So your 'preliminary' judgement is that it sucks on 40nm because, in your opinion, it's too big for the process, yes?

In that case, can you explain to me why the much smaller GF106 sucks so badly compared to AMD's Juniper, despite having a large die size advantage over it?

In fact, wouldn't it be a better and more likely argument to say that, as it stands, the GFXXX architecture sucks on 40nm, and that it is also likely to suck on 28nm unless some fundamental changes are made to the Fermi architecture, and that if we get more of the same, eventually Nvidia won't be able to keep its head above water...

You have to bear in mind that, with the changes Nvidia made due to things like leakage, they are artificially gimping performance on 40nm. In theory those issues won't exist on 28nm, which would automatically boost performance considerably before any other changes were made... in practice they might not be able to, who knows.
 
Ejizz, find me a benchmark (or list of benchmarks) that puts a 20% distance between a stock 5870 and a stock 480 in over 10 current games, and I will concede my statement... If you can't, then stop camping this thread :)

'Equally well' means, from my perspective, that if you're talking about 6-10 FPS when you've already breached 60 at high settings, then you're just fanboying it up in public...

'Which is better' is an eternal argument that won't be settled without a ring and fisticuffs.
 
^^^
No offence, but I honestly don't think you get it. If you have already read my previous comments then I doubt I can help you understand any further either.

Thanks for the contribution anyway :)
 
You have to bear in mind that, with the changes Nvidia made due to things like leakage, they are artificially gimping performance on 40nm.
AMD are using the same leaky process.
In theory those issues won't exist on 28nm, which would automatically boost performance considerably before any other changes were made...


a) Do you expect 28nm to be any less leaky?

b) Do you expect Nvidia to keep the die size the same when competing against 28nm Southern Islands?

In practice they might not be able to, who knows.

Miracles happen...

Also can you please explain to me what you think went wrong with the GF106 core compared to Juniper?
Are you suggesting that Nvidia had to sacrifice performance somehow to make it more power efficient, even on a comparatively small die?
 
AMD are using the same leaky process.


a) Do you expect 28nm to be any less leaky?

b) Do you expect Nvidia to keep the die size the same when competing against 28nm Southern Islands?



Miracles happen...

Also can you please explain to me what you think went wrong with the GF106 core compared to Juniper?
Are you suggesting that Nvidia had to sacrifice performance somehow to make it more power efficient, even on a comparatively small die?

AMD aren't using the same design though; they didn't have to gimp things the same way to get a functional chip - but as the above quoted post shows, you have absolutely no technical understanding of the design, so I won't waste my time trying to explain it.
 
AMD aren't using the same design though; they didn't have to gimp things the same way to get a functional chip

How did they gimp GF106?

- but as the above quoted post shows, you have absolutely no technical understanding of the design, so I won't waste my time trying to explain it.

It seems to me you are using false assertions and reasoning to justify why you are unwilling to engage in 'Socratic debate', for fear of being shown to be wrong.
The one common theme I've noticed is that you are simply unwilling to concede you are wrong on any point, even when it's clearly obvious, and will simply ignore the fact or continue to wriggle...
 
If you're going to invest your time in getting to the bottom of this question, let's start a true dialectic conversation now. We need to answer each other's questions, not rebuff them. Let me start:

1. What architecture do you hold in highest regard?

2. What is its strongest benefit over its opposition?

3. Where does it succeed where its opponent cannot?

A dialectic in its truest form means you concede where concession is due, always, and work together to reach a consensus on a question/problem.

Here are my opinions on the three questions above...

1. I currently think the AMD Cypress architecture is winning on most fronts of performance and power in the GPU market...

2. I believe its strongest benefit is the ability to deliver better FLOPS per watt than its main competitor at this time (see the rough sketch below).

3. I don't think it has succeeded where Nvidia cannot; both companies can dominate if they play their cards right...
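
To put a rough number on point 2, here's a quick Python sketch of theoretical peak single-precision throughput per watt, using the usual shaders x 2 ops per clock (multiply-add) x shader clock formula and the commonly quoted TDPs. These are paper figures recalled from memory, so treat them as approximate, and remember peak FLOPS is a theoretical ceiling, not measured game performance.

# Theoretical peak single-precision FLOPS per watt (figures approximate).
# peak_gflops = shader count * 2 ops per clock (multiply-add) * shader clock in GHz
cards = {
    #                    (shaders, shader clock GHz, TDP W)
    "HD 5870 (Cypress)": (1600, 0.850, 188),
    "GTX 480 (GF100)":   (480,  1.401, 250),
}

for name, (shaders, clock_ghz, tdp_w) in cards.items():
    peak_gflops = shaders * 2 * clock_ghz
    print(f"{name}: ~{peak_gflops:.0f} GFLOPS peak, "
          f"~{peak_gflops / tdp_w:.1f} GFLOPS per watt")

# HD 5870 (Cypress): ~2720 GFLOPS peak, ~14.5 GFLOPS per watt
# GTX 480 (GF100):   ~1345 GFLOPS peak, ~5.4 GFLOPS per watt

Of course, how much of that theoretical peak games actually extract is a different argument, which is partly what this thread is about.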

So the question to answer is: what does the statement 'which IS better' actually mean? And of course, what are your opinions on the specific questions above, and what questions do you have for us (everyone else)?
 
How did they gimp GF106?



It seems to me you are using false assertions and reasoning to justify why you are unwilling to engage in 'Socratic debate', for fear of being shown to be wrong.
The one common theme I've noticed is that you are simply unwilling to concede you are wrong on any point, even when it's clearly obvious, and will simply ignore the fact or continue to wriggle...


If you had any concept of what I'm talking about, you would not have said "AMD are using the same leaky process" - hint: AMD don't have PolyMorph engines, and they have an entirely different texture management setup.

As you're apparently not interested in 'Socratic debate', as you put it (http://forums.overclockers.co.uk/showpost.php?p=17537611&postcount=194), I don't see why you feel you merit an exception.
 
If you had any concept of what I'm talking about, you would not have said "AMD are using the same leaky process".

As you're apparently not interested in 'Socratic debate', as you put it (http://forums.overclockers.co.uk/showpost.php?p=17537611&postcount=194), I don't see why you feel you merit an exception.

Agreed, that was childish, but no one is perfect. Although, if he is going to help the world decide which company is doing a better job, then spouting 'Socratic debate' without clearly defining your stance in one post is not going to get us anywhere - and neither is answering questions by rebuffing them, like you just quoted him doing above, roff. So what's your stance too? (In one major post that we should probably edit and refer back to.)
 