
ATI or NVIDIA? Which is Better?

@necrontyr
Impressive, I particularly like "A dialectic in its truest form means you concede where concession is due, always. And work together to reach a consensus on a question/problem."

1. I currently think AMD's Cypress architecture is winning on most fronts of performance and power in the GPU market...

2. I believe its strongest benefit is the ability to deliver better flops per watt than its main competitor at this time.

3. I don't think it has succeeded where Nvidia cannot; both companies can dominate if they play their cards right...

Below is my opinion on your points:

1) Correct

2) Correct

3) Correct if both companies play their cards right; however, to dominate you need a solid, economically viable foundation, which Fermi just doesn't have as it stands.
Also, I think it's important to acknowledge not just the issues affecting the architecture, but the issues affecting both AMD and Nvidia as companies, the elephant in the room being that there will soon be no low end, and large revenue streams will soon be diverted into different hands.
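
On point 2, the flops-per-watt gap is easy to ballpark from the launch specs. Here is a minimal sketch, assuming the commonly quoted single-precision peak figures and board TDPs (treat them as rough assumptions, not measurements):

```c
/* Rough flops-per-watt comparison from commonly quoted launch specs
   (single-precision peak GFLOPS and board TDP). Illustrative only. */
#include <stdio.h>

int main(void) {
    /* HD 5870 (Cypress): ~2720 GFLOPS SP peak, ~188 W TDP (assumed) */
    double cypress_gflops = 2720.0, cypress_tdp = 188.0;
    /* GTX 480 (GF100/Fermi): ~1345 GFLOPS SP peak, ~250 W TDP (assumed) */
    double fermi_gflops = 1345.0, fermi_tdp = 250.0;

    printf("Cypress: %.1f GFLOPS/W\n", cypress_gflops / cypress_tdp); /* ~14.5 */
    printf("Fermi:   %.1f GFLOPS/W\n", fermi_gflops / fermi_tdp);     /* ~5.4  */
    return 0;
}
```

Roughly 14.5 versus 5.4 GFLOPS/W on paper, so point 2 is hard to argue with, even if real workloads never reach those peaks.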
 
I had to take part in several dialectic arguments/debates in college, sue me :P I understand the approach.

But I stand by the opinion that ATI isn't trail-blazing in this current generation; their TDP is only about 30% lower, not half of Nvidia's or anywhere close... so it comes down to asking: is Nvidia wasting its time adding extra logic to suit the GPGPU crowd?

No, I firmly believe Nvidia has the edge performance-wise, but I don't think it's more than 10% myself, unless you're going into render farms, where they bought their way into getting well-supported applications... (but rightly so, too; ATI weren't playing enough hardball)

Edit:**
I guess that's summed up well when roff said "the actual way the design works at runtime in processing graphics then Fermi is a gen ahead" ("a gen ahead" is nonsense; what's out now is the current gen, you can't compare apples to oranges... simply "ahead" will do, as it is accurate)... Because even though I prefer ATI, the six months of being late weren't idle... they did more work, and thus accomplished more... although that may not be very important in the long run, being late to the party hurts credibility and user purchase rates...
 
GPGPU is one of Nvidia's most profitable and fastest-growing markets right now, even with the R&D and advertising costs... granted, it can't sustain Nvidia's growth, but until Intel steps into that arena with anything substantial, they have it pretty much tied up.

I think a lot of people underestimate just how big CUDA is, though;

http://www.nvidia.com/object/cuda_apps_flash_new.html#

and that's just the tip of the iceberg.
 
I agree that CUDA is the best development environment for GPU applications at this moment in time, but are its benefits all architectural, or could AMD catch up in the near future given its larger customer base (let's say double the penetration in the consumer graphics market)? With that many more customers than Nvidia, will they push their support in the computational world further with the influx of cash from higher consumer revenues?

Edit:** I don't believe it could catch them fully on quality, the Nv arch is just geared better, but software support can turn some heads, garnering at least some of the attention Nv has in the spotlight in the HPC field. **

Although we are arguing about who will produce the better cards (presumably for gaming purposes), maybe AMD will leave Nvidia to their sandbox (a big sandbox, but still a box) in the high-end computing domain and just push faster GPUs for raw graphical performance, edging Nvidia further out of a market they are currently playing catch-up in...
 
I had to take part in several dialectic arguments/debates in college, sue me :P I understand the approach.

But I stand by the opinion that ATI isn't trail-blazing in this current generation; their TDP is only about 30% lower, not half of Nvidia's or anywhere close... so it comes down to asking: is Nvidia wasting its time adding extra logic to suit the GPGPU crowd?

No, I firmly believe Nvidia has the edge performance-wise, but I don't think it's more than 10% myself, unless you're going into render farms, where they bought their way into getting well-supported applications... (but rightly so, too; ATI weren't playing enough hardball)

TDP is an important issue, which I think was highlighted by the slow adoption of GF100 even taking the delay into account; this in itself has shown that a company can lose huge swathes of market share over TDP.

But that's not even the biggest issue affecting Fermi's architecture; it's the die size versus performance, as this relates directly to cost.
Firstly, with a larger die you have a smaller number of 'potential' successful candidates per wafer.
Next, on top of that lower number of candidates, yields get exponentially worse as the die gets larger.
Additionally, if the architecture is complex, that further adds to the yield woes.

The above factors make Fermi a very expensive chip for Nvidia relative to AMD's performance equivalent. This means that AMD can effectively price Nvidia into the red while maintaining profitability.
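
To put some rough numbers on the die-size argument, here is a back-of-the-envelope sketch using the standard gross-dies-per-wafer approximation and a simple Poisson yield model. The die areas are the commonly quoted figures; the wafer size, edge-loss formula and defect density are assumptions for illustration only:

```c
/* Back-of-the-envelope dies-per-wafer and yield sketch.
   Die areas: Cypress ~334 mm^2, GF100 ~529 mm^2 (commonly quoted).
   Wafer size, defect density and the Poisson yield model Y = exp(-A*D0)
   are first-order assumptions, not foundry data. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979

/* Standard gross-die estimate for a round wafer of diameter d (mm)
   and die area a (mm^2), with a correction term for edge losses. */
static double dies_per_wafer(double d, double a) {
    return PI * (d / 2.0) * (d / 2.0) / a - PI * d / sqrt(2.0 * a);
}

int main(void) {
    double wafer = 300.0;                       /* 300 mm wafer */
    double d0    = 0.004;                       /* assumed defects per mm^2 */
    double area[2]      = { 334.0, 529.0 };     /* Cypress, GF100 */
    const char *name[2] = { "Cypress", "GF100" };

    for (int i = 0; i < 2; i++) {
        double gross = dies_per_wafer(wafer, area[i]);
        double yield = exp(-area[i] * d0);      /* Poisson defect model */
        printf("%-7s gross %4.0f  yield %4.1f%%  good dies %4.0f\n",
               name[i], gross, 100.0 * yield, gross * yield);
    }
    return 0;
}
```

Under those assumptions the smaller die gets both more candidates per wafer and a markedly better yield, which is exactly the cost squeeze described above; the absolute numbers matter far less than how quickly the good-die count falls as area grows.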

Why haven't AMD already done this?
They said themselves that TSMC wouldn't be able to supply them with enough GPUs, so they wouldn't be able to gain any market share even if they lowered prices; they would instead just make less money, or even a loss.

The last point above may be about to change, though, as TSMC is about to double its capacity, giving AMD the ability to lower prices and gain additional market share. Nvidia simply can't go as low as AMD can without making huge losses, and if they don't, they lose huge swathes of market share.
 
Okay, so let's get some numbers in here:

[Broken image embed: power consumption comparison chart]


Is that ~30% delta in TDP going to wipe Nvidia out, or can it be revised?

Sigh, just view the pic >< I don't know why the forum code doesn't accept it as a legit embed. Anyway... the delta between the two is roughly 30%, assuming both companies' chips will have undergone die shrinks by the next Nvidia refresh, as we know the 6000 series is incoming on the same process...

We can be sure the die shrink will bring the Nvidia cards back into the fold of reasonable (as opposed to obscene) power consumption, albeit at the same performance point as now. So in that generation they could revise the design to push even more power through the cards and return to today's high of ~400 W per high-end card at full load... and after that happens, in perhaps 1.5 to 2 refreshes' time, they may have revised their architecture fully to cut out all the fluff (by fluff I'm suggesting the logic added to accommodate CUDA programming is overkill for computing vectors and rendering pixels) for the gaming market... So who's to say this bump in the road to accommodate the HPC crowd wasn't worth it?

Edit:** Just to add to the speculation, albeit educated speculation... removing excess logic would lower the transistor count and thus the die size. If they went that route, it would be similar to how they scaled the GF10x parts down from the original, except I'm suggesting they aren't going to chop the card into identical-but-smaller pieces, but rather refactor the concept to reduce the logic per distinct unit.

This is all heavy on speculation; comments?
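
For what it's worth, the die-shrink claim can at least be sanity-checked with the usual dynamic-power relation P ≈ C·V²·f. The sketch below is pure speculation: the starting board power and the capacitance/voltage scaling factors are assumptions, not published figures.

```c
/* Speculative sketch of how a die shrink could pull board power back down.
   Dynamic power scales roughly as P ~ C * V^2 * f; every factor below is an
   assumption for illustration, not a published specification. */
#include <stdio.h>

int main(void) {
    double p_now   = 300.0;  /* assumed full-load board power today, in watts */
    double c_scale = 0.70;   /* assumed switched-capacitance reduction */
    double v_scale = 0.90;   /* assumed supply-voltage reduction */
    double f_scale = 1.00;   /* same clocks, i.e. same performance point */

    double p_shrunk = p_now * c_scale * v_scale * v_scale * f_scale;
    printf("Estimated post-shrink power: %.0f W\n", p_shrunk); /* ~170 W */
    return 0;
}
```

In other words, a shrink alone buys enough headroom that Nvidia could either pocket the power saving or spend it on clocks, which is the choice being speculated about above.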
 
Edit:** I don't believe it could catch them fully on quality, the Nv arch is just geared better, but software support can turn some heads, garnering at least some of the attention Nv has in the spotlight in the HPC field. **

Although we are arguing about who will produce the better cards (presumably for gaming purposes), maybe AMD will leave Nvidia to their sandbox (a big sandbox, but still a box) in the high-end computing domain and just push faster GPUs for raw graphical performance, edging Nvidia further out of a market they are currently playing catch-up in...

It's not AMD that Nvidia needs to worry about here, it's Intel, with its x86 GPUs that can be programmed using native C/C++ compilers.

The main issue here is that if Intel is squeezing Nvidia's margins from the left, and AMD is forcing GeForce into the red from the right, then Nvidia has nowhere to hide, and it is literally just a matter of time before it bites the dust.

To survive, Nvidia needs GeForce to be economically competitive, not just competitive on performance through the use of massive dies.

Intel has a huge advantage, as x86 is much, much more widely adopted than CUDA, and Intel also has immense resources.

"The Intel MIC will be programmed using native C/C++ compilers from Intel, and presumably from other sources as well. If the program is already parallelized with threads and the compiler vectorizes the code successfully, then the program may be ported with nothing more than a recompile (or so we can dream, anyway). In any case, the amount of restructuring to get started is likely to be quite a bit less intrusive than for NVIDIA."
http://www.hpcwire.com/features/Compilers-and-More-Knights-Ferry-v-Fermi-100051864.html?viewAll=y
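
To make the quote concrete, this is the sort of code it has in mind: already threaded (OpenMP here) with an inner loop a compiler can auto-vectorize. In principle, code like this could target MIC with little more than a recompile, whereas targeting Fermi means rewriting the hot loop as a CUDA kernel. The build line is just an assumption for a generic compiler:

```c
/* Sketch of the kind of code the HPCwire quote is talking about: threaded
   with OpenMP and written so the compiler can vectorize the inner loop.
   Assumed build line: gcc -O3 -fopenmp saxpy.c */
#include <stdio.h>

#define N 1000000

/* SAXPY: y = a*x + y -- a simple, vectorizable, embarrassingly parallel loop */
static void saxpy(int n, float a, const float *x, float *y) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(N, 3.0f, x, y);
    printf("y[0] = %f\n", y[0]); /* expect 5.0 */
    return 0;
}
```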
 
But the HPC niche exists because current CPUs are just shite at workloads like folding. Maybe roff could weigh in on whether Intel are going to push them; they are showing concepts with promise. That massive 48-core chip that real-time rendered Wolfenstein 3D using ray tracing is saying "hello, Nv, in a few years we are going to play ball", but is it even capable of overtaking a traditional graphics card, which excels at parallel work?
 
Personally I question whether Intel will ever put up credible competition in that arena... on the face of it, it seems like madness to suggest otherwise, yet they have spent years more on it than Nvidia and had project after project end in failure.

I think part of the problem is that Intel like to get it right the first time, while, as we all know, Nvidia are much more of the mentality of pushing the boat out and dealing with the consequences later.
 
^^^
Intel are basically morphing a CPU into a highly programmable GPU, so while it may not compete in the graphics arena, as it doesn't have the parallelism of a traditional GPU, it stands a good chance in HPC, especially as Intel will probably maintain a process-node advantage and doesn't have to actually make a profit from it for a number of years.

The point below alone makes them very, very dangerous.

" Intel advertises MIC as an “Intel Co-Processor Architecture,” so by nature it can become drop-in compatible with an Intel Xeon chip without the need to reprogram application code in another language."

http://www.fudzilla.com/processors/item/19775-intel-unveils-knights-ferry-hpc-co-processor-specifications

Edit:
On a side note, AMD could effectively employ a technique similar to Intel's if it wants to get into the HPC market (maybe even for a specific pro/consumer market), and it could also use more specific hardware-based functions within the architecture to make it more efficient.
 