
AMD teasing new product, is it Radeon or FirePro?

It isn't, though, when you consider the increase in transistors needed for said performance, and the decrease in performance/watt too. You will see reductions in performance/mm2 and performance/watt for a 28nm GM200 against a GM204 based on the same uarch.

The following is just a general statement BTW.

This is why "the sky is falling" is hilarious, when people purposely ignore GT200 vs G92, GF100 vs GF104, GF100 vs GF114 and GK110 vs GK104.

Hawaii and Tahiti are both compute cards since they have far higher DP performance. Try comparing something like Pitcairn and Cape Verde with Tahiti, for example. Improved performance/watt for one, but far less DP performance, and much smaller chips.

GK104 vs GK110 - the former had improved performance/watt and performance/mm2. The GK110-based cards have to be clocked comparatively lower to maintain performance/watt.

Like I said, the same old "AMD is doomed" and "Nvidia is doomed" arguments have been made for years.

Move back 10 years and you can see all this doom and gloom said for different GPUs.

Some of the people here forget the awful FX series, and how there were multiple "Nvidia is screwed" predictions too. Even Fermi, with its massive chips and large power consumption, was meant to doom Nvidia. ATI/AMD was doomed because of the G80 and G92, and because their HD2000 and HD3000 series could not compete. Both are still here. I for one never predicted any doom, since I thought it was silly, and more importantly I want both companies pushing each other! I also want a choice.

Apple is doomed, Android is doomed, and so on.

Plus, a broken record can always be right eventually when it comes to doom predictions.

Just like the person on the side of the road who has been saying the end is nigh for 30 years when it finally happens.

Is he an oracle or just a nutjob?? It's his word against yours.

Edit!!

But if you people really want to feel depressed, I can start my own doom and gloom scenario.

Discrete card sales are decreasing each year, and BOTH AMD and Nvidia are fighting over fewer and fewer sales, depending more and more on compute and pro cards.

Except Intel is now improving its graphics massively each generation, eating away at the low end, and with MIC is poised to enter the compute market in a BIG way.

Unlike Nvidia and AMD, Intel has billions of dollars to buy marketshare (look at the new Atom) and could probably eff up AMD and Nvidia in a big way if it wanted to.

Those big monolithic GPUs you all love on this forum are primarily developed for the markets Intel is entering, with the runts offloaded to gamers.

Intel is maintaining margins through increased investment in services and commercial computing, and this is why they want to get a foothold in the compute market.

So people should enjoy the fact they have a choice now, as Intel is only taking baby steps.

That might not be the case in 5 to 10 years, especially with process nodes being drawn out. Intel currently spends more than TSMC and GF combined just on process node development.



Which games benefit from the improved DP performance?

Read again.

What you don't understand (and you would if you had bothered to read what I said) is that the larger GPUs have enhanced DP performance.

They are designed for use in supercomputers too, for complex calculation tasks, hence more transistors and greater die area, meaning less efficiency. If you don't believe me, look at all the large-die AMD and Nvidia flagship cards - they have worse performance/watt and in many cases worse gaming performance/mm2 than the gaming-optimised cards of the same generation.

This is one of the reasons why the GK104 was more efficient than Tahiti - that 384-bit bus and the DP hardware added transistors and die area, which made the chip bigger and made it consume more power. It also made it unsuitable for laptop use, which helped Nvidia gain traction in laptops, primarily where they increased marketshare (plus the AMD switching mechanism was buggy too).

It was also why the GK106 and Pitcairn were actually not too far apart.

The GK104 showed the same against the GK110:

http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780_Ti/images/perfwatt_1920.gif

Yet the GK110 had double the transistors and was nearly 90% larger in surface area.

It has many times the DP performance of a GK104, and double the DP performance of a GM204 for roughly a 40% increase in surface area and transistor count over the latter.

It was not double the gaming performance either, due to the lower clocks needed to maintain efficiency. Overclocked, that efficiency was destroyed.
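
To put rough numbers on that (a quick back-of-the-envelope sketch; the die areas and transistor counts are the commonly quoted public figures, but the ~1.35x relative gaming performance is an illustrative assumption, not a measured result):

[code]
# Back-of-the-envelope: what the extra DP hardware costs in gaming perf/mm2.
# Die areas/transistor counts are the commonly quoted figures; the relative
# gaming performance (~1.35x) is an illustrative assumption, not a benchmark.

gk104_area_mm2, gk104_transistors_bn, gk104_perf = 294, 3.54, 1.00
gk110_area_mm2, gk110_transistors_bn, gk110_perf = 561, 7.08, 1.35

transistor_ratio = gk110_transistors_bn / gk104_transistors_bn   # ~2.0x
area_ratio = gk110_area_mm2 / gk104_area_mm2                     # ~1.91x (nearly 90% larger)
perf_per_mm2_ratio = (gk110_perf / gk110_area_mm2) / (gk104_perf / gk104_area_mm2)

print(f"Transistors: {transistor_ratio:.2f}x, die area: {area_ratio:.2f}x")
print(f"Gaming perf/mm2: {perf_per_mm2_ratio:.2f}x of GK104")    # ~0.71x
[/code]

Double the transistors and nearly double the area for roughly a third more gaming performance: gaming perf/mm2 falls to about 70% of the GK104's, which is the point being made.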

If you still don't get it, read some of the uarch articles from Hardware.fr and AnandTech first.
 
Last edited:
Saw the following mentioned on overclock.net today:

http://www.overclock.net/content/type/61/id/2140360/width/500/height/1000/flags/LL



That is an AMD feature sheet and timeline for their GPUs. Look at the entries for the GPUs which are yet to be released.

A number of similar mechanisms are integrated into Maxwell to improve energy efficiency.

I will repeat this again, since people are not adding more information to this thread now.
 
Read again. What you don't understand (and you would if you had bothered to read what I said) is that the larger GPUs have enhanced DP performance. This means more transistors and greater die area, meaning less efficiency. If you don't believe me, look at all the large-die AMD and Nvidia flagship cards - they have worse performance/watt and in many cases worse gaming performance/mm2 than the gaming-optimised cards.

Which games benefit from the improved DP performance?

^^As GM asked, what games benefit? He didn't say whether he believed you or not, he asked a simple question which you seemed to ignore while going off on a tangent :rolleyes:
 
Read again.

What you don't understand (and you would if you had bothered to read what I said) is that the larger GPUs have enhanced DP performance.

They are designed for use in supercomputers too, for complex calculation tasks, hence more transistors and greater die area, meaning less efficiency. If you don't believe me, look at all the large-die AMD and Nvidia flagship cards - they have worse performance/watt and in many cases worse gaming performance/mm2 than the gaming-optimised cards of the same generation.

This is one of the reasons why the GK104 was more efficient than Tahiti - that 384-bit bus and the DP hardware added transistors and die area, which made the chip bigger and made it consume more power.

It was also why the GK106 and Pitcairn were actually not too far apart.

If you still don't get it, read some of the uarch articles from Hardware.fr and AnandTech first.

I read what you said; what I'm asking is why I would want a compute card with better DP performance for gaming. What games does it benefit? Why am I paying AMD for it, and what do I get in exchange for the extra transistors and heat that will make games run better?

Did you avoid the question because you don't know or did you not understand the question?
 
Read again.

What you don't understand (and you would if you had bothered to read what I said) is that the larger GPUs have enhanced DP performance.

They are designed for use in supercomputers too, for complex calculation tasks, hence more transistors and greater die area, meaning less efficiency. If you don't believe me, look at all the large-die AMD and Nvidia flagship cards - they have worse performance/watt and in many cases worse gaming performance/mm2 than the gaming-optimised cards of the same generation.

This is one of the reasons why the GK104 was more efficient than Tahiti - that 384-bit bus and the DP hardware added transistors and die area, which made the chip bigger and made it consume more power. It also made it unsuitable for laptop use, which helped Nvidia gain traction in laptops, primarily where they increased marketshare (plus the AMD switching mechanism was buggy too).

It was also why the GK106 and Pitcairn were actually not too far apart.

The GK104 showed the same against the GK110:

http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_780_Ti/images/perfwatt_1920.gif

Yet the GK110 had double the transistors and was nearly 90% larger in surface area.

It has many times the DP performance of a GK104, and double the DP performance of a GM204 for roughly a 40% increase in surface area and transistor count over the latter.

It was not double the gaming performance either, due to the lower clocks needed to maintain efficiency. Overclocked, that efficiency was destroyed.

If you still don't get it, read some of the uarch articles from Hardware.fr and AnandTech first.

I read what you said; what I'm asking is why I would want a compute card with better DP performance for gaming. What games does it benefit? Why am I paying AMD for it, and what do I get in exchange for the extra transistors and heat that will make games run better?

Did you avoid the question because you don't know or did you not understand the question?


I didn't avoid anything.

Read my answer again.

Have you not followed the sizes of the AMD and Nvidia GPUs since the GT200??

The top AMD GPUs have always been smaller than the Nvidia ones, i.e., 300 to 440mm2 as opposed to around 520mm2 to nearly 570mm2.


In BOTH cases they are targeting the same markets: gaming and DP compute. You need to understand they are not just for gaming.

In fact the compute market has massively higher margins.

Except AMD, since the HD2000 debacle, has tried to keep die sizes smaller. Nvidia has used larger die sizes to overcome the inefficiencies of going for greater compute performance.

This is why the uarch comparisons being made here (NOT GPU or card comparisons) are kind of pointless. You cannot take two GPUs, one gaming-focused and one made for mixed gaming/compute purposes, and make uarch comparisons, since they are not aiming for the same thing.

It's like comparing the engines of an excavator and a Smart car - both are engines, but they serve different purposes.

Even within the same generation of AMD and Nvidia you see this. If people don't understand this, go and read the AnandTech or Hardware.fr uarch articles, or those of one of the larger sites. They explain this very clearly.

This means Nvidia, with their "larger" midrange GPUs, have done well against AMD in pure gaming performance and performance/watt.

^^As GM asked, what games benefit? He didn't say whether he believed you or not, he asked a simple question which you seemed to ignore while going off on a tangent :rolleyes:

Except you have not read what I said either.
 
Last edited:
Could they not make separate business-targeted cards with high compute performance and keep the Radeon cards for gaming?

And you didn't answer the question. An answer would've been something like "Nothing".
 
Could they not make separate business-targeted cards with high compute performance and keep the Radeon cards for gaming?

The thing is that the gaming market also helps keep sales of the large GPU cards reasonable - I suspect it has something to do with chip order/wafer volume (larger volumes probably lower per-chip costs) and also with using less-than-perfect chips, i.e., those with defects in the parts which reduce DP performance while gaming performance is fine.

The problem is that Nvidia, unlike AMD, is willing to plonk out huge chips, meaning AMD needs to get good DP performance from a smaller die, leading to compromises in absolute gaming performance. AMD seems to concentrate on things like density and core size as a result, which affects power consumption.

See the GK110 against Hawaii. The latter comes close to the GK110 with a much smaller die, but the former has better performance/watt (lower clocked and higher performing shaders??).
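
For the "lower clocked shaders" point, here is a toy example of why wide-and-slow tends to beat narrow-and-fast on perf/watt (dynamic power scales roughly as C*V^2*f, and voltage has to rise with clockspeed; all the numbers here are invented for illustration):

[code]
# Toy model: dynamic power ~ units * V^2 * f. Voltage must rise with
# clockspeed, so chasing performance through clocks costs more power
# than adding shaders and clocking them lower. Numbers are invented.

def rel_power(rel_units: float, rel_voltage: float, rel_clock: float) -> float:
    return rel_units * rel_voltage ** 2 * rel_clock

# Two routes to the same ~1.2x throughput:
fast = rel_power(rel_units=1.0, rel_voltage=1.15, rel_clock=1.2)  # clock it higher
wide = rel_power(rel_units=1.2, rel_voltage=1.00, rel_clock=1.0)  # add more shaders

print(f"narrow-and-fast: {fast:.2f}x power")  # ~1.59x
print(f"wide-and-slow:   {wide:.2f}x power")  # 1.20x
[/code]

Same notional speedup, but the high-clock route burns ~1.6x the power against 1.2x for the wider chip - which is why the big dies ship at comparatively low clocks, and why overclocking them wrecks their efficiency.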

OTOH, this means Nvidia can have "large" midrange chips which are better optimised for extracting as much gaming performance as possible, with better performance per watt and per mm2 than the slightly larger AMD top-end GPUs, which are hamstrung by having to be denser and by having to do two things well, i.e., DP compute and gaming.

The situation with Fermi and the HD5000 series was the same. The VLIW5 uarch was a good mix for gaming at the time, but had meh compute performance. However, Fermi, with its compute-focused uarch, had larger dies and greater power consumption.

Basically, AMD probably needs to consider expanding the size of its midrange GPUs and perhaps going for larger top-end chips, although the latter does increase the cost of the chips (through both yield and dies per wafer).
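
On the yield/dies-per-wafer point, a minimal sketch using the standard dies-per-wafer approximation and a simple Poisson yield model (wafer size is 300mm; the defect density and die areas are illustrative assumptions, not foundry data):

[code]
import math

# Why bigger dies cost more per good chip: fewer candidates per wafer
# AND a higher chance each one catches a defect. Defect density and
# die areas below are illustrative, not real foundry numbers.

WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY_PER_MM2 = 0.001  # assumed: 0.1 defects per cm2

def dies_per_wafer(die_area_mm2: float) -> float:
    """Common approximation including edge loss."""
    d = WAFER_DIAMETER_MM
    return (math.pi * (d / 2) ** 2) / die_area_mm2 \
        - (math.pi * d) / math.sqrt(2 * die_area_mm2)

def poisson_yield(die_area_mm2: float) -> float:
    """Fraction of dies that catch zero defects."""
    return math.exp(-die_area_mm2 * DEFECT_DENSITY_PER_MM2)

for area in (350, 440, 560):  # roughly AMD top-end vs Nvidia big-die sizes
    dpw, y = dies_per_wafer(area), poisson_yield(area)
    print(f"{area} mm2: {dpw:.0f} dies/wafer x {y:.0%} yield "
          f"= {dpw * y:.0f} good dies/wafer")
[/code]

With these assumed numbers, a ~560mm2 die gives you roughly half the good chips per wafer of a ~350mm2 one, and selling the defective dies as cut-down gaming parts is exactly the salvage mechanism mentioned above.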

And you didn't answer the question. An answer would've been something like "Nothing".

I said DP performance leads to more transistors, a reduction in gaming performance/mm2 and increased power consumption. I thought I had implied that DP performance does not improve gaming performance, but I am tired (and drank too much last night) so it's not helping.
 
Last edited:
Saw the following mentioned on overclock.net today:

http://www.overclock.net/content/type/61/id/2140360/width/500/height/1000/flags/LL



That is an AMD feature sheet and timeline for their GPUs. Look at the entries for the GPUs which are yet to be released.

A number of similar mechanisms are integrated into Maxwell to improve energy efficiency.

I will repeat this again, since people are not adding more information to this thread now.

I'll take it, since Boomstick is throwing the thread completely off topic by going over the same old nonsense over and over and over again like a broken record.
This goes back to what I was saying before and what Bru expanded on.
These technologies take years to develop. Nvidia got off the starting line before AMD, good for them; in time so will AMD, and normality will be restored.
And that's the crux of it, Boomstick: no matter how much you go over the same worn-out garbage, it will not make AMD get their own power-saving technologies to the fore any faster. These things take time.
 
Last edited:
And that's the crux of it, Boomstick: no matter how much you go over the same worn-out garbage, it will not make AMD get their own power-saving technologies to the fore any faster. These things take time.

Time is not static in business, though; timescales can usually be accelerated by throwing money at them, and when not having something your competitors have is costing you money, a company's motivation to throw money at the problem increases.
 
I didn't avoid anything.

Read my answer again.

Have you not followed the sizes of the AMD and Nvidia GPUs since the GT200??

The top AMD GPUs have always been smaller than the Nvidia ones, i.e., 300 to 440mm2 as opposed to around 520mm2 to nearly 570mm2.


In BOTH cases they are targeting the same markets: gaming and DP compute. You need to understand they are not just for gaming.

In fact the compute market has massively higher margins.

Except AMD, since the HD2000 debacle, has tried to keep die sizes smaller. Nvidia has used larger die sizes to overcome the inefficiencies of going for greater compute performance.

This is why the uarch comparisons being made here (NOT GPU or card comparisons) are kind of pointless. You cannot take two GPUs, one gaming-focused and one made for mixed gaming/compute purposes, and make uarch comparisons, since they are not aiming for the same thing.

It's like comparing the engines of an excavator and a Smart car - both are engines, but they serve different purposes.

Even within the same generation of AMD and Nvidia you see this. If people don't understand this, go and read the AnandTech or Hardware.fr uarch articles, or those of one of the larger sites. They explain this very clearly.

This means Nvidia, with their "larger" midrange GPUs, have done well against AMD in pure gaming performance and performance/watt.



Except you have not read what I said either.

Sadly you're on another tangent. You say others fail to read your posts, but then you can't answer a simple question because you haven't read the question, and go off trying to explain something that wasn't asked.
The question was what games DP benefits, not anything else to do with power consumption, transistors or a reduction in gaming performance/mm2.
Seems the answer was really a simple "none"....
 
I read what you said; what I'm asking is why I would want a compute card with better DP performance for gaming. What games does it benefit? Why am I paying AMD for it, and what do I get in exchange for the extra transistors and heat that will make games run better?

Seems the answer was really a simple "none"....

It isn't as simple as what games use it, because these GPUs are not strictly just used for gaming:

[image: n6lIOI4.png]


We pay AMD/Nvidia for it whether gamers want it or not, because GPUs are multi-purpose: they are put to good use for science and have even earned their owners cash. :)
 
Last edited:
It isn't as simple as what games use it, because these GPUs are not strictly just used for gaming:

[image: n6lIOI4.png]


We pay AMD/Nvidia for it whether gamers want it or not, because GPUs are multi-purpose: they are put to good use for science and have even earned their owners cash. :)

Exactly, and the top GPUs especially have to serve both uses.

Saw the following mentioned on overclock.net today:

http://www.overclock.net/content/type/61/id/2140360/width/500/height/1000/flags/LL



That is an AMD feature sheet and timeline for their GPUs. Look at the entries for the GPUs which are yet to be released.

A number of similar mechanisms are integrated into Maxwell to improve energy efficiency.

I will repeat the post for a third time, since we should start adding information to this thread.

So AMD will be implementing new power-consumption fine-tuning mechanisms. The only question is when.

Edit!!!

Looking at the future tech:

1.) Integrated voltage regulation

I would assume HW-based voltage monitoring, i.e., faster??

2.) Inter-frame power gating

Switching off parts of the GPU that sit idle between rendered frames??

3.) Per-part adaptive voltage

So, scaling voltage for individual parts of the GPU based on usage??

4.) Intelligent boost and Performance Aware Energy Optimisation

A more refined AMD PowerTune which boosts and reduces clockspeeds closer to actual usage??

A lot of this sounds eerily similar to what Nvidia is doing with Maxwell??
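
Purely as a conceptual illustration of how a couple of these mechanisms could fit together, here's a toy control loop (every name, threshold and unit below is invented for this post; the real PowerTune/GPU Boost logic lives in firmware and hardware, not in anything like this):

[code]
# Toy sketch: usage-aware boost plus "inter-frame power gating".
# All names, thresholds and step sizes are invented for illustration.

POWER_LIMIT_W = 250.0
CLOCK_MIN_MHZ, CLOCK_MAX_MHZ = 800, 1100

def next_clock(current_mhz: float, measured_power_w: float, utilisation: float) -> float:
    """Boost while busy and under the power budget; back off when over it."""
    if measured_power_w > POWER_LIMIT_W:
        return max(CLOCK_MIN_MHZ, current_mhz - 13)   # over budget: step down
    if utilisation > 0.90:
        return min(CLOCK_MAX_MHZ, current_mhz + 13)   # busy and under budget: boost
    return current_mhz

def gate_between_frames(idle_blocks: list) -> None:
    """Inter-frame power gating: cut power to blocks idle between frames."""
    for block in idle_blocks:
        print(f"power-gating {block} until the next frame starts")

# One imaginary iteration:
clock = next_clock(1000, measured_power_w=235, utilisation=0.97)
gate_between_frames(["video_engine", "half_the_shader_array"])
print(f"next clock target: {clock} MHz")
[/code]

The finer-grained the monitoring (per-part voltage, hardware regulation), the tighter a loop like this can track actual usage, which is presumably where the efficiency gains come from.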

But when, AMD, when?? :p

It appears they have been developing this for a while, but if they don't release at least one part with such tech in the next three months or so, it looks more reactionary.

Edit!!

It does look more like early 2015, so this week could just be an announcement??
 
Last edited:
Sadly you're on another tangent, butting into a conversation which had nothing to do with you, where the person talking with me did not need your help at all. He did not ask for your support or your help, but you felt free to add it.

If I wasn't part of the discussion and it wasn't anything to do with me, why didn't you say that at the start instead of replying to me in the first place? :rolleyes:
Only now, when you still couldn't in fact answer, have you looked for another way out...
If you wish to have private discussions with someone and don't wish for others to ask questions, I would suggest not posting on public forums.

I will assume from now on, though, that you will only be participating in threads that you have been asked to support others in, as per your own rules.



Thanks for that info BTW, tommy :)
 
If I wasn't part of the discussion and it wasn't anything to do with me, why didn't you say that at the start instead of replying to me in the first place? :rolleyes:
Only now, when you still couldn't in fact answer, have you looked for another way out...
If you wish to have private discussions with someone and don't wish for others to ask questions, I would suggest not posting on public forums.

I will assume from now on, though, that you will only be participating in threads that you have been asked to support others in, as per your own rules.

And I deleted what I said since I thought it was not fair to say it. But OTOH, when you were told that the information you were after (in another thread) was in other threads already (which you did know, since you have posted in them), you got annoyed and instead waited for a way out. And yes, I did answer the question (and even semi-apologised to GM for my lack of clarity), since it was not spelled out for you - TB only reiterated what I said, but you will never ever admit it, so it's a moot point! :D

But anyway, want to add any new info to the thread, or do you want the last word??
 
Last edited: