
Battle of the GPUs: Is power efficiency the new must-have?

Though the sub-heading of the discussion is "Is power efficiency the new must-have?", which means we should really just be discussing efficiency, no? :confused:

I expressed my opinion strictly in relation to that. And if I'm completely honest, I think the answer to that question is "No, it is not; instead, a greater performance increase per generation than in the past is THE new must-have, considering the arrival of affordable 4K monitors."

GPU manufacturers pulling an Intel, focusing on efficiency at the cost of performance increases, when people need graphics card performance to improve greatly asap to drive 4K without costing the moon, seems like a timing mismatch. Efficiency is nice to have, yes, but I'm more concerned about how badly graphics card capability in general is lagging behind in the face of 4K.

I honestly wouldn't want AMD to jump on the competing-on-efficiency wagon; I'd rather they focus on pushing performance increases. But if they did jump on that wagon, and both GPU makers carried on prioritising efficiency over performance like that, then in four years' time when we look back, it would probably be like looking back at Sandy Bridge from Haswell today, performance-wise...

I agree in that I don't think it's a must-have, but it is a nice-to-have.
Surely even if you overclock, starting from a lower base is better? Some of the efficiency will carry through, won't it?
Also, this thread's title is about efficiency, not performance, so let's leave performance talk out of it and stay on topic, shall we? :)

haha, exactly this! :D



PS. You also need to mention that we ALL have high-end PSUs too ;)

EDIT:

I would love to be able to go back to the time of the Nvidia 480 release, see what everyone was saying back then, and quote those posts here :)

It would be interesting to go back and see if the people complaining about the noise, heat and power inefficiency of the 480 are the same people now saying it doesn't matter.
 
I think this is it; it wasn't posted in this thread.

http://www.tomshardware.com/reviews/...l,3941-11.html

That article pretty much says that average power consumption is low because GPU Boost 2.0 is just so efficient and fast at switching power levels.

Problem is that on my custom BIOS I have GPU Boost 2.0 completely disabled, so going by the above article my 970s should then see a dramatic increase in power consumption.

Only they don't.
 
I agree in that I don't think it's a must-have, but it is a nice-to-have.
Surely even if you overclock, starting from a lower base is better? Some of the efficiency will carry through, won't it?
Also, this thread's title is about efficiency, not performance, so let's leave performance talk out of it and stay on topic, shall we? :)
Funny. Think you can pull a quick one by calling me "off-topic" when I'm not?

Performance and efficiency are relative: a greater improvement on one side comes at the cost of a lesser improvement on the other, kinda like playing an RPG where you only have 10 stat points to spend... the more you invest in AGI (efficiency), the fewer points you have left to invest in STR (power). Regardless of what you think, the simple reality is this: coming from the 780/780Ti, the 970/980 (which have a much higher base clock, rather than the same base clock as the 780/780Ti), while no doubt quite improved on efficiency, have barely moved on in graphics progression after one and three quarter years have passed... this is actually quite concerning.

I know lots of people (including the OP himself) got hung up on talk of bang for buck, buying choices etc., but IMO those don't really belong under this topic and should probably have threads of their own, as they involve too many variables such as perceived value, features etc.
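The stat-point analogy above can be sketched as a toy calculation. All the numbers here (the 10-point budget, the per-point bonuses) are made-up illustrations, not real GPU figures:

```python
# Toy model of the stat-point analogy: a fixed design budget split
# between raw performance (STR) and efficiency (AGI). All numbers are
# hypothetical illustrations, not real GPU specs.

def perf_per_watt(perf_points: int, eff_points: int, budget: int = 10):
    """Return (relative performance, relative power draw, perf-per-watt)."""
    assert perf_points + eff_points == budget, "only 10 points to spend"
    performance = 1.0 + 0.05 * perf_points   # each STR point: +5% performance
    power = 1.0 - 0.04 * eff_points          # each AGI point: -4% power draw
    return performance, power, performance / power

# Spend heavily on efficiency (the 970/980 approach, in this analogy)...
print(perf_per_watt(3, 7))
# ...versus heavily on performance (what the post argues 4K needs).
print(perf_per_watt(7, 3))
```

In this toy model the efficiency build wins on perf-per-watt but loses on raw performance, which is exactly the trade-off the post is complaining about.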
 
Funny. Think you can pull a quick one by calling me "off-topic" when I'm not?

Performance and efficiency are relative: a greater improvement on one side comes at the cost of a lesser improvement on the other, kinda like playing an RPG where you only have 10 stat points to spend... the more you invest in AGI (efficiency), the fewer points you have left to invest in STR (power). Regardless of what you think, the simple reality is this: coming from the 780/780Ti, the 970/980 (which have a much higher base clock, rather than the same base clock as the 780/780Ti), while no doubt quite improved on efficiency, have barely moved on in graphics progression after one and three quarter years have passed... this is actually quite concerning.

I know lots of people (including the OP himself) got hung up on talk of bang for buck, buying choices etc., but IMO those don't really belong under this topic and should probably have threads of their own, as they involve too many variables such as perceived value, features etc.

Although the 970 and 980 are Nvidia's current high-end cards and are priced as such, you have to remember that they were really a replacement for the GTX 670/680 level cards.

So comparisons to the 780/780Ti are a bit silly.
 
Correct, we only got the Titan/780/Ti due to Tesla surplus; all Nvidia slides show the 970/980 compared to a 670/680, which is roughly double the performance depending on games/benches.
 
Although the 970 and 980 are Nvidia's current high-end cards and are priced as such, you have to remember that they were really a replacement for the GTX 670/680 level cards.

So comparisons to the 780/780Ti are a bit silly.

Don't really get that. All comparisons are valid, and there's no reason that comparing Nvidia's current top-end cards with the previous ones is a silly comparison. It doesn't really make any difference that we know the GTX 980 isn't a full-fat Maxwell and isn't supposed to be this generation's top-end card - that's all irrelevant.
 
So was the 680, after the reportedly poor GK100 (or was it GK110, can't remember) yields; Nvidia are comparing it that way.

Also more affordable than the Titan/780/Ti were at launch, AFAIR.

You personally can compare it to whatever you like.
 
While I do not disagree that the 970/980 are "technically" replacements for the 670/680, regardless of the reasons for the mainstream GK110's delay (if it was really a delay at all, rather than the release being pushed back on purpose), the end result was that what should have been one generation's line-up of mid-range and flagship cards was split into "two gens" and released over a span of 2 to 2.5 years. Bit-tech's conclusion in their review of the GTX 780 back then pretty much summed up this issue:

http://www.bit-tech.net/hardware/2013/05/23/geforce-gtx-780-review/11
"In comparison to the existing 6-series, it’s more difficult to recommend; despite the 30 per cent performance edge over GTX 680 2GB, GTX 780 3GB demands close to a 40 % price premium, and without the hero-card status afforded to Titan, has to be more realistically considered. With GTX 680 over a year old, this seems like something of a raw deal, especially as GTX 680 remains an extremely capable card at 1,920 x 1,080. We don’t think that over a generation gap we’re unreasonable looking for an increase in performance without the price bump and GTX 780 3GB just doesn’t offer that. It could have launched 12 months ago in the exact same place in the market and not been out of place, but that's 12 months ago."

The 970/980 4GB are in essence just a more efficient refresh of the 780/780Ti 3GB with more VRAM, similar to the 8800GTS 320MB (G80) to 8800GTS 512MB (G92): more VRAM, better efficiency and a small performance increase. The difference is that the gap between the release of the 8800GTS 320MB (G80) and the 8800GTS 512MB (G92) was just 9 months, whereas between the release of the 780 and the 970/980 it was 18 months. The improvement in efficiency going from the 8800GTS 320MB to the 8800GTS 512MB "was nice to have", but performance barely increasing over 9 months was not great; now, with the 780/780Ti to 970/980, we are stuck with performance barely increasing after 18 months have passed, plus however many months before big mainstream Maxwell hits the market. What we are potentially in for is performance being more or less at a standstill for up to 30 months, if big mainstream Maxwell doesn't get released until Sep 2015.

Anyway, I honestly hope that the reason for Nvidia prioritising efficiency over performance is purely the 28nm restriction. If it becomes their long-term approach (and if AMD jump on that wagon as well), it could become a concern... as I mentioned previously, I believe fundamentally people would rather have a greater performance increase with power consumption remaining the same than have power consumption driven down at the cost of performance increases being restricted or held back.
 
For me, personally, I'm not bothered about saving any money on the electricity bill. I own tropical species and burn a lot more electricity on UV lighting and ceramic heaters anyway.

However, if power efficiency translates into more performance by reducing heat, then I'm all for it. If it is merely reduced heat and energy consumption with no performance gain, then I'm not bothered.

In short, it's all about raw power for me.
 
Funny. Think you can pull a quick one by calling me "off-topic" when I'm not?

Performance and efficiency are relative: a greater improvement on one side comes at the cost of a lesser improvement on the other, kinda like playing an RPG where you only have 10 stat points to spend... the more you invest in AGI (efficiency), the fewer points you have left to invest in STR (power). Regardless of what you think, the simple reality is this: coming from the 780/780Ti, the 970/980 (which have a much higher base clock, rather than the same base clock as the 780/780Ti), while no doubt quite improved on efficiency, have barely moved on in graphics progression after one and three quarter years have passed... this is actually quite concerning.

I know lots of people (including the OP himself) got hung up on talk of bang for buck, buying choices etc., but IMO those don't really belong under this topic and should probably have threads of their own, as they involve too many variables such as perceived value, features etc.

I don't think that's entirely true. I'm sure you could be less efficient without improving performance.
And if you're adding performance into the mix, well, bang-for-buck depends on two main things: performance and cost. Cost has already been mentioned, so you have everything you need for bang-for-buck!
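As a rough illustration, bang-for-buck can be treated as performance divided by cost. Plugging in the figures from the bit-tech GTX 780 quote earlier in the thread (~30% more performance for ~40% more money) shows why that card looked like a raw deal:

```python
# Bang-for-buck as relative performance divided by relative cost, using
# the approximate figures from the bit-tech GTX 780 review quoted above.
# Both values are normalised to the GTX 680 (= 1.00).

def bang_for_buck(relative_perf: float, relative_cost: float) -> float:
    return relative_perf / relative_cost

gtx680 = bang_for_buck(1.00, 1.00)  # baseline
gtx780 = bang_for_buck(1.30, 1.40)  # ~30% faster, ~40% pricier

print(f"GTX 780 delivers {gtx780 / gtx680:.0%} of the 680's perf-per-pound")
```

So even a genuinely faster card can come out behind on this metric when the price premium outpaces the performance gain.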
 
That article pretty much says that average power consumption is low because GPU Boost 2.0 is just so efficient and fast at switching power levels.

Problem is that on my custom BIOS I have GPU Boost 2.0 completely disabled, so going by the above article my 970s should then see a dramatic increase in power consumption.

Only they don't.

It's just liberal use of "GPU Boost" as a catch-all term in the absence of any other being provided. "Dynamic, load-based, fine-grained power management" doesn't have the same ring.

AFAIK GPU Boost's main purpose is to manage the clock speed to take advantage of thermal/power headroom. It doesn't have to have a bearing on the GPU's ability to finely gate power where it can, which it makes sense for the chip to do whether boosted to 1500MHz or at stock.
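A toy sketch of that kind of boost behaviour: raise the clock while there is power headroom, throttle when there isn't. The limits, step size and sensor readings are all assumptions for illustration, not Nvidia's actual algorithm:

```python
# Toy control loop in the spirit of GPU Boost: climb toward the max boost
# clock while measured power is under the limit, back off toward base
# clock when it goes over. All constants are illustrative assumptions.

POWER_LIMIT_W = 180
BASE_CLOCK_MHZ = 1050
MAX_BOOST_MHZ = 1500
STEP_MHZ = 13  # boost bins are commonly cited as ~13 MHz; an assumption here

def next_clock(current_mhz: int, measured_power_w: float) -> int:
    """One control-loop tick: boost on headroom, throttle over the limit."""
    if measured_power_w < POWER_LIMIT_W and current_mhz < MAX_BOOST_MHZ:
        return min(current_mhz + STEP_MHZ, MAX_BOOST_MHZ)
    if measured_power_w >= POWER_LIMIT_W and current_mhz > BASE_CLOCK_MHZ:
        return max(current_mhz - STEP_MHZ, BASE_CLOCK_MHZ)
    return current_mhz

clock = BASE_CLOCK_MHZ
for power_draw in (120, 140, 160, 190, 185, 150):  # pretend sensor readings
    clock = next_clock(clock, power_draw)
    print(clock, "MHz")
```

The fine-grained power gating mentioned above would sit below a loop like this and keep working regardless of whether the clock is boosted or at stock, which is consistent with the 970s not drawing dramatically more power with boost disabled.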
 
Wow, can't believe this thread is still going. Look, everyone is happy to see power use come down and all that. But with these new cards the power consumption is the only good thing about them. Let's be honest here for a second: if the 980 and 970 were using roughly the same power as the 780/780Ti, how many people would have upgraded?

Again, it's nice to see power use being brought down, but most people here would have no problem buying a card that uses a lot of power as long as it offered a substantial increase in performance over previous-generation cards.
 
I'm guessing if a power-hungry card were released that used significantly more power than the opposition but also trounced it from a performance aspect, it would still be the card of choice for most enthusiasts.
 
Wow, can't believe this thread is still going. Look, everyone is happy to see power use come down and all that. But with these new cards the power consumption is the only good thing about them. Let's be honest here for a second: if the 980 and 970 were using roughly the same power as the 780/780Ti, how many people would have upgraded?

Again, it's nice to see power use being brought down, but most people here would have no problem buying a card that uses a lot of power as long as it offered a substantial increase in performance over previous-generation cards.

I think you're right, but also oversimplifying the achievement.
You could also ask: if the 290X had offered the same performance as the 7970, who would've bought one? The big thing it offered was a performance boost, but Nvidia did that last gen with the Titan and 780, while AMD were still lowering the price of the 7970 and bundling more games to stay competitive.

What would be interesting to see is: if, in a parallel world, it had been AMD's release that increased efficiency, would the same people still be making the same arguments, or would they suddenly be seeing things from the other side because of the brand?
I know everyone will say they would still have the same opinion, but that's easy to say with no way to test.
 