Can a PC be too powerful?

Electricity is getting more expensive, so the mid-range cards will probably stay at around 200-300W for a few generations*. But the top-end cards costing more than £1k will probably continue to rise in TDP, because people who can afford that much for a GPU probably aren't worried about electricity costs. I predict we'll be at 750W within 3-5 generations unless there's a big increase in efficiency (which I doubt).
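To put rough numbers on the running cost (a back-of-envelope sketch; the electricity price and hours of use below are my own assumptions, not figures from this thread):

```python
# Back-of-envelope GPU running cost. Price and hours are assumptions.
PRICE_PER_KWH = 0.34   # £/kWh, roughly the UK price cap at the time (assumption)
HOURS_PER_DAY = 3      # gaming hours per day (assumption)

for tdp_watts in (300, 450, 750):
    kwh_per_year = tdp_watts / 1000 * HOURS_PER_DAY * 365
    cost_gbp = kwh_per_year * PRICE_PER_KWH
    print(f"{tdp_watts}W at {HOURS_PER_DAY}h/day ~ {kwh_per_year:.0f} kWh/yr ~ £{cost_gbp:.0f}/yr")
```

On those assumptions a 750W card costs roughly £170/yr more to run than a 300W one - real money, but small next to a £1k+ purchase price.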

This increase is nothing new. The graph is slightly misleading (through no fault of the OP) in that it shows GPU power staying steady for most of the 2010s, which is true on a per-card basis. However, the total power that high-end machines dedicated to graphics processing was still increasing during this time, because their owners used multiple GPUs. It was fine for a 980 Ti to draw only 250W, because if you needed more power you'd just SLI them. Now that SLI is dead, the flagship cards have had to nearly double their TDP to keep up. Resolution and refresh-rate increases have been mostly steady; the main reason for the jump in GPU TDP in the last couple of years is to compensate for the death of SLI.

*I bet that if the OP re-did the graph, but instead of choosing the flagship card from each generation looked at the mid-range (the xx60/xx60 Ti from each gen), there would be much lower increases over the same timeframe.
 
I tend to underclock/undervolt not so much for efficiency but because my room gets uncomfortably warm (5950X/3090). I don't want more heat output than I already have; that's what limits power draw for me.
 
Power consumption has always been high on my list of priorities for a GPU, and for the system in general. FPS/Watt will be one of the deciding factors when I buy a new GPU.

I have my main PC in a small home office, and if I keep the door closed it gets very hot within a short period of time - and that's with my PC drawing only approx. 400W. With a more powerful GPU I could open the door and heat the whole house in winter, but that would be a ridiculous situation.
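For what it's worth, the FPS/Watt comparison is trivial to script once you have review numbers; the cards and figures below are invented placeholders, purely to show the shape of it:

```python
# Toy FPS-per-watt ranking; the card names and figures are invented placeholders.
candidates = {
    "Card A": {"fps": 120, "watts": 250},
    "Card B": {"fps": 160, "watts": 420},
}
for name, c in sorted(candidates.items(),
                      key=lambda kv: kv[1]["fps"] / kv[1]["watts"],
                      reverse=True):
    print(f"{name}: {c['fps'] / c['watts']:.2f} FPS/W")
```

And since every watt drawn ends up as heat in the room, FPS/Watt is as much a comfort metric as a cost one.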
 
...unless there's a big increase in efficiency (which I doubt)...
I don't think any amount of efficiency improvement would help. The problem is that no matter how efficient a chip is (and efficiency does improve massively with every process node), it's always possible to make it faster by shoving more power into it, and performance sells more than efficiency (so far).

*I bet that if the OP re-did the graph, but instead of choosing the flagship card from each generation looked at the mid-range (the xx60/xx60 Ti from each gen), there would be much lower increases over the same timeframe.
I'd like to do that, but how would I choose a card from each generation? The model numbers aren't consistent, and there isn't necessarily a midrange at all (e.g. the GeForce 256 was the only card of its generation, and there are no mid-range RTX 4000 series cards at the moment).
 
I was just about to post on exactly the same question - my husband and I have two 'gaming/power' PCs for games, 3D graphics, Photoshop, etc.

But these days we're mostly working from home and remoting into a virtual PC elsewhere.

Reading these sorts of posts has made me wonder whether I should get a couple of thin-client PCs that I can switch between, so I only use the full power of the grunt PCs when I need it.

Need to do the maths, but I wonder if it's not that PCs have too much power, but rather that they're not particularly good at managing that power - monitoring my PC's power usage through the day, it runs at 300W even when relatively idle...

See I've wondered about building something centred around things like this:

Running at 15W, the savings would certainly add up throughout the year, and it would probably pay for itself within 12 months...
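A quick sanity check on that payback guess (the 300W idle draw and 15W figure come from this post; the electricity price, hours, and thin-client purchase price are my own assumptions):

```python
# Thin-client payback sketch. 300W idle and 15W are from the post above;
# everything else below is an assumption.
IDLE_DESKTOP_W = 300
THIN_CLIENT_W = 15
HOURS_PER_DAY = 8        # working-from-home hours (assumption)
PRICE_PER_KWH = 0.34     # £/kWh (assumption)
CLIENT_PRICE_GBP = 150   # purchase price of the thin client (assumption)

saving_kwh = (IDLE_DESKTOP_W - THIN_CLIENT_W) / 1000 * HOURS_PER_DAY * 365
saving_gbp = saving_kwh * PRICE_PER_KWH
print(f"Saving ~ £{saving_gbp:.0f}/yr, payback ~ {12 * CLIENT_PRICE_GBP / saving_gbp:.0f} months")
```

On those numbers the payback is around six months, which leaves plenty of margin for less favourable assumptions.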
 
I tend to underclock/undervolt not so much for efficiency but because my room gets uncomfortably warm (5950X/3090). I don't want more heat output than I already have; that's what limits power draw for me.
Interesting idea - how much of a difference does this make to energy use? I wonder if a hardware button on the desk could be a good solution for me...?
 
And as if to drive the point home, today's reviews of the Ryzen 7950X show that despite using a more efficient 5nm process than the 7nm 5950X, it's actually less efficient, because they have pushed it way, way past the point of best efficiency. Presumably this is to make sure it doesn't look slow compared to Intel's upcoming CPUs.

See I've wondered about building something centred around things like this:

Running at 15W, the savings would certainly add up throughout the year, and it would probably pay for itself within 12 months...
We have one of those. It gets used as an HTPC, and for general use. It's really low power (10W when idle), but still fast enough for everything most people use a computer for. You're not going to be playing anything but the most basic of games on it though.
 
I don't think any amount of efficiency improvement would help. The problem is that no matter how efficient a chip is (and efficiency does improve massively with every process node), it's always possible to make it faster by shoving more power into it, and performance sells more than efficiency (so far).


I'd like to do that, but how would I choose a card from each generation? The model numbers aren't consistent, and there isn't necessarily a midrange at all (e.g. the GeForce 256 was the only card of its generation, and there are no mid-range RTX 4000 series cards at the moment).

You're right on the efficiency of course; I forgot about the Jevons paradox (https://en.wikipedia.org/wiki/Jevons_paradox).

I'd be happy to help you choose the mid-range from each generation; I'll make a little list here. As you say, there may be some generations where it isn't really possible, and in some cases there might be two mid-range options, in which case I would take the mean of their TDPs (see the sketch after the list). I'll start with the FX series in 2003.

FX series: 5600 or 5700
6 series: 6600 or 6700
7 series: 7600 or 7650
8 series: 8600
9 series: 9500 GT or 9600
100 series: GT 130 or GT 140
200 series: GT 240 or GTS 250
300 series: GT 340
400 series: GTX 460
500 series: GTX 560
600 series: GTX 650 Ti or GTX 660
700 series: GTX 760 or GTX 760 Ti
900 series: GTX 960
10 series: GTX 1060
16 series: GTX 1660
20 series: RTX 2060 or RTX 2060 Super
30 series: RTX 3060 Ti
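Once the TDPs are collected, averaging the two-option generations is trivial. As a sketch, here are just the last two generations, with nominal board powers from memory (worth double-checking against a database like TechPowerUp):

```python
# Mean mid-range TDP per generation. Only the last two generations are filled
# in, using nominal TDPs from memory; the rest would need looking up.
midrange_tdps_w = {
    "20 series": [160, 175],   # RTX 2060, RTX 2060 Super
    "30 series": [200],        # RTX 3060 Ti
}
for gen, tdps in midrange_tdps_w.items():
    print(f"{gen}: {sum(tdps) / len(tdps):.0f}W mean TDP")
```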
 
You do of course have a choice now in how exactly you use the hardware, and I think people are becoming much more aware of that. Having been involved in small form factor yet high-powered PCs for a number of years, I became quite aware of this some time back. That is, you can undervolt a GPU or a CPU a little and lose a few percent of performance, but draw significantly less power and consequently generate far less waste heat.

The new Ryzen CPUs are a good example of this without needing the knowledge to play with PBO settings, as you can just use Eco mode. Particularly for the single-CCD parts, the 7600X and 7700X, this yields a significant power and heat saving for a minor performance hit - and even that depends on what exactly you are doing with the CPU; in many cases there can be no performance hit at all.

The point is, there is some choice in how you use these parts: do you want to run for better efficiency, or for absolute maximum performance with the added power cost attached?
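To put illustrative numbers on the Eco mode point (the 142W and 88W package power limits are AMD's published defaults for the 7700X at stock and in 65W Eco mode; the ~5% performance figure is an assumption for illustration, not a benchmark):

```python
# Eco mode trade-off sketch for a Ryzen 7 7700X. The PPT limits are AMD's
# published figures; the performance delta is an illustrative assumption.
stock = {"ppt_w": 142, "perf": 1.00}   # default package power limit
eco = {"ppt_w": 88, "perf": 0.95}      # 65W Eco mode; ~5% slower (assumption)

power_saving = 1 - eco["ppt_w"] / stock["ppt_w"]
perf_loss = 1 - eco["perf"] / stock["perf"]
print(f"~{power_saving:.0%} less power for ~{perf_loss:.0%} less performance")
```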
 