Can a PC be too powerful?

A long time ago, PCs consumed so little power that nothing special had to be done to cool them. As they became faster, power consumption increased too, and so small heatsinks were added, then bigger heatsinks, then fans, then heatpipes. To illustrate this trend, I plotted the TDP of nVidia's top GPUs from each generation from 1999 to today (ignoring dual GPUs, Titans, and OEM-only cards):

[Chart: gpu_tdp_2022.png - TDP of nVidia's top GPU per generation, 1999 to 2022]

GPU,Year,TDP/W,Source
GeForce 256 DDR,1999,20,http://hw-museum.cz/vga/162/asus-geforce-256-ddr
GeForce2 Ultra,2000,33,http://hw-museum.cz/vga/164/asus-geforce2-gts-tv
GeForce3 Ti500 ,2001,43,http://hw-museum.cz/vga/174/nvidia-geforce3-ti-500
GeForce4 Ti4600 ,2002,45,http://hw-museum.cz/vga/184/msi-geforce4-ti-4600
GeForce FX 5950 Ultra,2003,74,https://www.techpowerup.com/gpu-specs/geforce-fx-5950-ultra.c79
GeForce 6800 Ultra,2004,81,http://hw-museum.cz/vga/308/msi-geforce-6800-ultra-pci-e
GeForce 7950 GT,2006,65,https://www.techpowerup.com/gpu-specs/geforce-7950-gt.c183
GeForce 8800 Ultra,2007,175,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_8_(8xxx)_series
GeForce 9800 GTX+,2008,141,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_9_(9xxx)_series
GeForce GTX 285,2009,204,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_200_series
GeForce GTX 480,2010,250,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_400_series
GeForce GTX 580,2010,244,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_500_series
GeForce GTX 680,2012,195,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_600_series
GeForce GTX 780 Ti,2013,230,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_700_series
GeForce GTX 980 Ti,2015,250,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_900_series
GeForce GTX 1080 Ti,2017,250,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_10_series
GeForce RTX 2080 Ti,2018,250,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_20_series
GeForce RTX 3090,2020,350,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_30_series
GeForce RTX 4090,2022,450,https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_40_series

The numbers aren't perfect: some figures are the TDP of the chip itself while others are the total board power (TBP) of the entire card, there's the usual fudging of official specs vs. actual power consumption, and the official TDP of older cards is hard to find. It's good enough to illustrate the trend though. Similar plots could be made for GPUs and CPUs from other companies.
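For anyone who wants to reproduce or extend the chart, here's a minimal Python sketch. It assumes the data above is saved as gpu_tdp.csv with the header row shown; the file names are just placeholders I picked, not anything official.

```python
# Minimal sketch: plot GPU TDP over time from the CSV listed above.
# Assumes the data is saved as "gpu_tdp.csv" with the header "GPU,Year,TDP/W,Source".
import csv

import matplotlib.pyplot as plt

years, tdps, names = [], [], []
with open("gpu_tdp.csv", newline="") as f:
    for row in csv.DictReader(f):
        years.append(int(row["Year"]))
        tdps.append(float(row["TDP/W"]))
        names.append(row["GPU"].strip())

plt.plot(years, tdps, marker="o")
plt.xlabel("Year")
plt.ylabel("TDP (W)")
plt.title("TDP of nVidia's top GPU per generation")
for x, y, name in zip(years, tdps, names):
    plt.annotate(name, (x, y), fontsize=6, rotation=45)
plt.tight_layout()
plt.savefig("gpu_tdp_2022.png", dpi=150)
```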

You can see a steady increase until 2010 when TDP stabilized at around 250W. I thought that might have been the ultimate limit for GPUs, but this year's releases show that nVidia and AMD think more is acceptable, and the demand for those GPUs suggests they are correct.

There must be an upper limit somewhere though - some value so high that customers will refuse to buy. Such limits have been pushed against before, e.g. with 200W CPUs. Will we see those again, as the slowing of semiconductor fabrication improvements makes increasing power consumption the only way to improve performance, or will we decide that there's an upper limit beyond which we won't go? Does the popularity of 1kW+ PSUs mean that people are willing to own computers that actually use as much power as a kettle? How much power would you be willing to feed to a machine to make it faster?

EDIT: Added RTX 4090.
 
Our demands are increasing. We used to game on a 75Hz CRT, so FPS didn't need to be very high. Also, a typical resolution was 1280x1024, and it remained the norm for quite a while. Neither of these parameters was increasing: we only needed to upgrade to keep up with advances in games, not for resolution and FPS increases. People were also happy to game on medium settings etc. Ultra was something people used when they played older games, because the extra settings were meant for future hardware. These days people expect to use the highest settings on release day.

Now people are demanding higher resolution, higher refresh as well as ultra settings.

I think that new hardware is being pushed too far. If the clock speeds were much lower, they'd need far less cooling and far less power. They're being pushed to the limit.

Back in the day, they weren't pushed at all. They could easily have fitted HUGE coolers and ramped up the voltage.
 
Over the years, CPU and GPU dies shrink so they can fit more transistors on the die, all of which need to be powered, so the total power needed goes up.

More power consumption means more heat emitted.

Don't forget a PC can do far more than the previous generations, and a lot of people don't care about power consumption as long as it does what they want.
 
Some good research, but probably not unexpected.

Just look at the demand for Nvidia's latest cards: users are demanding better graphics at higher resolutions, so the amount of power that has to go into these cards isn't surprising.
 
Now people are demanding higher resolution, higher refresh as well as ultra settings.

I think this is the key. Increasing resolution multiplies the number of pixels the GPU has to shade every frame, and high refresh rates multiply that again; throw in RT on top and there's a lot of pressure driving more GPU performance. Power budgets have been driven up to help achieve it.
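To put rough numbers on that, here's a quick sketch of how raw pixel throughput scales. The resolution/refresh-rate combinations are purely illustrative, not anyone's specific setup, and raw pixel count is only a crude proxy for GPU load.

```python
# Rough illustration of how many pixels per second a GPU has to shade
# at different resolution / refresh-rate targets (illustrative values only).
targets = {
    "1080p @ 60 Hz": (1920, 1080, 60),
    "1440p @ 144 Hz": (2560, 1440, 144),
    "4K @ 120 Hz": (3840, 2160, 120),
}

base = 1920 * 1080 * 60  # 1080p60 as the reference point
for name, (w, h, hz) in targets.items():
    px_per_s = w * h * hz
    print(f"{name}: {px_per_s / 1e6:.0f} Mpx/s ({px_per_s / base:.1f}x 1080p60)")
```

Even before ray tracing, 4K at 120Hz is roughly eight times the pixel throughput of 1080p at 60Hz.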

For me the main limiting factor is heat and noise. My room gets uncomfortably hot on a summer evening already.
 
Yes, top-end GPU power has been creeping up, but that's not the story over the whole industry. CPU power has been falling with the widespread switch to laptops and lower-power devices. The new Ryzens have phenomenal performance, but their power consumption is low with respect to *any* mainstream desktop CPU of the last 20 years.

I think the trajectory, for all but a small group of hobbyists/enthusiasts, is lower power consumption across the board. Handhelds, laptops, desktops - the Ryzen 5000 APUs should satisfy most desktop users on a much lower power budget.
 
I honestly expect power consumption of high-end GPUs to level out or fall from this point onwards. I would go as far as to say that the latest Nvidia numbers are only because they were backed into a corner by the need to improve performance substantially while also committing to the Samsung 8nm node, a bit like Intel's latest 14nm iteration. The performance-per-watt change over last gen is pretty shambolic as a result.
 
This is true.
In the past, progression to the next level of processing largely consisted of making the chips smaller. Making the individual transistors smaller reduced their power, increased their speed and allowed us to have more and more of them for a given size. The problem is there is a limit to this.

All matter has a wave function, a property that allows it to behave in ways we might consider unexpected. Transistor junctions are now so small that they are similar in size to electron wave functions. This allows the electrons in a transistor to behave in odd ways; in particular, they can now spontaneously jump right across the junction. These electrons add up to a leakage current that generates heat. The smaller the junctions, the greater the number of electrons that can suddenly appear on the wrong side, so the greater the leakage current and the greater the heat. This is why chips have been getting hotter and hotter when idling over recent years: we have been approaching the quantum limit on the size of transistor junctions.

The only way to go after this is better design and chips containing more transistors (and producing more heat). It really does make you wonder whether we are fast approaching a limit or whether there is some way round it. It could well be that there isn't. I guess we can only wait and see.
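As a rough illustration of why pushing voltage and clocks gets expensive so quickly, here's a sketch of the textbook approximation of chip power as dynamic switching power plus static leakage. Every number below is made up purely for illustration; it's not a model of any real GPU or CPU.

```python
# Textbook CMOS power approximation, with made-up example numbers:
#   P_total  ~ P_dynamic + P_static
#   P_dynamic ~ alpha * C * V^2 * f   (switching activity, capacitance, voltage, clock)
#   P_static  ~ V * I_leak            (leakage through nominally "off" transistors)
def chip_power(alpha, c_farads, volts, freq_hz, i_leak_amps):
    dynamic = alpha * c_farads * volts**2 * freq_hz
    static = volts * i_leak_amps
    return dynamic, static

# Illustrative only: a hypothetical chip at 1.0 V / 2.0 GHz vs pushed to 1.2 V / 2.5 GHz.
for volts, freq in [(1.0, 2.0e9), (1.2, 2.5e9)]:
    dyn, stat = chip_power(alpha=0.2, c_farads=1e-7, volts=volts,
                           freq_hz=freq, i_leak_amps=20.0)
    print(f"{volts:.1f} V, {freq / 1e9:.1f} GHz -> dynamic {dyn:.0f} W + leakage {stat:.0f} W")
```

Because voltage enters the dynamic term squared, and higher clocks usually need higher voltage, a 25% clock bump in this toy example costs well over 50% more power.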
 
(quoting the opening post's TDP chart and data table)

Nice data, but it's flawed: if you mix in dual GPUs you will see that we haven't really gone up much since 2010.

So I don't think PCs are getting too powerful, i.e. too power-thirsty.
 
Is modern clever power management a factor here as well?

In the past your card would overheat and crash if your case cooling setup was sub par.

With modern cards they will throttle back and run slower to accommodate the higher temperatures.

Newer cards are factory overclocked and will only use full power when conditions are right.
 
I honestly expect power consumption of high-end GPUs to level out or fall from this point onwards. I would go as far as to say that the latest Nvidia numbers are only because they were backed into a corner by the need to improve performance substantially while also committing to the Samsung 8nm node, a bit like Intel's latest 14nm iteration. The performance-per-watt change over last gen is pretty shambolic as a result.

I think this too, I can see it falling from here on.

The high-end cards prior to this generation couldn't keep up with what was required, so a significant generational leap in performance was needed, which has meant compromises on the power side. The consoles are now hardware-locked and so I'd expect modest improvements and power reductions for the next 5 years.
 
Tbh, really high power-rated PSUs were very common in the late 90s and early 00s. I had some 1000W+ ones in my systems. Those systems had pretty big power draws due to overclocking the CPU and GPU.

I think it depends on what era you are looking at. GPUs have certainly pushed things up a heck of a lot of late, and GPU > CPU power draw is a fairly recent trend. But CPUs drawing around 100W has been common since the mid-2000s, right through the 2010s and to date.

So I would say that since AMD's Zen+ processors there has been a significant improvement in power efficiency, i.e. you can get the same amount done for less power, or more done for the same amount of power.

But on the GPU side, there doesn't seem to be a slowdown in the power race.
 
I know this thread is old, but now that we have the official specs for the 4090, I wondered if anyone had changed their opinion. With energy prices and temperatures rising, is power consumption more important now? Did you always buy the best performing card before, but now think that you might go one or two steps down for lower power?

I updated the chart in the first post. It now looks like the flat-ish TDP between 2010-2020 was just a temporary respite from the steady rise in power.
 
This reminds me of the early days of CPUs.
Can't beat the competition with better architecture? Just push the TDP higher.

At some point diminishing returns kick in and the efficiency drops off a cliff.

I'm hoping that we don't end up with silly high TDPs for GPUs in the next 3 or 4 generations.

Personally, I tend to go for the best overall GPU while trying to balance efficiency, price, features and performance.

There's also the option of underclocking/undervolting the higher-end GPUs; 90% of the performance for 70% of the power required. It worked well with the 20xx and 30xx series. Let's hope it's the same for the 40xx series.
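For anyone wanting to try the lazy version of this on an Nvidia card, here's a minimal sketch that caps board power with nvidia-smi rather than doing a proper undervolt (a blunt power cap isn't quite the same thing). The 320W value is just an arbitrary example of stepping down from a 450W card, not a tested sweet spot, and the script needs admin/root rights and nvidia-smi on the PATH.

```python
# Sketch: cap a GPU's board power with nvidia-smi instead of a manual undervolt.
# Assumes an Nvidia card, nvidia-smi on PATH, and admin/root rights.
import subprocess

def set_power_limit(watts: int, gpu_index: int = 0) -> None:
    # -pl sets the power management limit in watts for the given GPU.
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

def current_power(gpu_index: int = 0) -> str:
    # Query the current draw and the active limit.
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=power.draw,power.limit", "--format=csv,noheader"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    set_power_limit(320)      # example: cap a nominally 450 W card at 320 W
    print(current_power())    # e.g. "280.12 W, 320.00 W"
```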
 
People are not demanding anything; companies listen to the top 0.01% of users and think that's what everyone wants, because it's a great excuse to charge more and push technology forwards whilst not caring about the majority.

Remember, sites like this, Reddit etc. are just echo chambers of elitist people and enthusiasts. The issue is ignorance.

Good luck with that one! Minority but very vocal.
 