You are making assumptions.
- Yes, a more modern server uses less power and performs better ... but look at the price of a modern box compared with an old one. Even a £150/year saving in running costs equates to many years of usage before it covers the cost of buying a more modern server with equivalent capacity (rough sums after this list).
- Pulling 150W is hardly guzzling ... I'd imagine there are many gamers on this forum whose PCs pull far more power and are probably on for far longer ....
- .... because it's not on 24x7 ... I have other systems which are, and they draw far less power. I have a requirement to spin up a large number of VMs on a temporary basis but actually install the OS on them (otherwise I'd just use AWS).
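To put rough numbers on the break-even argument above, here's a minimal sketch. The electricity price, wattage for a modern replacement, and purchase price are all illustrative assumptions, not real quotes:

```python
# Rough break-even estimate: how long before a modern server's energy
# savings pay for its purchase price? All figures below are assumed
# for illustration only.

PRICE_PER_KWH = 0.15      # assumed UK electricity price, GBP/kWh
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float, hours_on: float = HOURS_PER_YEAR) -> float:
    """Annual electricity cost in GBP for a box drawing `watts` while on."""
    return watts / 1000 * hours_on * PRICE_PER_KWH

old_server_w = 150        # the G7 under discussion
new_server_w = 40         # assumed draw for a modern equivalent
new_server_price = 1500   # assumed price of a modern box, GBP

saving_per_year = annual_cost(old_server_w) - annual_cost(new_server_w)
print(f"Annual saving: £{saving_per_year:.0f}")
print(f"Break-even: {new_server_price / saving_per_year:.1f} years")
```

Under those assumptions the saving is about £145/year, so a £1,500 replacement takes over a decade to pay for itself, and longer still if the old box isn't on 24x7.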
But you say you are running an R1700 ... I assume you mean a Ryzen processor ... so are you actually running a server or just a glorified PC?
Assumptions are something we are both seemingly guilty of; for example, you don't know what my R1700 setup cost me, and I'd bet it was a lot less than your G7. Also, gamers tend to draw that power only while gaming; at idle a modern PC is highly efficient (Intel's power gating is a thing of beauty), and both AMD and Nvidia have been reducing idle power consumption for years.
Also, did you really just suggest a server is defined by its hardware rather than its function? Hetzner/Myloc/OneProvider/OVH/WSI all have DCs packed with glorified PCs, and HP/Dell/Fujitsu Siemens and even Apple have all sold servers built on consumer-grade hardware, or a mix of consumer and enterprise-class parts, for a long time. Are they all wrong to call them servers?
Either way, a G7 is likely approaching eight years old with up to 70,000 hours on it. If you only use it very sparingly, your logic isn't bad; thing is, most of my lab kit runs 24/7 because it's actually used. Guess that's just me and anyone else running actual network services on our glorified PCs (quick sanity check on those hours and costs below).
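For what it's worth, here's a rough sanity check on the 70K-hour figure and what 24/7 running actually costs; the £0.15/kWh electricity price is an assumption, same as in the sketch above:

```python
# Sanity check: eight years of 24/7 uptime, and what it costs to feed a
# 150W box for that long. The £0.15/kWh price is an assumed figure.

HOURS_PER_YEAR = 24 * 365                  # 8760
print(8 * HOURS_PER_YEAR)                  # 70080 -- "up to 70K hrs" checks out

g7_watts = 150
price_per_kwh = 0.15                       # assumed GBP/kWh
yearly = g7_watts / 1000 * HOURS_PER_YEAR * price_per_kwh
print(f"£{yearly:.0f}/year, £{8 * yearly:.0f} over eight years of 24/7")
```

At roughly £197/year, eight years of 24/7 running is about £1,577 in electricity alone, which is why duty cycle, not just purchase price, decides which box is cheaper.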