Mini/Micro PCs.

tldr;

How do the modern Celeron J-class micro PCs fare under normal desktop/office type use? The most I'm looking for is a fairly smooth 4K YouTube and browsing experience.

Do you have any recommendations for the sub-£300 region?

Basically anything that can drive 2x HDMI 2.0 4K@60Hz displays and not chug when running a few YouTube windows.


----------------
I'm back again and still looking to reduce the power draw of my tech. Electricity goes up 30% again on the 1st of Oct.

Presently, now with monitoring and graphs.

The 24/7 server consumes 50W or 70W depending on whether the disk pack has timed out and shut down or is awake*.

My main desktop/gaming PC draws between 95W and 108W idle. By "idle" I mean where the graph bottoms out on average. There are spikes when things light up the CPU or GPU, but generally it's sitting around 100W.

My work laptop on its docking station pulls about 20W, 8 hours a day.

When you add a bunch of other gadgets like monitors, routers, switches, etc., the total consumption when everything is on is around 280W. Run a game and that jumps to 580W.

The elephant in the room is, as always, the gaming PC. 100W is too high. It's a waste. When I'm "working", I only use my personal desktop for, well, personal stuff: email, FB, watching YouTube and doing internet searches that are blocked at work. (Like GitHub, believe it or not!)

I'm sure I can get this kind of basic desktop performance out of a 20W mini-pc.

*Keeping the USB disk pack in standby is proving difficult. The current issue is the software RAID waking the RAID1 pair VAULT to update it or something. Wakes it up for about 5 minutes every 10!
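For anyone with the same problem, this is the sort of digging that narrows down the culprit on a Linux host. The tally function is mine and the sample log lines are made up for illustration, not real output:

```shell
# One way to find what keeps waking the array: vm.block_dump makes the
# kernel log every block-layer access to dmesg (removed in kernel 5.12+;
# fatrace is the modern alternative). Enable it with:
#   echo 1 | sudo tee /proc/sys/vm/block_dump
# then tally which processes touched the disks:
wake_culprits() {
  grep -oE '^[a-zA-Z0-9_.-]+\([0-9]+\)' | sort | uniq -c | sort -rn
}

# Illustrative dmesg-style lines; in reality you'd pipe `dmesg` in.
printf '%s\n' \
  'md0_raid1(512): WRITE block 8 on sdb1' \
  'smartd(901): READ block 0 on sdb' \
  'smartd(901): READ block 0 on sdc' \
  | wake_culprits
```

Whatever process tops that list is what's poking the pack awake.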

 
I have considered the eWaste route, such as the Dell Optiplex SFFs and the USFFs that are now getting to sensible prices. I already have 4 Optiplexes :)

Certainly the 2nd-6th gen versions seem to idle around 20W. Maybe the newer 8th-10th gen USFFs will idle at half that, do you think?

On the lower end of the spectrum, a Raspberry Pi wouldn't cut it. Maybe a Pi 4, but they are like hen's teeth.

I have an old laptop, but it's a little too old. Like a 2nd gen i3 with 6Gb RAM.

As to switching: I don't really mind switching monitor inputs. Switching/sharing keyboard/mouse is not nice though. So I suppose another £150 for a decent dual-monitor KVM... which could itself pull 3+W.
 
The problem is mainly that standard ATX PSUs are very inefficient at low load and have high overheads (like 6-7 watts just for being turned on). Most OEM systems have 12VO type PSUs, so they can achieve much lower idle figures. If the motherboard has all the power features enabled and is designed for it, then you can get mid teens with most AMD APU and Intel IGP systems and a standard PSU, but otherwise it'll be in the 20s, possibly 30s with high-end memory.

Yep. They more or less all have the same slowly rising, gentle crest, gently falling efficiency curve. It's been a while since I checked, but I'm fairly sure at the lower end you can be looking at only 60% efficiency.

"Lower end" will of course then be relative to the MAX output and the peak efficiency output.

So my Corsair 850W PSU, while the PC is pulling only, say, 50W, will actually draw more like 90W from the wall and emit 40W of heat. Sure, when it's gaming, pulling 450W, the PSU is in its efficiency plateau and only pulling 500W from the wall.
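The arithmetic behind those figures, as a quick sketch. The efficiency numbers are my round estimates, not measurements:

```shell
# Wall draw = DC load / PSU efficiency; the difference is shed as heat.
wall_draw() {
  awk -v load="$1" -v eff="$2" \
    'BEGIN { printf "%.0fW from the wall, %.0fW as heat\n", load/eff, load/eff - load }'
}

wall_draw 50 0.56   # low-load end of the curve: ~89W in for 50W out
wall_draw 450 0.90  # efficiency plateau: ~500W in for 450W out
```

Which is why the low-load end hurts so much: the PSU wastes almost as much as the PC uses.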

If I put a 300W PSU in it and it pulled 50W, it might only pull 70W from the wall. However, as soon as I fire up the 3080, the PSU faults and the system shuts down.

This is why those small, corporate desktop ewaste boxes are so appealing. They are partly designed and built to make real power saving impacts in businesses that run 1000s of them 24/7.

If I had loads-a-money(tm) I'd buy a big fat dual-board, dual-PSU box with a built-in KVM. With prices of new gear where they are, that would still run me well over £1000, and that includes reusing everything from my current build!
I've run "new stuff" builds through the OC shop and I can't get much under £400. That's picking a 5600G, a micro-ATX board and 8Gb RAM.
 
So I dug out my old laptop. Tried it, checked its specs and it's an i3-32xxS something.

However, I remembered I still had the Optiplex 7010 (USFF) that used to run the living room telly, but couldn't do 4K, so it got replaced with an Optiplex 990 (SFF) + GTX 1030.

I got lucky. Not only is its spec better (an i5 3440, 8Gb RAM), but the Intel HD graphics actually does support my 3440x1440 monitor.

It will take a bit of time to be sure, but it looks like it's going to work fine for anything but gaming or VR. The box itself consumes about 20W, so that's a net saving of around 90W during the day.

In fact, in the evenings, my energy monitor graph went "green", meaning less than 300W house-wide!
 
Spoke a little too soon. I had a little "chewiness"; hard to explain, not quite "desktop'ing through treacle", but something was off.

The current setup with a DP output through an HDMI converter to an HDMI2.0 port gives me only 30Hz at 3440x1440.

That might not be fixable, but I have to try a direct DP-DP connection to the monitor. That will be a pain, as the full DP port is used by the gaming PC :( Meaning I'll definitely need a KVM to switch back and forth. Currently I only need to move 1 USB cable and the audio jack if I want wired audio on the gaming PC.
 
It's passive (cheap). I think you are right: the adapter only supports 1.4, which tops out at something just over HD at 60Hz. The input port is HDMI 2.0, but it's not worth risking the cost of an active 2.0 converter.

I did a bit of doodling with pen and paper and decided to upgrade one of the Dell eWaste machines. I will need to do a bit of moving SSDs about, but I think I can get all the Dell eWaste boxes where they should be without too much hassle. Windows 10 'should' be fine; all the boxes have licences, I just need to swap product keys and hope that Windows 10 doesn't have a panic attack on a new (very similar) box. Linux won't care, I'll just need to check the disk UUIDs.
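For the Linux side, the UUID check is just comparing what fstab expects against what `blkid` reports on the new box. A sketch; the fstab fragment and UUIDs here are made up for illustration:

```shell
# Pull the UUIDs /etc/fstab expects; compare against `sudo blkid` output.
fstab_uuids() { grep -oE '^UUID=[0-9a-fA-F-]+' "$1" | cut -d= -f2; }

# Illustrative fstab fragment -- on the real box: fstab_uuids /etc/fstab
cat > /tmp/fstab.sample <<'EOF'
UUID=0a1b2c3d-1111-2222-3333-444455556666 /          ext4 defaults 0 1
UUID=9f8e7d6c-aaaa-bbbb-cccc-ddddeeeeffff /srv/vault ext4 noauto   0 2
EOF

fstab_uuids /tmp/fstab.sample
```

Since fstab mounts by UUID rather than device name, the disks should come up in the right places regardless of which SATA port they land on.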

The upgrade was because... when I shut the gaming PC down, I realised I lost not just my gaming rig, but the bits and pieces of development environments on there. Like VSCode, PyCharm, Eclipse, MQTT Explorer, etc. etc. I don't mind spinning up freshly installed Windows boxes, I can get it done in under an hour. But reinstalling and re-setting-up dev environments is a right pain in the wazoo.

Solution: Move the dev environment to a VM. Run the VM on the 24/7 server, then it doesn't matter which "client" box I have booted, I can RDP/VNC to the dev VM and work on it from any room in the house!

Problem: The server only has an i5-3470 and 8Gb of RAM, so launching a Windows 10 VM with 4Gb of RAM over-commits the system, leaving too little for disk cache.

Plan: I bought an i7-4770 with 16Gb RAM which will replace the server and should give me enough room for 1 or 2 4Gb VMs. The i5-3470 then upgrades my new "office" PC from the i3-32nn.

This is a bonus, as the current one has a dead CMOS battery and can be a bear to boot after a power cut. The BIOS cannot enable the CPU graphics properly, so it presents the monitor with a VERY, VERY basic VESA framebuffer, which the Windows installer determines to be 480x320 greyscale. Only 1 monitor I have supports that res, so to boot it I have to get an old 22" monitor out of the attic! AND a wired USB keyboard! I'll happily return both to collecting dust.

I have to check while I'm at it, as I have a suspicion the least used one, in the living room, may also be a 3rd gen i7. Might do a bit more shuffling around.

The i7-4770 has Intel HD 4600 I think, but I checked and it will do 4K 60Hz over DP at least. I know the others do 4K 30fps on HDMI, as they needed 1030s to get 4K 60fps. But I might get lucky with direct DP-DP.
 
Upgrade the server with more RAM and deploy VDI with zero clients in every room :p

I did consider the "thin client"/"fat VM host" approach. To do that properly though, the gaming PC with the 5800X and 32Gb of RAM would be called for. That would make the VDIs more powerful than the Dell eWaste clients. However, the current server idles at about 21W, not 120W.

The other issue is I believe NVidia have put their vGPU drivers behind a paywall now. So no GPU virtualisation without ££££.

However, I am using VDI for development... backed up by the Linux host underneath, of course. The 4770 being delivered supports Hyper-Threading and VT-x, so it should support nested hypervisors and can then run WSL (and Android Studio) on the same VM.
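Assuming a Linux host, checking the hardware side before committing to nested VMs is a one-liner against /proc/cpuinfo. The sample flags line below is illustrative; on the real box you'd point the check at the real file:

```shell
# vmx = Intel VT-x, svm = AMD-V; either is required for KVM. Whether
# nesting is enabled is separately visible (on Intel) in
# /sys/module/kvm_intel/parameters/nested.
check_virt() {
  grep -qE 'vmx|svm' "$1" && echo "virtualisation supported" || echo "no VT-x/AMD-V"
}

# Illustrative cpuinfo line -- for real: check_virt /proc/cpuinfo
printf 'flags\t\t: fpu vme de vmx est tm2 ssse3\n' > /tmp/cpuinfo.sample
check_virt /tmp/cpuinfo.sample
```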

Although... I wonder what the options are for RDP from a smart TV. If I could get 4K remote desktop at 60Hz playing Netflix without an eWaste media centre, that might be nice. Somehow I doubt it. RDP will not perform under that kind of load, surely.
 
Funny thing. I worked at one of the corporate mega-giants that used these eWaste boxes as "thin clients".

The irony was, at the time I recall the thin client had far more power than my VDI did! The thin client had a quad core with 8Gb of RAM. My remote VDI started with 2 cores and 2Gb. It took me 3 months to slowly upgrade it to 8 cores and 8Gb of RAM, when they finally told me no more was available. (Now they just give you one request called MAX Specs, because anyone who cares about their performance always ends up there anyway.)

Oddly, the very next year I was working for a different corporate mega-giant and they gave me my dev VDI straight up with 16 cores and 64Gb of RAM. Interesting how different ones have different policies.
 
Oh... there is the option to just ship HDMI (and USB) around the house, Linus LTT style. All my smart switches support multimedia VLANs, but at 1Gb/s I would need to upgrade the relevant ones to 2.5Gb.
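Back-of-envelope for why that route needs either compression or much fatter links, counting raw 24-bit RGB only (ignoring blanking intervals and audio):

```shell
# Raw bit-rate for uncompressed 4K60 24-bit video:
# 3840 x 2160 pixels x 60 frames/s x 24 bits/pixel
awk 'BEGIN { printf "%.1f Gb/s\n", 3840 * 2160 * 60 * 24 / 1e9 }'
```

Nearly 12Gb/s raw, so even 2.5Gb links rely on the HDMI-over-IP boxes compressing heavily (typically JPEG-style), which is where the latency and artefacts tend to come from.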

Just sounds too expensive right now.
 
Oh and update, in case anyone is following along with similar issues with drive power saving...

""*Keeping the USB disk pack in standby is proving difficult. The current issue is the software RAID waking the RAID1 pair VAULT to update it or something. Wakes it up for about 5 minutes every 10!""

This turns out to be caused by Ubuntu's default enabling of smartmontools, which pokes the drives periodically for a SMART data update.

Using: sudo systemctl disable smartmontools

Took care of it and now the drives actually sleep all night. That's another few quid a month saved.
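For anyone replicating this, a sketch of the relevant commands. The drive letter is a placeholder, and the timer conversion is the non-obvious bit:

```shell
# Stop the periodic SMART polling that was waking the pack
# (service name on Ubuntu; some distros call it smartd):
#   sudo systemctl disable --now smartmontools

# hdparm -S sets the drive's own idle spindown timer; values 1-240
# count in units of 5 seconds, so convert minutes like this:
spindown_value() { echo $(( $1 * 60 / 5 )); }

spindown_value 10   # 10 idle minutes -> pass 120 to hdparm
#   sudo hdparm -S "$(spindown_value 10)" /dev/sdX
#   sudo hdparm -C /dev/sdX   # reports "standby" once it has spun down
```

The `-C` check is handy for confirming the pack really is sleeping rather than taking your word for it.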
 
A very important point missed here, as discussed before: once we move into having to heat the room with actual heating/radiators, the gaming PC comes back online. The 100W is just heat. I'm not struggling for heat right now, it's still... well, just autumn. In a month, that 100W will just stop the heating coming on as much!
 
Something which can be irritating with many off the shelf NAS - they have the ability to put the discs into power saving mode but periodically various system tools will poke the discs for things like SMART updates or updating media indexes/thumbnails and keep waking the discs up unnecessarily with no ability to prevent it without 3rd party firmware or horrible hacks of the OS.

Indeed. With a "canned NAS OS" you will likely find a lot of "user space" dynamic filesystems and auto-mounting etc. All the bells and whistles that make it 'transparent' and easy to use. Those are all candidates for waking up disks to scan them.

On bare Ubuntu with only the services I use enabled, like samba, it's currently staying asleep so much that nearly every time I use it, it has to spin up.
A word to the wary though. If you are thinking of buying a multi-bay SATA->USB3 enclosure, beware that mine has a common chip (JMicron something). With all disks spun down, the SATA controller itself goes to sleep. If you wake a single drive, all 5 (in my case) get spun up when the SATA controller wakes back up... sequentially. If the drive you are waiting on is the last of the 5, or a member of a RAID array that needs to be rebuilt/scanned/assembled for use after shutdown... you could be waiting nearly a minute. Luckily the most accessed drives come online within about 15 seconds.

There are often odd behaviours with Windows too, like "Quick access" media files reaching out for icons and spinning drives up. There are also instances of applications and OS components timing out and producing errors while drives spin up. No instability though, just an annoying retry once in a while.
 
So, this evening: work laptop shut down, just one monitor, the server, the (thin) desktop and some LED lights and... 140W total.

Compare that to 2 weeks ago when it was around 250W+. That's maybe a £30-a-month saving.

I went round and spec'ed my ewaste machines, trying to get them in the right places.

Living room MC: i5-2400 8Gb Gtx 1030 - 4K 55" LG
Bedroom MC : i7-3470T 8Gb Gtx 1030 - 4K 43" LG
Server : i5-3470 8Gb
Desktop : i5-3470S 8Gb (USFF)

Bought a new one:
???? : i7-4790 16Gb

I tried to rank them with CPU Mark benchmarks online, but it just seemed odd.

3 x 3470s, but they differ by the T or S suffix. Those denote low and lower power requirements IIRC. So I think they rank as S, T, none.

The new box is pretty much slotted in as the "Server" replacement. The 4th gen i7 has more VM/HV support.

The i5-2400 looks out of place, but... I don't have a better SFF. If I dropped the 2400 and used the 3470S in its place, that one's a USFF and will not take the 1030 for 4K@60Hz :(

Maybe I need another one! Would make a nice Xmas present from my tech-aware brother. "Buy me an eWaste box to replace my i5-2400", and add, "if you want to spend less than £180, I'll sub it."
 
Update:
Austerity setup is working out much better than expected.

My daily is still the i5-3470 Dell Optiplex. I only notice the 30fps now when I switch the big gaming PC on, as it's so much slicker and snappier. It's like comparing a 1.0L Fiesta to a BMW M4. But the fuel bills match up too :)

The server upgrade went amazingly well. I had to buy a SATA power splitter, but I just swapped the drive in and it booted perfectly first time.

Better yet, having multiple client machines prompted me to move my development environments to VMs, which turned out to work really well. An 8Gb Windows 10 dev machine works perfectly. Adding a 4Gb Linux dev VM obviously works; running both VMs together does over-commit the machine, but it swaps a bunch of stuff out and then behaves fine, with a little lag on some things.

I use both 21:9s during the day, 1 for work, 1 for personal. In the evenings I just use one. I have only fired up the gaming PC for select tasks like gaming or Fusion 360, where 3D rendering power matters and which most other things don't need.

My "base load" electric now sits around 250W, rather than the 350W-400W it was 3 months ago.
 
Soon the big PC will be beneficial for heating purposes. As soon as the radiator thermostat starts calling for heat in the office routinely, I get to move back to the big machine for the daily; the power is not wasted then, it's heating the room!
 