Home Server + 10G ethernet

I managed to squeeze the 2x2GB Corsair sticks in, but had to remove the fins from one of the modules to get it to fit under the cooling fan. The Intel expander card is now in the board, which makes things much tidier. I can also now fit the side panel fan in place to help cool the cards.


Back side looks reasonably tidy.


It initially wouldn't boot as it didn't like the memory config. I tried upping the DRAM voltage and dropping to 1333 (from 1600), but no dice; I had to run 1066 to get it to boot. It might be possible to run it all at 1600 by tweaking the timings, but I'm not sure it's worth it for this use case. Linux seemed to cope OK with the changes, however I've run into network issues as it will not pick up an IP address. I'll need to look into it further tomorrow.
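For reference, these are roughly the checks I'll start with; the interface name eth0 is just a placeholder for whatever the board's NIC actually comes up as.

    # does the kernel see the NICs, and is the link up?
    ip link
    # is any address assigned at all?
    ip addr
    # try a manual, verbose DHCP request (swap eth0 for the real interface name)
    dhclient -v eth0
    # look for driver or link errors in the kernel log
    dmesg | grep -i -e eth -e link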
 
Spent most of today trying to debug the problem and not getting very far with it. I decided it would be easier to debug down in the house rather than in the loft, so I labelled all of the drives and cables, pulled the disks out and moved it downstairs. Once rebuilt, it was completely dead. I've tested the power supply on the old X79 system and that appears to be working fine. I took the working Antec supply out of that and plugged it into this and got squat; it won't even spin a fan. I tried it with no expansion cards and with no RAM, same result. I suspect the motherboard has died.

The 790FX and Phenom could go back in to get it working, but a new board is definitely needed to accommodate the additional expansion card for 10GbE. The dilemma is whether I bite the bullet, go new and write off the 20GB of DDR3 I have lying around, or find something used that will make use of that RAM.
 
The budget side of this project has just gone out of the window. After a long hard think, I have caved in and bought the following:

i5 9600k
Asus Prime Z390-A
16GB Team Group Vulcan 3000C16

I was looking to go Ryzen with a 1600 or 2600; however, looking at the PCIe lanes available, I'd not have enough to run the NVS 310 GPU, RAID card and 10GbE NIC. That meant going with something with an IGP. With Ryzen the only options are the 2200G and 2400G, which are OK, but I wanted to move up to a hex core. That left me looking at Intel: even though Ryzen has 8 more PCIe lanes, I'd have to tie 16 of them up with a GPU, meaning I'd be 8 lanes worse off than going Intel. I considered the i5-8400 as that's the cheapest 6-core they do, but given the £40 difference between that and the 9600K OEM chip, I felt the 9th-gen part was more sensible considering it's soldered and it's a K SKU. The challenge now will be getting the cooler to fit, as I have an original Prolimatech Megahalems which only came with the 775/1366 mounting parts. I bought the AMD retention kit separately back in 2009, but the 115x kit seems to be very hard to find these days.
 
I went for one of these to allow me to keep enough lanes free on my Ryzen server build; might be worth a look if it's not too late: https://www.zotac.com/sa/product/graphics_card/gt-730-pcie-x1

You can get a GT710 version for about £40
 
Good point on that GPU, I didn't realise they offered anything for an x1 slot.

That said, the price difference excluding one of those x1 GPUs is £84; add £40-50 to the Ryzen build cost and it's a difference of between £30 and £40. I don't think it's worthwhile trying to change it now.
 

Fair enough matey - I'm surprised they are so close on price, as the Ryzen 7 1700 (8c16t) is about £140, and that i5 seems to be a lot more expensive at the places I have looked. The Z390 boards also appear to be quite a bit more than the X370/X470 boards as well.
 
Extreme OTT server upgrade:
i5 9600k
Asus Prime Z390-A
16GB Team Group Vulcan 3000C16



Looking at creating my own mounting interface for my original Megahalems using a Noctua backplate and the original Prolimatech mounting kit for LGA775/1366.
 
Started work on the bracket mods needed to make my original Megahalems (775/1366 only) cooler fit on socket 1151. I decided the best approach would be to borrow a backplate from one of my other coolers and make it work with a mixture of mounting hardware: one Noctua backplate combined with some bolts of the same thread as the original Prolimatech parts, and the Noctua black plastic spacers filed down to match the thickness of the Prolimatech metal spacers. The only bolts I had with the correct thread and sufficient length had countersunk heads, so I found some suitable washers for them. The upper mounting plates then needed the 775 holes filing out towards the 1366 mounting holes, as 775 is 72mm hole spacing, 115x is 75mm and 1366 is 80mm.



Backplate fitted.


Mounting plates fitted.


Test run with paste to see how the spread looked. IMO, pretty much as good as any stock mounting setup.


Built. Currently tested and working OK, although the install media for CentOS is not playing ball: it locks up with a black screen after selecting "Install CentOS 7". The old openSUSE 42.3 install boots OK, but it definitely has driver issues, as the GUI is dog slow, suggesting no GPU acceleration, and again no network interfaces are working.


In other news, the Mellanox ConnectX-3 that I picked up for the server is as dead as a doornail. I've tried it in four different systems in a variety of PCIe slots and it doesn't show up as a device in either Windows or Linux.
 
Had a hell of a job installing a new version of Linux. There's some kind of issue with the integrated Intel GPU driver (i915) resulting in complete system freezes once you select an install option from the USB boot menu. After many hours of trying various kernel options via GRUB, I gave up and put the Quadro back in. That immediately fixed the installation issues. Once I had Fedora installed and set up using the Quadro, I shut it down, took the card out and re-enabled the Intel iGPU, and I haven't had a single issue since doing so.
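For anyone hitting the same freeze, these are the sort of kernel parameters I was experimenting with by editing the boot entry at the GRUB menu; treat it as a rough sketch, I can't promise any particular one helps on other boards.

    # appended to the end of the kernel (linux/linuxefi) line at the GRUB menu
    nomodeset            # disable kernel modesetting entirely, fall back to basic graphics
    i915.modeset=0       # disable modesetting for the Intel i915 driver only
    i915.enable_psr=0    # disable panel self refresh (mostly a laptop thing, but cheap to rule out)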

So far I've managed to get everything except Samba, VNC and TFTP back to the way it was prior to the initial botched "upgrade". I've become so stumped by the Samba issue that I've had to post a question about it in the Linux section. At least I've now got Plex working properly again.

I also took the opportunity to run the disk benchmark on the array, as I've not run it since I built it nearly 5 years ago. Performance seems pretty decent considering the age of the RAID controller (2007).
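It's not the exact tool I used, but if anyone wants to run a comparable sequential read test from the command line, something along these lines with fio should be close enough; the device name is a placeholder and the --readonly flag keeps it from writing to the array.

    # rough sequential read benchmark against the array (replace /dev/sdX with the actual array device)
    fio --name=seqread --filename=/dev/sdX --readonly --rw=read \
        --bs=1M --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based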
 
I have a kind of media server running in my main PC, streaming video to the Apple and TV devices around the house. At the moment I have a 4TB drive with the media on it, but I'm going to upgrade to an 8 or 10TB drive soon.
 
These arrived at work today. The end is in sight now; I just need to get another NIC for the server and some fibre.


At present I've only got one OM3 multimode fibre patch cable, so I can't test it properly, but it was extremely pleasing to see this once I connected my machine up.
 
Finally finished sorting out the OS. Had a hell of a job setting up vncserver due to PolicyKit issues making it difficult to run GUI applications with elevated permissions. The simple solution was to switch over to x0vncserver instead, which makes a lot more sense as it means I'm not running a separate desktop. Still suffering from some Intel i915 driver issues; weirdly, the problems returned when I changed screen. Turning off window compositing in Xfce stopped the shedload of errors, and it makes little difference to the usability of the OS.
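Roughly what that setup boils down to, if anyone wants to do the same; the password file location and port are just the defaults I'd expect, so adjust to taste.

    # set a VNC password (stored in ~/.vnc/passwd)
    vncpasswd
    # share the existing :0 desktop rather than spawning a second session
    x0vncserver -display :0 -PasswordFile ~/.vnc/passwd -rfbport 5900
    # turn off xfwm4 compositing, which is what stopped the flood of i915 errors here
    xfconf-query -c xfwm4 -p /general/use_compositing -s false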

Memory usage seems a tad high, but from what I can tell most of it is the LSI MegaRAID Storage Manager server (1.7GB being used by Java).
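Nothing clever about how I'm checking that, for what it's worth:

    # biggest resident-memory processes first
    ps -eo pid,rss,comm --sort=-rss | head -n 10
    # overall memory picture
    free -h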


All I need to sort out now is finding a replacement SFP+ NIC for 10GbE and re-cabling to the loft and my room. I've been looking at Solarflare NICs, as Mellanox ConnectX-3s are still pretty expensive and having had one DOA makes me less keen on them. Another Intel X710 is a potential option, albeit the most expensive.
 
Why are you using SFP+ rather than straight RJ45 cards? There are umpteen different SFP dongles and they need to be matched. Granted this mainly applies to fibre links but RJ45 keeps it simple and seems sufficient for your use case.
 
Simply put, the switch is the main reason I went SFP+ instead of copper. I got this 24-port PoE+ / 4x SFP+ Juniper switch for a lot less than a 10GBASE-T Netgear or Ubiquiti.

The other issue is that my flood wiring is Cat 5, installed nearly 20 years ago when 100Mb was the norm. Fibre has the advantage that, in principle, it's less limited by speed and distance than copper, so it could potentially be reused in the distant future for a 40 or even 100Gb upgrade.
 
Replacement NIC bought. After some lengthy research, I decided to try Solarflare and picked up an SFN7122F card nice and cheap to replace the DOA Mellanox CX3.



I tested it in one of the Windows machines first and all signs were good, although the drivers would not install; they kept causing an NDIS SYSTEM_THREAD_EXCEPTION_NOT_HANDLED bluescreen. I'm not sure of the cause; possibly the firmware on the card could have done with being updated. But the card showed a link with my Intel SFP installed and Windows recognised the device, which was good enough for me. I swapped it back out for my X710 card and put it into the server, and surprisingly it worked straight from the get-go: SFC9120 driver present and a 10Gb link. Yay. Now all I need to sort out is running some fibres.
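For anyone checking a second-hand card the same way, this is roughly how to confirm it on the Linux side; the interface name is a placeholder.

    # confirm the card is enumerated on the PCIe bus
    lspci | grep -i solarflare
    # confirm the sfc driver has bound to it (swap enp1s0 for the real interface)
    ethtool -i enp1s0
    # confirm the negotiated link speed
    ethtool enp1s0 | grep Speed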


Things have been running well so far. I've ironed out the Vivaldi Framework RAM issues, but the Plex DLNA server seems hell-bent on slowly chewing through RAM too.
 
Finally picked up a second OM3 fibre so I can do a test run at 10Gb. Performance in one direction looks pretty good, but not so good in the other direction.

Running my Threadripper workstation with the Intel X710 as the iperf server nets 7Gbps with the 9600K/Solarflare NAS as the client; the other way around only got 3.5Gbps. So far I've adjusted the Tx/Rx buffers on the Intel card to their maximums, but I've not made any changes to the Solarflare as I'm less familiar with Linux driver tweaks.
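For reference, the kind of commands involved (iperf3 syntax shown; the address and interface name are placeholders, and on the Linux/Solarflare side the ring buffers can only go as high as whatever the sfc driver reports as its maximum):

    # on the machine acting as the server
    iperf3 -s
    # on the client, a 30 second run (replace the address with the server's)
    iperf3 -c 192.168.1.10 -t 30
    # check current and maximum ring buffer sizes on the NIC
    ethtool -g enp1s0
    # raise them towards the reported maximums
    sudo ethtool -G enp1s0 rx 4096 tx 4096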
 
I had an AMD Phenom II X4 940 (4 cores at 3GHz) until just recently and the CPU was still powerful. In fact, it seemed to be getting faster over the years; I assume this was due to the software (Windows, games etc) being able to utilise more of the cores. Just after I got that I had a Core 2 Duo overclocked from 2.6 to 4.2GHz that initially ran games better than the AMD, but it went in the bin after only a few years when the Phenom started to stretch its legs. A year ago I started to have boot issues and gradually had to start removing HDDs to make the computer work. I tested the HDDs in another computer and they were fine.

I assume that my mobo was starting to fail (PCI controller?). Is your mobo relatively new?
 
Sorry to ask, but you've kind of done the upgrade that I'm wanting to do.
You've put Intel X710 cards in your machines apart from the server, which has a Solarflare; any reason you didn't go with another Intel?

I have my main PC, fileserver, ESXi box (with the VMs pulling from said fileserver) and backup machine all needing to be lashed together (thinking something MikroTik in the middle of them all), but I have been really holding off as I don't know what hardware to put in or if it will work.
Then I'll run a 10Gb backbone from the MikroTik switch in the cave to my "core" switch in the attic to widen the pipe for futureproofing.

So... Intel SFP+ cards and SFPs, what kind of pricing did they work out at?
 
I was seriously considering another Intel X710 for the server too, but it was 2.5x the cost of the Solarflare/Mellanox cards. I tried the Mellanox ConnectX-3 first, but it was DOA and there were no others available at the time. Having got some info on Solarflare from STH, I took a punt, which paid off as it seems to perform very well. The X710s I bought cost £125 each and the Intel SFPs were £36 each. The Solarflare was £50 and the Avago transceivers were free, which is why I went that route. In hindsight, doing this again I'd probably go Solarflare for the whole lot, as I could have got some current-gen SFN8522 cards for around £70 each, which would have been a bargain had I known at the time that they were good.

The Windows 1903 update seems to have mostly resolved the issues I was seeing with iperf, though I'm still getting better speeds to the server than I do from it.

I bought some more OM3 fibres from FS, along with some LC-LC links for the wall boxes I got from RS a few months back. The wall boxes should be pretty resilient, as the fibres will enter at the base of the boxes, keeping the connections virtually flush with the wall to avoid breaking them.


Since I now have sufficient fibres for a proper test run, it'd be rude not to. Just a quick test to make sure all the ports and fibres work OK: I connected the server with two links using the three Avago transceivers I have. All working great; the Juniper doesn't seem to mind the Avago transceiver I put in it.


Quick file transfer test from my PC to the server. This is off an NVMe SSD to avoid SATA bottlenecks. Normal transfers will sadly be a fair bit slower, but at least now I can have two PCs copying to the server and two TVs streaming from it without interruption.
 
Bought some new fans for the Juniper switch to try to quieten it down a bit without triggering a fan failure warning.

2x 40x40x28mm 12,500rpm San Ace fans instead of 18,000rpm. The idle fan voltage is 4.5V, so something like a Noctua that maxes out at 5,000rpm would run too slowly at idle.
A pack of Molex plug bodies and pins for fan headers (as the fans above come with bare ends)
A pack of Molex ATX pins and sockets for making up custom PSU cables in the future
A Molex pin extraction tool for the ATX pins
Crimping dies for insulated and uninsulated terminals


Fitted and now tested. All working OK: significantly quieter both at idle and at full speed, and no fan failure warnings. Best of all, it's still idling at 44 degrees.


Server CPU temps are looking rather chilly at this time of year.
 