Options for 10G router (Qotom/Gowin/Other)

Hey all,

Recently I've been working on a project to upgrade my home network to 10G, replacing various switches and cables with fibre, and now I'm at the stage where I'm looking to replace my router.

I've spent ages looking at various units from Deciso (the official OPNsense DEC appliances), Topton, Gowin and other suppliers, but I've yet to find one that ticks all the boxes, and I'm conscious I don't want to pay shedloads for something and have it be overkill.

At the moment the requirements are:
  • Passive or quiet (it will sit in a cabinet near a bedroom)
  • Route at near 10 Gbit/s between VLANs, not NAT'd (see the quick test sketch after this list)
  • Low power consumption
  • Able to run OPNsense
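For reference, this is roughly how I plan to verify the inter-VLAN requirement once a candidate is in place; the addresses are placeholders for hosts sitting on two different VLANs:

  # On a host in VLAN A (placeholder address 10.0.10.10):
  iperf3 -s

  # On a host in VLAN B, targeting the VLAN A host so the traffic crosses the router:
  iperf3 -c 10.0.10.10 -P 4 -t 30

If the total across the parallel streams comes in well below line rate with the firewall rules enabled, the router is likely the bottleneck.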
So far I have found the following units which seem to be in the running:

Gowin GW-FN-1UR2 25G
Pros:
  • Intel N305 CPU (power efficient and current)
  • ConnectX-4 NIC (newer than most other systems in a similar bracket)
  • Reasonable support if bought direct from manufacturer
Cons:
  • Pricey at £720+ after import fees and VAT
  • Risk of noisy PSU fan
  • Likely overkill for usage

Qotom Mini PC Q20300G9-S10
Pros:
  • Cheap at £320 after import fees and VAT
  • Passive (Completely silent)
  • 4x 10 Gbit/s NICs
Cons:
  • Older, weaker CPU (Atom C3808)
  • Older Intel X553 NICs
  • Potentially underpowered for 10 Gbit/s

Minisforum MS-01
Pros:
  • Middle of the road at £409 after delivery
  • Newer X710 NICs
  • Tried and tested
Cons:
  • Overkill CPU for usage
  • Not really thermally designed for location
  • High power usage for planned tasks

Topton X8 N305 10G unit
Pros:
  • £310 after shipping and import fees
  • N305 CPU better suited to the planned usage
Cons:
  • Ageing 82599 Intel NIC (14+ years old)
  • Thermally dubious

OPNsense DEC750
Pros:
  • Modern Ryzen CPU
  • Officially supported by OPNsense
  • New and power efficient
Cons:
  • Pricier than planned
  • Only 8GB of RAM
  • Unknown upgrade paths

Currently I'm siding more with the Qotom unit, as I don't think I can bear to stump up £700+ for a router unless it ticks every single box: quiet, powerful and reasonably upgradeable. If anyone here has had a similar dilemma, or has any thoughts on the situation, I'd be grateful to hear them, as I'm a bit stuck on deciding at the moment :)
 
Just curious: why do you need 10 Gbit/s between internal networks? Or do you need >1 Gbit/s on the WAN side as well?
>1 Gbit/s WAN will come eventually, once Openreach/Spring/Netomnia get to my house :D

10 Gbit/s between internal networks is due to the segregation of backups on our home network. Our backup server is on a completely separate VLAN with the bare minimum open between it and the server VLAN, to keep it safe in the event something did get infected.

It's paranoia, and there may be a better way to do it, but for now that's how it's set up :)
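To give an idea of what "bare minimum" means here, the rules boil down to something like the following pf-style sketch. OPNsense rules are actually configured in the GUI, so this is just the logic; the interface names, addresses and the example Veeam data-mover port range are placeholders for my setup:

  # Default: nothing from the backup VLAN crosses anywhere
  block in on vlan30 all
  # Allow only the backup server to reach the server VLAN on the backup ports
  pass in on vlan30 proto tcp from 10.0.30.10 to 10.0.20.0/24 port 2500:3300 keep state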
 
>1 Gbit/s WAN will come eventually, once Openreach/Spring/Netomnia get to my house :D

10 Gbit/s between internal networks is due to the segregation of backups on our home network. Our backup server is on a completely separate VLAN with the bare minimum open between it and the server VLAN, to keep it safe in the event something did get infected.

It's paranoia, and there may be a better way to do it, but for now that's how it's set up :)
What servers are you running? Ideally the backup server (or a proxy) should have an interface on the same VLAN as the management interfaces of the servers.
 
What servers are you running? Ideally the backup server (or a proxy) should have an interface on the same VLAN as the management interfaces of the servers.
Veeam for backups, and a mixture of Debian, Ubuntu and Windows on the internal side.

I'm trying to limit how much exposure the backup LAN has to the internal network, so I want to avoid a NIC bridging across the two of them. I could set up a proxy, but wouldn't that still act as a bridge to some degree? :)
 
Are any of your 10G switches L3? If so, you could route between VLANs on the switch at 10G (with an ACL if needed), with an upstream route to a more common 1G router for the Internet.
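As a rough illustration of that idea, here is a Cisco-style sketch of such an ACL; the subnets and port range are assumptions, not taken from the thread:

  ! Permit only backup traffic from the backup VLAN to the server VLAN
  ip access-list extended BACKUP-TO-SERVERS
   permit tcp 10.0.30.0 0.0.0.255 10.0.20.0 0.0.0.255 range 2500 3300
   deny ip any any log
  !
  interface Vlan30
   ip access-group BACKUP-TO-SERVERS in

Routing in the switch ASIC keeps the inter-VLAN path at wire speed, while the router only ever sees Internet-bound traffic.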
 
I would recommend a self-build: spec it how you want. Dual SFP+ 10Gb and a quad i226 would be ideal; you wouldn't need a top-end processor, but you can spec big to allow for higher throughput in the future.
 
Veeam for backups, and a mixture of Debian, Ubuntu and Windows on the internal side.

I'm trying to limit how much exposure the backup LAN has to the internal network, so I want to avoid a NIC bridging across the two of them. I could set up a proxy, but wouldn't that still act as a bridge to some degree? :)
Are they VMs or physical hosts?
 
I would recommend a self-build: spec it how you want. Dual SFP+ 10Gb and a quad i226 would be ideal; you wouldn't need a top-end processor, but you can spec big to allow for higher throughput in the future.
That's not a bad shout to be fair; at least that way I can manage the hardware generation and power consumption. I wonder if I can get it into a 1U case, as it'd be nice to keep a form factor that fits in the rack :)

Are they VMs or physical hosts?
All VMs apart from 2 physical hosts (the physical hosts are on 1 Gbit/s connections, but the VMs will be on hosts with 10 Gbit/s connectivity) :)
 
Then you only need management connectivity to do the backup; you don't need access to the guest.
So if I move the hosts and the backup server onto the same VLAN, that gets around my routing issue, and the physical servers can just go through the router as normal. That makes sense, since the majority are VMs, and it takes the overhead off the router.

My main concern is the security perspective, but I guess the hosts should be on their own network anyway, since access to them is access to everything they host. I know best practice with backups is often to keep them completely isolated (which is why I wanted Veeam on its own VLAN), but I guess that's more for high-security environments than for home/labs :)
 
I think you’re being a bit paranoid. It’s a home environment!
You're most likely right; I try to keep things as locked down as humanly possible (hope for the best, plan for the worst mentality) :D

I'll restructure the network based on your advice to reduce the amount of traffic flowing through the router. Thank you for the input :)
 
That's not a bad shout to be fair; at least that way I can manage the hardware generation and power consumption. I wonder if I can get it into a 1U case, as it'd be nice to keep a form factor that fits in the rack :)

All VMs apart from 2 physical hosts (the physical hosts are on 1 Gbit/s connections, but the VMs will be on hosts with 10 Gbit/s connectivity) :)

You can pick up some decent 1U machines pre-loved these days. Supermicro and Dell are often plentiful. Bang some Noctua fans in and they can go near-silent if needed as well.
I've just dropped an R230 in place of an R210 so I can run two PCIe cards (dual SFP+ and quad copper gigabit) for a similar purpose.
 
I'll restructure the network based on your advice to reduce the amount of traffic flowing through the router. Thank you for the input :)
If they are image-based backups, the backup server (or its proxies) never talks to the guest OS, except for indexing and similar tasks if you give it the credentials in the backup job settings. All the backup traffic runs between the backup server/proxy, the host management VMkernel (if ESXi), and the backup repository. Usually a snapshot is taken, the cold part of the guest disk is mounted to the backup server/proxy, which then sends the traffic to the backup repositories. After it completes, the disk is unmounted and the snapshot is removed. Normally you would only use proxies if you had different clusters using different L3 networks, or in really high-performance environments, where you can have a proxy per host.

If the backups are that important to you, then you're better off concentrating on the actual backups, i.e. encryption, immutability, off-site copies (the 3-2-1 rule) and so on.
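As a rough illustration of the immutability idea on a generic Linux repository (this isn't Veeam's exact mechanism; its hardened repository manages the flag itself, and the path here is a placeholder):

  # Mark a finished backup file immutable; even root can't modify or delete it
  sudo chattr +i /backups/job1/vm1.vbk
  # Clear the flag once the retention period is over
  sudo chattr -i /backups/job1/vm1.vbk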
 
If they are image-based backups, the backup server (or its proxies) never talks to the guest OS, except for indexing and similar tasks if you give it the credentials in the backup job settings. All the backup traffic runs between the backup server/proxy, the host management VMkernel (if ESXi), and the backup repository. Usually a snapshot is taken, the cold part of the guest disk is mounted to the backup server/proxy, which then sends the traffic to the backup repositories. After it completes, the disk is unmounted and the snapshot is removed. Normally you would only use proxies if you had different clusters using different L3 networks, or in really high-performance environments, where you can have a proxy per host.

If the backups are that important to you, then you're better off concentrating on the actual backups, i.e. encryption, immutability, off-site copies (the 3-2-1 rule) and so on.
So if I put the backup server on the same network as the hosts, that should allow direct communication rather than sending everything through the router; that makes sense. I will definitely be focussing on the 3-2-1 rule next month, as today my backup NAS decided it was going to see heaven via a failing mainboard, which wasn't ideal :D

You can pick up some decent 1U machines pre-loved these days. Supermicro and Dell are often plentiful. Bang some Noctua fans in and they can go near-silent if needed as well.
I've just dropped an R230 in place of an R210 so I can run two PCIe cards (dual SFP+ and quad copper gigabit) for a similar purpose.
I've seen a few Supermicro ones in the past, so I may have to have a look again, as the rack it's going in is fairly shallow for an R230 :)
 
What did you go for in the end?
A hybrid of the approach from @ChrisD. and the Qotom unit, due to cost and the thermal limits of my cupboard.

So far the unit runs warmer than the N100, but with IDS disabled I can push upwards of 4 Gbit/s, which is good enough.

On Veeam I added another NIC and locked down the local firewall so it can only access the bare minimum on the server LAN :)
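For anyone curious, the lock-down on the Veeam box amounts to something like this PowerShell sketch; the rule name, ports and server-LAN subnet are placeholders rather than my exact values:

  # Default-deny outbound on all profiles (add the allow rules you need first!)
  Set-NetFirewallProfile -Profile Domain,Private,Public -DefaultOutboundAction Block
  # Allow only what Veeam needs towards the server LAN (placeholder subnet/ports)
  New-NetFirewallRule -DisplayName "Veeam to server LAN" -Direction Outbound -Protocol TCP -RemotePort 443,902 -RemoteAddress 10.0.20.0/24 -Action Allow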
 
A hybrid of the approach from @ChrisD. and the Qotom unit, due to cost and the thermal limits of my cupboard.

So far the unit runs warmer than the N100, but with IDS disabled I can push upwards of 4 Gbit/s, which is good enough.

On Veeam I added another NIC and locked down the local firewall so it can only access the bare minimum on the server LAN :)

I should have thought of you! My R210 is sat dormant now. All it would have needed was a 10Gb card and it would have been smashing!
 