OPNSense & BT Full Fibre 900 (PPPoE)

I will run the command and share the results.
Networking mode in the QNAP virtual machine setup is ‘Bridged network’.
Is the WAN interface 'external'?

Then have the LAN interface as bridged.

 
I’ve attached some photos of my current setup, which may help with diagnosing any potential setup issues. Thanks.

 
Dude, clean your screen! :D

Put adaptor 1 on an external network switch, not bridged mode and see if that improves things. Also, change the MTU of the external switch to 1504 and also enable baby jumbo frames within OPNsense, that may also improve things.
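
If it helps, once those settings are applied you can sanity-check them from the OPNsense shell; em0 and pppoe0 below are just example interface names, yours may differ:

ifconfig em0 | grep mtu      # MTU of the parent NIC (the one you raised above)
ifconfig pppoe0 | grep mtu   # effective MTU of the PPPoE device itself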
 
ssh to it and run the command, it should automatically show whatever is using CPU at the top of the list. I don't know if it has top or htop bundled.
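
Roughly, assuming SSH is enabled on the OPNsense VM and it answers on its LAN address (192.168.1.1 here is only an example):

ssh root@192.168.1.1
top -P    # -P shows per-core usage on FreeBSD; leave it running while you do a speed test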

Also what networking mode is it? bridged, host etc?
Ran the top command.
From what I could see, the CPU did not go above 5%… hopefully I’ve run the correct command and am looking at the right thing!

 
Is this what you are referring to in relation to using em?
You've made progress since I posted, and tbh this is probably more a virtualisation/Qnap issue rather than anything *BSD based. However, to answer your question: No, different NICs require different drivers, even from the same manufacturer. For example, staying relevant to this case, some Intel NICs use the em driver and some use the igb driver. It's the former which has issues with PPPoE on FreeBSD and related OS (like OPNSense).

If you use such a NIC (eg Intel I350) you will suffer single queue/core issues with maxing speed under PPPoE. Switching to a NIC that uses any other driver (such as the aforementioned igb driver) removes this bug/limitation and no speed issues present, PPPoE or not. You can check which driver you're using by checking the dmesg log: more /var/run/dmesg.boot and scrolling through until you see mention of the network adapter(s) and their driver.
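
If scrolling through the whole boot log is a pain, something along these lines from the shell narrows it down (in a VM you may see emulated or paravirtual drivers such as vtnet rather than em/igb):

grep -E '^(em|igb|vtnet)[0-9]' /var/run/dmesg.boot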

Edit: On *BSD PPPoE is still single-threaded and as such will only use a single encapsulated queue on the NIC. My advice about running Linux based OS for routing/firewalling using PPPoE stands, and your headache will instantly disappear... Try OpenWRT, VyOS or IPFire (no IPv6 or WireGuard in the latter), or even barebones Debian and some text file config magic.
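
Purely as an illustration of the plain Debian route, a sketch rather than a drop-in config (the interface name and username are placeholders; the rp-pppoe.so plugin ships with the ppp package):

# /etc/ppp/peers/fttp
plugin rp-pppoe.so
nic-eth0                 # NIC facing the ONT
user "your-username"     # matching secret goes in /etc/ppp/chap-secrets
noipdefault
defaultroute
persist
maxfail 0
# bring it up with: pon fttp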
 
Dude, clean your screen! :D

Put adaptor 1 on an external network switch, not bridged mode and see if that improves things. Also, change the MTU of the external switch to 1504 and also enable baby jumbo frames within OPNsense, that may also improve things.
Hahaha!!! Once it’s all set up, I’ll have an office cleanup! :p

The only other option I see in the virtual machine is ‘User mode networking’ rather than ‘Bridged network’ - is this what you meant by putting adapter 1 on an external network switch?

Thanks.
 
You've made progress since I posted, and tbh this is probably more a virtualisation/Qnap issue rather than anything *BSD based. However, to answer your question: No, different NICs require different drivers, even from the same manufacturer. For example, staying relevant to this case, some Intel NICs use the em driver and some use the igb driver. It's the former which has issues with PPPoE on FreeBSD and related OS (like OPNSense).

If you use such a NIC (eg Intel I350) you will suffer single queue/core issues with maxing speed under PPPoE. Switching to a NIC that uses any other driver (such as the aforementioned igb driver) removes this bug/limitation and no speed issues present, PPPoE or not. You can check which driver you're using by checking the dmesg log: more /var/run/dmesg.boot and scrolling through until you see mention of the network adapter(s) and their driver.

Edit: On *BSD PPPoE is still single-threaded and as such will only use a single encapsulated queue on the NIC. My advice about running Linux based OS for routing/firewalling using PPPoE stands, and your headache will instantly disappear... Try OpenWRT, VyOS or IPFire (no IPv6 or WireGuard in the latter), or even barebones Debian and some text file config magic.
I have looked at dmesg.boot and can’t find anything related to em or igb… I am using virtualised network adapters, so I’m not sure if this is why?

Yep, I take your point regarding BSD… I was going to see how it went with OPNSense and, if no success, move over to a Linux-based firewall/router OS…
 
Also, I’ve just plugged the WAN and LAN into Intel adapters 3 and 4 and reconfigured the virtual adapters, and I’m still getting the same results, i.e. slow download speed :-(

I managed to get 897Mbps with a laptop connected directly to the ONT…
 
Dude, clean your screen! :D

Put adaptor 1 on an external network switch, not bridged mode and see if that improves things. Also, change the MTU of the external switch to 1504 and also enable baby jumbo frames within OPNsense, that may also improve things.
I was not able to select 1504, just 1500 or other values such as 9000.
I am, however, able to set a WAN MTU of 1504, if this is what you meant?
Thanks.
 
For further reference, I have uploaded LAN, WAN and Dashboard Interface photos.

The only weird thing I have noticed is that for the WAN I do not see 10Gbase-T <full-duplex> like I do for the LAN… is this normal when using PPPoE for the WAN?

Thanks.

 
You've made progress since I posted, and tbh this is probably more a virtualisation/Qnap issue rather than anything *BSD based. However, to answer your question: No, different NICs require different drivers, even from the same manufacturer. For example, staying relevant to this case, some Intel NICs use the em driver and some use the igb driver. It's the former which has issues with PPPoE on FreeBSD and related OS (like OPNSense).

If you use such a NIC (eg Intel I350) you will suffer single queue/core issues with maxing speed under PPPoE. Switching to a NIC that uses any other driver (such as the aforementioned igb driver) removes this bug/limitation and no speed issues present, PPPoE or not. You can check which driver you're using by checking the dmesg log: more /var/run/dmesg.boot and scrolling through until you see mention of the network adapter(s) and their driver.

Edit: On *BSD PPPoE is still single-threaded and as such will only use a single encapsulated queue on the NIC. My advice about running Linux based OS for routing/firewalling using PPPoE stands, and your headache will instantly disappear... Try OpenWRT, VyOS or IPFire (no IPv6 or WireGuard in the latter), or even barebones Debian and some text file config magic.
I am going to try OpenWRT.

Does anyone know the best practice here? For example, should/can I spin up a virtual machine and run OpenWRT?

Or is it better to use Docker to run OpenWRT? If so, as a container or as a Docker host?

Thanks!
 
Hi, just wanted to post an update to get some further thoughts.

I’ve installed OpenWRT, which is running in QNAP’s Virtualization Station.
I then set up a PPPoE connection in OpenWRT.
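
For context, a PPPoE WAN in OpenWRT boils down to a handful of UCI settings along these lines (a sketch with placeholder interface name and credentials, not the exact config used here; older releases use 'ifname' instead of 'device'):

uci set network.wan.proto='pppoe'
uci set network.wan.device='eth1'
uci set network.wan.username='your-username'
uci set network.wan.password='your-password'
uci commit network
/etc/init.d/network restart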

In a wired test, I’m still getting fluctuating speedtest results, now between 650 and 850Mbps.

If I plug in my BT Business Smart Hub, I get 930Mbps straight away in a wired test.

Is the Atom C3558 CPU in the QGD-1602 underpowered, meaning I won’t be able to achieve a faster download speed?

Any thoughts from anyone please?

Thanks!
 
What's the CPU load on each core when you're running these tests? If they aren't hitting 100% then you aren't exhausting CPU.
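
On OpenWRT one easy way to watch per-core load is htop (assuming the VM can reach the package feeds):

opkg update && opkg install htop
htop    # one bar per core; re-run the speed test while this is open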
 
What's the CPU load on each core when you're running these tests? If they aren't hitting 100% then you aren't exhausting CPU.
Hi, thanks for the reply.

I’ve installed:

luci-app-statistics

I then ran a speed test an hour or so ago; please find the results below:


On the Server, I have 1 CPU core allocated to Docker Unifi, which is running in Container Station.

From my understanding, the CPU doesn’t seem to be maxing out, based on the attached images.

Also to add, in OpenWRT, I have enabled Software Flow Offloading.
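
For reference, that LuCI checkbox should correspond to the firewall defaults option below, in case anyone wants to confirm it from the shell (a sketch; verify against your own /etc/config/firewall):

uci get firewall.@defaults[0].flow_offloading    # should print 1 when enabled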

Where else could the bottleneck be?

Thanks!
 