LOLVirginMedia. Everyone knows their routing is crap verging on insane, but this takes the cake. My 'internet experience' has been getting worse and worse this last year, despite a large head-end uplift and upgrade, new segments added, a new CMTS etc. Every peak time, traffic would start to slow down, I'd lose packets, websites loaded slowly (obviously using self-hosted DNS, not VM's) and it was just awful. Then, having used Unix (BSD, Linux etc) for 15-20 years and VPNs for more than half that, I decided to do something about it. As I briefly mentioned in an earlier post a while ago, my setup is as follows:
I had been running VPNs on various local devices individually but got sick of load balancing and swapping connections/servers/locations between them (especially for family members' devices). So in between attending the OcUK Motors meet, I spent the weekend playing with FreeBSD 11.1-p10 and getting my hands dirty.
Now all devices route through my home-made router as usual, but the VPNs (plural) have been moved off the local devices and onto that box. I now have interfaces as follows:
* WAN (the 'real' VM connection)
* LAN
* One tunnel interface per VPN provider (vpn.ac, NordVPN, AirVPN, PIA)
With manually set outbound NAT - plus hairpin NAT and a proxy helper for self-hosted domain resolution (needed because of the VPNs) - each gateway (vpn.ac, NordVPN, AirVPN, PIA) has its own route to the 'real' WAN to maintain a connection 24/7. Extra locations and servers can be added trivially if or when the need arises. Originally I was running a single VPN and didn't know much about how to add (or even load balance between) a second or more. As I said, though, I've been busy playing with FreeBSD (11.1-p10, Mate Desktop) for a few days and digging around in ports and the networking stuff. Now I have it set so that all VPNs idle 24/7, all have NAT routes out via the main WAN gateway, and LAN access (or even individual client access) is controlled by pf rules like this:
LAN rules:
* Pass, Source: LAN NET, Destination: ANY, Gateway: 'desired VPN or WAN gateway'
* Block, Source: ANY, Destination: ANY, Gateway: VM WAN
The second rule makes it impossible for the VPN to leak: if the local clients can't route via the desired VPN gateway (chosen in rule 1), by default they would fall back to the 'normal' VM gateway. With rule 2 in place, they instead simply have all their packets dropped until I fix it again. For those who don't know, firewall rules are evaluated in order from top to bottom; in ipfw and iptables the first matching rule wins, and pf behaves the same way as long as rules are flagged 'quick' (otherwise stock pf gives the win to the last matching rule).
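Roughly, in raw pf.conf terms the two rules (plus the outbound NAT mentioned above) look something like the sketch below. To be clear, this is an illustration rather than a dump of my config: the interface names, networks and tunnel gateway are placeholders, only one tunnel is shown, and the rules are flagged 'quick' so the first match wins (hairpin NAT and the proxy helper are left out for brevity):

```
# pf.conf sketch - illustrative only, not my actual config
ext_if  = "re0"            # 'real' VM WAN (placeholder name)
lan_if  = "re1"            # LAN (placeholder name)
vpn_if  = "tun0"           # one tunX per provider; only one shown here
vpn_gw  = "10.8.0.1"       # far end of the tunnel (placeholder)
lan_net = "192.168.1.0/24"

# manually set outbound NAT: one rule per egress interface
nat on $ext_if from $lan_net to any -> ($ext_if)
nat on $vpn_if from $lan_net to any -> ($vpn_if)

# Rule 1: pass LAN traffic, policy-routed to the chosen VPN gateway
pass in quick on $lan_if route-to ($vpn_if $vpn_gw) from $lan_net to any

# Rule 2: anything that would otherwise fall back to the bare VM WAN
# is dropped, so a dead tunnel fails closed instead of leaking
block in quick on $lan_if from $lan_net to any
```

Swapping the whole LAN (or a single client, by narrowing the source) onto a different provider is then just a matter of pointing route-to at another tunX/gateway pair - that's all the 'two clicks' further down amount to.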
DNS is resolved separately per interface (each VPN's DNS on its own interface, SecureDNS with DNSSEC over TLS for the WAN). I noticed the TiVo V6 box didn't like this (the Netflix and YouTube apps would no longer work), so I set the DHCP daemon to hand the V6 the VM DNS servers along with a static IP, while keeping the rest of the LAN devices 'clean' (encrypted, validating DNS). The TiVo still fetches its traffic over the VPN interfaces, however, as does everything else LAN-side. Policy-based routing FTW. The end result?
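For completeness, the TiVo carve-out is just a static DHCP mapping with its own DNS option. Something along these lines, assuming isc-dhcpd (the daemon choice, MAC and LAN addresses here are illustrative placeholders; 194.168.4.100 and 194.168.8.100 are VM's long-standing resolvers):

```
# dhcpd.conf sketch - illustrative only (isc-dhcpd assumed,
# MAC and LAN addresses are placeholders)
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.199;
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.1;  # everyone else uses the router's resolver
}

# The TiVo V6 gets a fixed address plus VM's DNS so its apps work,
# while its traffic still leaves via the VPN thanks to policy routing
host tivo-v6 {
    hardware ethernet 00:11:22:33:44:55;     # placeholder MAC
    fixed-address 192.168.1.50;
    option domain-name-servers 194.168.4.100, 194.168.8.100;  # VM DNS
}
```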
Using the bare naked VM350 connection (speedtest.net app to Vispa server):
Two clicks (Edit allow LAN rule, change output gateway from VM to VPN > Save):
Using my preferred VPN gateway (speedtest.net app, to the same Vispa server a moment apart from the first test):
Yes, you read that correctly. Yes, it was 'peak time' when the tests were undertaken. No, I haven't made a mistake with the labels (check the source network in the images for proof).
With the VPN enabled (AES-128-GCM) my pings to the same server from the same LAN machine (desktop PC, specs in sig) have gone down by 66%.
Jitter is improved by 50%. Speed is barely impacted outside of margin of error. No leaks, DNS working properly, policy based routing pushing everything to the right place both LAN and WAN side. Job's a good un... Until I decide to tweak something else.
Edited to add: For those who don't know, VPNs are 'supposed' to slow down your connection compared to the 'bare' ISP link. They're also 'supposed' to increase latency / make pings worse. They're also 'supposed' to make your routing more complicated. In this case, VM's routing is so poor my VPN actually fixed it. I'll spare you all the traceroute printouts, but suffice it to say a trace from my desktop to a server now has five fewer hops, skipping all the VM-node-28237 steps with their abysmal response times and convoluted routing. I now go straight from desktop PC > VPN server > destination in fewer than six hops. Win!