Recently I bought one of the N5105 firewall boxes from AliExpress for $158 CAD, and I’ve been evaluating the best way to set it up. I want to run a firewall/router as a virtual machine so I can also run a small file server on the same box. In other words, I’d like it to have some of the typical features of a mid-to-high-end consumer router from, say, Asus, but with more processing power and flexibility, plus 2.5GbE networking.
I got the variant with the Intel i226 network chips, and installed a Patriot P310 1TB NVMe SSD and 16GB of laptop RAM. Both were the cheapest I could find at those capacities from only mildly sketchy brands.
I chose Proxmox as the hypervisor as it’s free, fairly straightforward and popular. XCP-ng is also a popular choice, but some performance testing revealed worse networking performance with a virtualized pfSense instance than on Proxmox, so I dropped it. The only other alternatives, from Microsoft and VMware, involve licensing or potential unsupported-hardware issues, so I didn’t bother evaluating them. BSD now has an alternative to KVM – bhyve – but as far as I can tell no one has built a proper virtualization OS around it, so it’s out of the running as well.
With that decided, I needed to figure out the best router VM for this little box. I went for 2.5GbE networking because that was the only way to actually get the full speed my ISP provides (the ONT has a 2.5GbE port), and obviously the configuration needed to be able to route 1Gbit through it without too much difficulty. ServeTheHome’s great articles on these systems are what led me to buy one, and they clearly show the unit is capable of routing at 2.5GbE line speed. However, they test with both the WAN and LAN NICs passed through to the VM. That’s possible with this unit, but it only really makes sense if you also have a switch in your network, and where I’m setting this up I don’t yet have enough wired devices to need more than the four ports the unit provides. Thus, I need to use VirtIO interfaces that are not passed through, and I’ve never had great luck with those – hence this testing to find the most efficient setup.
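To make the VirtIO setup concrete, here is roughly what it looks like on the Proxmox side. This is a sketch under assumptions: the VMID/CTID, bridge names and addresses are examples of this kind of layout, not a dump of my actual config.

```shell
# Hypothetical Proxmox host setup for a router VM on VirtIO NICs.
# vmbr0 is assumed bridged to a physical NIC; vmbr1 is an internal,
# host-only bridge, which in /etc/network/interfaces would look like:
#   auto vmbr1
#   iface vmbr1 inet manual
#       bridge-ports none
#       bridge-stp off
#       bridge-fd 0

# Attach the router VM (example VMID 100) to both bridges as VirtIO NICs:
qm set 100 --net0 virtio,bridge=vmbr0   # WAN side
qm set 100 --net1 virtio,bridge=vmbr1   # LAN side

# Attach a test container (example CTID 101) to the internal bridge:
pct set 101 --net0 name=eth0,bridge=vmbr1,ip=dhcp
```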
All testing treated my existing router as the WAN on one port, with a test client machine on the LAN on the other port. For the bare-metal test, the WAN port went to my router and the LAN port went to a physical client machine. In the virtualization tests, the WAN and LAN were both VirtIO interfaces: the WAN bridged on Proxmox to my existing LAN on one physical interface, and the LAN connected to a Proxmox-only bridge whose only other member was a small Ubuntu 22.04 container.

All firewall VMs were configured following the developers’ VM install guidelines where available, although I did not verify everything exhaustively and may have missed a setting or two. All firewalls used a typical home setup: the WAN IP forwarded to the LAN via NAT, with the default rulesets. I did not set up any fancy firewall rules, port forwards or an IDS.

In all cases, iperf3 was run with 3 parallel streams on the container as a client connecting to the nyfiosspeedX.west.verizon.net servers. Uploads seemed to use less CPU than downloads, so all results reported are downloads. All tests maxed out the 1Gbit link, i.e. about 950Mbit/s. CPU usage was measured using the basic top utility. For the Proxmox tests, the usage includes the iperf3 client itself, but as this applies to all Proxmox tests they are comparable to each other. An example command line is:
```shell
iperf3 -c nyfiosspeed1.west.verizon.net --time 60 -P 3 -R
```
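If you want to script runs like this rather than eyeball the terminal, iperf3 can emit machine-readable output with its `--json` flag. A minimal parsing sketch, assuming the standard iperf3 JSON report layout (the `end.sum_received` summary object); the function name is my own:

```python
import json

def receive_gbits(report_text: str) -> float:
    """Extract the average receive throughput (Gbit/s) from `iperf3 --json` output."""
    report = json.loads(report_text)
    bps = report["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9

# Example with a trimmed-down report (a real report has many more fields):
sample = json.dumps({"end": {"sum_received": {"bits_per_second": 950_000_000.0}}})
print(f"{receive_gbits(sample):.2f} Gbit/s")  # 0.95 Gbit/s
```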
Testing – CPU Usage
| Router type | CPU usage |
| --- | --- |
| Bare metal – OPNsense 23.1 | 10-15% |
| Proxmox – pfSense 2.7 | 50-60% |
| Proxmox – OPNsense 23.1.11 | 70-75% |
| Proxmox – OpenWRT 22.03.5 | 30% |
| Proxmox – IPFire 2.27 Core Update 175 | 40% |
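I read these numbers off top interactively. If you wanted something more repeatable, top’s batch mode (`top -b -d 1 -n 60`) prints the same `%Cpu(s):` summary line once per sample, which is easy to parse. A sketch assuming procps top’s default summary-line format; the parsing function is my own addition, not how the table above was produced:

```python
import re

def cpu_busy_percent(cpu_line: str) -> float:
    """Parse a `top -b` '%Cpu(s):' summary line and return 100 minus the idle value."""
    m = re.search(r"([\d.]+)\s*id", cpu_line)  # the idle field, e.g. '88.4 id'
    if m is None:
        raise ValueError("no idle field found in line")
    return round(100.0 - float(m.group(1)), 1)

line = "%Cpu(s):  8.3 us,  2.1 sy,  0.0 ni, 88.4 id,  0.9 wa,  0.0 hi,  0.3 si,  0.0 st"
print(cpu_busy_percent(line))  # 11.6
```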
I also tested throughput from the same container client to another container on my WAN side acting as the server. Since both are on the same machine, throughput is limited solely by the router VM’s performance and Proxmox’s ability to move packets between the router VM and each container.
| Router type | Average throughput over 60s |
| --- | --- |
| Proxmox – pfSense 2.7 | 2.20Gbit/s |
| Proxmox – OPNsense 23.1.11 | 2.57Gbit/s |
| Proxmox – OpenWRT 22.03.5 | 9.96Gbit/s |
| Proxmox – IPFire 2.27 Core Update 175 | 3.69Gbit/s |
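For anyone wanting to reproduce the container-to-container run, it boils down to an iperf3 server in one container and a client in the other, routed through the firewall VM. A sketch with example CTIDs and an example server address, not my exact values:

```shell
# Hypothetical IDs: CT 102 sits on the WAN-side bridge, CT 101 on the LAN-side one.
pct exec 102 -- iperf3 -s -D                           # start the server, daemonized
pct exec 101 -- iperf3 -c 192.168.1.50 --time 60 -P 3 -R   # client, through the router VM
```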
From those results, it’s obvious that OpenWRT is the clear winner. I’m not sure why that is, and my initial assumption of it being a Linux vs BSD VM difference went out the window when IPFire (which uses Linux) performed much worse than OpenWRT.
As weird as it is to use OpenWRT as a VM rather than on an embedded device, I think I’ll be going that route. For this setup, I don’t need anything fancy, and I don’t think it is any worse from a security perspective than running it on an embedded device alongside other services like file sharing, as is more common in a home use scenario. It also cannot possibly be worse than my ISP-provided router, an Adtran 854-v6, which seems to have decent hardware but runs a white-label version of the Plume HomePass system. This means my only way of configuring it is via an app where everything goes through cloud servers, and the app itself is so buggy it’s difficult to even enter my ISP login credentials! It also offers very limited configuration options, which boil down to setting the IPv4 LAN IP range and a few basic port forwards. I can’t even enable some features Plume normally allows, like switching to bridge rather than router mode, so I can’t use the thing as a basic WiFi access point.
I should also note that these performance numbers only apply when using virtual interfaces. I don’t have numbers to back this up, but I believe most of the bottleneck comes from the virtual interfaces, so if your setup allows you to use only PCI-passthrough NICs in the VM, these limitations should be significantly reduced or not apply at all.
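For completeness, here is what full NIC passthrough would roughly look like on Proxmox. The PCI addresses and VMID are examples (check `lspci` on your own box), and this assumes VT-d is enabled in the BIOS and the IOMMU is on in the kernel command line (e.g. `intel_iommu=on` in `/etc/default/grub`, then `update-grub` and a reboot):

```shell
# Find the PCI addresses of the i226 NICs:
lspci -nn | grep -i ethernet

# Pass two of them through to the router VM (example VMID 100):
qm set 100 --hostpci0 0000:02:00.0   # WAN NIC
qm set 100 --hostpci1 0000:03:00.0   # LAN NIC
```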
Disclaimer: These tests were done quickly in my spare time with flawed experimental methodology. Don’t rely on them for anything serious.