Edit for anyone who finds this later: my problem was that the VMs were missing a route sending traffic for the OpenVPN subnet to the main host's bridge interface. So they used the default route, which sent it to my internet router.
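For anyone hitting the same thing, the fix looks roughly like this. The subnet and addresses below are examples (I'm assuming a 10.8.0.0/24 VPN tunnel subnet and a host bridge at 192.168.1.10 — substitute your own):

```shell
# Inside the VM (example addresses -- adjust for your network):
#   10.8.0.0/24  = OpenVPN tunnel subnet (assumption)
#   192.168.1.10 = the host's br0 address on the LAN (assumption)
ip route add 10.8.0.0/24 via 192.168.1.10

# This is not persistent across reboots; add the equivalent static
# route to your distro's network config (e.g. /etc/network/interfaces,
# netplan, or NetworkManager) to make it stick.
```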
Maybe better to ask this in a Linux group, but trying here first.
I’m running a Linux server with Home Assistant in a VM, and a whole bunch of other stuff.
I recently moved my OpenVPN server onto the same physical box as Home Assistant. OpenVPN runs natively on the host OS in tunnel (tun) mode.
OpenVPN itself works fine: clients can reach the host running OpenVPN, applications running in Docker containers on that host, and other hosts on my network (once I update their routing to send traffic for the VPN subnet back to the OpenVPN host).
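For reference, the routing update on those other hosts is just a static route like this (addresses are examples from my setup — adjust for yours):

```shell
# On each LAN host that needs to answer VPN clients:
#   10.8.0.0/24  = OpenVPN tunnel subnet (assumption)
#   192.168.1.10 = LAN address of the box running OpenVPN (assumption)
ip route add 10.8.0.0/24 via 192.168.1.10
```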
OpenVPN clients cannot get to my Home Assistant VM.
If I use tcpdump to watch the VM network interface (vnet0), from the host, and ping the VM from a VPN client I see the echo request go in and the reply come out. If I do the same, but watch the OpenVPN interface (tun0) I only see the request go in, but no reply. It’s like the kernel doesn’t know what to do with packets from the VM addressed to the VPN.
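The captures I ran were along these lines (interface names are from my setup):

```shell
# Watch the VM's tap interface on the host: here I see both the
# echo request go in and the echo reply come out.
tcpdump -ni vnet0 icmp

# Watch the OpenVPN tunnel interface: here only the request shows
# up; the reply never makes it back onto tun0.
tcpdump -ni tun0 icmp
```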
There is no firewall running on the host, and this is not specific to my Home Assistant VM: I brought up a vanilla Alpine Linux VM and hit exactly the same issue.


Been a while since I messed around with my server, but I believe I set up the VM NICs on a bridge, br0, and attached the host's network to that same bridge, so my VMs and the host could all talk to each other. I wonder if your VMs' NICs are set up as NAT and that's isolating the VMs' connections?
https://wiki.libvirt.org/Networking.html
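If it helps, one quick way to check whether a libvirt VM is on a NAT network or a real bridge is to look at its interface definition (the domain name "homeassistant" here is just an example):

```shell
# Show the VM's NIC config: type='network' pointing at the stock
# "default" network usually means NAT; type='bridge' means bridged.
virsh dumpxml homeassistant | grep -A4 "<interface"

# List libvirt networks; the built-in "default" network is NATed.
virsh net-list --all
```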