Not super related to the OP, but since we're discussing network topologies: I've recently had an insane idea that NFS security sucks, NFS traversing firewalls sucks, Kerberos really sucks, and that just wrapping it all in a WireGuard pipe is way better.
How deranged would it be to have every nfs client establish a wireguard tunnel and only have nfs traffic go through the tunnel?
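For what it's worth, a rough sketch of what that could look like (every key, address, endpoint, and export path here is hypothetical):

```shell
# Client-side wg-quick config, e.g. /etc/wireguard/wg-nfs.conf
# (keys and addresses are placeholders):
#
#   [Interface]
#   Address    = 10.77.0.2/32
#   PrivateKey = <client-private-key>
#
#   [Peer]
#   PublicKey  = <server-public-key>
#   Endpoint   = nfs.example.com:51820
#   AllowedIPs = 10.77.0.1/32   # only the NFS server's tunnel address

wg-quick up wg-nfs

# Mount via the tunnel address, so NFS traffic never leaves wg-nfs;
# the server's firewall can then drop NFS on every other interface.
mount -t nfs4 10.77.0.1:/export /mnt/data
```

The server side would only accept port 2049 on its WireGuard interface, which sidesteps the NFS-through-firewalls problem entirely.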
For site-to-site overlay networks, use WireGuard; VXLAN should be inside of it, if at all. Your "network" is connected by WireGuard, and it contains details like VXLAN. Even within your network, when crossing security boundaries across untrusted channels, you can use WireGuard.
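As a concrete sketch, terminating the VXLAN endpoints on the WireGuard tunnel addresses looks roughly like this on Linux (interface names, VNI, and addresses are made up):

```shell
# Assumes a working WireGuard link wg0, with this site at 10.10.0.1
# and the remote site at 10.10.0.2 on the tunnel.
ip link add vxlan100 type vxlan id 100 \
    local 10.10.0.1 remote 10.10.0.2 \
    dstport 4789 dev wg0
ip link set vxlan100 mtu 1390   # leave headroom for WG + VXLAN overhead
ip link set vxlan100 up
# Bridge or address vxlan100 as needed; every VXLAN frame now rides
# inside the encrypted wg0 tunnel.
```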
Others mentioned tailscale, it's cool and all but you don't always need it.
As far as security goes, that's not even the consideration I had in mind. Sure, WireGuard is secure, but that's not why you should have VXLAN inside it; you should do so because that's the purpose of WireGuard: to connect networks securely across security/trust boundaries. It doesn't even matter if the other protocol is also WireGuard, or SSH, or whatever. If it is an option, WireGuard is always the outermost protocol; if not, then IPsec, OpenVPN, SoftEther, etc., whatever your choice of secure overlay network protocol is, gets to be the tunnel protocol.
https://man.openbsd.org/vxlan.4#SECURITY seems unambiguous that it's intended for use in trusted environments (and all else being equal, I'd expect the openbsd man page authors to have reasonable opinions about network security), so it sounds like vxlan over ipsec/wg is probably the better route?
VXLAN over WireGuard is acceptable if you require a shared L2 boundary.
IPSec over VXLAN is what I recommend if you are doing 10G or above. There is a much higher performance ceiling with IPsec via hardware firewalls than with WireGuard; WireGuard is comparatively quite slow performance-wise. Note that Tailscale, since it has been mentioned, has comparatively extremely slow performance.
edit: I'm noticing that a lot of the other replies in this thread are not from network engineers. Among network engineers WireGuard is not very popular due to performance & absence of vendor support. Among software engineers, it is very popular due to ease of use.
Instead you can create multiple WireGuard interfaces and use policy routing / ECMP / BGP / all the layer-3 tricks; that way you can achieve things similar to what VXLAN gives you, but at layer 3.
There's a performance benefit to doing it this way too; in some testing I found the WireGuard interface can be a bottleneck (there's various offload and multi-core support in Linux, but it still has some overhead).
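A minimal sketch of the layer-3 approach (interface names and prefixes are hypothetical): spread flows across two tunnels with an ECMP route, or pin a traffic class to one tunnel with policy routing.

```shell
# Two WireGuard tunnels (wg0, wg1) to the same remote site.

# ECMP: hash flows across both tunnels.
ip route add 10.20.0.0/16 \
    nexthop dev wg0 weight 1 \
    nexthop dev wg1 weight 1

# Policy routing: traffic from one subnet always uses wg1.
ip rule add from 192.0.2.0/24 lookup 100
ip route add default dev wg1 table 100
```

Since each tunnel is its own interface, this also helps spread the per-interface overhead mentioned above across tunnels.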
This is the correct answer; routing between subnets is how it's supposed to work. I think there are some edge cases like DR where stretching L2 might sound like a good idea, but in practice it gets messy fast.
I use VXLAN on top of WireGuard in my hobby setup. I probably wouldn't recommend it for an actual production use case, but that is more or less because of how my homelab is set up (Hetzner -> Home, about 20ms roundtrip latency).
I considered dropping my root WireGuard and setting up just VXLAN and flannel, but as I need NAT hole punching I kind of need the WireGuard root, so that is why I ended up with it.
Going WireGuard inside the VXLAN (flannel) in my case would likely be overkill, unless I wanted my traffic between nodes in different regions to be separated from other peers on the network; I'm not sure where that would be useful. It is an easy way of blocking out a peer, however, but that could just as well be solved on the "root" WireGuard node.
There might be some MTU issues with nested WireGuard networks.
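The MTU concern is easy to quantify; here is a back-of-the-envelope budget for VXLAN inside WireGuard on a standard 1500-byte path (assuming IPv4 outer headers):

```shell
PATH_MTU=1500
WG_OVERHEAD=60      # outer IPv4 20 + UDP 8 + WireGuard headers/tag 32
VXLAN_OVERHEAD=50   # outer IPv4 20 + UDP 8 + VXLAN 8 + inner Ethernet 14

echo "wg MTU:    $((PATH_MTU - WG_OVERHEAD))"                   # 1440
echo "vxlan MTU: $((PATH_MTU - WG_OVERHEAD - VXLAN_OVERHEAD))"  # 1390
```

Nest a second WireGuard inside the VXLAN and you lose another 60 bytes, which is where things start to get fragile.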
Is there a WireGuard equivalent that does L2 instead of L3? I need this for a virtual mesh network for homelabbing. I have this exact setup, running VXLAN or GENEVE over a WireGuard tunnel using KubeSpan from Talos Linux, but I simply think having L2 access would make load balancing much easier.
I achieve load balancing by running native WireGuard on a VPS at Hetzner; I've got a native WireGuard mesh (I believe Talos can do the same), where the peers are manually set up, or via Tailscale etc. I then tell k3s that it should use the WireGuard interface for VXLAN, and boom, my Kubernetes mesh is connected.
flannel-iface: "wg0" # Talos might have something similar.
I do use some node labels and affinities to make sure the right pods end up in the right spot; for example, the MetalLB announcer always has to come from the Hetzner node. As mentioned in my reply below, it takes about 20ms roundtrip back to my homelab, so my sites can take a bit of time to load, but it works pretty well otherwise, sort of similar to how Cloudflare Tunnels would work, except not as polished.
What are your discovery mechanisms? I don't know what exists for automatic peer management with WG. If you're doing BGP EVPN for VXLAN endpoint discovery, then I'd think WG over VXLAN would be the easier option to manage.
If you actually want to use vxlan ids to isolate l2 domains, like if you want multiple hypervisors separated by public networks to run groups of VMs on distinct l2 domains, then vxlan over WG seems like the way to go.
BGP is vastly superior to any L2 make-believe trash you can imagine, and amazingly, it often has better hardware offloading support for forwarding and firewalls. For example, 100G switches (L3+) like MikroTik's CRS504 do not support IPv6 in hardware for VXLAN-encapsulated flows, but everything just works if you choose to go the BGP route.
Any ASIC switch released in the last decade from Cisco/Juniper/Arista supports EVPN/VXLAN in hardware. EVPN is built on BGP. This has become the industry standard for new enterprise and cloud deployments.
The lack of support for hardware EVPN is one of the many reasons that Mikrotik is not considered for professional deployments.
You can have EoIP over WG with any VLANs you like.
You can have VXLAN over plain IP, over EoIP, over WG, over IPsec. Only WG and IPsec (with non-NULL encryption) provide any semblance of encryption in transit.
What gave you that idea? Internally, Google uses GRE/GENEVE-like stuff, but for reasons that have nothing to do with "preventing compromise" or whatever; it's because they're carrying metadata (traces, latency budgets, billing IDs). That is to say, encapsulation is just transport. It's pretty much L3 semantics all the way down... In fact, this is more or less the point: L2 is intractable at scale, as broadcast/multicast doesn't work. However, it's hard to find comparisons to anything you're familiar with at Google scale. They have a myriad of proprietary solutions and custom protocols for routing, even though it's all L3 semantics. To learn more:
You'd be surprised to know that this is especially popular in cloud! It's just abstracted away (:
I use Tinc as a daily driver (for personal things) and have yet to come up with a newer equivalent, though I probably should. Does VXLAN help here?
These days I lean towards WireGuard simply because it's built into Linux, but Tinc would be my second choice.
My setup is here if it is of help
https://git.kjuulh.io/kjuulh/clank-homelab-flux/src/branch/m...
https://docs.zerotier.com/bridging/
But it's not necessarily a bad idea. It depends on the circumstances, even when traversing a public network.
L2 is a total waste of time.
VXLAN is L2-like transport over L3.
And the mandatory XY problem.
IPSec-equivalent, VXLAN-equivalent, IPSec-equivalent.
Prevents any compromised layer from knowing too much about the traffic.
Andromeda https://research.google/pubs/andromeda-performance-isolation...
Orion https://research.google/pubs/orion-googles-software-defined-...