> IPv6 restores globally routable addresses to every node, letting peers connect without contortions.
Global routability doesn't automatically mean global reachability.
Many consumer and professional routers block inbound TCP connections, and drop incoming UDP traffic unless matching outbound UDP traffic preceded it, so you will still need hole punching.
Hole punching does get significantly easier with v6, though, since there's really only one way to do "outbound connections only" firewalling (while there are several ways to port-translate, some of them really hostile to hole punching).
Arguably, the one thing that's missing is a very simple, implicit standard for signalling a willingness to accept an inbound TCP connection from a given IP/port, which such stateful firewalls could honor the way they already implicitly do for UDP. But with HTTP/3 running over UDP, the point may well be moot soon.
That simple, implicit standard has existed since RFC 793:
Simultaneous initiation is only slightly more complex, as is shown in figure 8. Each TCP cycles from CLOSED to SYN-SENT to SYN-RECEIVED to ESTABLISHED.

          TCP A                                            TCP B

      1.  CLOSED                                           CLOSED

      2.  SYN-SENT     --> <SEQ=100><CTL=SYN>              ...

      3.  SYN-RECEIVED <-- <SEQ=300><CTL=SYN>              <-- SYN-SENT

      4.               ... <SEQ=100><CTL=SYN>              --> SYN-RECEIVED

      5.  SYN-RECEIVED --> <SEQ=100><ACK=301><CTL=SYN,ACK> ...

      6.  ESTABLISHED  <-- <SEQ=300><ACK=101><CTL=SYN,ACK> <-- SYN-RECEIVED

      7.               ... <SEQ=101><ACK=301><CTL=ACK>     --> ESTABLISHED

                    Simultaneous Connection Synchronization

                                   Figure 8.
Every stateful firewall supports this. All you need to communicate out-of-band is IP addresses and ports.
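In code, this is just two connect() calls racing each other. A minimal sketch with Python's standard sockets (addresses and ports are placeholders; each peer learns the other's pair out-of-band and both start at roughly the same time):

    import socket

    # Both peers run the same code with LOCAL/REMOTE swapped. Binding the
    # local port fixes the full 4-tuple, so the two SYNs can cross in flight.
    LOCAL = ("2001:db8::a", 40000)    # placeholder address/port
    REMOTE = ("2001:db8::b", 40000)   # learned out-of-band

    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(LOCAL)
    s.settimeout(5)

    # Each firewall sees its own host's SYN go out first, creates state for
    # the flow, and then accepts the peer's SYN as part of that same flow.
    s.connect(REMOTE)   # retry on timeout until the SYNs actually cross
    print("established:", s.getpeername())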
> But it's often disabled for the same reason as having router-level firewalls in the first place.
Yeah, anything that allows hosts to signal that they want to accept connections is likely the first thing a typical admin would want to turn off.
It’s interesting because nowadays it’s egress that is the real worry. The first thing malware does is phone home to its C&C (command-and-control) address, and that connection is used to actually control nodes in a botnet. Ingress being disabled doesn’t really net you all that much nowadays when it comes to restricting malware.
In an ideal world we’d have had IPv6 in the ’90s, and it would have been “normal” for firewalls to live on your local machine rather than at the router, with opening ports being something the OS prompts the user for (similar to how Windows does it today with its “do you want to allow this application to listen for connections” prompt). But even if that were the case, I’m sure we would still have added “block all ingress” as a firewall best practice along the way regardless.
> Ingress being disabled doesn’t really net you all that much nowadays when it comes to restricting malware.
But how much of this is because ingress is typically disabled, making ingress attacks less valuable relative to exploiting humans in the loop to install something that then uses egress as part of its function?
If it weren't for Internet infrastructure hobbling SCTP (via firewalls), SCTP would provide the same session multiplexing as QUIC within a single 5-tuple, with far lower packet overhead and a smaller code base too.
As with any network protocol design, there's a tradeoff: a slight gain in versatility against some loss of privacy. So it depends on your triage of needs: security, privacy, confidentiality.
Now, with the latest addition to that triage (a "quadage"?): unobservability (plausible deniability).
Unfortunately, most existing standardized communication protocols conform to a broken model of networking in which security is not provided by the network layer.
Cryptography can't be thought of as an optional layer that people might want to turn on; that bad idea shows up in many software systems. It needs to be thought of as a tool for ensuring that a behavior is provided reliably: in this case, that the packets are really coming from who you think they are coming from. There is no reason to believe they are without cryptography. It's not optional; it's required to provide the quality of service the user is expecting.
DTLS and QUIC both secure the connection immediately; QUIC then goes on to do its stream multiplexing. The important thing is that the connection is secured in (or just above) the network layer. Had OSI (or whoever else) gotten that part right, then all of these protocols, like SCTP, would actually be useful.
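For a concrete feel of that ordering, a sketch using the third-party aioquic library (hostname, port, and ALPN string are placeholders; treat the API details as approximate): the TLS 1.3 handshake is part of connection setup itself, and streams only exist inside the already-encrypted session.

    import asyncio, ssl
    from aioquic.asyncio import connect                    # pip install aioquic
    from aioquic.quic.configuration import QuicConfiguration

    async def main():
        config = QuicConfiguration(is_client=True, alpn_protocols=["demo/1"])
        config.verify_mode = ssl.CERT_REQUIRED             # not an optional layer

        # The handshake happens inside connect(); there is no way to reach
        # the stream-multiplexing step over an unauthenticated connection.
        async with connect("peer.example", 4433, configuration=config) as conn:
            reader, writer = await conn.create_stream()
            writer.write(b"hello over an encrypted, multiplexed stream")

    asyncio.run(main())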
From what I recall, one downside to SCTP is that things like resuming from a different IP address and arbitrarily changing the number of streams per connection didn't work well in standard SCTP. Plus the TLS story isn't as easy. QUIC makes that stuff easier to work with from an application perspective.
Still a fascinating protocol, doomed to be used exclusively as a weird middle layer for WebRTC data channels and as a carrier protocol for internal telco networks.
This article focuses on the transport-layer design, not a torrent client replacement. The goal is to provide a reusable IPv6-native P2P connection layer (QUIC-based, NAT-free) that existing clients or new applications can integrate without touching their higher-level logic. Feedback on design trade-offs is very welcome.
The project is very impressive, as is https://github.com/TheusHen/ternary-ibex, and so is having papers: https://orcid.org/0009-0009-5055-5884
What's the education path for a 14-year-old that does this stuff?
Thanks for sharing. I want to ask you something: I understand that with IPv6 the idea is that every household receives a whole block of IPv6 addresses, so that every single IoT device has its own unique IPv6 address and no NAT is needed.
Would it be possible to use a dozen IPv6 addresses at the same time? Like sending one UDP packet from one IPv6 address, the next packet from another, and so on. If both the sending and receiving ends have access to multiple IPv6 addresses, I can see how this significantly increases complexity for tracking.
Could you split up the traffic across dozens or hundreds of IPv6 source addresses?
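For what that could look like in practice: a sketch with standard sockets, assuming the host already has several global addresses configured (all addresses below are placeholders), and prefixing a sequence number so the receiver can reassemble:

    import socket, struct

    SOURCES = ["2001:db8::10", "2001:db8::11", "2001:db8::12"]  # placeholders
    DEST = ("2001:db8:beef::1", 5000)

    # One socket per source address; bind() pins the source address used.
    socks = []
    for src in SOURCES:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.bind((src, 0))
        socks.append(s)

    for seq in range(12):
        # Each packet leaves from a different 5-tuple, so the receiver needs
        # the sequence number to stitch the flow back together.
        pkt = struct.pack("!I", seq) + b"chunk-%d" % seq
        socks[seq % len(socks)].sendto(pkt, DEST)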
> Could you split up the traffic across dozens or hundreds of IPv6 source addresses?
Yes
> I can see how this significantly increases complexity for tracking
Not really. You just track at some prefix level. In general, the ISP will hand out a /64 per consumer so that's what you can track. From there, you can build more complex and more precise grouping rules for tracking.
The biggest tracking hurdle is figuring out whether the ISP that handed out the block of addresses is handing out /64s, /56s, or /48s. The network delegated to you is functionally the same as the single IP address assigned to you with IPv4.
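Collapsing an address to its delegated prefix is a one-liner with the standard library; the only guesswork is the prefix length per ISP:

    import ipaddress

    def tracking_key(addr: str, prefix_len: int = 64) -> str:
        """Collapse one IPv6 address to the prefix the ISP likely delegated."""
        return str(ipaddress.ip_interface(f"{addr}/{prefix_len}").network)

    # Every rotating address in the same household maps to the same key:
    print(tracking_key("2001:db8:1234:5678::a"))       # 2001:db8:1234:5678::/64
    print(tracking_key("2001:db8:1234:5678:dead::b"))  # same /64, same key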
In theory I could rent an IPv4 /29 (of which 6 addresses are usable) for around 20 euros a month from my home ISP to cause the same confusion, but I doubt it'd confuse trackers to use those.
I thought most ISPs give out at least a /64 for free these days? Telia gives out a /56, although unfortunately there's no way to migrate it when you move. This was a big deal for my homelab when I was moving, as I had to manually update all the prefixes everywhere. A pain in the ass.
By convention they're supposed to DHCP you at least a /64, if not something wider. I don't believe there's any expectation that it be static (although it typically is, AFAIK), and there are some providers that defy expectations by handing out narrower slices (up to and including a /128).
If you assign a subnet to a host, or allow the host to claim multiple addresses via ND from the link subnet, then you can use as many addresses as you want. You could give every process on your machine its own IPv6 address, for example.
What is ND? Do you have a link with details?
Yes, and if your host has access to several IPv6 addresses and maybe an IPv4 address, it'd be nice to have something like WireGuard actually utilize all of them in some random order. Same on the receiving end: the WireGuard server listens on both IPv4 and IPv6 at the same time and internally puts received packets back in the proper order.
I feel this would create significant struggles for any surveillance software, because most firewalls I know of are modeled on a source-address/destination-address basis.
If you have access to enough source IPv6 addresses, you might even put your whole WireGuard traffic into ICMP packet payloads?
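That's classic ICMP tunneling. A sketch of the sending side with a raw socket (Linux, requires root; the destination and payload are placeholders, and you'd need a matching receiver unwrapping echoes back into UDP):

    import socket, struct

    def icmpv6_echo(ident: int, seq: int, payload: bytes) -> bytes:
        # Echo request: type 128, code 0; for ICMPv6 raw sockets the kernel
        # computes the checksum (it needs the pseudo-header anyway).
        return struct.pack("!BBHHH", 128, 0, 0, ident, seq) + payload

    s = socket.socket(socket.AF_INET6, socket.SOCK_RAW, socket.IPPROTO_ICMPV6)
    wg_datagram = b"\x01\x00\x00\x00"   # stand-in for an encrypted WireGuard packet
    s.sendto(icmpv6_echo(0x1234, 1, wg_datagram), ("2001:db8::1", 0))

The catch is that echo traffic is commonly rate-limited or filtered, so throughput and reliability would be poor.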
It's more than convention: a /64 is the minimum allocation that supports SLAAC. If you're getting less than a /64, you're not getting full support for IPv6.
Well you're not getting support for SLAAC but I didn't understand that to be a core requirement to qualify as a functional IPv6 implementation.
Regardless, my point is that allocations narrower than /64 exist in the wild for better or worse. So do IPv6 NAT implementations for that matter. If you assume either of those things don't exist then you might be in for a surprise.
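For context on why /64 specifically: SLAAC builds the host part as a 64-bit interface identifier, classically EUI-64 derived from the MAC (modern hosts mostly use random identifiers instead, but the width is the same). A sketch:

    import ipaddress

    def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
        b = bytearray(int(x, 16) for x in mac.split(":"))
        b[0] ^= 0x02                                     # flip universal/local bit
        iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert FF:FE in the middle
        return ipaddress.ip_network(prefix)[int.from_bytes(iid, "big")]

    # The interface identifier is exactly 64 bits, which is why SLAAC needs
    # a full /64 and can't work in anything narrower.
    print(eui64_address("2001:db8:1:2::/64", "52:54:00:12:34:56"))
    # -> 2001:db8:1:2:5054:ff:fe12:3456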
Networks are supposed to do egress filtering to prevent any packets with fake IPs from ever leaving the network. In practice it's not always so, but it mostly is. So you'd be limited to fake IP addresses in your own network, and doing so might raise alerts depending on the network infrastructure you live in.
Packets with a fake source address can easily be spotted and will raise an alert. In terms of using multiple interfaces for a single service, it might be easy to hack together in a Python script, but last time I checked, the Linux kernel's support for bundling multiple interfaces is limited to redundancy and failover.
What I'd like to have is a single service dynamically using many network interfaces, with randomized packet timing and randomized packet scheduling (five packets on the first interface, a pause on the second, some on a third, sometimes sending on several simultaneously).
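Nothing in the kernel will schedule traffic like that for you, but user space can. A rough sketch with one socket per source address (placeholder addresses; binding per interface rather than per address would need something like Linux's SO_BINDTODEVICE):

    import random, socket, time

    SOURCES = ["2001:db8::10", "2001:db8:0:1::20"]   # placeholder addresses
    DEST = ("2001:db8:beef::1", 5000)

    socks = []
    for src in SOURCES:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.bind((src, 0))
        socks.append(s)

    for n in range(50):
        random.choice(socks).sendto(b"p%d" % n, DEST)  # random path per packet
        if random.random() < 0.3:                      # random pauses
            time.sleep(random.uniform(0.0, 0.2))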
I realize it's intended to be an unsupported edge case, but I'm curious: what happens if a NAT is present along the IPv6 network path? Do you just forward a port the same as you would with the various IPv4 solutions and move on? Or does it break catastrophically? Something else?