The article seems to perpetuate one of those age-old myths that NAT has something to do with protection.
Yes, in a very superficial sense, you can't literally route a packet over the internet backwards to a host behind NAT without matching a state entry or explicit port forwarding. But implementing NAT on its own says nothing about how your router firewall behaves when it receives Martians, whether the router firewall itself accepts connections, or whether the router firewall itself is running some service that causes exposure.
To actually protect things behind NAT you still need firewall rules and you can keep those rules even when you are not using NAT. Thus those rules, and by extension the protection, are separable from the concept of NAT.
This is the kind of weird argument that has caused a lot of people who hadn't ever used IPv6 to avoid trying it.
If you think about it, NAT offers pretty much the same protection as a default stateful firewall: only allowing packets from the outside related to a connection initiated from the inside.
> Only allowing packets from the outside related to a connection initiated from the inside.
NAT, a.k.a. IP masquerading, does not do that. It only figures out that some ingress packets whose destination is the gateway actually map to earlier packets from a LAN endpoint that were masqueraded on the way out, performs the reverse masquerading, and routes the rewritten packet there.
But plop in a route to the network behind it and unmatched ingress packets definitely get routed to the internal side. To prevent that you need to drop those unmatched ingress packets, and that's the firewall's job.
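To make the separation concrete, here is a minimal sketch (nftables syntax, with a hypothetical internal interface named lan0) of the stateful drop that actually does the protecting; it behaves the same whether or not a masquerade rule is present:

nft add table inet filter
# Default-drop forwarding: unmatched ingress packets die here, NAT or no NAT.
nft add chain inet filter forward '{ type filter hook forward priority 0 ; policy drop ; }'
# Only replies to connections initiated from the inside are let back in.
nft add rule inet filter forward ct state established,related accept
# Anything originating on the internal interface may go out.
nft add rule inet filter forward iifname lan0 accept

NAT itself would be a separate masquerade rule in a postrouting chain; removing it changes addressing, not what gets dropped.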
Fun fact: a decade or so ago an ISP where I lived screwed that up. A neighbour and I figured out the network looked something like this:
192.168.1.x and 192.168.2.x would be two ISP subscribers and 10.0.0.x some internal haul network. The 192.168.x.1 boxes would perform NAT but no firewalling.
You'd normally never see that 10.0.0.x, as traffic towards the WAN would get NAT'd (twice). But 10.0.0.x knew about both of the 192 networks, so you just had to add the respective routes in each 192.168.x.1 and bam, packets would fly through both ways, NAT be damned.
Network Address Translation is not a firewall and provides no magically imbued protection.
>Yes, in a very superficial sense, you can't literally route a packet over the internet backwards to a host behind NAT without matching a state entry or explicit port forwarding.
Don’t forget source routing. That said, depending on your threat model, it’s not entirely unreasonable to just rely on your ISP’s configuration to protect you from stuff like this, specifically behind an IANA private range.
Yeah, I keep meaning to write something about this. I've definitely noticed people wary of IPv6 because their machines get "real" IP addresses rather than the "safe" RFC1918 ones. Of course, having a real IP address is precisely the point of IPv6.
It's like we've been collectively trained to think of RFC1918 as "safe" and forgotten what a firewall is. It's one of those "a little knowledge is a dangerous thing" things.
In a world where people think NAT addresses are safe because you don’t need to know anything else about firewalls, IPv6 _is_ fundamentally less secure.
In both cases the only consumer security comes from "the home router defaults to being a stateful firewall". The only difference between the two is whether it also defaults to doing NAT with that state, and the NAT part was never what made IPv4 secure for unaware users either.
That's quite recent. There was a period after AWS started charging for IPv4 addresses when you could not realistically go for an IPv6-only setup behind CloudFront, because it would, for example, not connect to a v6-only origin.
This is probably a result of all AWS services being independent teams with their own release schedule. But it would have made sense for AWS to coordinate this better.
We did this at OpsLevel a few years back. Went from AWS managed NAT gateway to fck-nat (Option 1 in the article).
It’s a (small) moving part we now have to maintain. But it’s very much worth the massive cost savings in NATGateway-Bytes.
A big part of OpsLevel is we receive all kinds of event and payload data from prod systems, so as we grew, so did our network costs. fck-nat turned that growing variable cost into an adorably small fixed one.
I looked at using fck-nat, but decided it was honestly easier to build my own Debian Trixie Packer images. See my comment below[1]. How has your experience been with fck-nat?
[1] https://news.ycombinator.com/item?id=46010302
I'm not too much into networking, although I've been sysadmin of my own VPS for years.
Why would I need a NAT gateway? Isn't a good set of rules in ufw or similar software enough?
I can't believe people are paying these crazy amounts for what is basically a fleet of firewalls. What is the difficulty in running VMs with nftables rules?
running a VM where? on an ec2 instance? who's going to keep that updated for me? who's going to reprovision it when aws retires the underlying hardware? who's going to monitor it for PCI compliance for me? i don't want to deal with all that. i could dump it on fargate, but at that point it's barely cheaper than just using the official version.
i've had to look at my nat gateway zero times since i set it up a couple years ago. i can't say that about any VM host i've got. to me, that's easily worth the few dollars a month that aws charges for it. it's cheaper than hiring somebody, and it's cheaper than me.
It costs a lot more than a few bucks when you’re putting a lot of traffic through it. And running your own NAT instance does not incur per-GB traffic costs.
That said, the paid NAT gateways do also publish metrics. That can be nice when debugging a legitimate issue, such as when your gateway actually runs out of NAT ports to use.
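For reference, a sketch of checking for port exhaustion with the CLI; the gateway ID is a placeholder, and the metric name should be double-checked against the NAT Gateway CloudWatch documentation:

# Non-zero sums mean the gateway failed to allocate a source port.
aws cloudwatch get-metric-statistics \
    --namespace AWS/NATGateway \
    --metric-name ErrorPortAllocation \
    --dimensions Name=NatGatewayId,Value=nat-0123456789abcdef0 \
    --start-time 2024-06-01T00:00:00Z --end-time 2024-06-02T00:00:00Z \
    --period 300 --statistics Sum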
1) You can't `npm install` it, which is a huge barrier to entry to the modern breed of "engineers".
2) Companies will happily pay thousands in recurring fees for the built-in NAT gateway, but if an engineer asks for even half that as a one-off sum to motivate them to learn Linux networking/firewalling, they'd get a hard no, so why should they bother?
For company hosting, cloud solutions get you the various compliance stuff for free, which can be worth it if you're not too large, and of course faster turnaround if you need to get a product out.
For personal use, a cheap VPS will end up costing around the same as something you run yourself, without the risk of messing up your own machine/network through a vulnerable endpoint.
This is really it: compliance. The cost is in having to prove that you did the right things. But I do wonder if we will see an easier path forward with that. After all, if there were a way to pay someone a once-a-year fee for an audit and filling out the paperwork, and the cost were lower than the cost of using AWS, then surely people would do that; it's an opportunity for an audit business willing to work with self-hosted setups. Or just have GPT-5 fill out the compliance docs. I suspect it won't be long until GPT-5 is reading them.
I'll admit, a bit of a poor choice of words. But when you need to do e.g. physical security, costs add up quickly over what you'd spend on cloud in, say, a year, and the compliance companies are usually a huge headache to deal with, so that'll be a nice chunk of your staff's time lost.
I think AI coding is another part of why this is seeing a resurgence. It's a lot quicker to build quick and dirty scripts or debug the random issues that come up when self-hosting.
A lot of this is support. If you're self-hosting, when things don't work the way they should, the team has no one to blame. On AWS, they can always lean on AWS not working the way it should as an excuse.
I think it might be as simple as IPv4 being nicer to look at… maybe we should have just done "IPv5" and added another block, e.g. 1.1.1.1.1. I know it's stupid, but IPv6 addresses are just so hard to remember and look at that I think it's just human nature to gravitate towards the simplicity of IPv4.
dead::beef is just as memorable as 1.1.1.1, and my v6 delegated prefix is just as unmemorable as my public v4. The "easier to remember" argument just sucks hard.
The problem with "add another block" is that you have to change everything everywhere to make it work... and if you're changing everything, why not expand it properly?
Only a tiny minority of people ever have to look at those addresses; the majority just types "facebook", hits enter, clicks the first Google result and gets Facebook (because ".com" is too hard to write).
Who remembers IPv4 addresses? If you have more than a small handful of devices in your network you're probably going to want some kind of name service.
I have difficulty remembering ten numbers; why do I have to say 1-212-487-1965 when I can just say Santa Rosita 71965? Maybe we should have just done another exchange name and added another name, e.g. Hawthorne Santa Rosita 71965. I know it's stupid, but 10-digit phone numbers are just so hard to remember and look at that I think it's just human nature to gravitate towards the simplicity of telephone exchange prefixes.
Yet again, another fundamental misunderstanding (either genuine or not, I'm not sure) about the low-level technologies and their origins that underpin all of this. "Can't we just..."? No.
Fwiw, the solutions mentioned here don't seem to properly secure the kernel's network stack against common attacks (rp_filter, accept_redirects, accept_source_route, syncookies, netfilter rules, etc). Ask your local security guru to harden the instance before deploying.
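For anyone going the DIY route, a minimal sketch of the sysctl side, as a starting point rather than a vetted baseline (netfilter rules and anything distro-specific still need to be handled separately):

cat <<'EOF' | sudo tee /etc/sysctl.d/99-hardening.conf > /dev/null
# Reverse-path filtering: drop packets whose source isn't reachable back the same way.
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
# Ignore ICMP redirects and source-routed packets.
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_source_route = 0
# SYN cookies to survive SYN floods.
net.ipv4.tcp_syncookies = 1
EOF
sudo sysctl --system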
As an OG networking person, developer, and Linux user, the state of modern dev culture just makes me sad.
Modern devs are helpless in the face of things I taught myself to do in a day or two when I was fourteen, and they’re paralyzed with terror at the thought of running something.
It’s “hard” goes the cliche. Networking is “hard.” Sys admin is “hard.” Everything is “hard” so you’d better pay an expert to do it.
Where do we get these experts? Ever wonder that?
It’s just depressing. Why even bother.
It really makes me worry about who will keep all this stuff running or build anything new in the future if we are losing not only skills but spine and curiosity. Maybe AI.
Yes, networking and sysadmin are hard, because the Internet is a much more hostile place than it was 20 years ago and the consequences for getting things wrong are much more severe. In the early 2000s, ISPs had ports open by default and getting a static IP address was a question of just asking. With dyndns, we were hosting websites off home computers. I remember a comment on HN saying that some US university provided publicly routable static IPs to dorm room ports. I'm not even sure I could get a static IP address nowadays as a home consumer, never mind the willingness to host something that is not behind a WAF.
And when you got things wrong back in the day, you came home from school, saw a very weirdly behaving computer, grumbled and reinstalled the OS. Nowadays it is a very different story with potentially very severe consequences.
And this is just about getting things wrong at home; in a corporate environment it is 100x more annoying. In corporate you already spend 80% of development time figuring out how to do things and 20% on actual work; nobody will have the time to teach themselves something outside their domain.
For those who DID think "I wonder what my 'when I was a kid' will be about when I'm old": what kind of things did you guess it'd be, and what did it actually end up being?
I'm only in my 30s but I was thinking recently "when I'm retired I feel like I'm going to be telling stories about how back in my day we had this thing called the filesystem and you'd just browse it directly..."
Man, just this week I had a moment like this that killed me. I had just woken my tweenager up for school and realized I’d turned into the kind of asshole who comes into your room in a good mood at 6 am. Stood in the shower and came to terms with that, but it took a while.
I actually kinda think AI will help with this, in a roundabout way.
I think of AI as a kind of floor, a minimum required skill to be able to get a job as a professional anything. If you want to find paid work as a developer, you have to at least be better than AI at the job.
Optimistically AI will filter out all the helpless Devs who can't get anything done from the job market. "Code monkeys" won't be a thing.
Juniors will have to enter unpaid trainee programs I guess, but that might not be such a bad thing
All of this. I despair with some of the takes on basic technology being hard. And when you try to defend understanding just the most rudimentary things, you're labeled a problem because you should just be paying out the nose for the service and writing even more shit code to cover it up.
I build my own NAT instances from Debian Trixie with Packer on AWS. AWS built-in NAT Gateways use an absurdly outdated and end-of-life version of Amazon Linux and are ridiculously expensive (especially traffic).
The bash configuration is literally a few lines:
# Enable IPv4 forwarding (persisted across reboots).
cat <<'EOF' | sudo tee /etc/sysctl.d/99-ip-forwarding.conf > /dev/null
net.ipv4.ip_forward=1
EOF
sudo sysctl --system

# Masquerade everything leaving via the public interface.
sudo iptables -t nat -A POSTROUTING -o ens5 -j MASQUERADE
# Forwarding: inbound only for established/related flows, outbound allowed.
sudo iptables -F FORWARD
sudo iptables -A FORWARD -i ens5 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -o ens5 -j ACCEPT

# Persist the rules (the iptables-persistent package loads this file at boot).
sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null
Replace ens5 with your instance's network interface name. Also, VERY IMPORTANT: you must set source_dest_check = false on the EC2 NAT instances (see the CLI sketch below).
Also, don't assign an EIP to your EC2 NAT instances (unless you absolutely must persist a given public IP), as that counterintuitively routes traffic through the public network. Just use an auto-assigned public IP (no EIP).
NAT instance with EIP:
- AWS routes it through the public AWS network infrastructure (hairpinning).
- You get charged $0.01/GB regional data transfer, even if in the same AZ.
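A sketch of the AWS-side settings referred to above, using the CLI; the instance and route-table IDs are placeholders:

# Let the instance forward traffic that isn't addressed to it.
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
# Point the private subnets' default route at the NAT instance.
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 --instance-id i-0123456789abcdef0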
> Also, don't assign an EIP to your EC2 NAT instances (unless you absolutely must persist a given public IP), as that counterintuitively routes traffic through the public network. Just use an auto-assigned public IP (no EIP).
Could you point me to somewhere I can read more about this? I didn't know there was an extra charge for using an EIP (other than for the EIP itself).
That's what you did before AWS had the "NAT Gateway" managed service. It's literally called "NAT Instance" in current AWS documentation, and you can implement it in any way you wish. Of course, you don't have to limit yourself to iptables/nftables etc. OPNsense is a great way to do a NAT instance.
I believe the NAT instances also use super old and end-of-life Amazon Linux. I prefer Debian Trixie with Packer and EC2 instances and no EIP. Most secure, performant, and cost effective setup possible.
> NAT AMI is built on the last version of the Amazon Linux AMI, 2018.03, which reached the end of standard support on December 31, 2020 and end of maintenance support on December 31, 2023.
Sure, that's one case, though you might be able to give out a hostname instead of an IP for others to whitelist. Then you just set a low TTL and update the DNS record.
Yeah, I just use a VPS box I pay $20/year for. Only the most basic config goes on this machine. Load is basically 0.1, and it holds no data.
Then I run my stuff locally.
And then I use SSH tunneling to forward the port to localhost of the remote machine. It's a unit file, and it will reconstruct the tunnel every 30s if broken, so at most 30s of downtime. Then nginx picks it up.
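For the curious, a minimal sketch of such a unit; the service name, SSH user, VPS hostname, and the Jellyfin port are placeholders, Restart=/RestartSec= handles the 30s reconstruction, and ServerAliveInterval catches silently dropped connections:

cat <<'EOF' | sudo tee /etc/systemd/system/tunnel-jellyfin.service > /dev/null
[Unit]
Description=Reverse SSH tunnel exposing local Jellyfin on the VPS
After=network-online.target
Wants=network-online.target

[Service]
# -R binds VPS-side 127.0.0.1:8096 and forwards it to the local Jellyfin.
ExecStart=/usr/bin/ssh -NT -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -o ExitOnForwardFailure=yes -R 127.0.0.1:8096:127.0.0.1:8096 tunnel@vps.example.com
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload && sudo systemctl enable --now tunnel-jellyfin.service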
I use Tailscale myself, but if you want everything totally under your control (and don't want to go to the trouble of setting up headscale or something similar) then that's one of the absolutely simplest, lowest-effort ways of doing it. EDIT: Well, except for the VPS box I suppose, but if that provider went down or you had any reason to suspect they were doing anything suspicious, it would be quite simple to jump to a different provider, so that's pretty darn close to controlling everything yourself.
Particular things: I use a Let's Encrypt wildcard cert, so my subdomains aren't leaked. If you register per subdomain, LE publishes every certificate, and with it all your subdomains, to the Certificate Transparency logs. Learned that and had to burn that domain.
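For reference, a sketch of the wildcard issuance; example.com is a placeholder, and wildcards require the DNS-01 challenge (either the manual flow below, which prompts for a TXT record, or a DNS-provider plugin):

# A single cert covering the apex and all subdomains.
sudo certbot certonly --manual --preferred-challenges dns \
    -d 'example.com' -d '*.example.com'

Only the apex and the wildcard entry end up in the Certificate Transparency logs, so individual hostnames stay private.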
The VPS is from LowEndBox. Like 2 cores, 20GB storage, 2GB RAM. But it runs perfectly fine.
I run Jellyfin, Audiobookshelf, Navidrome, and Romm. SSH tunnel per application.
It would also be trivial to switch providers. But again, it's not a seed box, not doing torrents, not doing anything that would attract attention. And best of all, no evidence on the VPS. It's all SSL and SSH.
For anyone else who is super confused as to wtf this is about: 1) it's not "NAT Gateway" but rather "the AWS service called NAT Gateway", and 2) it's not "self-hosting" but "hosting in EC2", in the same sense that "running PostgreSQL on an EC2 instance" wouldn't be "self-hosting Aurora".
Agreed. Assuming an AWS "NAT Gateway" is the same as a regular NAT?
Security is not the purpose of a NAT. It's there to give you more IPs than you have. There's all sorts of NAT hole punching techniques. If you want a firewall, you need a firewall.
The firewall provides the stateful one-way door, the router moves packets between the set of subnets it can see, and NAT makes it so things on the public internet think the conversations from one private address+port combo are actually coming from another, public, address.
The last part isn't what adds the security, and you can absolutely NAT without preventing the "outside" subnets from routing to the "inside" subnet. It's just that NAT is almost always done on the same box that provides the stateful firewall, so people tend to think of the three functions as combined in concept as well.
> I've seen claims of providers putting IPv6 behind NAT, so don't think full IPv6 acceptance will solve this problem.
I get annoyed even when what's offered is a single /64 prefix (rather than something like a /56 or even /60), but putting IPv6 behind NAT is just ridiculous.
This shouldn't be mistaken for an anti-IPv6 post. There are also some steps you have to go through to enable IPv6 on your VPS networks, and there's still stuff like GitHub not handling IPv6. So, much as we need to migrate, we still have to support IPv4 connectivity for the foreseeable future.
> and there's still stuff like GitHub not handling IPv6.
And virtually everything inside of AWS still requires IPv4, so even if you have zero need to reach out to the WAN, if you need any number of private AWS endpoints you're going to be allocating some IPv4 blocks to your VPC :(.
I've worked at four tech companies and never saw a hint of IPv6 (except for some tests that verified that third-party networking code accepted that address family).
Instead I played with IPv6 at home to make sure I understood it well enough should it ever come up at work. We'll see!
In theory... but what happens when you want to change ISPs, or your ISP doesn't assign static IPv6 blocks? It's recommended, but ISPs have no incentive to give a shit about you. Now all internal infra is not routable.
There absolutely are annoyances IPv6 gets rid of, so embedded in IT culture that we only see them if we look.
Port forwarding, external/internal address split, split-horizon DNS, SNI proxies, NAT, hairpin routing: all hacks that exist mostly because of the shortage of IP space.
Everyone has to address their spiritual beliefs every time they mention something vaguely related to them? Else they lack epistemic humility? ...Did it occur to you that most people have actually thought of this question?
Wait, is "seems lacking in epistemic humility" just coded language for "I disagree, therefore you couldn't possibly be thoughtful"?
Death, taxes, and transfer fees.
No, your service does not need the extra 0.099% availability for 100x the price...
Make your own VPN while you're at it; WireGuard is basically the same config.
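A minimal sketch of that, assuming wireguard-tools is installed; the addresses, port, and peer key are placeholders:

umask 077
wg genkey | tee server.key | wg pubkey > server.pub

cat <<EOF | sudo tee /etc/wireguard/wg0.conf > /dev/null
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = $(cat server.key)

[Peer]
# One block per client; paste the client's public key here.
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
EOF

sudo systemctl enable --now wg-quick@wg0

Each client mirrors this with the server listed as a [Peer] and its own key pair.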
Client automatically deals with reconnecting, never have to touch it.
SSH tunnel would have been simpler, just didn’t want it open.
An SSH tunnel probably needs keep-alives turned on, otherwise connection loss may not be detected.
It is a damn service, which is defined as "you pay someone to do it".
(your second sentence is a bit confusing)
Repeat after me: NAT is not a firewall. And we need to stop pretending it is.
2.) Market segmentation: keeps home users from easily hosting their own services without spending $$$ on an upgraded plan.
3.) Adding on to #2, I've seen claims of providers putting IPv6 behind NAT, so don't think full IPv6 acceptance will solve this problem.
Shoutout to Hacker News for having IPv6 support!
For someone just getting started with networking and learning things, this seems the best way to go forward.
Using both GUA/ULA together solves enough to get by, but it's not ideal.
Why state this as absolute fact? Seems a bit lacking in epistemic humility.
Wait, is "seems lacking in epistemic humility" just coded language for "I disagree, therefore you couldn't possibly be thoughtful"?