> Practically speaking, that means that people and organisations running a Matrix server with open registration must verify the ages of users in countries which require it. Last summer we announced a series of changes to the terms and conditions of the Matrix.org homeserver instance, to ensure UK-based users are handled in alignment with the UK’s Online Safety Act (OSA).
At least you can self-host Matrix, and messages are end-to-end encrypted, unlike IRC.
> Practically speaking, that means that people and organisations running a Matrix server with open registration must verify the ages of users in countries which require it.
Practically speaking, I would just ignore this requirement. The UK government has no jurisdiction on this side of the pond.
That's assuming UK authorities can even identify who is operating the Matrix instance. At the very least this assumes that a warrant is served to the registrar and/or the owner of the server/VPS in the correct jurisdiction, and that obfuscation measures were not taken by the operator. All of this will probably go nowhere.
I'm sure your server will be fine. Now if you put up a big declaration like that on your website, some bureaucrat might just decide to pick on you when they get the chance like France with Durov.
There are a few IRC clients that support OTR: irssi-otr is one [1], weechat-otr is another [2]. Pidgin supports it too, though I have not used it in a very long time. HexChat supports it via an always-work-in-progress plugin. There may be others.
OTR could use some updates to include modern ciphers, similar to the recent work in OpenSSH, but it is probably good enough for most people.
E2EE aside, having chat split up into gazillions of self-hosted instances makes it much harder for chat to be hoovered up all in one place. It takes more effort to target each person, and that becomes a scalability issue for governments. Example effort: [3]
Links 1 and 2 have not had updates in 10 and 8 years respectively; they probably don't even compile anymore. They implement OTRv3, which was published around 2005 and uses 1536-bit primes. As far as I know, neither the protocol nor the implementations were audited (and especially not audited recently). This is not good encryption at all.
Additionally, OTRv3 does not allow multiple clients per account, which makes it unusable for anyone who wants to chat from two devices.
I use link [1] all the time. It comes pre-compiled for many Linux distributions, though not installed by default. And yeah, like I said, it needs cipher updates like those recently performed in OpenSSH. HN has a handful of cryptography nerds who could update OTR in their sleep if they so desired, maybe even rewrite it in Rust, but being cryptography nerds they probably have no need. If the same is true of cryptographers as of car mechanics and plumbers, they probably only use plain text, the way mechanics have broken-down cars in their yards and some burned-out plumbers have old leaky pipes.
As a mechanic-minded person, all the broken-down junk I plan to fix someday has no bearing on the state of the tools I actually use day to day.
(In my case, all the old broken guitar pedals and vintage computers littering my house have no bearing on the state of my workstations and gigging setup.)
You can try to self-host, but neither Synapse nor Dendrite is in a good state for running a server. I tried Dendrite for a while and it was always playing catch-up to Synapse, despite being the supposed successor, and now it's not even under development? I can't even tell what's going on over there.
Anyway, my main experience of Matrix is "failed to decrypt message". It's... not great. I wish it were better.
You did it wrong. The correct approach is to flip a coin and let it decide between tuwunel and continuwuity, then self-host that until it dies along with its database format.
IRC is also most commonly used on open servers where anyone can join whenever they want, without so much as needing to register an 'account'! You just pick a nickname out of thin air and off you go.
In that kind of environment, end to end encryption really doesn't add value.
You verify identity over the now-encrypted channel, just like SSL should have done 30 years ago but refused to for doctrinal reasons. And in the (frequent) cases where you don't actually care about the other party's identity you just don't verify it at all.
Are we talking about with OTR? You're meant to verify fingerprints out of band as usual. Without, I guess you check if they've authenticated to nickserv if there are services. Or do your own checks or heuristics.
- notes left there for work, family organization, etc.: basically things for which an email is "too much" but a small scrap of text seen by someone serves the purpose well
- calls, whether audio-only or audio + video
For social use, I see Lemmy or Nostr/Habla more than Matrix. But for all of this, there's a major lack of a single app that is easily go-installable, pip-installable, or cargo-buildable without a gazillion dependencies and a thousand setup problems, to the point that most people just choose Docker, running stuff made by others that they know almost nothing about, because setting up and maintaining these solutions is just too complex.
I appreciate their effort, but isn't Matrix (the company) based out of the UK, with its primary hosted instances on AWS in the UK? The UK was the first, AFAIK, to create such internet laws [0]. I could imagine people running their own instances in places where the age laws are not yet active, but that number is shrinking fast. [1]
Their solution is for everyone to pay for Matrix with a credit card to verify age. I assume that means there must be a way to force only paid, registered accounts to join one's instance? What percentage of the accounts on Discord are paid for with a credit or debit card? Or boosted? I don't keep up with the terminology.
> isn't Matrix based out of the UK and primary hosted instances on AWS in the UK?
It doesn't matter what country you run your server in or where your company is based; if you're providing public signup to a chat server then the countries (UK, AU, NZ etc) which require age verification will object if you don't age verify the users from those countries. (This is why Discord is doing it, despite being US HQ'd). In other words, the fact that The Matrix.org Foundation happens to be UK HQ'd doesn't affect the situation particularly.
(Edit: also, as others have pointed out, Matrix is a protocol, not a service or a product. The Matrix Foundation is effectively a standards body which happens to run the matrix.org server instance, but the jurisdiction that the standards body is incorporated in makes little difference - just like IETF being US-based doesn't mean the Internet is actually controlled by the US govt).
> Their solution is for everyone to pay for Matrix with a credit card to verify age.
Verifying users in affected countries based on owning a credit card is one solution we're proposing; I suspect there will be other ways to do so too. However, this would only apply on the matrix.org server instance. Meanwhile, there are 23,306 other servers currently federating with matrix.org (out of a total of 156,055), and those other servers, if they provide public signup, can figure out how to solve the problem in their own way.
Also, the current plan on the matrix.org server is to only verify users who are in affected countries (as opposed to trying to verify the whole userbase, as Discord is).
> It doesn't matter what country you run your server in or where your company is based; if you're providing public signup to a chat server then the countries (UK, AU, NZ etc) which require age verification will object if you don't age verify the users from those countries. (This is why Discord is doing it, despite being US HQ'd).
Whether it matters depends very much on what sort of organization you are.
Discord is a multinational for-profit corporation planning an IPO. It takes payments from users in those countries, likely partners with companies in those countries, and likely wants to sell stock to investors in those countries. Every one of those countries has the ability to punish Discord if it does not obey their laws, even if it does not have a physical presence there.
The situation is likely quite different for most of the 23,306 Matrix servers that federate widely. The worst thing Australia, for example, could do to one of their operators is make it legally hazardous for them to visit Australia.
It does not actually need to be configured in a federated state, and frankly it scales better when it's not. The login can be tied to anything or use its own system, from modern SAML SSO to an old-school forum.
You can run one for a few friends and it scales just as well as a private discord for a few friends. Just need persistent storage for media uploads if people are sharing video a lot.
> (This is why Discord is doing it, despite being US HQ'd)
Right, but the US isn't far behind on the same legislation wave either. It's a lot less likely to be regulated at the US federal level in the way the EU is debating EU-wide legislation, but a handful of US states already have a version of this legislation on the books and about to be enforced, or about to be on the books (some of them, like South Carolina's partially passed bill, written to be enforceable on day 1 with no grace period).
(Tangential to your comment but apropos of the Discord news...)
Have any of the Matrix/Element teams seriously considered taking advantage of current events by offering a gamer-focused class of premium account, for Discord refugees who want to redirect their Nitro budgets to fund Matrix gaming features? (Perhaps on a separate homeserver, to avoid the lag during times when matrix.org is overloaded.)
If it were positioned as Patreon-style crowdfunding rather than selling a finished product, and expectations were set appropriately, I wonder if it could end up a nontrivial source of income with which to develop features that Matrix deserves but corporate/government customers won't pay for.
The idea of crowdfunding Discordish features for Matrix from disaffected Discorders (e.g. using the premium acct system we've built for matrix.org) has come up a bunch.
The problem is more that Element team is seriously stretched (particularly after the various misadventures outlined here: https://youtu.be/lkCKhP1jxdk?t=740) - so even if there was a pot of money to (say) merge custom emoji PRs... the team is more than overloaded already with commitments to folks like NATO and the UN. Meanwhile, onboarding new folks and figuring out how to do the Discordy features and launch a separately Discordy app under a Discordy server would also be a major distraction from ensuring Element gets sustainable by selling govtech messaging solutions.
So, we're caught in a catch-22 for now. One solution would be for other projects to build Discordy solutions on top of Matrix (like Cinny or Commet), or fork Element to be more Discordy (and run their own crowdfunders, perhaps in conjunction with The Matrix Foundation). Otherwise, we have to wait for Element to get sustainable via govtech work so it can eventually think about diversifying back into consumer apps.
The internet was built on noncompliance with laws. The chickens are coming home to roost, that is all. Sovereign countries can only let social media and tech companies poison their societies so much before it becomes a real threat to the nation.
It was all fun and games while it was a few geeks and early adopters having (mostly) fun. Now it is corporations making billions while destroying the mental health and productivity of their "users".
I appreciate that answer; it makes sense that it is based on the country. What I'm hoping to avoid is having to give my actual identity to every service on the internet. It will just enable terrible monitoring and oversight that isn't helpful for democracy. I don't trust the current US administration to know everything I say and everything I do; I don't really trust any government to have that power (even though I do want to stop crime and abuse). I like some privacy. We are heading that way already with the Texas and Florida age requirements on the internet today.
This Matrix discussion is missing the point: many people don't want ubiquitous tracking of everything we do on the internet. You and Matrix are seemingly not honestly addressing that point, because Matrix doesn't seem different from Discord (in the requirements).
The difference with Discord is that Matrix is a protocol, not a service. It's made up of thousands of servers run by different people in different countries. Public instances may choose to verify users in affected countries to abide by the law; others may choose to run a private instance instead.
Matrix is a protocol, not a service. It's likely the UK government can enforce laws against content and accounts hosted on the matrix.org servers, but no single government has jurisdiction over the entire network.
That sounds more like a recipe for overreach than a method to escape the law, to be honest. Governments don't typically go "aw, shucks, you've caught us on a technicality" without getting the courts involved.
Clueless lawmakers will see this app called Element full of kids chatting without restrictions and tell it to add a filter. When the app says "we can't", the government says "sucks to be you, figure it out" and either hands out a fine or blocks the app.
There are distinctions between the community vibe Discord is going for (with things like forums and massive chat rooms with thousands of people) and Matrix (which has a few big chatrooms but mostly contains small groups of people). No in-app purchases, hype generation, or other predatory designs; just the bare basics to get a functional chat app (and even less than that if you go for some clients).
I'd say being based in the UK will put matrix.org and Element users at risk, but with Matrix development being funded mostly by the people behind matrix.org, that implies an impact on the larger decentralized network.
It would take some clever crafting to outlaw Matrix clients without also outlawing web browsers and conventional email clients. Let's assume they did though. The best they can do is block it from app stores, which won't stop anyone but iOS users.
More likely, it just won't become popular enough for lawmakers to notice because the UX is a little rough, and people have very little patience for such things anymore.
Google has backed away from that, stating that an "advanced workflow" with more warnings than the current settings toggle will remain available. We should all be concerned they even considered such a thing though.
(Non-Android) Linux phones often aren't GNU; PostmarketOS is one of the more popular options, and that's based on Alpine Linux which uses musl and Busybox.
It's not really a method to escape the law, or a technicality - it's that people other than Matrix.org are operating chat services, and the law applies to those people, but those people are not Matrix.org.
This won't save Matrix.org if legislators throw stupid at it, of course, but Matrix.org has the opportunity (though maybe not the resources) to engage with UK legislators to ensure they feel respected and that honest efforts are being made to comply.
There are a number of alternate Matrix clients, and nothing is stopping a non-UK dev from forking any of them at any time, including Element. And many are not “apps” that can be blocked from a “store”, they are desktop or web clients.
> Governments don't typically go "aw, shucks, you've caught us on a technicality" without getting the courts involved.
That might happen here, but I don’t think that principle holds generally. If that were true, wouldn’t every component of the service provider chain be sued for people e.g. downloading pirated or illegal stuff? The government cracks down on e.g. torrent trackers and ISPs, but they haven’t seriously attacked torrent clients or the app stores/OSes that allow users to run those clients. Why not?
Governments go after websites that don't host anything illegal all the time. Torrents don't contain any illegal information yet torrent websites are taken down routinely through legal challenges, by court orders, and in some countries where the government is at the behest of the entertainment industry, by special anti-piracy organisations with ridiculous censorship powers.
Apple and Google have both been forced to take down apps and ISPs block IP ranges all the time. Usually without much of a fight. Apps for reporting ICE, for instance, have been taken down without any clear legal precedent and without much judicial challenge. The entire chain is already being threatened, sued, and censored.
The trick is usually to escape the jurisdiction of countries that care by hosting servers in foreign countries, hosting app executables and such off-platform, and maybe adding a CDN like Cloudflare to the mix to protect against getting arrested too easily. For this to work for Matrix, the company developing Matrix would need to leave the UK and move to a place where this age verification bollocks isn't necessary. I don't think that kind of behaviour is good for a company currently financed in large part by government contracts.
It's just a reality that law is harder to enforce when you cannot target a given server and take out an entire service. Regardless of what you think of the law.
This is why to this day torrenting of copyrighted material is alive and well.
I thought it was both, and their hosted service is in the UK. Is it not? I know people can host their own, but I have had very little success in getting people to host their own things. Most here at HN will not do anything that requires more than their cell phone. Who knows, maybe Discord's actions will incentivize more people to self-host.
That's why the BitTorrent protocol is in such dire straits /s
Bittorrent actually has fewer real uses than Matrix. The former is useful for Microsoft and others trying to roll out big patches, but the latter is used by NATO, the German Armed Forces, and the French government
Couldn't you simply set up your own instance and link up with the wider network? I guess you would have to age verify yourself if you live in a country that requires it, but regulating that would be sort of hilarious.
Whether or not authorities with jurisdiction over you would notice your instance (homeserver) or bother you about age verification is an issue you'd have to consider for yourself.
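For anyone weighing up the "set up your own instance" route: a minimal deployment sketch using the official Synapse Docker image, roughly following the upstream docs (the data path and `example.com` domain are placeholders; check the Synapse documentation for your version before relying on the exact env vars).

```shell
# Generate an initial homeserver.yaml and signing keys into /srv/synapse-data
# (SYNAPSE_SERVER_NAME becomes the domain part of your user IDs).
docker run -it --rm \
  -v /srv/synapse-data:/data \
  -e SYNAPSE_SERVER_NAME=example.com \
  -e SYNAPSE_REPORT_STATS=no \
  matrixdotorg/synapse:latest generate

# Then run the homeserver itself, serving the client/federation API on 8008
# (you would normally put a TLS-terminating reverse proxy in front of this).
docker run -d --name synapse \
  -v /srv/synapse-data:/data \
  -p 8008:8008 \
  matrixdotorg/synapse:latest
```

From there, federation with the wider network works out of the box once DNS and TLS are sorted; the ongoing cost is the "playing IT person" maintenance others in this thread mention.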
I'm more familiar with Australian legislation than others, but here at least a home server would definitely not require age verification. Kids are free to make group chats with their friends in a bunch of services.
The spirit of the law is definitely not against chatting with friends, but it is against the idea of connecting minors with strangers, so while federation is generally not codified (or, IMO, understood well by legislators) and you're probably not going to be bothered by authorities about it, I reckon sooner or later the law will come for federated networks.
(Since we all seem fine just taking some uncertified random third party's word for it that their AI face recognition definitely didn't see a thumb with a face drawn on it, maybe it'd be adequate for Matrix.org to add an "18+ user" flag to the protocol and call it a day?)
Couldn't you simply set up your own instance and link up with the wider network?
I honestly have no idea. As much as they love money, I am not paying my lawyers to research this one. I would probably wait for others to be made an example of.
It's an interesting legal question, but I would imagine for a federated service, the burden of proof should be on the individual's home server for age verification. That's where the user account is, after all.
Matrix is basically labeled "adults only" everywhere, so restricting certain servers/rooms due to possible innocent eyes is likely out of scope.
Yeah that's the thing. No matter what you do, it's bound to be illegal somewhere in the world. Be it North Korea or Iran or Australia. You simply can't follow everyone's laws because they are often contradictory.
It did however deliver the hilarious quote "The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia", in regards to end-to-end encrypted messaging.[1]
It went down about as well as you would think it would.
It was a scheme to sell math textbooks involving a purported method for squaring the circle which depended on that incorrect value of pi. Squaring the circle had already been proven impossible, but that didn't stop them from trying.
I don't care about what Australia wants. If I ran a private Matrix instance (e.g. to chat with my gaming buddies) I wouldn't even agree to divulge who is registered on it.
I've been threatened by the governments of Pakistan and Germany for stuff I've said pseudonymously on the Internet. As much as they may think everybody needs to care about their laws, I happen not to.
> Since then Australia, New Zealand and the EU have introduced similar legislation
I am not aware that the EU pushed legislation onto us here in central Europe with regard to age verification. I am not saying it has not happened (I simply don't know right now), but this needs a source rather than just a statement. From what I remember, local media in German criticized the UK, so it would be strange to see the same legislation suddenly come into effect here.
Also, it seems we did not really win a lot if a private company operates Matrix.
The EU will probably wait until the launch of a digital wallet that can do anonymous age verification. Otherwise it won't get enough political support.
Greece, Austria, Finland, Belgium, and Italy are also discussing it.
The best one for me is Portugal: parliament approved this law while the country is being devastated by hurricane winds and flooding, with several calamity zones. They are really bringing laws into effect with maximum obfuscation.
EU anonymity online is over because ivory tower folks want to speedrun all of us into 1984.
And this is obviously just a stepping stone to mass message scanning. The revolution will not be organizable.
>If it doesn't have enough of the utility, performance, and positive UX, it will never gain enough market share to matter.
That's part of why billionaires will continue to screw people over. They will try and stay in bed with the familiar evil, rather than put up with the temporary inconvenience of freedom.
And it's a negative spiral. Fewer users means less money to bring in staff, which means fewer resources to improve. Discord didn't become Discord in a month, but other competitors don't get that grace period.
Last time I tried matrix (~2022) they still didn't have voice channels--they had voice calls but not a mechanism where people can join/leave a particular voice chat at will. To me this is a must have feature for anyone who has used discord/mumble/ventrilo.
I was actually playing with voice rooms the other day. One can create a standing call room that people can join or leave as they see fit, without having to set up a new call each time. Discord currently has more integrations with streaming and voice-chat rooms, but they had a bit of a head start, and even Element (let alone Commet and Cinny) is catching up.
I agree with you. The good news is that it looks like some of the alternate clients are focusing on it. https://commet.chat/ has voice channels (video rooms but default to camera off), and cinny's element call support PR defaults to camera off in video rooms as well iirc.
It is really great to see a post from the Matrix Foundation that forthrightly acknowledges it is not ready for mainstream adoption and shows awareness of its limitations. I hope this is a good omen for the future of Matrix.
It seems both the Linux desktop and Matrix have the opportunity of a life time now. If they don't rise to the occasion and grab that marketshare, I fear there may never be an opportunity like this again.
Linux won't rise to the occasion because there's no figurehead leading the rise. Linux's greatest strength and weakness lies in the breadth of its community. But that's not how you traditionally attract a mainstream audience.
That's why Valve is the best chance here, and why I'm not too optimistic. Valve's incentives are to make its own walled garden, which in my eyes defies the idea of linux. But that seems to be the only thing that works these days.
Totally agree there and they actually talk about that in the post:
> Finally: we’re painfully aware that none of the Matrix clients available today provide a full drop-in replacement for Discord yet. All the ingredients are there, and the initial goal for the project was always to provide a decentralised, secure, open platform where communities and organisations could communicate together. However, the reality is that the team at Element who originally created Matrix have had to focus on providing deployments for the public sector (see here or here) to be able to pay developers working on Matrix. Some of the key features expected by Discord users have yet to be prioritised (game streaming, push-to-talk, voice channels, custom emoji, extensible presence, richer hierarchical moderation, etc).
Same here, tried a couple of years ago. I was drawn to it because of the protocol concept. The experience was not bad, everything worked. But I remember the signup/domain/keys/backups/etc UX was a bit confusing. Happy to see there is more attention going to Matrix lately. Time to give it another go perhaps
As a Discord user myself, I’ve been surprised at how aggressive the recent data collection direction feels, especially given how much of its appeal came from being lightweight and community-centric. This to me seems like a real opportunity for a simpler alternative that preserves core functionality without the additional data surface area.
I didn't see this specifically coming, but I saw this enshittification from a mile away the moment they changed leadership and immediately started talking about wanting to IPO. It was never going to stop at aggressively pushing Nitro with this sort of C-suite.
>a simpler alternative that preserves core functionality
That's practically a contradiction, sadly. The core features people want are all varied. You'd need 4-5 "simple" apps to replicate them all, but people want all their eggs in one basket.
I think I've been waiting since the 90s hoping somebody will figure out how to make this a real thing (or was it the early 2000s?)
As I recall it seemed to be just one guy, David Chaum, who did so much to show how so many of these things could work, but the rest of us have somehow managed to do very little with his ideas. What are we missing?
You're missing a drive to make billions or wield power, silly.
Of course there's always been ways to do this ethically. But the ones up top don't make money from that. And they can spend billions convincing people the only way it works is with whatever makes them money.
What's the canonical way to block users from age-gate jurisdictions to one's website? I wish Cloudflare had a wizard flow for this. I'm not going to age-gate access to my blog (it's a wiki so it has user-generated content) so I'd rather jurisdiction-gate it.
Perhaps we should have network traffic report its geographic location so that we can comply easily. Would prefer something in an IP packet so that I can just filter at the firewall. Doesn't even need to be implemented in a sophisticated way at clients. Can just have the urgent flag repurposed to mean "respond only if not geo-locked and unconcerned with regulatory" and then I can drop these directly, and regulated source locations could ensure that packet flags are correctly set at the widest peering location out of the UK and so on.
Oh thank you. I don't know why I couldn't find it but it's actually a near first class feature in WAF (select by country/continent and block). I think it's because I wanted to serve a blocked page, which is totally doable with Custom Pages or a Cloudflare Worker. Thank you!
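For the custom blocked page route, here is a hypothetical sketch of the decision logic you might drop into a Cloudflare Worker. Cloudflare populates `request.cf.country` with the visitor's ISO 3166-1 alpha-2 country code; the jurisdiction list below is purely illustrative, not legal advice, and the helper function name is my own invention.

```javascript
// Jurisdictions to gate (illustrative only; pick your own list).
const BLOCKED_JURISDICTIONS = new Set(["GB", "AU", "NZ"]);

// Pure decision helper: returns block details, or null to pass through.
function blockFor(country) {
  if (country && BLOCKED_JURISDICTIONS.has(country)) {
    return {
      status: 451, // HTTP 451: Unavailable For Legal Reasons
      body: "This site is not available in your jurisdiction.",
    };
  }
  return null;
}

// Inside a Worker's fetch handler, this would look roughly like:
//
//   const decision = blockFor(request.cf && request.cf.country);
//   if (decision) {
//     return new Response(decision.body, { status: decision.status });
//   }
//   return fetch(request); // pass through to the origin
```

The plain WAF rule (match on country, action Block) does the same match; the Worker is only needed if you want to control the response body shown to blocked visitors.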
This appeal falls flat when you get to the parts about their homeserver requiring some form of age verification:
> From our perspective, the matrix.org homeserver instance has never been a service aimed at children, which our terms of use reflect by making it clear that users need to be at least 18 years old to use the server. However, the various age-verification laws require stricter forms of age verification measures than a self-declaration. Our Safety team and DPO are evaluating options that preserve your privacy while satisfying the age verification requirements in the jurisdictions where we have users.
Which is actually stricter than Discord's upcoming policy, which allows accounts to operate for free without any verification, with some limitations around adult-oriented servers and content.
There has been a lot of FUD about the Discord age verification, so a refresher: The upcoming changes do not actually require you to verify anything to use Discord. It just leaves the account in teen mode by default. This means the account can't join age-restricted channels, can't unblur images marked as sensitive, and incoming message requests from unknown users will go to a second inbox with a warning by default.
You can, of course, run your own Matrix server. Having been there before I would suggest reading up on some typical experiences in running one of these servers. Unless you have someone willing to spend a lot of time running the server and playing IT person for people using it, it can be a real headache. They also note that running a server doesn't actually get around any age requirements:
> Practically speaking, that means that people and organisations running a Matrix server with open registration must verify the ages of users in countries which require it.
The list of locations with those laws is growing very large. From the post:
> Last summer we announced a series of changes to the terms and conditions of the Matrix.org homeserver instance, to ensure UK-based users are handled in alignment with the UK’s Online Safety Act (OSA). Since then Australia, New Zealand and the EU have introduced similar legislation, with movement in the US and Canada too.
...and while we have no choice but implement it on the matrix.org instance, other folks running their own servers are responsible for their own choices.
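For a concrete idea of what "not open registration" looks like on a homeserver running Synapse, these `homeserver.yaml` options close public signup or gate it behind invite tokens (option names as documented in recent Synapse releases; verify against the config manual for your version):

```yaml
# Sketch of the relevant homeserver.yaml fragment, not a complete config.
# With open registration disabled, the open-registration obligation
# discussed above shouldn't apply in the first place.
enable_registration: false

# Alternatively, allow signup only for people you've issued a token to:
# enable_registration: true
# registration_requires_token: true
```

Servers run this way can still federate normally; they simply aren't offering public signup.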
My security collective is honestly considering going back to IRC.
It's becoming increasingly apparent that if you don't use something truly free and open source and host it yourself, you're just setting yourself up for more of this sort of thing.
You can't trust anyone to properly handle the problem of "how the hell do we keep creeps the f*ck away from kids?" with any amount of common sense.
Even if you self-host Matrix, there are still multiple ways you could be liable for content you don't even know exists. Especially the last 4 points here:
There are even custom message/media types that people use to upload hidden content you can't see even if you're joined to the same channel using a typical client.
20. "ask someone else’s homeserver to replicate media" -> also fixed by authenticated media
21. "media uploads are unverified by default" - for E2EE this is very much a feature; running file transfers through an antivirus scanner would break E2EE. (Some enterprisey clients like Element Pro do offer scanning at download, but you typically wouldn't want to scan at upload, given that by the time people download the file, the AV definitions used at upload might be stale.) For non-encrypted media, content can be and is scanned on upload - e.g. by https://github.com/matrix-org/synapse-spamcheck-badlist
22. "all it takes is for one of your users to request media from an undesirable room for your homeserver to also serve up copies of it" - yes, this is true. similarly, if you host an IMAP server for your friends, and one of them gets spammed with illegal content, it unfortunately becomes your problem.
In terms of "invisible events in rooms can somehow download abusive content onto servers and clients" - I'm not aware of how that would work. Clients obviously download media when users try to view it; if the event is invisible then the client won't try to render it and won't try to download the media.
Nowadays many clients hide media in public rooms, so you have to manually click on the blurhash to download the file to your server anyway.
I cannot even use Discord if I wanted to... every time I try to sign up I get immediately phone-walled and/or banned, and the appeal is always denied with "our automated system is working properly." I have been trying for close to ten(!) years now off and on, with all different combinations of browsers, OSes, ISPs and physical machines. No VPN or proxy either.
And even if I was able to register, that "automated system" still randomly bans people whenever it feels like it. Search the r/discordapp subreddit or just google "discord random ban", it's a widespread problem with no solution and I have no idea how so many other people seem to have no issues, yet at the same time you can find lots of people just as frustrated as me.
"Automated system discriminating against me with no appeal or recourse" may not be the biggest injustice in the world right now, but I fear/loathe that it seems like it's going to keep getting bigger.
A bug blocking functionality is an annoyance, but a Scarlet Letter branded onto a secret dossier is terrifying.
Does phone-walled mean you have to verify with a phone number? Are you unable to do it because it doesn’t work, or because you don’t want to give it your phone number?
Even times when I've given up and put in a real phone number that has never been used with Discord, it still just bans me immediately after verifying, so they basically just stole the number.
On the two occasions I’ve tried to chat with someone on the public Matrix server, I was completely unable to get it to work. I’ve tried with the new Mac app and with some older thing years ago.
So… choose your poison? I’m sure Matrix/Element works for someone or they would be out of business, but it does not work for me.
I have a similar issue with Matrix as well... even though it's federated, most large rooms use the same bots and blocklists so I end up getting banned from many rooms before I've even attempted to join.
Apparently my monopoly ISP rotates IPs fairly often and I am sharing them with people that have been doing bad things with them, so not only are many Matrix channels blocked but even large regular websites like etsy or locals are completely blocked for me as well. Anything with a CF captcha is also an infinite loop.
As far as I know I wasn’t banned or restricted or anything. The client just never managed to create a room or initiate a chat or whatever they called it.
I had the same issue but in instagram, not for personal use, but few years ago I made few startups, and every time I register the company I create few social accounts, all works well except instagram for some reason, always get flagged and asked to take a selfie with a book or something.. and even after providing that selfie I still get perma banned! I tried to call support, like how you would expect from any company let alone a multi billion one, only to find out that there’s actually no support in anywhere in fecebook wise! And the only way you can get something fixed is through a secret syndicate-like community where you should know someone who knows someone to talk to some person there to fix your issue. Long story short, never bothered with that shitty company again, good riddance.
That K-ID bypass has already been patched, and even if it's bypassed again, Discord is apparently directing some users to Persona instead now. Persona does server-side classification so that one won't be as easy as nulling out the checks on the client.
The 3D model method might work on Persona, but that demo only shows it fooling K-IDs classifier.
Eh, the worldwide rollout hasn't happened yet so for now the only people getting sent to Persona after they promised client-side scanning are those who are fiddling around with Discords internals to trigger the age verification flow early. But yeah if they stick with Persona then they will need to retract the client-side promise before the proper rollout, and that'll be even more fuel on the PR fire.
I've always wished there was a market for mod actions.
Moderation and centralization while typically aren't independent, aren't necessarily dependent. One can imagine viewing content with one set of moderation actions and another person viewing the same content with a different set of moderation actions.
We sort of have this in HN already with viewing flagged content. It's essentially using an empty set for mod actions.
I believe it's technically viable to syndicate of mod actions and possibly solves the mod.labor.prpbl, but whether it's a socially viable way to build a network is another question.
Consider the ActivityPub Fediverse. With notable, short-lived exceptions (when a bad actor shows up with a new technique), the majority of the abuse comes from a handful of instances, whose administrators are generally either negligent or complicit.
So your solution to people using a decentralized, federated protocol to say things you don't like is to stop various servers interacting with each other? At that point why not just use federated services with multiple accounts?
It seems far too risky to sign up on a service for the purpose of intercommunication that is able (or even likely) to burn bridges with another for any reason at any time. In the end people will just accumulate on 2 or 3 big providers and then you have pseudo-federation anyway.
Servers stopping federation with each other is pretty normal IMO. If I had a mastodon server I would also not federate with something like gab.com.
However all the LGBT+ friendly servers federate with each other and that's good enough for me. I like not having to see toxicity, there's too much of it in the world already.
In the Mastodon ecosystem it seems to be often taken to the extreme. As in, there ar servers will not federate with anyone who doesn't share their blocklist, and servers will block anyone using Pleroma (because it's "fascist") etc.
I've only seen that with certain German instances. They have their own particular laws over there and they're very adamant that other countries follow them to the letter, yes. I've seen the discussion. When it comes to nazi imagery I agree that should be forbidden everywhere but I think there were some other stipulations that were more controversial.
But I have not seen that outside the scope of Germany.
I don't know pleroma though. I've always hated twitter for its short-form content (I feel like it stimulates stupid nonsense like "look at my run today" and "I just had dinner" and discourages actual interesting content. So I was never into twitter clones either. I do use lemmy more although it has its own specific attitude issues around its developers (tankies).
Pleroma is Mastodon server software. For a bunch of essentially random reasons, it was popular among right-wingers setting up their Mastodon instances, and some servers responded by blocking any Mastodon server running it outright. A subset of those would also block any server not blocking Pleroma like they do.
> So your solution to people using a decentralized, federated protocol to say things you don't like is to stop various servers interacting with each other?
Yeah. In practice, Fediverse servers have formed clusters based on shared values. And since the second-largest cluster is (iirc) a Japanese CSAM distribution network, everyone is very glad that this sort of de facto censorship is possible. Do you have a viable alternative?
My solution is for instances to stop being negligent. Mastodon still directs everyone to create an account on mastodon.social using dark patterns (see https://joinmastodon.org/), which has lead to the flagship instance being far bigger than its moderation team can handle, leading to a situation where it's a major source of abuse and where defederation is too costly for many to consider.
"People will just accumulate on 2 or 3 big providers" is far from an inevitable circumstance, but there are conditions that make it more likely. That, too, is largely down to negligence or malice (but less so than the abusive communications problem).
> which has lead to the flagship instance being far bigger than its moderation team can handle, leading to a situation where it's a major source of abuse
Is that still true? As the admin of a small instance, I find the abuse coming from mastodon.social has been really low for a few years. There is the occasional spammer, but they often deal with it as quickly as I do.
Throwing in Nostr as a truly decentralized alternative. Instead of relying on federated servers, the messages themselves are signed and relayed for anyone to receive.
it's up to the maintainer of a particular server to moderate what goes on in said server. Now, if the Matrix.org Foundation wants to moderate their servers one way or the other, that's one thing, but to expect the protocol/spec to lay down a content policy is, with all due respect, dumb as hell.
you're free to have your own opinion based on your experiences here, but i wouldn't blame anyone for feeling that way. for the record, i don't think dang or anybody is a transphobe, but i have to imagine the culture here is pretty off-putting to trans people
i don't think it's that "wild". sure, i'm not so cynical as to feel hn's become a nazi bar or anything, but i am willing to recognize that some of the incidents i've witnessed could be reason enough for a trans person to want to avoid this site.
> It's neutral to this topic, it's about tech.
this thread began by xe bringing up failures in moderation affecting trans people
That isn't how it works. The presence of neutral allies doesn't somehow counterbalance and cancel out the transphobia. If a platform allows transphobic users - as Hacker News does because transphobia isn't against the guidelines - and transphobia is common in threads where trans issues or people are a subject (and it is) then it's a hostile platform to trans people.
Asking trans people to ignore this is like asking Jews to be comfortable in a bar where only ten percent of the patrons are Nazis. Arguing that "well not everyone is a Nazi" doesn't help, an attitude of "we're neutral about Nazis, we serve drinks to anyone" still makes it a Nazi bar, just implicitly rather than explicitly.
I'd agree with this logic if we were discussing all kinds of different topics here, and one's stance on gender would be immediately visible to anyone. But I can't remember the last time the matters of gender were discussed here at all, and pretty sure anything openly transphobic would be flagged or deleted pretty soon.
>I'd agree with this logic if we were discussing all kinds of different topics here, and one's stance on gender would be immediately visible to anyone.
We do discuss all kinds of different topics here. Despite what many people here want to believe, Hacker News isn't exclusively for tech and tech-related subjects.
>and pretty sure anything openly transphobic would be flagged or deleted pretty soon.
But not banned, that's the problem. The guidelines are extremely pedantic but nowhere is bigotry, racism, antisemitism or transphobia mentioned as being against those guidelines. You might say that shouldn't be necessary, but it's weird that so much effort is put into tone policing specific edge cases but the closest the guidelines come to defending marginalized groups is "Please don't use Hacker News for political or ideological battle. It tramples curiosity." Transphobia is treated as a mere faux pas on the same par as being too snarky, or tediously repetitive. The real transgression being not the bigotry but "trampling curiosity." Any trans person who posts here knows that bigots who hate them and want to do them harm aren't going to suffer meaningful consequences (especially if they just spin up a green account) and that the culture here isn't that concerned about their safety.
Read the green account just below me. That sort of thing happens all the time. Yes, the comment is [dead] but why should a trans person be comfortable here, or consider themselves welcome, knowing that this is the kind of thing they'll encounter?
I'm not in a position to tell marginalized people how they should feel, but a moderation policy that wouldn't even allow offensive messages by new accounts appear for a short time would make this place into another social media - walled off and tracking their users. I understand the point though.
This is a ludicrous example. Being in the physical presence of somebody who hates you and may want to kill you is quite different than being on a forum with them. Any person who may want to harm a transperson cannot jump through the cables and attack somebody.
I just don't get why anyone is still arguing against age verification tbh. Large social spaces are required by law to do it, whether its discord or matrix or anywhere that allows strangers to interact.
> Practically speaking, that means that people and organisations running a Matrix server with open registration must verify the ages of users in countries which require it. Last summer we announced a series of changes to the terms and conditions of the Matrix.org homeserver instance, to ensure UK-based users are handled in alignment with the UK’s Online Safety Act (OSA).
At least you can self-host matrix and messages are end to end encrypted, unlike IRC.
Practically speaking, I would just ignore this requirement. The UK government has no jurisdiction on this side of the pond.
https://en.wikipedia.org/wiki/Ryanair_Flight_4978
There are a few IRC clients that support OTR. irssi-otr is one [1] weechat-otr is another [2]. Pidgin though I have not used it in a very long time. Hexchat using an always work in progress plugin. There may be others.
OTR could use some updates to include modern ciphers similar to the recent work of OpenSSH but probably good enough for most people.
E2EE aside having chat split up into gazillions of self hosted instances makes it much harder for chat to be hoovered up all in one place. It takes more effort to target each person and that becomes a government scalability issue. Example effort: [3]
[1] - https://github.com/cryptodotis/irssi-otr
[2] - https://github.com/mmb/weechat-otr
[3] - https://archive.ph/4wi5t
Additionally, OTRv3 does not allow multiple clients per account, which makes it unusable for anyone who wants to chat from two devices.
(In my case, all the old broken guitar pedals and vintage computers littering my house have no bearing on the state of my workstations and gigging setup)
Why not provide the URL
Some people cannot access archive.today sites
These sites also serve CAPTCHAs. They block users who prefer not to use Javascript for non-interactive www use, e.g., reading documents
The source URL is at the top of the page as is for every archive.{is|ph|today} snapshot.
Anyway, my main experience of Matrix is "failed to decrypt message". It's... not great. I wish it were better.
Unable to decrypt has improved quite a bit fwiw
In that kind of environment, end to end encryption really doesn't add value.
Even without registering my nick, I would expect a modern protocol to keep my pm communication private by default.
Now you can cryptographically check to who you are talking.
No other party can read your plain text.
You can pick any cryptographic property you like future proofing or deniability, etc.
Becouse IRC is just very nice transport.
And clients can be very easily scrypted to encrypt and display just human readable text.
You can even relay messages to wherever you want, HR lady, video player, anywhere.
Try that with Matrix or Discord ;)
However server side... :) Looked probably twice to hosting Matrix server and Java part was fat no no. And Discord one-click "servers" ? :)
Edit:
Ok, can't find any Java in Matrix servers context... Must be I messed it with Signal server.
- notes left there for work, family organization, etc basically things for which an email is "too much" but a small scrap of text seen by some serve the purpose well
- calls, whether audio-only or audio + video
For social use, I see Lemmy or Nostr/Habla more than Matrix. But for all of this, there's a major lack of a single app that is easy go install-able, pip install-able, or cargo build-able without a gazillion dependencies and a thousand setup problems, to the point that most people just choose Docker, using stuff made by others that they know almost nothing about because setting up and maintaining these solutions is just too complex.
Their solution is for everyone to pay for Matrix with a credit card to verify age. I assume that means there must be a way to force only paid registered accounts to join ones instance? What percentage of the accounts on Discord are paid for with a credit or debit card? Or boosted? I don't keep up with terminology
[0] - https://en.wikipedia.org/wiki/Online_age_verification_in_the...
[1] - https://avpassociation.com/4271-2/
> isn't Matrix based out of the UK and primary hosted instances on AWS in the UK?
It doesn't matter what country you run your server in or where your company is based; if you're providing public signup to a chat server then the countries (UK, AU, NZ etc) which require age verification will object if you don't age verify the users from those countries. (This is why Discord is doing it, despite being US HQ'd). In other words, the fact that The Matrix.org Foundation happens to be UK HQ'd doesn't affect the situation particularly.
(Edit: also, as others have pointed out, Matrix is a protocol, not a service or a product. The Matrix Foundation is effectively a standards body which happens to run the matrix.org server instance, but the jurisdiction that the standards body is incorporated in makes little difference - just like IETF being US-based doesn't mean the Internet is actually controlled by the US govt).
> Their solution is for everyone to pay for Matrix with a credit card to verify age.
Verifying users in affected countries based on owning a credit card is one solution we're proposing; suspect there will be other ways to do so too. However: this would only apply on the matrix.org server instance. Meanwhile, there are 23,306 other servers currently federating with matrix.org (out of a total of 156,055) - and those other servers, if they provide public signup, can figure out how to solve the problem in their own way.
Also, the current plan on the matrix.org server is to only verify users who are in affected countries (as opposed to try to verify the whole userbase as Discord is).
Whether it matters depends very much on what sort of organization you are.
Discord is a multinational for-profit corporation planning an IPO. It takes payments from users in those countries, likely partners with companies in those countries, and likely wants to sell stock to investors in those countries. Every one of those countries has the ability to punish Discord if it does not obey their laws, even if it does not have a physical presence there.
The situation is likely quite different for most of the 23,306 Matrix servers that federate widely. The worst thing Australia, for example could do to one of their operators is make it legally hazardous for them to visit Australia.
It does not actually need to be configured in a federated state and frankly scales better when it's not. The login can be tied to anything or use it's own. From a modern SAML SSO to an old school forum.
You can run one for a few friends and it scales just as well as a private discord for a few friends. Just need persistent storage for media uploads if people are sharing video a lot.
Right, but also the US isn't far behind on the same legislation wave. It's a lot less likely to be US federally regulated in the same way that the EU is debating EU-wide legislation, but a handful of US States have a version of this legislation already on the books and about to be enforced, or considered about to be on the books (some of which like South Carolina's partially passed bill written to be enforceable Day 1 with no grace period).
The US landscape is shifting rapidly on this: https://en.wikipedia.org/wiki/Social_media_age_verification_...
Have any of the Matrix/Element teams seriously considered taking advantage of current events by offering a gamer-focused class of premium account, for Discord refugees who want to redirect their Nitro budgets to fund Matrix gaming features? (Perhaps on a separate homeserver, to avoid the lag during times when matrix.org is overloaded.)
If it were positioned as Patreon-style crowdfunding rather than selling a finished product, and expectations were set appropriately, I wonder if it could end up a nontrivial source of income with which to develop features that Matrix deserves but corporate/government customers won't pay for.
The problem is more that Element team is seriously stretched (particularly after the various misadventures outlined here: https://youtu.be/lkCKhP1jxdk?t=740) - so even if there was a pot of money to (say) merge custom emoji PRs... the team is more than overloaded already with commitments to folks like NATO and the UN. Meanwhile, onboarding new folks and figuring out how to do the Discordy features and launch a separately Discordy app under a Discordy server would also be a major distraction from ensuring Element gets sustainable by selling govtech messaging solutions.
So, we're caught in a catch-22 for now. One solution would be for other projects to build Discordy solutions on top of Matrix (like Cinny or Commet), or fork Element to be more Discordy (and run their own crowdfunders, perhaps in conjunction with The Matrix Foundation). Otherwise, we have to wait for Element to get sustainable via govtech work so it can eventually think about diversifying back into consumer apps.
We need more stuff hosted through obfuscated channels (Tor, I2C, etc) and more user friendly access to those networks.
It was all fun and games while it was a few geeks and early adopters having (mostly) fun. Now it is corporations making billions while destroying the mental health and productivity of their "users".
At some point you don't have a business model if you don't have users.
[0] https://europeannewsroom.com/to-ban-or-not-to-ban-eu-countri...
[1] https://en.wikipedia.org/wiki/Social_media_age_verification_...
This matrix discussion here is missing the point - many people don't want ubiquitous tracking of everything we do on the internet. You and matrix are seemingly not honestly addressing that point, because matrix doesn't seem different discord (in the requirements).
The difference with Discord is that Matrix is a protocol, not a service. It's made up of thousands of servers run by different people in different countries. Public instances may choose to verify users in affected countries to abide by the law; others may choose to run a private instance instead.
tldr, means for American firms to sue due to burdonsome regulations, also some contitution stuff.
Clueless lawmakers will see this app called Element full of kids chatting without restrictions and tell it to add a filter. When the app says "we can't", the government says "sucks to be you, figure it out" and either hands out a fine or blocks the app.
There are distinctions between the community vibe Discord is going for (with things like forums and massive chat rooms with thousands of people) and Matrix (which has a few chatrooms but mostly contains small groups of people). No in-app purchases, hype generation, or kyhrt predatory designs, just the bare basics to get a functional chat app (and even less than that if you go for some clients).
I'd say being based in the UK will put matrix.org and Element users at risk, but with Matrix development being funded mostly by the people behind matrix.org that implies an impact to the larger decentralized network.
More likely, it just won't become popular enough for lawmakers to notice because the UX is a little rough, and people have very little patience for such things anymore.
https://gagadget.com/en/671314-no-more-apk-google-will-block...
The walls are closing in on us all
https://android-developers.googleblog.com/2025/11/android-de...
Seems we lack a nail-on-the-head term for this "but of course you'll still be able to ..." frog-boiling. We collectively fall for it every time!
This won't save Matrix.org if legislators throw stupid at it, of course, but Matrix.org has the opportunity (though maybe not the resources) to engage with UK legislators to ensure they feel respected and that honest efforts are being made to comply.
That might happen here, but I don’t think that principle holds generally. If that were true, wouldn’t every component of the service provider chain be sued for people e.g. downloading pirated or illegal stuff? The government cracks down on e.g. torrent trackers and ISPs, but they haven’t seriously attacked torrent clients or the app stores/OSes that allow users to run those clients. Why not?
Apple and Google have both been forced to take down apps and ISPs block IP ranges all the time. Usually without much of a fight. Apps for reporting ICE, for instance, have been taken down without any clear legal precedent and without much judicial challenge. The entire chain is already being threatened, sued, and censored.
The trick is usually to escape the jurisdiction of countries that care by hosting serves in foreign countries, to host app executables and such off-platform, and maybe adding a CDN like Cloudflare to the mix to protect against getting arrested too easily. For this to work for Matrix, the company developing Matrix needs to leave the UK and move to a place where this age verification bollocks isn't necessary. I don't think that kind of behaviour is good for a company currently financed in large part by government contracts.
This is why to this day torrenting of copyrighted material is alive and well.
I thought it was both and their hosted service is in the UK. Is it not? I know people can host their own but I have had very little success in getting people to host their own things. Most here at HN will not do anything that requires more than their cell phone. Who knows maybe Discords actions will incentivize more people to self host.
You're just talking to the wrong ones :-)
I do hope you are right. Governments have more than enough low hanging fruit to go snatch up and then pat themselves on the back.
Bittorrent actually has fewer real uses than Matrix. The former is useful for Microsoft and others trying to roll out big patches, but the latter is used by NATO, the German Armed Forces, and the French government
Whether or not authorities with jurisdiction over you would notice your instance (homeserver) or bother you about age verification is an issue you'd have to consider for yourself.
The spirit of the law is definitely not against chatting with friends, but it is against the idea of connecting minors with strangers, so while federation is generally not codified (or, IMO, understood well by legislators) and you're probably not going to be bothered by authorities about it, I reckon sooner or later the law will come for federated networks.
(Since we all seem fine just taking some uncertified random third party's word for it that their AI face recognition definitely didn't see a thumb with a face drawn on it, maybe it'd be adequate for Matrix.org to add an "18+ user" flag to the protocol and call it a day?)
I honestly have no idea. As much as they love money I am not paying my lawyers to research AI this one. I would probably wait for others to get made example of.
Matrix is basically labeled "adults only" everywhere, so restricting certain servers/rooms due to possible innocent eyes is likely out of scope.
It went down as well as about you would think it would.
[1] https://www.eff.org/deeplinks/2017/07/australian-pm-calls-en... (2017)
https://www.forbes.com/sites/kionasmith/2018/02/05/indianas-...
I am not aware that the EU pushed legislation onto us here in central Europe with regards to "Age Verification". I am not saying it has not happened (I simply don't know right now), but this needs a source rather than just a statement. From what I remember, local media in german critisized the UK, so it would be strange to see the same legislation suddenly come into effect here.
Also, it seems we did not really win a lot if a private company operates matrix.
That's old news, now is all about "think of the children".
This is too synchronous not to be arranged with the Commission. My vote is on Europol and Palantir lobbying.
France - https://www.lemonde.fr/en/pixels/article/2026/01/31/social-m...
Spain - https://english.elpais.com/technology/2026-02-04/is-16-a-goo...
Denmark - https://edition.cnn.com/2025/10/08/tech/denmark-children-soc...
Portugal - https://www.reuters.com/world/europe/portugal-approves-restr...
Greece/Austria/Finland/Belgium/Italy also discussing.
The best one for me is Portugal, parliament approved this law all while the country is being devastated by hurricane winds and flooding with several calamity zones. They are really bringing Law into effect by maximum obfuscation.
EU anonimity online is over because ivory tower folks want to speedrun all of us into 1984.
And this is obviously just a stepping stone to mass message scanning. The revolution will not be organizable.
If it doesn't have enough of the utility, performance, and positive UX, it will never gain enough market share to matter.
E2EE encryption doesn't matter if you don't have someone else to communicate over it with!
That's part of why billionaires will continue to screw people over. They will try and stay in bed with the familiar evil, rather than put up with the temporary inconvenience of freedom.
And it's a negative spiral. Less users means less money to bring in staff which means less means to improve. Discord didn't become discord in a month, but other competitiors don't get that grace period.
Discord/Twitch/Snapchat age verification bypass - https://news.ycombinator.com/item?id=46982421 - Feb 2026 (435 comments)
Discord faces backlash over age checks after data breach exposed 70k IDs - https://news.ycombinator.com/item?id=46951999 - Feb 2026 (21 comments)
Discord Alternatives, Ranked - https://news.ycombinator.com/item?id=46949564 - Feb 2026 (465 comments)
Discord will require a face scan or ID for full access next month - https://news.ycombinator.com/item?id=46945663 - Feb 2026 (2018 comments)`
They should really rebrand their home server to another name, so the Matrix name is unambiguously referring to the protocol.
That's why Valve is the best chance here, and why I'm not too optimistic. Valve's incentives are to make its own walled garden, which in my eyes defies the idea of linux. But that seems to be the only thing that works these days.
I'm hopeful the experience will improve in the future.
> Finally: we’re painfully aware that none of the Matrix clients available today provide a full drop-in replacement for Discord yet. All the ingredients are there, and the initial goal for the project was always to provide a decentralised, secure, open platform where communities and organisations could communicate together. However, the reality is that the team at Element who originally created Matrix have had to focus on providing deployments for the public sector (see here or here) to be able to pay developers working on Matrix. Some of the key features expected by Discord users have yet to be prioritised (game streaming, push-to-talk, voice channels, custom emoji, extensible presence, richer hierarchical moderation, etc).
>a simpler alternative that preserves core functionality
That's practically a contradiction, sadly. The core features people want are all varied. You'd need 4-5 "simple" apps to replicate them all, but people want all their eggs in one basket.
It would be nice if we could use these digital wallets as a framework for all these things, anonymously.
As I recall it seemed to be just one guy, David Chaum, who did so much to show how so many of these things could work, but the rest of us have somehow managed to do very little with his ideas. What are we missing?
Of course there's always been ways to do this ethically. But the ones up top don't make money from that. And they can spend billions convincing people the only way it works is with whatever makes them money.
Perhaps we should have network traffic report its geographic location so that we can comply easily. Would prefer something in an IP packet so that I can just filter at the firewall. Doesn't even need to be implemented in a sophisticated way at clients. Can just have the urgent flag repurposed to mean "respond only if not geo-locked and unconcerned with regulatory" and then I can drop these directly, and regulated source locations could ensure that packet flags are correctly set at the widest peering location out of the UK and so on.
The right way is to serve a 451 page.
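To make the point concrete, here is a minimal sketch (the function and region set are illustrative, not any real API) of deciding when to serve HTTP 451 "Unavailable For Legal Reasons" (RFC 7725) instead of silently dropping traffic. In practice the country lookup would come from a GeoIP database, usually at the reverse proxy.

```python
# Hedged sketch: choose an HTTP status based on the client's jurisdiction.
# BLOCKED_REGIONS and status_for are hypothetical names for illustration.
BLOCKED_REGIONS = {"GB"}  # placeholder: jurisdictions you decline to serve

def status_for(country_code: str) -> int:
    """Return the HTTP status code to serve a client from country_code."""
    return 451 if country_code in BLOCKED_REGIONS else 200
```

Serving 451 at least tells users (and regulators) explicitly why they're refused, rather than leaving them to guess.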
[0] https://tomsitcafe.com/2025/11/06/private-matrix-hosting-a-s...
> From our perspective, the matrix.org homeserver instance has never been a service aimed at children, which our terms of use reflect by making it clear that users need to be at least 18 years old to use the server. However, the various age-verification laws require stricter forms of age verification measures than a self-declaration. Our Safety team and DPO are evaluating options that preserve your privacy while satisfying the age verification requirements in the jurisdictions where we have users.
Which is actually more strict than Discord's upcoming policy which allows accounts to operate for free without any verification, with some limitations around adult-oriented servers and content.
There has been a lot of FUD about the Discord age verification, so a refresher: The upcoming changes do not actually require you to verify anything to use Discord. It just leaves the account in teen mode by default. This means the account can't join age-restricted channels, can't unblur images marked as sensitive, and incoming message requests from unknown users will go to a second inbox with a warning by default.
You can, of course, run your own Matrix server. Having been there before, I would suggest reading up on some typical experiences in running one of these servers. Unless you have someone willing to spend a lot of time running the server and playing IT person for people using it, it can be a real headache. The Matrix.org announcement also notes that running your own server doesn't actually get around any age requirements:
> Practically speaking, that means that people and organisations running a Matrix server with open registration must verify the ages of users in countries which require it.
> Last summer we announced a series of changes to the terms and conditions of the Matrix.org homeserver instance, to ensure UK-based users are handled in alignment with the UK’s Online Safety Act (OSA). Since then Australia, New Zealand and the EU have introduced similar legislation, with movement in the US and Canada too.
It's becoming increasingly apparent that if you don't use something truly free and open source and host it yourself, you're just setting yourself up for more of this sort of thing.
You can't trust anyone to properly handle the problem of "how the hell do we keep creeps the f*ck away from kids?" with any amount of common sense.
https://telegra.ph/why-not-matrix-08-07
There are even custom message/media types that people use to upload hidden content you can't see even if you're joined to the same channel using a typical client.
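As a rough illustration of how that works (the msgtype and field names below are hypothetical, not taken from any real abuse tooling): Matrix lets events carry arbitrary non-standard msgtypes, and clients that don't recognise a msgtype typically skip rendering it, so media referenced that way never appears in an ordinary client's timeline even for room members.

```python
# Illustrative only: an event with a non-standard msgtype that a typical
# spec-following client would not render.
hidden_event = {
    "type": "m.room.message",
    "content": {
        "msgtype": "com.example.hidden",    # hypothetical custom msgtype
        "url": "mxc://example.org/AbCdEf",  # media a stock client never fetches
    },
}

# msgtypes defined by the Matrix spec that ordinary clients know how to render
SPEC_MSGTYPES = {"m.text", "m.image", "m.file", "m.audio", "m.video", "m.emote", "m.notice"}

def rendered_by_typical_client(event) -> bool:
    """A typical client only renders msgtypes it recognises from the spec."""
    return event["content"]["msgtype"] in SPEC_MSGTYPES
```

The upshot: moderators using stock clients can be blind to content that custom clients in the same room display just fine.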
Edit: It seems I've suddenly been rate-limit banned.
19. "media downloads are unauthenticated by default" -> fixed in Jun 2024: https://matrix.org/blog/2024/06/26/sunsetting-unauthenticate...
20. "ask someone else’s homeserver to replicate media" -> also fixed by authenticated media
21. "media uploads are unverified by default" - for E2EE this is very much a feature; running file transfers through an antivirus scanner would break E2EE. (Some enterprisey clients like Element Pro do offer scanning at download, but you typically wouldn't want to do it at upload, given that by the time people download, the AV definitions might be stale.) For non-encrypted media, content can be and is scanned on upload - e.g. by https://github.com/matrix-org/synapse-spamcheck-badlist
22. "all it takes is for one of your users to request media from an undesirable room for your homeserver to also serve up copies of it" - yes, this is true. similarly, if you host an IMAP server for your friends, and one of them gets spammed with illegal content, it unfortunately becomes your problem.
In terms of "invisible events in rooms can somehow download abusive content onto servers and clients" - I'm not aware of how that would work. Clients obviously download media when users try to view it; if the event is invisible then the client won't try to render it and won't try to download the media.
Nowadays many clients hide media in public rooms, so you have to manually click on the blurhash to download the file to your server anyway.
Custom clients that do support uploading/viewing of the non-standard events. It's a known vector for sharing CSAM in channels.
And even if I was able to register, that "automated system" still randomly bans people whenever it feels like it. Search the r/discordapp subreddit or just google "discord random ban", it's a widespread problem with no solution and I have no idea how so many other people seem to have no issues, yet at the same time you can find lots of people just as frustrated as me.
A bug blocking functionality is an annoyance, but a Scarlet Letter branded onto a secret dossier is terrifying.
Unless we come up with a global, double-blind cryptographic government ID system, this is the reality of the internet.
There should be no need for a phone number, nor do I want to waste my time trying to bypass it with internet-provided single-use numbers.
If it is a service I must use, then I will provide a phone number. If it is a service I get to choose to use, then I will never provide a number.
Even times when I've given up and put in a real phone number that has never been used with Discord, it still just bans me immediately after verifying, so they basically just stole the number.
So… choose your poison? I’m sure Matrix/Element works for someone or they would be out of business, but it does not work for me.
Apparently my monopoly ISP rotates IPs fairly often and I am sharing them with people that have been doing bad things with them, so not only are many Matrix channels blocked but even large regular websites like etsy or locals are completely blocked for me as well. Anything with a CF captcha is also an infinite loop.
https://news.ycombinator.com/item?id=46982421
https://tech.yahoo.com/social-media/articles/now-bypass-disc...
The 3D model method might work on Persona, but that demo only shows it fooling K-IDs classifier.
https://piunikaweb.com/2026/02/12/discord-uk-age-verificatio...
Moderation and centralization, while typically not independent, aren't necessarily dependent. One can imagine one person viewing content with one set of moderation actions applied, and another person viewing the same content with a different set.
We sort of have this in HN already with viewing flagged content. It's essentially using an empty set for mod actions.
I believe it's technically viable to syndicate mod actions, and that possibly solves the moderation labor problem, but whether it's a socially viable way to build a network is another question.
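The idea above can be sketched in a few lines (all names here are illustrative, not any real system): the same content stream, viewed through different, possibly syndicated, sets of moderation actions, where an empty set of actions corresponds to HN's "show flagged content" view.

```python
# Sketch of "moderation as a client-side filter": each viewer picks which
# removal list (their own, or a syndicated one) to apply to the same posts.
posts = [
    {"id": 1, "text": "hello"},
    {"id": 2, "text": "spam spam"},
    {"id": 3, "text": "interesting link"},
]

modlist_a = {2}      # one moderator's removals
modlist_b = {2, 3}   # a stricter syndicated list

def view(posts, removals):
    """Return the posts visible under a given set of moderation actions."""
    return [p for p in posts if p["id"] not in removals]
```

Subscribing to someone else's removal list is what spreads the moderation labor; choosing `set()` opts out of moderation entirely.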
It seems far too risky to sign up on a service for the purpose of intercommunication that is able (or even likely) to burn bridges with another for any reason at any time. In the end people will just accumulate on 2 or 3 big providers and then you have pseudo-federation anyway.
However all the LGBT+ friendly servers federate with each other and that's good enough for me. I like not having to see toxicity, there's too much of it in the world already.
But I have not seen that outside the scope of Germany.
I don't know Pleroma though. I've always hated Twitter for its short-form content (I feel like it stimulates stupid nonsense like "look at my run today" and "I just had dinner", and discourages actually interesting content). So I was never into Twitter clones either. I do use Lemmy more, although it has its own specific attitude issues around its developers (tankies).
Pleroma is Mastodon server software. For a bunch of essentially random reasons, it was popular among right-wingers setting up their Mastodon instances, and some servers responded by blocking any Mastodon server running it outright. A subset of those would also block any server not blocking Pleroma like they do.
Yeah. In practice, Fediverse servers have formed clusters based on shared values. And since the second-largest cluster is (iirc) a Japanese CSAM distribution network, everyone is very glad that this sort of de facto censorship is possible. Do you have a viable alternative?
"People will just accumulate on 2 or 3 big providers" is far from an inevitable circumstance, but there are conditions that make it more likely. That, too, is largely down to negligence or malice (but less so than the abusive communications problem).
Is that still true? As the admin of a small instance, I find the abuse coming from mastodon.social has been really low for a few years. There is the occasional spammer, but they often deal with it as quickly as I do.
https://news.ycombinator.com/item?id=36231993
> It's neutral to this topic, it's about tech.
this thread began by xe bringing up failures in moderation affecting trans people
Asking trans people to ignore this is like asking Jews to be comfortable in a bar where only ten percent of the patrons are Nazis. Arguing that "well not everyone is a Nazi" doesn't help, an attitude of "we're neutral about Nazis, we serve drinks to anyone" still makes it a Nazi bar, just implicitly rather than explicitly.
We do discuss all kinds of different topics here. Despite what many people here want to believe, Hacker News isn't exclusively for tech and tech-related subjects.
>and pretty sure anything openly transphobic would be flagged or deleted pretty soon.
But not banned, that's the problem. The guidelines are extremely pedantic, but nowhere is bigotry, racism, antisemitism or transphobia mentioned as being against those guidelines. You might say that shouldn't be necessary, but it's weird that so much effort is put into tone policing specific edge cases while the closest the guidelines come to defending marginalized groups is "Please don't use Hacker News for political or ideological battle. It tramples curiosity." Transphobia is treated as a mere faux pas, on a par with being too snarky or tediously repetitive. The real transgression being not the bigotry but "trampling curiosity." Any trans person who posts here knows that bigots who hate them and want to do them harm aren't going to suffer meaningful consequences (especially if they just spin up a green account), and that the culture here isn't that concerned about their safety.
Read the green account just below me. That sort of thing happens all the time. Yes, the comment is [dead] but why should a trans person be comfortable here, or consider themselves welcome, knowing that this is the kind of thing they'll encounter?
No thanks, there are other services I can use.