Children are just too effective a tool when building a surveillance state. We should have banned children from owning open computers a long time ago, just as we do with alcohol, driving licenses, etc.
Instead children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal. We already heavily restrict the freedom of children, so there is plenty of precedent for this. Optionally, we could provide service points to unlock devices when they turn 18, to avoid e-waste as well.
This way you provide your ID at the point of sale, instead of attaching it to the device itself and sending it out to every single SaaS on the planet to do with as they wish.
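For illustration, a minimal sketch of how such a flag could travel, assuming a hypothetical `X-Underage` header that the locked-down OS always attaches (the header name and mechanism are invented for this example, not any real standard):

```python
# Hypothetical illustration only: a locked-down device whose HTTP stack
# always attaches an "underage" flag, set once at the point of sale.
# Neither the header name nor the mechanism is any real standard.
import urllib.request

DEVICE_UNDERAGE_FLAG = True  # burned in at point of sale, cleared at 18

def fetch(url: str) -> bytes:
    req = urllib.request.Request(url)
    if DEVICE_UNDERAGE_FLAG:
        # Services receive only a boolean, never an ID document.
        req.add_header("X-Underage", "1")
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The service never learns who you are, only that the device was sold as a child's device.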
TikTok has a drug-like effect on the brain. Multiple studies show a clear link between excessive TikTok engagement and increased levels of anxiety, depression, and stress. Maybe it is time we regulate it like a drug?
This might be off-topic but on-topic for child safety... I'm surprised people are being myopic about age verification. Age verification should be banned, but people ignore that nowadays most widely used online services already ask for your age and act accordingly: twitter, youtube, google in general, any online marketplace. They already have so much data on their users and optimize their algorithms for those groups in an opaque way.
So yeah, age verification should be taken down, as well as the data mining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.
There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
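For a flavour of the mechanics, here is a toy sketch using a plain Ed25519 signature; real verifiable-credential schemes (e.g. W3C VCs with BBS+ signatures) add selective disclosure and unlinkability, which this deliberately omits:

```python
# Toy sketch of an "is old enough" credential. Real VC schemes add
# selective disclosure and unlinkability this plain signature lacks.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An issuer who has already checked your ID signs only the bare claim.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

claim = b"over18=true"               # no name, no birthdate, no ID number
credential = issuer_key.sign(claim)  # handed to the user once

# A social media service verifies the claim and learns nothing else.
issuer_pub.verify(credential, claim)  # raises InvalidSignature if forged
print("age check passed; the service saw only a boolean")
```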
Monitoring children's DMs is the responsibility of the parents, not megacorps. If a parent wants to install a keylogger or screen recorder on their child's PC, that's their decision. But Google should not be able to. Neither should... literally anyone else except maybe an employer on a work-provided device.
> Monitoring children's DMs is the responsibility of the parents, not megacorps
Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question and make do with megacorps not being responsible. This means: "we'll allow megacorps to be as they are and not take any responsibility for the effects they cause to society". Instead of them taking responsibility, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).
I'd say that at minimum social networks need to be required to show how their algorithm works and allow users control over their data. They must be able to know why a piece of content was served to them. Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests, that this is the bare minimum for a free society.
Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.
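As a purely hypothetical illustration of what "modify the algorithm" could mean in practice: put the feed's ranking weights in user-editable settings instead of an opaque server-side model (the names and weights here are made up):

```python
# Hypothetical user-tunable feed: ranking weights live in settings
# the user can edit, rather than in an opaque server-side model.
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float    # likes/shares, normalized to 0..1
    recency: float       # 1.0 = just posted
    from_followed: bool

# "Just show me people I follow, newest first" is a weights edit away.
user_weights = {"engagement": 0.0, "recency": 0.5, "followed": 1.0}

def score(p: Post, w: dict) -> float:
    return (w["engagement"] * p.engagement
            + w["recency"] * p.recency
            + w["followed"] * float(p.from_followed))

feed = [Post(0.9, 0.2, False), Post(0.1, 0.9, True)]
feed.sort(key=lambda p: score(p, user_weights), reverse=True)
```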
They should have a responsibility of transparency, accountability and empathy towards users. They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.
Why? Plenty of children benefit from talking to other people. Some children need careful monitoring, and some children shouldn't be allowed to use DMs, but it's not universal and should be up to the parents.
You say that like the typical 18-year-old has any idea what they're doing when it comes to proper encryption and communication safety. That is never going to be the case.
It's a communication channel attached to the most popular social network for young people. Obviously they're going to use it a lot. They use it for the extreme convenience.
I feel like this makes sense for a platform that targets teens. Plus, I wouldn't trust TikTok to implement E2E encryption properly—who knows what they've snuck into their client.
What kind of application is not targeted at both teens and adults?
Youtube, twitter, bluesky, whatsapp? Every app with a social aspect will be used by teens. And no, tiktok is not "only for teens" or "specially targeted at teens", nowadays everyone uses it and creates content on it.
I think it's very safe to assume that no major US-based platform has 'real' E2E encryption. They're almost certainly all part of PRISM by now, and real E2EE would contradict their obligations to enable government surveillance. So the only thing that's different here is not lying about it. Though I expect the other platforms are, as when denying they were part of PRISM, telling half-truths and just being intentionally misleading: 'We provide complete E2E encryption [using deterministically generated keys which can be recreated on demand].'
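To spell out why that bracketed clause guts the guarantee, here's a sketch of "E2EE" where the key is derived deterministically from material the provider retains; illustrative only, not any platform's confirmed design:

```python
# Sketch of "E2EE" that isn't: if the key derives deterministically
# from a seed the provider retains, the provider can re-derive it on
# demand. Illustrative only, not any platform's confirmed design.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

server_held_seed = b"provisioned at account creation"  # provider keeps this

def derive_user_key(user_id: str) -> bytes:
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=user_id.encode(),
    ).derive(server_held_seed)

# The client encrypts with this key... and so can anyone with the seed.
assert derive_user_key("alice") == derive_user_key("alice")
```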
Aside from the fact that you can get metadata, and that some communication frequently happens outside of E2EE - what US law do you believe mandates moderation? I'm curious.
Fun fact - there is a big correlation between World Wars and compulsory education. Of course governments and big corporations "care" about children. Of course!
Reminder: Larry "citizens shouldn't get any privacy" Ellison now owns TikTok. If you're still using it, or have friends and family using it, you should stop immediately. It WILL eventually be used against you if this regime gets its way.
As if. If people haven't stopped using TikTok despite all of the other reasons for stopping, then Ellison is damn sure not going to move the needle.
Obviously carrier pigeons carrying messages encrypted with post-quantum ciphers, where keys have been sent ahead of time using USPS, because no one would be so rude as to read someone else's mail.
TikTok’s stance against end-to-end encryption is unsurprising but still concerning. TikTok is a source of information on many topics, such as the genocide in Gaza, which traditional media underreport and many governments try to suppress. The network effect of big social media platforms means many people will likely talk about these topics in TikTok DMs. No matter what legal controls TikTok claims to enforce, there is no substitute for technological barriers to prevent invasions of privacy and government overreach. This is yet another example where corporations and governments sacrifice people’s autonomy and privacy in the name of security.
It's a pretty terrifying world we live in now, where an unencrypted, addictive short-form video platform is considered more of a source of information than news agencies or even community-managed forums.
"The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk."
And yet, it's even more complex than that, since it's now owned by cronies of the current US President. I've never had a TikTok account, but conceptually I was mostly pretty okay with being spied upon by China. I'm never going to China.
Yes. China gives a shit that user rdiddly, at 36 minutes before 00:55 UTC on March 4, 2026, said that China is spying, to the point that they are going to be abducted for it.
> Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritising 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite
It is controversial... amongst people who have concerns about private communications and society, from a regulatory and governance perspective.
It's uncontroversial amongst people who value their privacy.
The tension between the two camps (there are obviously nuances and this is a false dichotomy) is at a current peak. It's an ongoing controversy. It's a matter of public debate.
You might have liked it better if the angle had been "...which the government, controversially, wants to clamp down on" or something.
I wondered how it could be considered 'controversial', but they do quote at least a couple groups speaking against it. The NSPCC for instance, who incidentally also warned parents about a Harry Potter video game because their children might want to learn more about the game:
>“Parents should also be aware that players may want to find out more about the game using other platforms such as YouTube, Twitch, Reddit and Discord, where other game fans can discuss strategies and experiences.”
Calling something controversial is a favorite propaganda technique employed by "news" outlets. It's another form of selective reporting and framing. It carries negative connotations, and really has no objective standard by which it can be wrong, since you'll always find somebody against any issue.
The UK government seems a lot more willing to embrace the panopticon in the name of protecting people from terrorists, child sex traffickers, human rights activists, Catholics, jaywalkers, you name it.
The core tension here isn’t really about encryption itself, it’s about moderation models.
Most large platforms rely heavily on server-side visibility for abuse detection, spam filtering, recommendation systems, and safety tooling. End-to-end encryption removes that visibility by design. Once a platform is built around centralized analysis of user content, adding strong E2EE later isn’t just a feature toggle — it conflicts with large parts of the existing architecture.
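To make the architectural conflict concrete, a minimal sketch (not any platform's actual pipeline) of the same server-side safety hook before and after E2EE:

```python
# Minimal sketch, not any platform's actual pipeline: the same
# server-side safety hook fed plaintext vs. E2EE ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BANNED = {"spam-link.example"}

def server_side_scan(message: bytes) -> bool:
    # Only works because the server can read the plaintext.
    return any(term.encode() in message for term in BANNED)

# Without E2EE the server sees plaintext and can moderate.
print(server_side_scan(b"click spam-link.example now"))   # True

# With E2EE only the endpoints hold the key; the hook receives
# ciphertext and the check degrades to noise.
key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
ct = AESGCM(key).encrypt(nonce, b"click spam-link.example now", None)
print(server_side_scan(ct))                               # False
```

Everything downstream of that hook (spam models, abuse classifiers, recommendation signals) breaks the same way, which is why retrofitting E2EE is an architectural change, not a toggle.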
> Instead children would own special devices that are locked down and tagged with an "underage" flag [...] you provide your ID at the point of sale, instead of attaching it to the device itself and sending it out to every single SaaS on the planet.
ID please.
Seems entirely reasonable.
Possibly entirely ineffective, but then again I don't often see children walking around with a bottle of booze.
> Age verification should be taken down, as well as the data mining these companies do [...] people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.
Hogwash.
Where are these mythical people who aren’t concerned with both?
Why?
> There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
This is the next two steps into 1984.
Once you start mandating this, there's no going back.
The next generation will start associating wrongthink with government IDs. (Wait, we already do that, right?)
> But what responsibilities do megacorps have?
Clear, simple, direct: Whatever was required of The Bell Telephone Company and nothing more.
> Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech.
Hypothetically speaking: What if it's a neural network in which each user has his/her own unique weights which are undergoing frequent retraining?
Would it not be an undue burden to necessitate the release of the weights every time they change?
Also, what value would the weights have? We haven't yet hit the point of having neural networks with interpretability.
Wouldn't enforcing algorithmic interpretability additionally be an undue burden?
> They must be able to know why a piece of content was served to them.
What if the authors of the code are unable to tell you why?
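The retraining objection is easy to make concrete. A toy sketch, assuming per-user logistic-regression weights nudged on every interaction (the setup is invented for illustration): any published snapshot is stale within seconds, and the raw numbers carry no human-readable "why":

```python
# Toy sketch: one user's unique weights, retrained on each interaction.
# A published snapshot is stale almost immediately, and the numbers
# themselves carry no human-readable explanation.
import numpy as np

rng = np.random.default_rng(0)
user_weights = rng.normal(size=8)      # this user's current weights

def online_update(w, features, clicked, lr=0.1):
    # One SGD step of logistic regression per interaction.
    pred = 1.0 / (1.0 + np.exp(-w @ features))
    return w + lr * (clicked - pred) * features

for _ in range(1000):                  # a few minutes of scrolling
    user_weights = online_update(user_weights,
                                 rng.normal(size=8),
                                 rng.integers(0, 2))
print(user_weights)  # which of these eight numbers is the "reason"?
```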
It's a good thing those human operators couldn't listen in to whichever conversation they wanted.
Fake and scam ads. They literally profit from those ads. When an ad distributes malware or runs a scam, they don't take any responsibility.
> They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.
That said, these platforms are making it impossible for parents to monitor anything. They're literally designed to profit off addiction in children.
It’s ok for a platform to not feature private conversations. They should just have no DM feature at all, then; make all messages publicly visible.
Private conversations are indeed not for all ages. Parents should be able to grant access to that on individual basis.
I'm mindful that it's less secure than other apps, but for a lot of chats it doesn't matter.
> It's a communication channel attached to the most popular social network for young people.
And in a perfect world it essentially shouldn't have to be, at least inside expensive walled-garden app stores.
> And no, tiktok is not "only for teens" or "specially targeted at teens", nowadays everyone uses it and creates content on it.
If you run (say) a restaurant, you get big spikes in business from TikTok videos in ways you don't get from Facebook or Instagram or others.
TikTok is the platform everyone is on right now.
You can’t moderate an E2EE platform.
https://digitaldemocracynow.org/2025/03/22/the-troubling-imp...
It really depends on whether you think your government is more dangerous than, say, suicide trends, grooming, or scamming.
I know that question is pretty easy for US citizens to answer right now.
> I'm never going to China.
China will come to us.
Or should that be:
China will come to the US.
Voluntarily.
Means they read every message
>“Parents should also be aware that players may want to find out more about the game using other platforms such as YouTube, Twitch, Reddit and Discord, where other game fans can discuss strategies and experiences.”
After you notice it, you'll notice it everywhere.