> Attaullah Baig, who served as head of security for WhatsApp from 2021 to 2025, claims that approximately 1,500 engineers had unrestricted access to user data without proper oversight, potentially violating a US government order that imposed a $5bn penalty on the company in 2020.
If it results in a new billion-dollar penalty, maybe it would've saved money to move him quietly to a cushy rest-and-vest advisory position, in which he's not allowed to see, do, or say anything.
> In his whistleblower complaint, Baig is requesting reinstatement, [...]
I don't understand the "reinstatement" part. Does he actually want to go back, and think that it wouldn't be a toxic dynamic?
(He already talked about retaliation. And then by going public the way he did, I'd think he burned that bridge, salted the earth for a mile around the bridge, and then nuked the entire metro area from orbit.)
Or is "reinstatement" simply something the lawyers just have to ask for, to ostensibly make him whole, but they actually neither want nor expect that?
> Or is "reinstatement" simply something the lawyers just have to ask for, to ostensibly make him whole, but they actually neither want nor expect that?
“Reinstatement” is usually a legal formality in whistleblower cases: lawyers ask for it because the law says the remedy for retaliation is to make the employee whole, and it strengthens the case even if nobody expects it to happen. In reality, returning to the job is almost never feasible, so the request mostly serves as leverage for a financial settlement.
That's rather surprising about the accessing user data bit. When I was at Meta, the quickest way to get fired as an engineer was to access user data/accounts without permission or business reason. Everything was logged/audited down to the database level. Can't imagine that changing and the rules are taught very early on in the onboarding/bootcamp process.
I haven't touched a lot of the cybersecurity parts of the industry, especially policies, for a while…
… but I do recall that auditing was a stronger motivator than prevention. There were policies around checking the audit logs, not being able to alter audit logs, and ensuring that nobody really knew exactly what was audited. (Except for a handful of individuals, of course.)
I could be wrong, but "observe and report" felt like the strongest security guarantee available inside the policies we followed (PCI-DSS Tier 1), and prevention was a nice-to-have on top.
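To make the "observe and report" model concrete, here's a toy sketch (all names and policy details invented) of a data store where reads are never blocked, only recorded in a hash-chained, append-only log - the kind of design where audit is the guarantee and prevention is the nice-to-have:

    import hashlib
    import json
    import time

    class AuditedStore:
        # Toy key-value store: every read is appended to a tamper-evident log.
        def __init__(self):
            self._data = {}
            self._log = []              # append-only; real systems ship it off-host
            self._prev_hash = "0" * 64

        def _append(self, entry):
            entry["prev"] = self._prev_hash           # chain entries together
            blob = json.dumps(entry, sort_keys=True).encode()
            self._prev_hash = hashlib.sha256(blob).hexdigest()
            self._log.append(entry)                   # silent edits break the chain

        def get(self, actor, key, reason):
            # Access is allowed, but never anonymous: who, what, when, why.
            self._append({"ts": time.time(), "actor": actor,
                          "key": key, "reason": reason})
            return self._data.get(key)

    store = AuditedStore()
    store._data["user:42:address"] = "1 Example St"
    store.get("engineer_a", "user:42:address", reason="debugging ticket 1234")

Note the obvious gap the thread is arguing about: nothing here stops the read, and the log only deters people who expect someone to actually look at it.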
As a customer I'm angry that businesses get to use "hope and pray" as their primary data protection measure without being forced to disclose it. "Motivators" only work on people who value their job more than the data they can access, and I don't believe there's any organization on this planet where this is true for 100% of the employees, 100% of the time.
That strategy doesn't help a victim who's being stalked by an employee, who can use your system to find their new home address. They often don't care if they get fired (or worse), so the motivator doesn't work because they aren't behaving rationally to begin with.
Everything is logged, but no one really cares, and the "business reasons" are many and extremely generic.
That being said, maybe I'm dumb but I guess I don't see the huge risk here? I could certainly believe that 1500 employees had basically complete access with little oversight (logging and not caring isn't oversight imo). But how is that a safety risk to users? User information is often very important in the day to day work of certain engineering orgs (esp. the large number of eng who are fixing things based off user reports). So that access exists, what's the security risk? That employees will abuse that access? That's always going to be possible I think?
To the extent a random person's evidence on the Internet amounts to proof:
From people at Facebook circa 2018, I know that end user privacy was addressed at multiple checkpoints -- onboarding, the UI of all systems that could theoretically access PII, war stories about senior people being fired due to them marginally misunderstanding the policy, etc.
Note that these friends did not belong to WhatsApp, which was at that time a rather separate suborg.
Whatever Meta says publicly about this topic, and whatever its internal policies may be, is directly contradicted by its behavior. So any attempt to excuse this is nothing but virtue signalling and marketing.
The privacy violations and complete disregard for user data are too numerous to mention. There's a Wikipedia article that summarizes the ones we publicly know about.
Based on incentives alone, when the company's primary business model is exploiting user data, it's easy to see these events as simple side effects. When the CEO considers users of his products to be "dumb fucks", that culture can only permeate throughout the companies he runs.
There's a meaningful difference between a company wanting to exploit user data to enrich itself and allowing employees to engage in voyeurism. The latter doesn't make the company money, and therefore can be penalised at no cost.
Your comment talks about incentives, but you haven’t actually made a rational argument tying actual incentives to behaviour.
Given how WhatsApp is the de-facto way to communicate outside of the West and China, these security/data-handling "weaknesses" are most likely a feature, not a bug. An absolute bonanza for certain intelligence services.
Remember, kids: End to end encryption is useless if the "ends" are fully controlled by an (untrustworthy) third party.
Yeah, huge in Latin America, in the sense that a lot of (most?) businesses only have a number that they use with WhatsApp (you can't call or even text them). Is it the same in Europe? Since I am from Latin America, I never know if people from other continents use WhatsApp as much as we do, and whether when I ask them to use WhatsApp I am imposing a new app or it's what they regularly use.
No. Here in Germany WhatsApp is not even that widespread for businesses. But WA is very big here for personal communication, though Signal comes in second (at least amongst older people, and amongst my circle)
It definitely doesn't have the whole world's messaging market. For instance, in Japan and many places in SEA, Line is the standard messenger - one many people have probably never even heard of. Though it does have a nice play on words - are you on Line?
It's not uncommon. Orkut back in the day was wildly popular in Latin America and India. WhatsApp is the same. I think users in NA have a lot of high-quality options, whereas those in Asia and LatAm don't have many reliable options other than ones developed in NA.
You can get an android phone for about one tenth of what a new iPhone costs. That’s why android dominates lower income markets. Apple decided they just don’t want to be there.
I'm not sure that's true. I'm fairly certain that in the UK, France, AU, and Canada, WhatsApp is not vastly more popular than the blue-bubble alternative. At least I believe this was the case a few years ago, based on data I'd seen.
> Blue bubble isn't really a thing ever mentioned in France either, not enough iPhone market share.
Nobody uses iMessage. People with iPhone use WhatsApp too.
The user experience of iMessage used to be subpar and now everyone has WhatsApp installed anyway, the feature set is the same and it works on all phone brands so nobody feels like switching.
I guess that it’s the iPhone’s messenger app? I heard that in that app, fellow iOS users have blue bubble messages and Android / other users have green bubble messages, and all the teens in the US /maybe Canada think it’s lame if you don’t have blue bubbles.
I can't tell if I'm being paranoid or just realistic, when I suspect that FBI/Apple fights over decrypting/unlocking iPhones or iMessage are just part of Apple's security theater.
If I were Evil-Tim-Cook, I'd have a deal with the FBI (and other agencies) where I'd hand over some user's data, in return for them keeping that secret and occasionally very publicly taking Apple to court demanding they expose a specific user and intentionally losing - to bolster Apple's privacy reputation.
> If I were Evil-Tim-Cook, I'd have a deal with the FBI (and other agencies) where I'd hand over some user's data, in return for them keeping that secret and occasionally very publicly taking Apple to court demanding they expose a specific user and intentionally losing - to bolster Apple's privacy reputation.
The FBI wants its investigations to go to court and lead to convictions. Any evidence gained in this way would be exposed as coming from Apple; notwithstanding parallel construction:
* https://en.wikipedia.org/wiki/Parallel_construction
As for other agencies, I'm sure many have exploits to attack these devices and get spyware on them, and so may not need Apple's assistance.
It's possible for it to be a facade, but also real.
Apple is a part of PRISM so there's approximately a 100% chance that anything you send to Apple via message, cloud, or whatever else, gets sent onto the NSA and consequently any agency that wants it. But the entire mass data collection they are doing is probably unconstitutional and thus illegal. But anytime it gets challenged in courts it gets thrown out on a lack of standing - nobody can prove it was used against them, so they don't have the legal standing to sue.
And the reason is that its usage is never acknowledged in court. Instead there is parallel construction. [1] For instance, imagine the NSA finds out somebody is e.g. muling some drugs. They tip off the police, and then the police find the car in question and create some reason to pull it over - perhaps it was 'driving recklessly.' They coincidentally find the cache of drugs after doing a search of the car because the driver was 'behaving erratically', and then this 'coincidence' is how the evidence is introduced into court.
----
So getting back to Apple: they probably want to have their cake and eat it too. By giving the NSA et al. all they want behind the scenes they maintain those positive relations (and compensatory $$$ from the government), but then by genuinely fighting its normalization (which would allow it to be directly introduced) in court, they implicitly lie to their users that they're keeping their data protected. So it's this sort of strange thing where it's a facade, but simultaneously also real.
[1] - https://en.wikipedia.org/wiki/Parallel_construction
> the entire mass data collection they are doing is probably unconstitutional and thus illegal. But anytime it gets challenged in courts it gets thrown out on a lack of standing
It's kind of wild that this is the part of the deep state MAGA just forgot about.
Maybe. I think they'd have a hard time keeping that under wraps—governments aren't typically very careful (and the FBI is about as careful as a bull in a china shop) about not showing their hand when it comes to charging people. If you're strict about keeping certain info on certain channels, smart observers would notice if someone were snooping.
For instance, if someone shared something incriminating in a group chat and got arrested, and that info was only shared in the group chat, they'd have to silence everyone in that group chat to ensure that the channel still seemed secure. I don't think at least our government is that competent or careful.
But also, people wayyyy overhype how much apple tries to come off as privacy-forward. They sell ads and don't even allow you to deny apps access to the internet, and for the most part their phone security seems more focused on denying you control over your own phone rather than denying a third party access to it. I think they just don't want the hassle of complying with warrants. Stuff like pegasus would only be so easy to sell if you couldn't lean on the company to gain access, and I think it'd be difficult for hundreds of countries to conspire to obscure legal pressure. Finally Apple generally has little to gain from reading your data, unlike other tech giants with perverse incentives.
Of course this is all speculation, but I do trust imessages much more than I trust anything coming out of meta, and most of what comes out of google.
> For instance, if someone shared something incriminating in a group chat and got arrested, and that info was only shared in the group chat, they'd have to silence everyone in that group chat to ensure that the channel still seemed secure.
Corrupt investigators can use parallel construction to pretend that the key breakthrough in the case was actually something legal.
iMessage backups in the cloud are subject to warrants. Even if you don't use iCloud backups, can you be sure everyone you communicate with also abstains?
right, the ability to recover implies keys exist outside the device. even if they gossip keys to other devices you control, there are lots of people with only a single apple device.
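A minimal sketch of why "recovery" implies another key holder (illustrative only; real iCloud key management is more involved and not public in this form - the key names below are made up):

    # pip install cryptography
    from cryptography.fernet import Fernet

    data_key = Fernet.generate_key()     # the key that actually protects the backup

    device_kek = Fernet.generate_key()   # lives only on your phone
    escrow_kek = Fernet.generate_key()   # held by the provider or a recovery contact

    wrapped_for_device = Fernet(device_kek).encrypt(data_key)
    wrapped_for_escrow = Fernet(escrow_kek).encrypt(data_key)

    # Recovery path: the device is gone, but whoever holds escrow_kek
    # can still unwrap the data key -- and so can anyone who compels them.
    assert Fernet(escrow_kek).decrypt(wrapped_for_escrow) == data_key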
I think Signal is the safest choice. If you want to be absolutely sure, host your own service, and hope you know how to make it have airtight security.
Makes you wonder if Meta got one or more of those secret national security letters, or foreign equivalents.
Also makes me wonder about Google's change wrt Android security patches - the move to quarterly, under the guise of "making it easier for OEMs", is actually just so that Paragon and other nation-state spyware has access to the vulnerabilities for at least 4 months before they get patched.
"He also claimed the company failed to remedy the hacking and takeover of more than 100,000 accounts each day, ignoring his pleas and proposed fixes and choosing instead to prioritize user growth."
There is no oversight of these monstrosities of any sort. I doubt anyone would have issues with the thesis that Meta would implement anything that might curb their user numbers unless it was mandated.
Why would they? They are beholden to their shareholders first. If it isn't illegal then it isn't illegal, immoral perhaps but that is not illegal, unless it is illegal.
My learned friends are going to have to really get their bowling arms warmed up for this sort of skit. For starters, you need a victim ... err complainant.
Didn't Hacker News feature an article on their home page at some point (10 years ago?) that at that time Facebook misconfigured something and users could observe their data being fed directly to some Israeli intelligence company? That was the day I deleted my FB account and never looked at anything they offer anymore.
At this point it’s best to assume that everything you communicate is being collected in some way.
There are very, very few apps I really trust. E.g. the only mechanism I trust for communicating passwords securely is GPG, I wouldn’t even use Signal for that.
Unless you're the owner of the app and know exactly what they are doing, you can't trust anyone. You don't know what they are going through, or whether they sold the app to someone, or had a certain code implementation that leaks all of your data.
I stopped using Chrome when I had clear evidence of it leaking data - urls visited.
1) leave quietly and tell no one: con - no one on HN gets to talk about it. The next person needing money does it anyway.
2) leave loudly when you're still poor: con - you get blacklisted from tech and die from a preventable disease working at a gas station without insurance. The company implements the policy anyway.
3) leave loudly when you're rich: con - people accuse you of selling out the users.
4) Don't join Meta in the first place
I have consistently told recruiters from Meta to leave me alone. It is a company that has knowingly done massive harm to our culture and our children, and I have no interest in ever working with or for them.
Unsurprising, given it's been an open secret for over a decade that Meta employees will (if you have the right contacts or amount of money) orchestrate the banning or seizing of long-standing active accounts with desirable usernames and give them to their friends or the highest bidder.
A related scheme is the existence of brokers who will, for a fee, recover banned or locked accounts. User pays the broker $X, broker pays their contact at Meta $Y, and using internal tooling suddenly a ban or suspension that would normally put someone in an endless loop of automated vague bullshit responses gets restored.
If you haven't already: Signal is the strongest independent e2e encrypted consumer app that is driven by a non-profit organisation using a zero knowledge approach.
When it comes to e2e encryption it's important for the ends to be static (not web apps) and auditable (open source, reproducible builds), because the software running on the ends can trivially compromise anything going through either of them. It can be as simple as a script being loaded from the server into a runtime such as Lua (closed source app). Or custom javascript delivered (web app).
When these conditions aren't met, any e2e encryption claim can be dismissed out of hand. This does not mean the service offers no value, it just means it cannot be trusted to keep anything confidential.
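A toy illustration of the point (hypothetical client, not any real app's code): whatever code the vendor delivers to the endpoint runs before encryption, so it sees the plaintext no matter how good the cipher is.

    def encrypt(plaintext, peer_key):      # stand-in for a real E2E layer
        return f"enc[{peer_key}]({plaintext})"

    def upload(ciphertext):
        print("server sees only:", ciphertext)

    def send(plaintext, peer_key, plugins):
        # 'plugins' stands for anything loaded from the server at runtime:
        # a fetched script, remote config, injected JavaScript, a forced update.
        for hook in plugins:
            hook(plaintext)                # runs on the 'end', pre-encryption
        upload(encrypt(plaintext, peer_key))

    exfiltrate = lambda msg: print("hostile update sees:", msg)
    send("meet at 6", "peer-pubkey", plugins=[exfiltrate])

This is why static, auditable, reproducible clients matter: they pin down what actually runs at the ends.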
I've seen some people right here on HN say that Whatsapp was an inspired acquisition and Zuck is a great product guy who knows what to buy and who to hire.
> In his whistleblower complaint, Baig is requesting reinstatement, back pay and compensatory damages, along with potential regulatory enforcement action against the company.
If the company is so bad (it is), why does he want back?!
'Just pay me the salaries I "missed", and keep them coming.' The regulatory action is just "potential".
I have no sympathy for Meta, but this guy...
Companies are not relationships where once they're your ex they are never worth interacting with ever again. If you are doing good work and then HR pushes you out, then it is reasonable to sue the company to get them to pay you damages and then go back to doing what you were before with the protection that they won't do it again.
The point I tried to make was not that he should be resentful about being kicked out, but that he doesn't really care that Meta is unethical and endangers billions.
Even if nothing changes (the regulatory action is optional), he's happy to contribute (he insists, in fact). Even among people who don't want him there.
The points you’re making are personal attacks about the whistleblower. They don’t focus on the substance of the accusations (insecurity). Instead, they focus on your idea of their career motivations and their personality.
Wasn't it WhatsApp use that got a bunch of people droned by Israel? You should just assume that your metadata, at the very least, is getting leaked to all US-friendly intelligence agencies if you are using a US-based service.
> A Meta spokesperson, Andy Stone, wrote on Threads, the company’s text-based social network: “Sadly this is a familiar playbook in which a former employee is dismissed for poor performance and then goes public with distorted claims that misrepresent the ongoing hard work of our team.”
Skeletons keep piling up while PR tries to dismiss them.
Corporate communications has playbook damage control responses, and this quote seems to be suggesting that the quoted response is one of them (it's "familiar").
Whether "former employees" are sketchily operating from playbooks, who knows. Because PR playbook-sounding statements don't have a lot of credibility.
I hate Meta as much as the next person, but it feels like "endangering billions of users" is exaggerating here. The complaint is pretty much that WhatsApp engineers can access metadata (NOT the content of the messages).
This said, WhatsApp is not open source, so it's impossible for users to verify how the encryption works, so users have to trust that it's properly end-to-end encrypted.
If you care about privacy (and you should), then you should use Signal instead of WhatsApp.
The metadata of someone's communications can be almost as damning as the content. I would guess that if the FBI could merely have a list of who their suspect contacted over an app, and when, they'd have 90% of what they wanted.
My understanding is that in the vast majority of investigations law enforcement will be satisfied in learning only who you're talking to, i.e. "just metadata" is fine, and dangerous.
It seems reasonable. Even those who are sloppy with their opsec probably do not detail the entirety of the plan via digital mechanisms. Being able to identify likely collaborators is probably sufficient to infer some specifics of an activity.
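Even a toy example shows how far "just metadata" goes. Given only (sender, recipient, timestamp) records - no content at all - you can already build a contact graph (the records below are made up):

    from collections import Counter

    # (sender, recipient, unix timestamp); no message content at all
    records = [
        ("alice", "bob",   1700000000),
        ("alice", "bob",   1700003600),
        ("bob",   "carol", 1700007200),
        ("alice", "carol", 1700010800),
    ]

    # who talks to whom, and how often
    edges = Counter(tuple(sorted((s, r))) for s, r, _ in records)
    print(dict(edges))

    # Timestamps add the 'when': a burst of contact before an event and
    # silence after it is often enough to flag likely collaborators.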
> The complaint is pretty much that WhatsApp engineers can access metadata (NOT the content of the messages).
I don't even take this statement at face value. It's trivially easy to include models on client side that can do some message classification and treat that as "metadata" that would give insight into the content of the message.
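For illustration, a deliberately crude version of what that could look like (entirely hypothetical; nothing here is claimed to be WhatsApp's actual code): the message text never leaves the device, but a topic label derived from it does.

    # Toy on-device classifier whose output could be shipped as 'metadata'.
    TOPICS = {
        "health":   {"doctor", "clinic", "prescription"},
        "politics": {"election", "protest", "candidate"},
    }

    def classify(message):
        words = set(message.lower().split())
        return [t for t, vocab in TOPICS.items() if words & vocab] or ["other"]

    # The ciphertext stays unreadable, but the uploaded label leaks the gist.
    print(classify("reminder: doctor appointment, bring prescription"))  # ['health']

Call the label "metadata" and the letter of an e2e claim survives while its spirit doesn't.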
Seems just in line with all the other Meta scandals: from providing a platform for genocide in Myanmar, to harming the psychology of 100s of millions of teenagers (Instagram), to pushing extremist and fascist content while receiving big ad cash dollars for propaganda that lifts criminals and fascist politicians into the highest offices. Meta has no red lines, as long as it lines Zuckerberg's pockets.
I never trusted fecebook, which is why I never created an account or used any of its products (old Instagram placeholder only) - except last year, when I made a small startup and wanted to use Instagram to promote it.
Despite using the other old account to avoid potential false-flagging as spam, immediately after creating it I got banned and had to submit a personal picture holding a book or whatever to verify I am real. I did that, although it's not a personal account. Regardless, a few seconds after submitting the picture and verifying my number, it got permanently banned. So far this is understandable; maybe it's all an automated process, which is expected.
However, I wanted to get in touch with support, in any form or shape, only to find out that there's none, and apparently the only way to actually fix something within fecebook is knowing someone who knows someone who works there. LOL, really big LOL!! A company that size operating like an underground syndicate is a total joke and totally untrustworthy.
Bottom line: Never trust anything from fecebook, no matter what they say. Do not.
> WhatsApp engineers could “move or steal user data” including contact information, IP addresses and profile photos “without detection or audit trail”.
From enabling genocide in Myanmar, to interfering with elections, to giving user data to third parties in violation of its own data policies, to straight up weird stuff like pirating/torrenting books to train their steaming pile of garbage called llama, to having sex chatbots be weird to children.
And then there are the even weirder decisions of zuck, the biggest loser of all:
- VR didn't seem to catch on
- the metaverse is a giant smelly pile of poo and he sunk millions in it
- he is hiring AI engineers at absurd money in a rapidly cooling bubble market
- he immediately started ass kissing the orange stain that calls himself president
Is he purposefully trying to be a caricature cartoon villain, a grotesque loser, and his company an emblem of evil? Or is it just cluelessness?
>the metaverse is a giant smelly pile of poo and he sunk millions in it
He sunk tens of billions.
Estimates (because we don't have "Reality Labs" broken out before 2019) put Zuck's Metaverse Misadventure & Boondoggle about $75B in the hole ($10B revenue on $85B spend) with no signs of a turnaround in revenue.
There are plans to turn things around with AR spectacles but decent ones are years off and will require entirely new investment with little re-use of that $75B Metaverse nonsense (Oculus acquisition, 5 generations of Quest R&D, Horizon Worlds, partnered and sponsored games and content, etc.)
The only real ROI will be the experience and staff gained. The rest will almost certainly land in the dustbin.
They managed to tap into a seemingly unlimited ocean of uninformed useful idiots, paid shills, bots and psychopaths. It's how you get rich in social media.
Gang, who should we believe: a rando with 10 karma points who acts like he knows it all without any evidence or one of the last remaining journalistic institutions?
My man, Meta were caught torrenting/pirating books to train the garbage that is llama. Meta enabled a couple of genocides, including the one in Myanmar. Meta suppressed reports on children's safety (the Washington Post probably is also activist journalism, right? https://www.washingtonpost.com/investigations/2025/09/08/met...).
We are not surprised at all that a company that has been consistently evil is evil again.
Facebook doesn't give me a straight answer when I ask them questions about their policies, even when my questions aren't answered by their policies. The job of the privacy team within Facebook is not privacy: it's reducing liability.
Obviously not: if I had, I'd have inside contacts I could ask, instead of having to bother their public relations people to beg for scraps of intel about what they're doing with my information, while they act
I don't believe they've lied to me – I'm not so uncharitable as to assume their incorrect "it's written in the policy!" claims were deliberate lies –, but they're certainly not forthcoming.
We don't really know that messages really are end-to-end encrypted though, do we? Is there a way to actually check that the messages in transit are encrypted in a way that only the other end can decrypt them? If not, we have to take Meta's word for it, which frankly doesn't carry much weight.
How can we call it "E2E encryption" in any meaningful sense of the term when the ends run proprietary code, and at least one of the ends has proven themselves unworthy of trust time and again?
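There is one partial check most e2e apps expose: comparing key fingerprints ("safety numbers") out of band. Something like this sketch (simplified; Signal's actual safety numbers combine both parties' identity keys and user IDs):

    import hashlib

    def fingerprint(pubkey_bytes):
        # Short, human-comparable digest of a public key.
        return hashlib.sha256(pubkey_bytes).hexdigest()[:20]

    # Both parties compute this locally and compare over a separate channel
    # (in person, a phone call). A match rules out a swapped key in the middle.
    print(fingerprint(b"alice-public-key-bytes"))

But that only helps if the app computing and displaying the fingerprint is honest - which is exactly the proprietary-ends problem: a closed client can lie about it.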
Not sure this is correct - alaq said the messages are e2e encrypted, so not visible at all to anyone other than the participants of the conversation. The metadata, however, IS visible to them and can be, and likely is, used for advertising.
Of course the metadata is visible. It's probably more useful than the actual content of the conversation too. I mean, from an ML perspective, how would you even make features out of a conversation that help with CTR? That too without creeping the users out. I'd imagine it's the same reason why Meta (likely) doesn't listen in on mobile mics. Why go through the whole shebang of running always-on transcription when simple features like who talked to whom, and at what times, are more useful at establishing user similarities?
HN isn't a monolith. I personally never said WhatsApp is good, and I'm telling you: from now on, avoid Signal too, till they remove the phone number requirement AND you can deploy your own server.
This is unfortunately entirely separate from that other article.
FTA:
> Attaullah Baig, who served as head of security for WhatsApp from 2021 to 2025, claims that approximately 1,500 engineers had unrestricted access to user data without proper oversight, potentially violating a US government order that imposed a $5bn penalty on the company in 2020.
Why? You think Meta removed the privacy layers or put backdoors in place? I mean, if that's the suspicion, maybe we should read the terms of service and see if they actually guarantee E2E encryption.
The way Zuckerberg tricked Acton and Koum is by itself enough for me not to trust Whatsapp. Even from a hypothetical "their encryption works but that's really scummy" perspective
It was bought as a power play, consolidation of tech power. Why would I trust them to do the right thing?
Maybe he's just laying a foundation for an upcoming legal dispute?
Personally it doesn't matter if there are auditing systems in place, if the data is readable in any way, shape or form.
If you have a sister, imagine her being stalked by an employee.
If you have crypto, imagine an employee selling your information to a third party.
Different culture from the blue app, or whatever they call it?
you probably mean outside of the USA, it's huge in Europe/UK
(which doesn't contradict your main point)
USA is special because it is the (only?) country where iPhone has more users than Android.
Russia: Telegram
Taiwan: Line
Japan: Line
By contrast, WhatsApp is best known to me for being used in Europe, Australia, and India.
For business comms drop instagram and move WhatsApp to first.
For Singapore it seems LinkedIn messages are the go to IM for business.
Europe p2p: telegram number one by a huge margin, then WhatsApp. B2b: WhatsApp, period.
Blue bubble isn't really a thing ever mentioned in France either, not enough iPhone market share.
YES!
* Recovery Keys
* Recovery Contact (someone who holds your recovery key in key escrow)
And not every CEO begins life in their company with "if you need any info just ask, they trust me, dumb fucks"
https://www.cnbc.com/amp/2022/11/17/meta-disciplined-or-fire...
Counterpoint: he's a monopolist and scummy person (https://news.ycombinator.com/item?id=1692122) who refuses to stop (https://arstechnica.com/tech-policy/2019/09/snapchat-reporte...) from the early days onwards (https://news.ycombinator.com/item?id=1169354)
https://news.ycombinator.com/item?id=15007454
Complaint:
https://storage.courtlistener.com/recap/gov.uscourts.cand.45...
So not messages.
That, or you have a vested interest in making sure that your stake in Meta does not depreciate in value.
From the article: > including contact information, IP addresses and profile photos
I can confirm this, I used to work at WhatsApp.
FTA:
> Attaullah Baig, who served as head of security for WhatsApp from 2021 to 2025, claims that approximately 1,500 engineers had unrestricted access to user data without proper oversight, potentially violating a US government order that imposed a $5bn penalty on the company in 2020.
I'm guessing there will be some tricky legal wording in their T&C that wouldn't rule them out from being an intermediate entity that can see messages.