The first thing I have to point out is that this entire article is clearly LLM-generated from start to finish.
The second thing I have to point out is that bug bounty programs are inundated with garbage from people who don't know anything about programming and just blindly trust whatever the LLM says. We even have the 'author' reproducing this blind reinforcement in the article: "Tested Jan 2026. Confirmed working."
The third thing I have to point out is that the response from Valve is not actually shown. We, the reader, are treated to an LLM-generated paraphrase of something they may or may not have actually said.
Is it possible this issue is real and that Valve responded the way they did? Perhaps, but the article alone leaves me extremely skeptical based on past experiences with LLM-generated bug bounty reports.
Spending months dealing with folks attempting to blackmail us over ridiculous non-issues has pretty much killed any sympathy I had for bug bounty hunters.
You probably need to improve your internal LLM detector then. This obviously reads as LLM generated text.
- "This isn't just a "status" bug. It's a behavioral tracker."
- "It essentially xxxxx, making yyyyyy."
- As you mentioned, the headings
- Compound sentences that almost all follow the "x, but y" format.
This is clearly LLM generated text, maybe just lightly edited to remove some em dashes and stuff like that.
After you read code for a while, you start to figure out the "smell" of who wrote what code. It's the same for any other writing. I was literally reading a New Yorker article before this, and this is the first HN article I just opened today; the writing difference is jarring. It's very easy to smell LLM generated text after reading a few non-LLM articles.
What's frustrating is the author's comments here in this thread are clearly LLM text as well. Why even bother to have a conversation if our replies are just being piped into ChatGPT??
My "LLM witch-hunt" got the prompter to reveal the reply they received, which we now learn is not from Valve and doesn't say "Won't Fix"; rather, it deems the issue not a security exploit by HackerOne's definition. It is more important than ever to be critical of the content you consume rather than blindly believing everything you read on the internet. Learning to detect LLM writing, which represents a new, major channel of misinformation, is one aspect of that.
It does for me too. Especially the short parts with headings, the bold sentences in their own paragraph and especially formulations like "X isn't just... it's Y".
In other words, this website uses headings for sections, doesn't ramble, and has a single line of emphasis where you'd expect it. I wonder what style we'll have to adopt soon to avoid the LLM witch-hunt: a live stream-of-consciousness rant, with transcript and typos?
Imagine being a person like me who has always expressed himself like that. Using em dashes, too.
LLMs didn’t randomly invent their own unique style; they learned it from books. This is just how people write when they get slightly more literate than today's texting-era kids.
And these suspicions are in vain even if they happen to be right this one time. LLMs are champions of copying styles; there is no problem asking one to slap Gen Z slang all over a post and finish it with the phrase “I literally can’t! <sad-smiley>”. “Detecting LLMs” doesn’t get you ahead of LLMs, it only gets you ahead of the person using them. Why not appreciate an example of concise, on-point self-expression and focus on the usefulness of the content?
I see a lot of these "this is LLM" comments, but they rarely add value, sidetrack the discussion, and appear to come into direct conflict with several of HN's comment guidelines (at least by my reading).
I think raising that the raw Valve response wasn't provided is a valid, and correct, point to raise.
The problem is that this valid point is surrounded by what seems to be a character attack based on little evidence, one that mirrors many of these "LLM witch-hunt" comments.
Should HN's guidelines be updated to directly call out this stuff as unconstructive? Pointing out the quality/facts of an article is one thing, calling out suspected tool usage without even evidence is quite another.
Counterproposal: Let's update HN's guidelines to ban blatant misinformation generated by a narrative storyteller spambot. My experience using HN would be significantly better if these threads were killed and repeat offenders banned.
>Counterproposal: Let's update HN's guidelines to ban blatant misinformation generated by a narrative storyteller spambot.
This will inevitably get abused to shut down dissent. When there's something people vehemently disagree with, detractors come out of the woodwork to nitpick every single flaw. Find one inconsistency in a blog post about Gaza/ICE/covid? Well, all you need to do is also find an LLM tell, like "it's not x, it's y", or an out-of-place emoji, and you can invoke the "misinformation generated by a narrative storyteller spambot" excuse. It's like the fishing expedition for Lisa Cook, but for HN posts.
LLM generated comments aren't allowed on HN[0]. Period.
If any of the other instances where HN users have quoted the guidelines or tone-policed each other are allowed, then calling out generated content should be allowed too.
It's constructive to do so because there is obvious and constant pressure to normalize the use of LLM generated content on this forum as there is everywhere else in our society. For all its faults and to its credit Hacker News is and should remain a place where human beings talk to other human beings. If we don't push back against this then HN will become nothing but bots posting and talking to other bots.
Stop worrying about whether articles are written by LLM or not and judge them by their content or provenance to sources that you can justifiably trust. If you weren't doing that before LLMs then you were getting fooled by humans writing incompetent or deceptive articles too. People have good reasons for using LLMs to write for them. If they wrote it themselves, it might cause you to judge them as being a teenager, uneducated, foreign, or whatever other unreliable proxies you use for trust.
You're right, but in this case I think some narrative liberty was justified, especially since Valve basically delegated triaging bug reports to HackerOne, though this relationship might not be immediately obvious to some readers. Suppose a nightclub contracts its bouncers from a security firm, and you get kicked out by the contract security guard. I think most people would consider it fair to characterize this as "the nightclub kicked me out" in a review or whatever.
No, you are correct: that is a HackerOne employee filtering the report before someone at Valve looks at it. A lot of companies have this set up, and it's not great.
I would be surprised if responsible Valve staff would agree that this is not something they should fix at some point.
It's still on Valve though. They chose to delegate this and H1 basically becomes their voice here. I wish it was made more clear, but I don't think it's wrong.
That sounds to me like they're acknowledging that the feature doesn't work as advertised ("may not align with user expectations"), but also that it was reported as an exploit/security vulnerability, while it's actually a privacy leak. Maybe HackerOne isn't the right channel for reporting those issues?
Seems like a reasonable report to me. Offline mode intentionally hides you from friends in the UI, so you would assume it would keep you hidden.
I have a number of friends who, for various social reasons, keep their Steam status as "Offline" so their friends don't know they're still logging in. If "Offline" can be bypassed, it defeats the point.
People should always consider the "abusive friend" scenario with regards to privacy.
Even marriages can be extremely abusive...
The assumption that people on your friends list, on Steam or anywhere else (even just people in the same household), should be able to see your personal information, such as computer use, is a bananas assumption. It is an assumption that I'm pleased to say has failed privacy reviews at at least one company larger than Steam.
I think it's quite a small demographic that has abusive friends on Steam whom they can't simply unfriend for whatever reason, and it's not reasonable to expect Steam to design for that case. It'd be like a pencil company trying to prevent people from writing hurtful messages.
What about people who keep online-only friends on Steam just to play together? Worst case, this could leak a child’s daily schedule to a predator.
> Setting yourself to "Offline" is basically a UI illusion.
I always assume this, in every case. Every "I'm offline" or "hide me" or "don't save this" or "delete this forever!" UI element is a facade until proven otherwise. "Temporary" chats with LLMs are also permanent and are likely eventually public via massive data leak in future year 20XX.
> Their logic: You have to be friends with the user to receive this packet. Therefore, a "trust relationship" exists.
That logic is acceptable. You could also DM an offline friend a tracking pixel to reconstruct their activity; a lot of this endpoint security is entirely up to the user.
I dunno, the promise here is "You're invisible/offline and no one can see your activity", but that turns out not to be fully true. Maybe if it said "You're invisible/offline to the public, but only mostly invisible to your friends" it'd be more accurate and set the correct expectations. But of course, that's not how the feature is being sold.
Disagree, that trust relationship implicitly includes a "I can opt out of you seeing my status if I set my status to offline" contract, because that is my expectation of Steam.
True, but a tracking pixel is an active attack that leaves a visible trail. This leak is passive surveillance; I can silently graph the sleep cycles of 200 friends without ever interacting with them. Trust shouldn't imply consent for invisible, automated logging.
Do you really need an LLM to talk on HN? Genuinely, this research seems cool but its hard to trust your findings when there's clearly AI being used heavily in writing the article and in your comments here.
It's about when your friends last signed in to their account. From my understanding:
Invisible = Signed in, but do not broadcast the games you are playing (though your profile will show that you signed in)
Offline = Stay offline and do not sign in
Incorrect. "Invisible" is a privacy control, not just a UI filter. While the official client freezes the text, the backend still broadcasts live last_logon and last_logoff Unix timestamps in the ClientPersonaState packet. This leaks exact real-time sleep/wake cycles via the socket, completely bypassing the privacy toggle.
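For what it's worth, the leak being described can be illustrated with a small sketch. The field names (`persona_state`, `last_logon`, `last_logoff`) follow the article's description of the ClientPersonaState message; the dict here merely stands in for the real protobuf, and all values are made up:

```python
import datetime

# Hypothetical stand-in for a decoded ClientPersonaState message.
# Field names are assumed from the description above, not taken
# from any official Valve documentation.
def extract_presence(persona_msg: dict) -> dict:
    """Pull the timestamp fields out of a persona-state-like message.

    The point of the leak: last_logon/last_logoff are populated
    regardless of the persona_state value the UI shows.
    """
    return {
        "friend_id": persona_msg["friendid"],
        "shown_status": persona_msg.get("persona_state", 0),  # 0 = Offline
        "last_logon": datetime.datetime.fromtimestamp(
            persona_msg["last_logon"], tz=datetime.timezone.utc),
        "last_logoff": datetime.datetime.fromtimestamp(
            persona_msg["last_logoff"], tz=datetime.timezone.utc),
    }

# A friend set to Invisible: status reads Offline (0), yet the
# timestamps still reveal the real session boundaries.
msg = {
    "friendid": 76561197960287930,
    "persona_state": 0,
    "last_logon": 1767225600,   # example epoch seconds
    "last_logoff": 1767254400,  # eight hours later
}
info = extract_presence(msg)
print(info["shown_status"], info["last_logoff"] - info["last_logon"])
```

The session length falls out of two subtractions, even though the displayed status never changed from Offline.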
Nope, going into standby is the same as logging off, since your client doesn't send keep-alive packets anymore. (Not sure if macOS is an exception, because I think my MBP doesn't go into proper sleep if I keep Steam running.)
I got one from work that I don't use much outside of travel and haven't changed in any way past initial setup. It stays connected to WiFi and continuously broadcasts various discovery packets for the past month and a half since I last opened it up.
> You could also DM an offline friend a tracking pixel to reconstruct their activity, a lot of this endpoint security is entirely up to the user.
Only for as long as they have the Steam chat window open and your tracking pixel/message is recent enough to actually be loaded. I don't use Steam chat enough to remember whether either holds, but your plan also ignores any possible automatic security/scanning/proxy shenanigans on Steam's part that would muddy your pixel's tracking data or just break it.
> That logic is acceptable.
I completely disagree. I use invisible status all the time on Steam. I very much have the expectation that, when set to invisible, my friends would not be able to track my online status.
I'm not saying any tracking is great, but a couple of things here: I can't remember when, if ever, I logged out of Steam, and this is just shared with friends, right? Not sure if this is a nothingburger or not.
In this context, 'Logoff' triggers whenever the socket disconnects. So every time you shut down your PC or put it to sleep, that timestamp is updated and broadcast, even if you never explicitly clicked 'Sign Out'.
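To make the "passive surveillance" concern concrete: if a logoff timestamp is broadcast on every disconnect, a friend who merely records those values could infer a rough daily schedule without ever interacting with the target. A hedged sketch with fabricated data:

```python
from collections import Counter
from datetime import datetime, timezone

def likely_bedtime_hour(logoff_timestamps: list[int]) -> int:
    """Given passively collected last_logoff epochs for one friend,
    return their most common logoff hour (UTC) -- a crude proxy for
    when they go to sleep. Purely illustrative."""
    hours = Counter(
        datetime.fromtimestamp(ts, tz=timezone.utc).hour
        for ts in logoff_timestamps
    )
    return hours.most_common(1)[0][0]

# Fabricated example: a week of logoffs, each around 23:00 UTC.
week = [1767222000 + day * 86400 for day in range(7)]
print(likely_bedtime_hour(week))
```

The same counting trick scales to an entire friends list, which is what makes silent, automated logging different in kind from an active probe like a tracking pixel.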
Is your LLM detector on a hair trigger? At best the headings seem like LLM, but the rest doesn't look LLM-generated.
[0]https://news.ycombinator.com/item?id=45077654
Your point about Valve's response is valid though.
Certainly, public pressure is another way :)
All I can think of is Megaman.
Yes, if the target gets on their PC every day after they wake up.
Still, it’s a bug that should be fixed.
Another example: if the user turns off "Turn on when Windows starts up" or whatever equivalent, this would also be a non-issue.
On the profile of a friend you can see the last time they signed in to their account:
https://preview.redd.it/can-anyone-beat-my-last-online-frien...
It used to be public, and for a couple of years now it has been restricted to friends only.
I guess this is why they won't change it, since it's a feature.
Because from the fields in the protobuf I somewhat suspect it's the same, but I get your point of view as well.
EDIT: If it's not, then my bad
e.g. FB Messenger & WhatsApp have their own web-scraping infrastructure to provide server-side link previews and thereby mitigate tracking links.
Not sure if Steam does the same currently.
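As a rough sketch of the mitigation being described: the provider's own scraper fetches the page, and only a rendered preview card reaches the recipient, so the recipient's IP and read time never reach the link's owner. Everything below (the function name, the naive title extraction) is illustrative, not Messenger's or WhatsApp's actual implementation:

```python
# In a real deployment, the provider's scraper would have fetched
# `html` server-side; the recipient's client only ever receives the
# extracted preview text, never contacts the link itself.
def extract_preview_title(html: str, fallback: str) -> str:
    """Pull a <title> out of scraper-fetched HTML for a preview card.
    Returns the fallback (e.g. the raw URL) if no title is found."""
    start = html.find("<title>")
    end = html.find("</title>", start)
    if start == -1 or end == -1:
        return fallback
    return html[start + len("<title>"):end].strip()

print(extract_preview_title(
    "<html><title>Example Domain</title></html>", "no preview"))
```

The privacy win comes entirely from *who* does the fetch, not from the parsing; the parsing shown here is just the visible half of the feature.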