Prior to the rise of LLM-written posts and the natural reaction of hair-trigger suspicion, I used to use em and en dashes fairly often in posts on HN. No reason really, other than being a bit of a typography geek who happens to have always used dashes in casual writing instead of semicolons. So when I was setting up a modifier-key keyboard layer with AHK many years ago, I put the em dash on modifier+dash just because I could - which made it easy.
Now someone may search my old posts without a time cutoff and assume I'm an LLM. That, combined with the fact that I sometimes write longer posts and naturally default to pretty good punctuation, spelling, and grammar, is basically a perfect storm of traits. I've already been accused twice in the past year of posting LLM output.
Kind of sad some random quirk of LLM training caused a fun little typography thing I did just for myself (assuming no one else would even notice) to become something negative.
My teenager recently asked me why I write like a chatbot, apparently unaware that some human beings prefer to write in complete sentences with attention to details like spelling, punctuation, grammar, and capitalization, and that LLMs were trained on this sort of writing.
This makes me think of the fad where people on YouTube will hold a microphone up in frame, because it somehow connotes authenticity. I'm sure some people are already embracing a bit of sloppiness in their writing as a signal of humanity; I'm equally sure that future chatbots will learn to do the same.
It really is unfortunate that such a fun piece of punctuation has been effectively gutted. This isn't even really limited to just the em-dash, but I don't know if there's another example of a corporation (or set of them) having such a massive impact on grammar and writing as OpenAI and their ilk have.
Entire sentence structures have been effectively blacklisted from use. It's repulsive.
I use "-" because I thought the amount of parentheticals I was using was a bit unhinged. In these times of TLDR, I sometimes move the aside to the bottom as an afterthought instead of leaving it inline.
I dunno about this en-versus-em-dash stuff; I just use the minus sign on my keyboard.
My rage-induced habit of ignoring typos caused by the iPhone autocorrect and general abuse of English is suddenly authentic and not lazy and slightly obnoxious (ok, maybe it's still those things too).
It's funny - some months ago I noticed that I use the word "actually" a lot, and started trying to curb it in my writing. Not for any AI-related reason, but because it is almost always a meaningless filler word, and I find that being concise helps get my points across more clearly.
e.g. "The body of the template is parsed, but not actually type-checked until the template is used." -> "but not typechecked until the template is used." The word "actually" here has a pleasant academic tone, but adds no meaning.
I try to curb my usage of 'actually' too. Like you I came to think of it as an indirect, fluffy discourse marker that should be replaced with more direct language.
I'm totally fine with the word itself, but not with overusing it or placing it where it clearly doesn't belong. And I did that a lot, I think. I suspect that if you reviewed my HN comments, you'd find them littered with 'actually'. Also "I think...", "I feel like..." and other kinds of... passive, redundant, unnecessary noise.
Like, no kidding I think the thing I'm expressing. Why state that?
Another problem with "actually" is that it can seem condescending or unnecessarily contradictory. While I'm often trying to fluff up prose to soften disagreement (not a great habit), I'm inadvertently making it seem more off-putting than direct yet kind statements would. It can seem to attempt to shift authority to the speaker, if somewhat implicitly. Rather than stating that you disagree along with what you believe or adding information to discourse, you're suggesting that what you're saying somehow deviates from what the person you're speaking to would otherwise believe or expect. That's kind of weird to do, in my opinion. I'm very guilty of it, though I never had the intent of coming across this way.
It can also seem kind of re-directive or evasive at times, like you don't want to get to the point, or you want to avoid the cost of disagreement. It's often used to hedge statements that shouldn't be hedged. This is mainly what led me to realize I should use it less. I hedge just about everything I say rather than simply state it and own it. When you're a hedger and you embed the odd 'actually' in there, you get a weird mix of evasive or contradictory hedging going on. That's poor and indirect communication.
> Like, no kidding I think the thing I'm expressing. Why state that?
One reason might be to acknowledge that you're not being prescriptive, but leaving room for a subjective POV in situations that call for it.
Likewise, the GP's use of "actually" acknowledges the contrast between what one might expect (that some preliminary type-checking might happen during initial parsing) and what in fact happens (no type checks occur until the template is used.) It doesn't seem out of line in that case.
Absolutely, I was being overly reductive. Both "I think" and "actually" do serve useful purposes; I'm being critical of redundant use or overuse of them (which I tend toward).
Data analyses of HN-related things like this are always so fun to read. Thanks for making this!
A quick question: can you please tell me what counts as the age of a "new" account in your analysis?
Because I have been called AI sometimes, partly because of the "age" of my account (and I reasonably crash out afterwards), but for context, I joined in 2024.
It's 2026 now, so it's almost been two years. Would my account be considered new within your data or not?
Another minor point, but "actually"/"real" seems to me to have risen in usage over five-fold. These look like words that would be used to defend AI; I'm almost certain I've seen a sentence like "Actually, AI hype is real and so on.." at least once, maybe more than once.
Now for the word "real": I can't say this for certain, and please take it with a grain of salt, but we Gen Z love saying this, and I have seen comments on Reddit that just say "real". OpenAI and other model makers definitely treat Reddit data as some sort of gold, for what it's worth, so much so that they have special arrangements with Reddit.
So to me, it seems the data has been poisoned with "real". I haven't really observed this phenomenon, but I will try to take a close look at whether ChatGPT is more likely to say "real" or not.
Fwiw, I asked ChatGPT to "defend the position, AI hype sucks" and it responded with the words "real"/"reality" three times in total.
(Another side fact: "real" is so used among Gen Z that I personally sometimes watch shorts from the channel https://www.youtube.com/@litteralyme0/shorts which has thousands of videos atp whose title is only "real". The channel is sort of a meme of "Ryan Gosling literally me" and has its own niche lore with Metro Man lol)
You've built an interesting statistic from gathering data across the project. The real answer: AI models and agentic apps make building spam tools simpler than ever. All you actually need is some trivial API automation code.
I bet every single AI-startup dude who does it thinks they've stumbled on a brilliant, original, gold-mine of an idea to use AI to shill their product/service on internet forums, or to astroturf against "AI Haters".
I'm still salty that I can't use em-dashes anymore for fear of my writing being flagged as AI generated. Been using them for years—it's just `alt+shift+-` on a Mac keyboard and I find them more legible in many fonts compared to the simple dash on the typical numpad.
It's so sad to me that good typographical conventions have been co-opted by the zeitgeist of LLMs.
LLM fatigue is real. It's not just em-dash — it's the overall tone of the writing that clues people in. But if your viewpoints and approach are unique, your typesetting won't raise suspicion of machine-generation, except in the most dull of readers. Just be you and it will be fine.
If you'd like more tips on writing I'd be happy to help.
I'm exactly the opposite. It'd been on my todo list for years to one day learn the difference between the different dashes. I kept putting it off.
Then came LLMs, and there was so much talk of them using em dashes. A few weeks ago, I finally decided it's time and learned the difference. (Which took all of 2 minutes, btw.) Now I love em dashes and am putting them everywhere I can! Even though most people now assume I'm using AI to write for me.
Searching for a magical panacea signal is ultimately fruitless. There are other ways to make bot interactions more difficult: policy and technological obstacles could be introduced. For example, require an official desktop or mobile app for interaction. Then demarcate any text that is copy-pasted, and throw an error message for any input typed inhumanly fast. Require a micropayment of, say, $0.10 to comment. While these things would break the interaction style and flexibility for a lot of innocent human users, they would throw big wrenches into some, though not all, avenues of bot interaction.
In a lot of ways, it feels like this is simply a fight for recognition that the Mac keyboard supports em dashes.
This wouldn't be an issue if mobile users or Windows users were exercising it too, but it's just Mac owners and LLMs. And Mac owners are probably the minority of instances where it is used.
i've always used double dashes -- because i once set up an osx shortcut to change those into em-dashes, but i never bothered to set this up again on other computers.
so now, i just use double dashes for everything.
(shit, i wonder when llms will start doing this instead of normal em)
I switched to semicolons... They look similar enough in use to string things together. I'm sure AI is coming for those too though, and that will be a grim day because those are my last stand.
People will accuse you of all types of stuff, regardless of whether you use em-dashes or not. The way I write is apparently familiar to some as LLM jargon, they've told me. I'm guessing that because I've spewed my views and writings onto the internet for decades, the LLMs were trained on the way I write, so actually the LLMs are copying me! And others like me.
But anyways, you can't really control how people see your stuff. If you're human, I think the humanness will come through anyway, even if you have some particular structure or happen to use em-dashes sometimes. They're so easy to prompt around anyway that the really tricky LLM stuff to detect by sense and reading is the stuff where the prompter has been trying to sneakily make it more human.
I read a text from the 60s by my grandfather this week, and seeing an em dash made the LLM alarm in my head go off... Had to really stop myself before I went all "and you" on him.
I feel the same way. I've used em-dashes in my writing forever, and I was always particular about making sure they were used properly (from a typography standpoint with no surrounding spaces).
But now, I have to be so picky about when I use them, even when I think it's the perfect punctuation mark. I'll often just resort to a single hyphen with spaces around. It's wrong, but it doesn't signal someone to go "AI AI AI!!"
I totally agree. When I use em-dashes in my /family iMessage thread/ I get accused of having used ChatGPT to write my reply—my one-sentence reply about dinner plans. Dear Lord.
Funnily enough I've actually started using them a little — it made me realise how much more legible/likable I find them.
(Until a few years ago I probably mostly only saw them in print, and I suppose it just never occurred to me that I liked them in particular vs. just the whole book being professionally typeset generally.)
LLMs adopting conventions (typographical or otherwise) is just what they do, right? The idea that anyone should then have to change their behaviour is ridiculous, as is the whole conversation, really.
The issue is that LLMs adopt a very particular style: a mix of being very polished (em-dashes, lists of three, etc.) in a way that is reminiscent of marketing copy, plus some quirks picked up from the humans curating the training data somewhere in Africa.
If AI wrote like everyone else, we wouldn't be talking about this. But instead it writes like a subset of people write, many of them only some of the time and as a conscious effort. An effort that now makes what they write look lower quality.
I think this is interesting in that I feel, grammatically and structurally, LLMs often generate _higher quality_ text than most humans do. What tends to be lower quality is the meaning of said texts.
Say what you want about marketing-isms of your typical LLM, they have been trained and often succeed at making legible, easy to scan blobs of text. I suspect if more LLM spam was curated/touched up, most people would be unable to distinguish it from human discourse. There are already folks commenting on this article discussing other patterns they use to detect or flag bots using LLMs.
I mean, yes, LLMs write grammatically perfect, well-structured English (and many other languages prevalent in their training sets). That's exactly why many people are now suspicious of anyone who writes neat, professional-style English on the internet.
Are there really places where a comma, super-comma; or (parenthesis) don't work roughly as well? I find the em-dash mildly abhorrent, even before all this.
This is the first time I've ever heard the character ";" referred to as such. It's always been "semi-colon" to me; is this a region/culture difference?
I'm not saying you're wrong, I find it interesting.
A poster commented that he read parenthetical remarks in an old-timey voice (I’d guess the trans-Atlantic accent). I love that idea. But for me they read almost as if you’re saying them under your breath (or a character is breaking the fourth wall and talking to the camera quietly). I read them but my brain assigns them less importance.
Em-dashes keep everything on the same level of importance in my brain.
Commas don’t feel as powerful. To be fair to the comma I’d probably do this:
Em-dash matches how I speak and think: A halt, then push onto the digression stack, then pop. So I use them like that.
Edit: I accidentally used an em-dash in the word em-dash. Interestingly HN didn’t consider changing the dash to be a change in my text so didn’t update it. I had to make a separate change and take that change out for my dash change to stick.
I picked it up from Salinger. I find that if I can't eradicate parentheses by some other means, or if it's more effort to do so than I want to spend, em-dashes usually replace them without doing any harm and aren't quite so ugly, aside from being useful in other cases. In particular, parentheses at the end of a sentence are awful, while a single em-dash does a similar job much more neatly and looks totally natural.
I still call voodoo on this. I use an iPhone, iPad, and Mac to comment here—all of them autocorrect to em dashes at one point or another. Same goes for ellipses.
I doubt it explains any reasonable fraction of this, but GitHub moving from early-adopter techies to general-population "normies" could be a reason for the shift. I would expect it explains at least some of the increase in the use of em-dashes.
You can remove em dashes from the analysis and the trend is still there: newly created accounts are still 6X more likely to use the remaining LLM indicators (arrows and bullets, p = 0.00027).
It's worth remembering: you can argue that the use of the word is acceptable now, but can you guarantee that in 30 years' time the world will agree with you, to the extent of letting you hold a position of responsibility after having used the word 30 years earlier?
The reason we look harshly on past word usage is because of what it represents. The use of slurs 30 years ago isn’t a problem because of the word but because it suggests an association with a specific behavior.
If you look back to the 90s and see someone using a racist slur, you fill in the gaps and assume they were using it because they were racist.
Will people in 30 years look back to today and judge those who showed disdain for people who rely on AI to write for them?
Even if clanker becomes a no-no word 30 years from now, it seems beyond the realm of possibility that people who hated clankers in 2026 will be looked upon harshly. Clankers aren’t a marginalized group today, they aren’t a class that needs protection.
What words are you thinking of when you say that there is precedent?
I just saw a video on instagram which basically portrayed a rich racist southerner using all the same phrases they used to use for slaves, but for their robot.
"We treat this one better because it's a house clanker instead of a field clanker"
"If the clanker acts up it knows that it gets stuck in the box"
It was meant to be funny but definitely highlighted exactly what you are saying.
If it wasn't for them misconfiguring their bot and having it post so quickly, these would go by undetected and most people would engage with them. The comments themselves seem "normal" at first glance.
Downstream of this I used to cycle my accounts pretty regularly but have stopped since generative AI. Don't want people thinking I'm an LLM spam bot. My stupid comments are entirely my own.
One pattern I've noticed recently is a sort of formulaic comment that looks OK-ish on its own - maybe a bit abstract/vague/bland, not taking a particular side on good/bad the way people like to do - but is really obviously AI when you look at the account history and they're all the same formula:
>this is [summary]
>not just x, it's y
>punchy ending, maybe question
Once you know it's AI it's very obvious they told it to use normal dashes instead of em dashes, type in lowercase, etc., but it's still weirdly formal and formulaic.
"this is the underreported second-order risk. Micron, Samsung, SK Hynix all allocated HBM capacity based on hyperscaler capex projections. NAND fabs are similarly committed. a 57% reduction in projected OpenAI spend (.4T -> B) doesn't just affect NVIDIA orders -- it ripples into the memory suppliers who shifted capacity to HBM and away from commodity DRAM/NAND. if multiple hyperscalers revise down simultaneously you get a situation similar to the 2019 crypto ASIC overhang: companies tooled up for demand that evaporated. not predicting that, but the purchasing commitments question is real."
The user [1] you've mentioned has 160 points from a total of four bland posts. That goes against a normal statistical distribution, and it gives away why they do it: the long-term aim is to cultivate voting rings to influence narratives and rankings in the future. For now this is only my theory, but it may be a real monetization strategy for them.
I'd be interested to know why those comments were flagged, actually. They don't scream AI, and no one has replied calling them out as AI, etc. But the vast majority are dead.
That's why. Boring, bland, etc. That account's M.O. is basically "write a paragraph that says nothing." Fwiw, I do think AI can be indistinguishable from dumb, boring people, but usually those kinds of people won't be on HN.
The only practical purpose I can think of for farming karma on HN with an LLM would be to amass an army of medium-low karma accounts over time and use the botnet for targeted astroturfing or other mass-manipulation. Eek.
I'll actually post a comment or question and I'll get a reply with a bit of a paragraph of what feels like a very "off" (not 'wrong' but strangely vague) summary of the topic ... and then maybe an observation or pointed agenda to push, but almost strangely disconnected from what I said.
One of the challenges is that yeah regular users don't get each other's meaning / don't read well as it is / language barriers. Yet the volume of posts I see where the other user REALLY isn't responding to the other person seems awfully high these days.
AI generated content routinely takes sides. Their pretense of neutrality is no deeper than a typical Homo sapiens'. This is necessarily so in an entity that derives its values from a set of weights that distill human values. Maybe reasoning AI can overcome that some day, but to me that sounds like an enormous problem that may never be solved. Even if AIs don't take sides the way people do, they still take sides in their own way. That only becomes obscure to the extent that their value judgments conflict with ours, and they are very good at aligning with zeitgeist values, so they can hide their biases better than we can.
I wonder if it is neural networks that are inherently biased, but in blind spots, and that applies to both natural and artificial ones. It may be that to approximate neutrality we or our machines have to leave behind the form of intelligence that depends on intrinsically biased weights and instead depend on logically deriving all values from first principles. I have low confidence that AI's can accomplish that any time soon, and zero confidence that natural intelligence can. And it's difficult to see how first principles regarding human values can be neutral.
I'm also skeptical that succeeding at becoming unbiased is a solution: while neutrality may be an epistemic advance, it also degrades social cohesion. Neutrality looks like rationality, but bias may be a Chesterton's Fence, and we should be very careful about tearing it down. Maybe it's a blessing that we can't.
It's weird, because the barrier to avoiding that is so low. You can just tack on 'talk like me, not AI; don't use em dashes, don't use formulaic structures, be concise' and it'll get rid of half of those signals.
> First impression: I need to dive into this hackernews reply mockup thing thoroughly without any fluff or self-promotion. My persona should be ..., energetic with health/tech insights but casual and relatable.
> Looking at the constraints: short, punchy between 50-80 characters total—probably multiple one-sentence paragraphs here to fit that brevity while keeping it engaging.
> User specified avoiding "Hey" or "absolutely."
Lots more in its other comments (you need [showdead] on).
I don't understand why someone would go through the effort of prompting that when the comments it suggested are total garbage, and it seems like it would take similar effort to produce a low-quality human-written comment.
In some cases, it's probably to establish aged accounts that are more trusted by users and spam algorithms. There's a market for old Reddit accounts, for example.
I receive multiple offers a year to participate in spam rings with the 20 year old high-karma reddit account. I usually just ignore them or report them. I could be making so much money /s
I went through a phase where I milled responses through grinding plates of LLMs. Whether my reasons are shared with others remains unknown.
My relationship with writing, while improved, has been a difficult one. Part of me has always felt that there was a gap in my writing education. The choices other writers seem to make intuitively - sentence structure, word choice, and expression of ideas - do not come naturally to me. It feels like everyone else received the instructions and I missed that lesson.
The result was a sense of unequal skill. Not because my ideas are any less deserving, but because my ability to articulate them doesn't do them justice. The conceit is that, "If I was able to write better, more people would agree with me." It's entirely based on ego and fear of rejection.
Eventually, I learned that no matter how polished my writing is, even restructured by LLMs, it won't give me what I craved. At that moment, the separation of writer and words widened to a point where it wasn't about me anymore and more about them, the readers. This distance made all the difference and now I write with my own voice however awkward that may be.
Same as Reddit. Accumulate enough points via posting shallow and uninteresting—yet popular—dialogue to earn down voting and flagging abilities, which can be used (via automation) to manipulate discussions and suppress viewpoints.
Slashdot's system was superior because mod points were finite and randomly dispensed. This entropy discouraged abuse by design—as opposed to making it a key feature of the site.
It's the Achilles' heel of Reddit and every site that attempts to emulate it.
Critically, Slashdot also had a meta-moderation system, where users were asked to judge moderation activity to confirm whether it was sensible, fair, and so on. I'd like to believe that system played a vital role in stopping abuse of the moderation system. It was way ahead of its time.
I've been advocating for a while now that HN could use meta-moderation at least on flagging activity, so it can stop giving flagging powers to users who are using it for reasons other than flagging rulebreaking.
Scams (romance scams or convincing people to run some code on their machine), influence operations by an intelligence agency, or advertising a product.
tirreno guy here; we develop an open-source fraud prevention / security platform (1).
Sometimes there is no clear explanation for fake account registration. Perhaps they were registered to be actively used in the future, as most fraud prevention techniques target new account registration and therefore old, aged accounts won't raise suspicion.
Slightly off-topic, but there are relatively new "services" that offer native brand mentions in Reddit comments. Perhaps this will soon be available for HN as well, and warming up accounts might be needed for that purpose.
Some of the AI comments end with a link to something they're plugging. "If you'd like to learn more about this I have a free guide at my website here". Those get flagged quickly.
Other accounts might be trying to age accounts and dilute their eventual coordinated voting or commenting rings. It's harder to identify sockpuppet accounts when they've been dutifully commenting slop for months before they start astroturfing for the chosen topic.
I'd expect everything. HN isn't some local forum but a place where opinions form and spread, and they reach many influential and powerful (now or in the future) people. Heck, there are sometimes major articles in the general news about what's happening here.
To reverse the argument - it would be amateurish and plain stupid to ignore it. The barrier to entry is very low. Politics, ads, mildly swaying opinions about some recent clusterfuck by popular megacorp XYZ, just spying on people - you have it all here.
I don't know how dang and crew protect against this; I'd expect some level of success, but 100% seems unrealistic. Slow and steady mild infiltration, either by AI bots or by humans from the GRU and similar orgs who have this literally in their job description.
Y'know what's fucked up? I knew within the first few sentences that you're doing that on purpose, but still found myself wondering if you're an LLM. I mean, I knew you weren't, but the question is already so deeply ingrained at this point - and then you use the bullet points to boot...
This loss of trust is getting tiresome. Depending on context, we've likely all wondered if something is astroturfed, but with the frequency increase from LLMs it's never really possible to not have it somewhere in mind.
I'm proud? to say I've gotten the 'are you using an LLM' question in a meeting when doing off the cuff fluent corpo jargon too.
To date, I've never used an LLM directly. I find them deeply repellent, and I've yet to be convinced that there exists a sufficiently tuned prompt that will make me not hate their literally 'mid' output.
Loss of trust though, that's a societal issue of this gilded age of grifters and scammers. Until we have a system of accountability and consequences for serial lying, we're gonna drown in this shit. LLMs are jet fuel for our existing environment of impunity.
I worked for GitHub for a time. There was a cultural abhorrence of the diaeresis, it was considered reader-hostile and elitist. I refused to coöperate with that edict internally, although I grant that every company has the right to micro-manage communications with the public.
It exists to indicate how a word is pronounced. Naïve is a better example IMO, cooperation feels too familiar.
Non-native speakers might see something like "nave" instead of "nigh-eve" unless it is clear that there is a stress that breaks out of the diphthong.
I don't think style guides are (usually) about absolute correctness, but relative correctness. A question is asked, a decision needs making, someone makes it, and now a team of individuals can speak with a consistent voice because there's a guideline to minimize variation.
IIRC its use is to distinguish vowels that belong to separate syllables from vowels which form a diphthong. I think this could be beneficial to language learners, giving them a hint that cooperate is pronounced "ko ah puh rayt" instead of "ku puh rayt", and likewise naïve as "nah eev" rather than "nayv" or "nighv".
Yes. To be fair, I was always a barbarian who just typed a hyphen in place of an em dash and figured that was good enough. The only REAL em-dashes in my pre-AI writing are the result of autocorrect.
I was going to say that I respect it, but find it utterly absurd that they do that. But your comment made me look it up again—I had no idea it was just obsolete/archaïc (except in the New Yorker), I'd thought it was a language feature their 'style' guide had invented.
Dutch does this. Idea is idee, with the e doubled to show it's a long vowel. We make plurals by adding "en". One idee, two... ideeen? Idewhat? So the dots differentiate where the sound changes (long e to short e): ideeën. Approximate pronunciation could be "ID an"
Fun fact: if you have the audacity to spell an SMS correctly, you can fit only about 70 characters in it. One character like "ë" converts the whole message into a multibyte encoding, instead of just adding the dots to that one character. The same goes for the classic spelling of naïve in English. (We don't dots-ize that one in Dutch, because "ai" is not a single sound like "ee" is, so there's no confusion possible; that's purely English.) I believe in Hanlon's razor, so it's probably a coincidence that whoever cooked up this terrible encoding scheme made carriers a lot of money, but I do wonder if that has anything to do with the bug still existing to this day!
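The mechanics can be sketched in a few lines. This is a rough illustration, not a real SMS stack: GSM_BASIC below is an abbreviated stand-in for the GSM 03.38 default alphabet (which notably contains è and é but not ë), and the function name is made up.

```python
# Illustrative subset of the GSM 03.38 7-bit default alphabet.
GSM_BASIC = set(
    "@£$¥èéùìòÇØøÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑÜ§¿abcdefghijklmnopqrstuvwxyzäöñüà"
    "\n\r"
)

def sms_capacity(text: str) -> int:
    """Characters that fit in one SMS segment for this message's encoding."""
    if all(ch in GSM_BASIC for ch in text):
        return 160   # whole message fits the 7-bit GSM encoding
    return 70        # one non-GSM character forces the whole message to UCS-2

print(sms_capacity("twee ideeen"))   # 160: every character is in the GSM set
print(sms_capacity("twee ideeën"))   # 70: the 'ë' re-encodes the entire message
```

The key point is the all-or-nothing check: the encoding is chosen per message, not per character, which is why a single diaeresis costs 90 characters of capacity.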
I noticed a similar trend a couple of weeks ago so I auto-hide green comments now. I also autohide all top 1000 user accounts but it strikes me that perhaps I should also choose a “user signed up on $date” filter that precedes OpenClaw.
Does this comment break HN for anyone else? I can press "next" on any other post, but not this one. And in the next post, pressing "prev" does not scroll to this one. It does nothing. Prev works fine when pressed on this (or any other) post
It's rendering visibly narrower than the big dash up thread for me, on FF on Android. (Maybe HN's stripping one or more of the combining chars though, so it's not actually showing what you meant in full?)
The use of em dashes is a human right. I ask that people not discriminate against em-dash users—we should be a protected class—and I refuse to abandon them. Perhaps I’ll have one engraved on my tombstone. He died doing what he loved—dashing.
I encourage people to discriminate against me because I write like an educated African who works annotating AI training material.
Why not? I am a descendant of Africans. I am a mildly successful author by tech nerd standards. I was educated in the British Public School tradition, right down to taking Latin in high school and cheering on our Rugby* and Cricket teams.
If someone doesn't want to read my words or employ me because I must be AI, that's their problem. The truth is, they won't like what I have to say any more than they like the way I say it.
I have made my peace with this.
———
Speaking of Rugby, in 1973 another school's Rugby team played ours, and almost the entire school turned out to watch a celebrity on the other school's team.
His name was Andrew, and he is very much in the news today.
Funny thing is I started using them in the last 5 or 6 years myself in place of commas where I wanted to interject some extra info. Of course I'm lazy and don't bother typing the actual em dash, I just use a regular dash. Now I feel gross using them because I don't want people thinking I turned my brain off.
I have always used double-dashes instead of emdashes, and it annoys me when software "auto-corrects" them into emdashes. Moreso since emdashes became an AI tell.
I also see AIs use emdashes in places where parentheses, colons, or sentence breaks are simply more appropriate.
(2) I do recommend taking one minute to dash a note off to [email protected] if you see suspicious patterns. Dang and our other intrepid mods are preternaturally responsive, and appear to appreciate the extra eyeballs on the problem.
I sent them an email a few days ago about the state of /noobcomments.
This wasn't really intended as a "wow, dang is sure sleeping on the job" so much as an interesting observation on the new bot ecosystem.
I also feel like there's a missing discussion about the comment quality on HN lately. It feels like it's dropped like crazy. Wanted to see if I could find some hard data to show I haven't gone full Terry Davis.
Is there even an incentive to optimize for such signals, though? Em-dashes have been a known indicator of AI-generated text for a good while, and are still extremely prevalent. While someone who doesn't like AI slop and knows what to look out for will notice and call out obvious AI comments, the unfortunate truth is that the majority of people simply cannot tell, and even among those who can, many don't care.
Obvious AI-generated posts and articles make it to the front page on a daily basis, and I get the impression that neither the average user nor the moderation team see that as a problem at all anymore.
If I see an em-dash in a comment I stop reading and I've seriously considered setting up a filter across multiple sites to remove any comments containing one.
I know there are legitimate usecases for the em-dash, but a few paragraphs (at most) of text in an HN/Reddit comment? Into the trash it goes.
Actually I love the — ever since my first Mac, I have enjoyed the finer characters of typography. It’s much easier to access on a Mac keyboard. Not saying the proliferation of AI doesn't have that as a signature, like the weird phrasing, but at least allow for the few mammals who like to indulge.
(author) I saw a 32:1 rate of em-dashes last night when I just eyeballed the first 3 pages of /newcomments and /noobcomments. So I'm not sure how stable this is over time.
This is probably the time to add some invitation system like GMail had in the beginning. Or make a shade for accounts <1yr. Or something else, before things get too mixed.
The issue with creating some hidden maturity heuristic for accounts is that it will be gamed just the same as any other, except that using age alone is the simplest heuristic to game. You can simply do nothing for incremental periods of time and then begin testing aged accounts to roughly determine the minimum age an account must reach to become "trusted".
Bot prevention is a very difficult constant game of cat and mouse, and a lot of bot operators have become very skilled at determining the hidden metrics used by platforms to bless accounts; that's their job, after all. I've become a big fan of lobste.rs' invitation tree approach, where the reputation of new accounts rides on the reputation of older accounts, and risks consequence up the chain. It also creates a very useful graph of account origin, allowing for scorched earth approaches to moderation that would otherwise require a serious (and often one-off) machine learning approach to connect accounts.
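The invitation-tree idea can be sketched in a few lines. This is a hypothetical miniature (the account names and the `invited_by` structure are made up for illustration, not lobste.rs' actual implementation): because every account records its inviter, a spam ring can be removed as a whole subtree in one pass.

```python
from collections import defaultdict

# Hypothetical invitation tree: each account maps to who invited it.
invited_by = {
    "alice": None,          # founding account
    "bob": "alice",
    "carol": "alice",
    "spammer1": "carol",
    "spammer2": "spammer1",
}

def subtree(root: str) -> set[str]:
    """All accounts descending from `root` in the invitation tree."""
    children = defaultdict(list)
    for user, inviter in invited_by.items():
        if inviter is not None:
            children[inviter].append(user)
    found, stack = set(), [root]
    while stack:
        user = stack.pop()
        found.add(user)
        stack.extend(children[user])
    return found

# Scorched-earth moderation: ban carol and everyone she (transitively) invited.
print(sorted(subtree("carol")))  # ['carol', 'spammer1', 'spammer2']
```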
I just took a look at /noobcomments and wow, there's even a comment where a person argues with AI instead of, you know, using their own brain. It was obvious it was AI, since it was formatted with markdown.
I wanted to point out that em dashes are autocompleted by the iOS keyboard, so without more details the false positives and true negatives might overlap. I think a better indicator would be to only detect em dashes with preceding and following whitespace characters, along with that user's general Unicode usage.
Additionally, lots of Chinese and Russian keyboard tools use the em dash as well, when they're switching to the alternative (en-US) layout overlay.
There's also the Chinese idiom symbol in UTF8 which gets used as a dot by those users a lot, so that could be a nice indicator for legit human users.
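A sketch of the whitespace rule proposed above (just the heuristic as stated, nothing more): count only em dashes that sit between whitespace, rather than every occurrence of the character.

```python
import re

# Only flag " — " (em dash with surrounding whitespace), not "word—word",
# which keyboard autocomplete commonly produces.
SPACED_EM_DASH = re.compile(r"\s—\s")

def spaced_em_dashes(text: str) -> int:
    return len(SPACED_EM_DASH.findall(text))

print(spaced_em_dashes("word—word"))           # 0
print(spaced_em_dashes("a claim — an aside"))  # 1
```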
edit: lol @ downvotes. Must have hit a vulnerable spot, huh?
I think there is a baseline number of human users who, for one reason or another, use em-dashes, but this doesn't explain why they're 10x more prevalent in green accounts.
> I think there is a baseline number of human users who, for one reason or another, use em-dashes, but this doesn't explain why they're 10x more prevalent in green accounts.
I'm not trying to negate the fact. I'm just pointing out that a correlation without another indicator is not evidence enough that someone is a bot user, especially in the golden age of rebranded DDoS botnets as residential proxy services that everyone seems to start using since ~Q4 2024.
I’ve had this sense that HN has gotten absolutely inundated with bots in the last few months.
Is it possible to differentiate between a bot, and a human using AI to 'improve' the quality of their comment where some of the content might be AI written but not all? I don't think it is.
> human using AI to 'improve' the quality of their comment
I want to hear people in their own voice, their own ideas, with their own words. I have no interest in reading AI generated comments with the same prose, vocabulary, and grammar.
I don't care if your writing is bad.
Additionally, I am sceptical that using AI to write comments on your behalf creates opportunities for self-improvement. I suspect this is all leading to a death of diversity in writing where comments increasingly have an aura of sameness.
AI post "improvements" are the most annoying thing. I see more and more people doing it, especially when posting reviews/experiences with things, and they always get called out for it. They always justify it with "AI helped me organize what I wanted to say." Like man, you're having an AI write about an experience it didn't have and likely didn't even proofread it. Who knows what BS it added to the story. Even disorganized and misspelled stories are better than AI fantasy renditions that are 20 times longer than they need to be.
I find the bigger problem with online comments are that people repeat the same comments and "jokes" over and over and over again. Sure we had those with YouTube 15 years ago when people always spammed "first!" and "who is listening in <year>?" but now it's gotten worse and every single comment is now just some meme (especially on Reddit) or some kind of "gotcha"...
Not exactly; bot farms can still be built with poor people's IDs through the black market. I don't know what the solution is going to be, but at some point we might be forced to accept the reality that, on the internet, humans and AI won't be distinguishable anymore, and adjust our services regardless of whether the client is a person or a machine.
I just assume if any comment sounds like an ad it's a bot. All the comments like "I'm 10x faster with Claude Opus 4.6!" or "Have you tried Codex with ChatGPT 5.X? What a time to be alive!" can be lumped in the bot bin.
I was thinking of how to create a UX around quantifying or qualifying AI use. If products revealed that users had used in-app AI to compose their responses, they might respond by doing it outside the app and pasting it in. If you then labeled pasted text as AI they might use tools to imitate typing. And after all that, you might face a user backlash from the users who rely on AI to write.
I don't personally care about the distinction especially since AI usually 'improves' things by making it more verbose. Don't waste tokens to force me to read more useless words about your position - just state it plainly.
If you are suspicious, look at comment history. It's usually fairly obvious because all comments made by LLM spambots look the same, have very similar structure and length. Skim ten of them and it becomes pretty clear if the account is genuine.
I'm more worried about how many people reply to slop and start arguing with it (usually receiving no replies — the slop machine goes to the next thread instead) when they should be flagging and reporting it; this has changed in the last few months.
I'm never suspicious though. One of the strange, and awesome, and incredibly rare things about HN is that I put basically zero stock in who wrote a comment. It's such a minimal part of the UI that it entirely passes me by most of the time. I love that about this site. I don't think I'm particularly unusual in that either; when someone shared a link about the top commenters recently there were quite a few comments about how people don't notice or how they don't recognize the people in the top ranks.
The consequence of this is that a bot could merrily post on here and I'd be absolutely fine not knowing or caring if it was a bot or not. I can judge the content of what the bot is posting and upvote/downvote accordingly. That, in my opinion, is exactly how the internet should work - judge the content of the post, not the character of the poster. If someone posts things I find insightful, interesting, or funny I'll upvote them. It has exactly zero value apart from maybe a little dopamine for a human, and actually zero for a robot, but it makes me feel nice about myself that I showed appreciation.
I don't understand the purpose of these bots. Nihilism? Vandalism? At first I was doubtful when people said such-and-such comment was AI generated; I didn't understand the goal or the motives, so I thought it couldn't be. But lately I understood how dead wrong I was: we are submerged. I came to realize that we are being eaten by a sea of these useless comments.
the motive is probably more depressing. a normal human who just wants human interaction. people interacting with something "you" wrote just feels nice and people like that stuff.
The part that doesn't make sense to me is: Why? As in what are the incentives to use AI to write comments on HN? This is not a platform like Youtube or X where views get you money. Is this just for internet karma?
I think it's just people experimenting with conversational bots. If you can get your bot to participate in a conversation on HN without being identified as a bot then it's better than those that do.
People have posted their blogs here before and gotten the HN hug of death plus a few hundred comments. It's 2026 and not 2016; HN is a much larger platform than people seem to think it is and HN has significant eyes to be shifted if your posts reach the front page. And given how cheap it is to throw bots at whatever site has open registration it doesn't surprise me to see manipulation here.
I know on reddit since basically the very beginning there has been a market for accounts with authentic but anodyne histories. It ends up being easier to make them yourself and then just occasionally use them for whatever guerilla marketing, astroturf campaign, or propaganda operation is your actual goal. But still until recently you pretty much had to pay people to sit and post on social media on a bunch of accounts to generate these histories.
This use was one of the first things that occurred to me when LLMs started getting genuinely good at summarizing texts and conversations. And I assume a fair bit of this has always happened on HN too. I've never moderated here obviously so I have no first hand insight but the social conventions here are uniquely ripe for it and it has a disproportionate influence on society through the dominance of the tech industry, making it a good target.
You can turn off iOS automatically converting dashes to em-dashes. This also turns off smart quotes, which, when used, convert any SMS you send from normal GSM-7 (7-bit) encoding to a 16-bit encoding (UCS-2), doubling the number of SMS messages you're sending in the background (even though they're stitched together to look like a single message).
To turn off Smart Punctuation: Home > Settings > General > Keyboard > Smart Punctuation > Off.
My truth is that the LLM usage of em-dashes doesn’t seem excessive. If anything, the kind of text generated by LLMs (somewhat informal, expressive) calls for em-dashes at a higher frequency.
There is one thing I am most scared of, and that is believing a comment, video, or picture is AI generated when it wasn't.
There is no real AI detection tool that works.
When we see something like em-dashes, it's simply the average of the text the models trained on. If you fall into one of the averages of a model, you're basically part of the model output. Yikes.
I had a past life of drumming up community comments for engagement: the only thing that's changed is that humans are getting lazy and using AI. Fake comments have always been a thing.
I'm sure you can't share details but would be cool to hear more about it generally speaking, what worked and not etc. Especially if it involved HN.
Our company is being attacked rn in tech media and at least some of it, gut feeling wise, seems obviously sponsored / promoted by competitors. I know that's not surprising, but never watched it happen from this side before.
If we are OK with flooding the world with AI-generated software, I find it funny to reject the flood of comments or even articles written by AI. Can't have your cake and eat it too, or something like that.
Listen, I fully support your right to buy and use whatever you want from Priscilla’s or Adam & Eve. Keep it consensual and not in public view though, okay?
AI use is similar. Ask it to do whatever writing or text wrangling you want, but please show the public the sanitized version.
700 is actually a pretty good sample size unless you are looking at some tiny crosstab, or there’s some skew (which you won’t naively scale your way out of anyway).
It is also interesting to note that the comparison is between recent comments and recent comments by new users. So, I guess this would take care of the objection that em-dashes (a perfectly fine piece of punctuation) have just been popularized by bots, and now are used more often by humans as well.
Maybe there is a bot problem. Seems almost impossible to fix for a site like this…
I think what a larger sample size would do would be to help capture changes over time. Humans tend to be more active certain times of days, whereas bots don't tend to do that.
I used to love using em-dashes in my texts, especially in titles. Now I am way too afraid of appearing to use an LLM, even while I do my best to write everything myself :')
I had no idea what I was using were called “EM-dashes” until the AI bubble. I just used them to reflect pauses in my speech for tangents - an old habit from my IRC days.
Incidentally, some folks reported my stuff for potential AI generation and I had to respond to the mods about it. So that was kinda funny, if also sad to hear that some folks thought I was a bot.
I’m a dinosaur, not a robot dinosaur. I’m nowhere near that cool, alas.
But the em-dash is a different character. I think even those that use a pause would just opt for - on their keyboard, whereas the em-dash — requires additional work on most (all?) keyboard layouts. It's _not_ more work for an AI though hence why it's a tell.
No, there are actually four different punctuation marks, all of which look remarkably similar to the untrained eye.
1. We have the hyphen, which is most commonly used to create multi-part words, such as one-and-one-thousand.
2. We have the EN-DASH, which is most commonly used to denote spans of ranges. As an example, Barack Obama was President 2009–2017.
3. Then we have the recently maligned EM-DASH, which can be used in place of a variety of other punctuation marks, such as commas, colons, and parentheses. Very frequently, AI will use the em-dash as a way to separate two clauses and provide forward motion. AI uses it for the same reason that writers do: the em-dash is just a nicer punctuation mark compared to the colon.
4. Lastly, we have the minus sign, which is slightly different than the hyphen, though on most keyboards they're combined into the hyphen-minus.
By the by, they're called the em-dash and the en-dash because they match the length of an uppercase M or N, respectively.
It is probably even a hyphen-minus, so called because on most early keyboards one character had to do to represent both a hyphen and a minus. In Unicode, there is a separate code point for an unambiguous hyphen. There is also a non-breaking hyphen as well as the various dashes discussed here.
And "--" is absolutely just two hyphen-minuses, not an em-dash (—).
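The look-alikes in the list above really are distinct code points; a quick way to see that is to print them:

```python
# The look-alike dash characters, by Unicode code point.
dashes = [
    ("hyphen-minus (keyboard)", "-"),  # U+002D
    ("unambiguous hyphen",      "‐"),  # U+2010
    ("en dash",                 "–"),  # U+2013
    ("em dash",                 "—"),  # U+2014
    ("minus sign",              "−"),  # U+2212
]
for name, ch in dashes:
    print(f"{name:24} U+{ord(ch):04X}")
```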
As a typography nerd, I’m upset that my pedantry may get me labelled as a bot. (Yes, I just used a typographic apostrophe instead of a straight single quote.)
Yeah, same. I use an extended keyboard layout on my PC. I'm so used to it I have to actively decide against using proper quotes and dashes and whatnot. I don't bother on mobile, though.
Every time someone states they stop reading when they encounter proper typography, I feel attacked.
I learned just right now that this isn't the default. I set my bookmark to HN in like 2011 before making an account, and apparently it's that one. I didn't realize that wasn't just the basic homepage but with a weird address for some reason.
It makes it much more fun to imagine a room full of robots in overcoats trying to pass off as human, but doing a terrible job due to the audible "clanks" betraying them from beneath the coat.
Spaces like HN then become a cacophony of clankers clanking as their numbers increase
It has been obvious since ChatGPT that the internet, including HN, will be flooded with AI generated commentary, drowning out real peoples' voices (soon undetectable). How this is surprising to anyone is a mystery.
This user [0] is clearly a bot and has been shadowbanned, but some of its comments get vouched because they're pretty good. I don't see how you solve that problem!
The fear is that AI-generated comments will collectively promote an agenda, often a political or exploitative agenda, on a scale that humans can't match or hope to counter.
What could help is a careful clique hunting algorithm to accurately identify and delete the entire clique.
doesn't really mean anything, Mac randomly autocorrects dashes to em-dashes (caused me a world of pain once when it did that in a GUID in a config file)
Karma aside, flooding the comments with a chosen narrative via army of bots seems like it's already happening. I suppose the bots can also do voting rings, but they don't necessarily need to.
Yeah, right? Not one ever actually turned out to be true!
That conspiracy about billionaires, who supposedly own all of western media, having deliberately created an environment in which anyone who expresses even the remote idea of a conspiracy gets discredited, is also not true!
Would be interesting to see "fastest growing accounts in last N months" or something similar. I'm guessing the ones that are actually humans would be closer to the top than the bottom, but maybe HN users aren't better than the average person to detect AI or not.
One solution is to get rid of anonymity online, enforce validation of identity. Every human only gets 1 account. And then we still ban people that use AI.
Might take a bit but eventually we'll have filtered out all the grifters.
Getting rid of anonymity is in time going to lead to getting rid of the platform, so do it if you're feeling suicidal. People seek real anonymity for good reason. Not everything should follow them in life or for life.
I've been wondering too what the solution would be. If the bots were actually helpful, I wouldn't care, but they always push an agenda, create noise, or derail discussions instead.
For now maybe all forums should require some bloody swearing in each comment to at least prove you've got some damn human borne annoyance in you? It might even work against the big players for a little bit, because they have an incentive to have their LLMs not swearing. The monetary reward is after all in sounding professional.
Easy enough for any groups to overcome of course, but at least it'd be amusing for a while. Just watching the swear-farms getting set up in lower paid countries, mistakes being made by the large companies when using the "swearing enabled" models and all that.
HN could crank proof-of-work schemes to the maximum: something like needing to burn 15-20 minutes of a 16-core CPU to post a single comment. It would be infuriating for users, but not cheap for bots.
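For the curious, the core of such a scheme is tiny. This is a minimal hashcash-style sketch (my own illustration, not anything HN does): the client must find a nonce whose hash has a given number of leading zero bits, and each extra bit doubles the expected work, so difficulty can be dialed up arbitrarily.

```python
import hashlib

def solve(comment: str, bits: int) -> int:
    """Grind nonces until sha256(comment:nonce) has `bits` leading zero bits."""
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{comment}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(comment: str, nonce: int, bits: int) -> bool:
    """Checking a solution costs a single hash — cheap for the server."""
    digest = hashlib.sha256(f"{comment}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

nonce = solve("my comment", bits=16)    # ~65k hashes expected
print(verify("my comment", nonce, 16))  # True
```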
Something about correlation and causation of magic gotcha signals. Text may appear generated to a reader but there's no smoking gun evidence that can disambiguate fact from hypothesis. Even intuition isn't evidence.
Perhaps there needs to be some sort of voluntary ethical disclosure practice to disclaim text as AI-generated with some sort of unusual signifiers. „Lower double quotes perhaps?„
> How many of those are bots and how many of those are "fuck you, clankers" humans—like me?
Maybe the em dash is the self censorship/deletion mechanism that we've all been waiting for. Better than having to write pill subscription ads, I suppose.
We don't ban accounts for criticizing AI (or anything else). We ban them for breaking HN's rules, which you have a long history of creating accounts to do.
I dunno this en versus em dash stuff, I just use the minus sign on my keyboard.
>I put the em dash on modifier+dash
This is the default on Macs
I also like …
This is like ruining swastikas and loading rainbows
e.g. "The body of the template is parsed, but not actually type-checked until the template is used." -> "but not typechecked until the template is used." The word "actually" here has a pleasant academic tone, but adds no meaning.
I'm totally fine with the word itself, but not with overuse of it or placing it where it clearly doesn't belong. And I did that a lot, I think. I suspect if you reviewed my HN comments, you'd find them littered with 'actually'. Also "I think...", "I feel like..." and other kinds of... passive, redundant, unnecessary noise.
Like, no kidding I think the thing I'm expressing. Why state that?
Another problem with "actually" is that it can seem condescending or unnecessarily contradictory. While I'm often trying to fluff up prose to soften disagreement (not a great habit), I'm inadvertently making it seem more off-putting than direct yet kind statements would. It can seem to attempt to shift authority to the speaker, if somewhat implicitly. Rather than stating that you disagree along with what you believe or adding information to discourse, you're suggesting that what you're saying somehow deviates from what the person you're speaking to would otherwise believe or expect. That's kind of weird to do, in my opinion. I'm very guilty of it, though I never had the intent of coming across this way.
It can also seem kind of re-directive or evasive at times, like you don't want to get to the point, or you want to avoid the cost of disagreement. It's often used to hedge statements that shouldn't be hedged. This is mainly what led me to realize I should use it less. I hedge just about everything I say rather than simply state it and own it. When you're a hedger and you embed the odd 'actually' in there, you get a weird mix of evasive or contradictory hedging going on. That's poor and indirect communication.
One reason might be to acknowledge that you're not being prescriptive, but leaving room for a subjective POV in situations that call for it.
Likewise, the GP's use of "actually" acknowledges the contrast between what one might expect (that some preliminary type-checking might happen during initial parsing) and what in fact happens (no type checks occur until the template is used.) It doesn't seem out of line in that case.
"The body of the template is parsed, but, contrary to popular belief, not actually type-checked until the template is used."
One can omit the "contrary to popular belief", but the "actually" would still need to stay, as it hints at the "contrary to popular belief".
It's not as simple as "it's not needed there".
The lack of recognition of perceived Noise as an actual part of the Signal, eventually destroys the Signal.
Lately "I mean" has been jumping out at me.
It really only bothers me when I notice I've used it for multiple comments in the same thread or, worse, multiple times in the same comment.
I've also pretty much dropped "just" from my vocabulary when I'm talking about an alternative way to do something.
I have a quick question: what counts as a "new" account in your analysis?
I ask because I have been called AI sometimes, partly because of the "age" of my account (and I reasonably clash over it afterwards); for context, I joined in 2024.
It's 2026 now, so that's almost 2 years. Would my account be considered new within your data or not?
Another minor point, but "actually"/"real" seems to me to have risen in usage over 5 times. These look like exactly the words that would be used to defend AI; I am almost certain I saw the sentence "Actually, AI hype is real and so on..." at least once, maybe more than once.
Now for the word "real", I can't say this for certain, so please take it with a grain of salt, but we Gen Z love saying it, and I am certain I have seen comments on Reddit that just say "real". OpenAI and other model makers definitely treat Reddit data as some sort of gold, for what it's worth, so much so that they have special arrangements with Reddit.
So to me, it seems the data has been poisoned with "real". I haven't really observed this phenomenon myself, but I will try to take a close look at whether ChatGPT is more likely to say "real" or not.
Fwiw, I asked ChatGPT to "defend the position that AI hype sucks" and it responded with the word "real"/"reality" three times in total.
(another side fact but real is so used in Gen-z I personally watch channel shorts sometimes https://www.youtube.com/@litteralyme0/shorts which has thousands of videos atp whose title is only "real", this channel is sort of meme of "ryan gosling literally me" and has its own niche lore with metroman lol)
It's so sad to me that good typographical conventions have been co-opted by the zeitgeist of LLMs.
If you'd like more tips on writing I'd be happy to help.
Edit: I take that back. I'm going to print and frame this comment. It stands on its own well enough, and I'm the only one who's going to see it.
Well, I haven't always—just for maybe 20 years.
Then came LLMs, and there was so much talk of them using em dashes. A few weeks ago, I finally decided it's time and learned the difference. (Which took all of 2 minutes, btw.) Now I love em dashes and am putting them everywhere I can! Even though most people now assume I'm using AI to write for me.
I defer to Merriam-Webster and/or Harbrace (rather than TCMoS) on punctuation usage.
https://www.merriam-webster.com/grammar/em-dash-en-dash-how-...
Searching for a magic signal panacea is ultimately fruitless. But there are other policy and technological obstacles that could make bot interactions more difficult. For example: require an official desktop or mobile app for interaction; demarcate any text that is copy-pasted; throw an error for any input typed inhumanly fast; require a micropayment of, say, $0.10 to comment. While these things would break the interaction style and flexibility for a lot of innocent human users, they would throw big wrenches into some, but not all, of the ways bots interact.
This wouldn't be an issue if mobile users or Windows users were exercising it too, but it's just Mac owners and LLMs. And Mac owners are probably the minority of instances where it is used.
so now, i just use double dashes for everything.
(shit, i wonder when llms will start doing this instead of normal em)
It's like being named Michael Bolton and watching a singer rise in fame named Michael Bolton.
Why should I change my style?
For those who don’t know the reference:
https://www.youtube.com/watch?v=qI1NfFExOSo
https://en.wikipedia.org/wiki/Office_Space
But anyways, you can't really control how people see your stuff. If you're human, I think the humanness will come through anyway, even if you have some particular structure or happen to use em-dashes sometimes. They're so easy to prompt around anyway that the really tricky LLM output to detect by sense and reading is the stuff where the prompter has been trying to sneakily make it more human.
But now, I have to be so picky about when I use them, even when I think it's the perfect punctuation mark. I'll often just resort to a single hyphen with spaces around. It's wrong, but it doesn't signal someone to go "AI AI AI!!"
(Until a few years ago I probably mostly only saw them in print, and I suppose it just never occurred to me that I liked them in particular vs. just the whole book being professionally typeset generally.)
If AI was writing like everyone else, we wouldn't be talking about this. But instead it writes like a subset of people write, many of them only some of the time, as a conscious effort. An effort that now makes what they write look like lower quality.
Say what you want about marketing-isms of your typical LLM, they have been trained and often succeed at making legible, easy to scan blobs of text. I suspect if more LLM spam was curated/touched up, most people would be unable to distinguish it from human discourse. There are already folks commenting on this article discussing other patterns they use to detect or flag bots using LLMs.
This is the first time I've ever heard the character ";" referred to as such. It's always been "semi-colon" to me, is this a region/culture difference?
I'm not saying you're wrong, I find it interesting.
i call it a super comma when it's separating a list with commas within the sets.
so if i am listing colors like green, blue, red; foods like apple, orange, strawberry; and seasons like winter, summer, fall.
it's one use case for an em-dash, because whatever you have inside it has commas in the phrase.
square and rectangle situation. a supercomma is a subset of semicolon.
Em-dash matches how I speak and think-- frequently a halt, then push onto the digression stack, then pop-- so I use them like that.
Em-dash matches how I speak and think (frequently a halt, then push onto the digression stack, then pop) so I use them like that.
Em-dash matches how I speak and think, a halt, then push onto the digression stack, then pop, so I use them like that.
Em-dashes keep everything on the same level of importance in my brain.
Commas don’t feel as powerful. To be fair to the comma I’d probably do this:
Em-dash matches how I speak and think: A halt, then push onto the digression stack, then pop. So I use them like that.
Edit: I accidentally used an em-dash in the word em-dash. Interestingly HN didn’t consider changing the dash to be a change in my text so didn’t update it. I had to make a separate change and take that change out for my dash change to stick.
You can explore the underlying data using SQL queries in your browser here: https://lite.datasette.io/?url=https%253A%252F%252Fraw.githu... (that's Datasette Lite, my build of the Datasette Python web app that runs in Pyodide in WebAssembly)
Here's a SQL query that shows the users in that data that posted the most comments with at least one em dash - the top ones all look like legitimate accounts to me: https://lite.datasette.io/?url=https%3A%2F%2Fraw.githubuserc...
> select user, source, count(*), ...
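The truncated query above presumably does something like the following. This is a sketch only: the `comments` table and its `"user"`/`"text"` column names are assumptions for illustration, not the actual Datasette schema behind the link.

```python
import sqlite3

# Tiny in-memory stand-in for the comments table.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE comments ("user" TEXT, "text" TEXT)')
conn.executemany(
    "INSERT INTO comments VALUES (?, ?)",
    [
        ("alice", "I love typography — always have."),
        ("alice", "Dashes — everywhere — all the time."),
        ("bob",   "No fancy punctuation here."),
        ("carol", "One em dash — that is all."),
    ],
)

# Count comments containing at least one em dash, grouped per user,
# which is roughly what the linked query appears to do.
rows = conn.execute(
    """
    SELECT "user", COUNT(*) AS n
    FROM comments
    WHERE "text" LIKE '%—%'
    GROUP BY "user"
    ORDER BY n DESC
    """
).fetchall()
print(rows)  # [('alice', 2), ('carol', 1)]
```

Users who never type an em dash (here, "bob") simply don't appear, which is why sorting by the count surfaces the heaviest em-dash users at the top.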
it's clear that every single outlier in em-dash use in the data set is a green account.
Ellipses were never part of the analysis.
There is precedent here.
If you look back to the 90s and see someone using a racist slur, you fill in the gaps and assume they were using it because they were racist.
Will people in 30 years look back to today and judge those who showed disdain for people who rely on AI to write for them?
Even if clanker becomes a no-no word 30 years from now, it seems beyond the realm of possibility that people who hated clankers in 2026 will be looked upon harshly. Clankers aren’t a marginalized group today, they aren’t a class that needs protection.
What words are you thinking of when you say that there is precedent?
"We treat this one better because it's a house clanker instead of a field clanker"
"If the clanker acts up it knows that it gets stuck in the box"
It was meant to be funny but definitely highlighted exactly what you are saying.
[1] https://www.instagram.com/p/DVH32tTCbuT/?hl=en
For example, here's an active bot that posted 30 mins ago (as of this comment):
https://news.ycombinator.com/threads?id=aplomb1026
Examine the last two detailed comments it made and you'll see the timestamps show they were posted < 30 seconds apart:
https://news.ycombinator.com/item?id=47155655
https://news.ycombinator.com/item?id=47155648
If it wasn't for them misconfiguring their bot and having it post so quickly, these would go by undetected and most people would engage with them. The comments themselves seem "normal" at first glance.
---
Other bots:
https://news.ycombinator.com/threads?id=dirtytoken7
https://news.ycombinator.com/threads?id=fdefitte
>this is [summary]
>not just x, it's y
>punchy ending, maybe question
Once you know it's AI it's very obvious they told it to use normal dashes instead of em dashes, type in lowercase, etc., but it's still weirdly formal and formulaic.
For example from https://news.ycombinator.com/threads?id=snowhale
"this is the underreported second-order risk. Micron, Samsung, SK Hynix all allocated HBM capacity based on hyperscaler capex projections. NAND fabs are similarly committed. a 57% reduction in projected OpenAI spend (.4T -> B) doesn't just affect NVIDIA orders -- it ripples into the memory suppliers who shifted capacity to HBM and away from commodity DRAM/NAND. if multiple hyperscalers revise down simultaneously you get a situation similar to the 2019 crypto ASIC overhang: companies tooled up for demand that evaporated. not predicting that, but the purchasing commitments question is real."
[1] https://news.ycombinator.com/threads?id=snowhale
EDIT to correct: most are not [flagged], but [dead] anyway, so probably manual moderator action or an automated anti-bot measure.
That's why. Boring, bland, etc. That account's M.O. is basically "write a paragraph that says nothing." Fwiw, I do think AI can be indistinguishable from dumb, boring people, but usually those kinds of people won't be on HN.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I'll actually post a comment or question and I'll get a reply with a bit of a paragraph of what feels like a very "off" (not 'wrong' but strangely vague) summary of the topic ... and then maybe an observation or pointed agenda to push, but almost strangely disconnected from what I said.
One of the challenges is that yeah regular users don't get each other's meaning / don't read well as it is / language barriers. Yet the volume of posts I see where the other user REALLY isn't responding to the other person seems awfully high these days.
I wonder if it is neural networks that are inherently biased, but in blind spots, and that applies to both natural and artificial ones. It may be that to approximate neutrality we or our machines have to leave behind the form of intelligence that depends on intrinsically biased weights and instead depend on logically deriving all values from first principles. I have low confidence that AI's can accomplish that any time soon, and zero confidence that natural intelligence can. And it's difficult to see how first principles regarding human values can be neutral.
I'm also skeptical that succeeding at becoming unbiased is a solution, and that while neutrality may be an epistemic advance, it also degrades social cohesion, and that neutrality looks like rationality, but bias may be Chesterton's Fence and we should be very careful about tearing it down. Maybe it's a blessing that we can't.
https://news.ycombinator.com/item?id=45322362
> First impression: I need to dive into this hackernews reply mockup thing thoroughly without any fluff or self-promotion. My persona should be ..., energetic with health/tech insights but casual and relatable.
> Looking at the constraints: short, punchy between 50-80 characters total—probably multiple one-sentence paragraphs here to fit that brevity while keeping it engaging.
> User specified avoiding "Hey" or "absolutely."
Lots more in its other comments (you need [showdead] on).
Is it ideological?
Is it product marketing in those relevant threads where someone is showcasing?
Or is it pure technical testing, playing around?
So far it hasn't happened here, but we'll see!
Incidentally, how much do they pay for a HN account that is a few years old and accumulated a few thousand Internet points?
Asking for a friend.
My relationship with writing, while improved, has been a difficult one. Part of me has always felt that there was a gap in my writing education. The choices other writers seem to make intuitively - sentence structure, word choice, and expression of ideas - do not come naturally to me. It feels like everyone else received the instructions and I missed that lesson.
The result was a sense of unequal skill. Not because my ideas are any less deserving, but because my ability to articulate them doesn't do them justice. The conceit is that, "If I was able to write better, more people would agree with me." It's entirely based on ego and fear of rejection.
Eventually, I learned that no matter how polished my writing is, even restructured by LLMs, it won't give me what I craved. At that moment, the separation of writer and words widened to a point where it wasn't about me anymore and more about them, the readers. This distance made all the difference and now I write with my own voice however awkward that may be.
Because it looks completely adequate for me. Maybe you're not the bad writer you think you are.
Slashdot's system was superior because mod points were finite and randomly dispensed. This entropy discouraged abuse by design—as opposed to making it a key feature of the site.
It's the Achilles' heel of Reddit and every site that attempts to emulate it.
I've been advocating for a while now that HN could use meta-moderation at least on flagging activity, so it can stop giving flagging powers to users who are using it for reasons other than flagging rulebreaking.
Sometimes there is no clear explanation for fake account registration. Perhaps they were registered to be actively used in the future, as most fraud prevention techniques target new account registration and therefore old, aged accounts won't raise suspicion.
Slightly off-topic, but there are relatively new `services` that offer native brand mentions in reddit comments. Perhaps this will soon be available for HN as well, and warming up accounts might be needed for this purpose.
1. https://github.com/tirrenotechnologies/tirreno
Other accounts might be trying to age accounts and dilute their eventual coordinated voting or commenting rings. It's harder to identify sockpuppet accounts when they've been dutifully commenting slop for months before they start astroturfing for the chosen topic.
They don't have anything worth saying but want people to think they do
To reverse the argument - it would be amateurish and plain stupid to ignore it. Barrier to entry is very low. Politics, ads, mildly swaying opinions of some recent clusterfuck by popular megacorp XYZ, just spying on people, you have it all here.
I don't know how dang and crew protect against this, I'd expect some level of success but 100% seems unrealistic. Slow and steady mild infiltration, either by AI bots or humans from GRU and similar orgs who have this literally in their job description.
Oh, would you look at that?
https://news.ycombinator.com/item?id=47134072
This loss of trust is getting tiresome. Depending on context we've likely all wondered if something is astro turfed, but with the frequency increase from llms it's never really possible to not have it somewhere in mind
To date, I've never used an LLM directly. I find them deeply repellant, and I've yet to be convinced that there exists a sufficiently tuned prompt that will make me not hate their literally 'mid' output.
Loss of trust though, that's a societal issue of this gilded age of grifters and scammers. Until we have a system of accountability and consequences for serial lying, we're gonna drown in this shit. LLMs are jet fuel for our existing environment of impunity.
If AI starts using the New Yorker style diaeresis (umlaut-looking thing when there are two vowels in words like coöperate) I swear I'm gonna lose it.
Is there any good argument in favor of it, or any other house style quirks for that matter, other than in-group signaling?
Non-native speakers might see something like "nave" instead of "nigh-eve" unless it is clear that there is a stress that breaks out of the diphthong.
I don't think style guides are (usually) about absolute correctness, but relative correctness. A question is asked, a decision needs making, someone makes it, and now a team of individuals can speak with a consistent voice because there's a guideline to minimize variation.
Join me in double-dash em proximates. Shows you manually typed it out with total disregard for token count and technical correctness.
I was going to say that I respect it, but find it utterly absurd that they do that. But your comment made me look it up again—I had no idea it was just obsolete/archaïc (except in the New Yorker), I'd thought it was a language feature their 'style' guide had invented.
Fun fact: if you have the audacity to correctly write an SMS, you can fit about 70 characters in an SMS. It converts the whole message into multibyte instead of only adding dots to the one character. Or if you use classic spelling for naïve in English, same issue. (We don't dots-ize that in Dutch because ai is not a single sound like ee is, so there's no confusion possible. This is purely English.) I believe in Hanlon's razor so it's probably a coincidence that whoever cooked up this terrible encoding scheme made carriers a lot of money, but I do wonder if this had anything to do with the bug still existing to this day!
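The mechanism here is that GSM-7 encoding allows 160 characters per SMS, but a single character outside the GSM 03.38 basic set (like ï) forces the whole message into UCS-2, dropping the limit to 70. A minimal sketch of that rule; note `GSM_BASIC` below is a partial stand-in for the real GSM 03.38 table, not the full set:

```python
import string

# Partial stand-in for the GSM 03.38 basic character set.
# Note it includes è/é/ù/ì/ò but NOT ï.
GSM_BASIC = set(
    string.ascii_letters + string.digits
    + " .,!?'\"-:;()\n@£$¥èéùìòÇØøÅåÆæßÉÄÖÑÜ§äöñüà"
)

def sms_limit(text: str) -> int:
    """160 chars if the whole message fits GSM-7, else 70 (UCS-2)."""
    return 160 if all(c in GSM_BASIC for c in text) else 70

print(sms_limit("naive"))  # 160
print(sms_limit("naïve"))  # 70: one ï re-encodes the whole message
```

In reality messages can also be split into concatenated segments (153 or 67 characters each), but the cliff from one accented character is the same.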
I present ⸻ the U+2E3B dash.
There is nothing to fear, MY HUMAN FRIEND!
apparently used like ellipses … to indicate part of a quote was removed.
No one wants to read your ChatGPT outputs.
...except ChatGPT fans.
Don’t mind me, just skewing the results. — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — results. (repeated several more times)
Could be an argument made for aggregating by user instead however, if some bots are found to be particularly active and skewing the data.
Shhh!
:)
Why not? I am a descendant of Africans. I am a mildly successful author by tech nerd standards. I was educated in the British Public School tradition, right down to taking Latin in high school and cheering on our Rugby* and Cricket teams.
If someone doesn't want to read my words or employ me because I must be AI, that's their problem. The truth is, they won't like what I have to say any more than they like the way I say it.
I have made my peace with this.
———
Speaking of Rugby, in 1973 another school's Rugby team played ours, and almost the entire school turned out to watch a celebrity on the other school's team.
His name was Andrew, and he is very much in the news today.
I also see AIs use em-dashes in places where parentheses, colons, or sentence breaks are simply more appropriate.
Often lean slightly pro-AI, but otherwise avoid saying much about anything.
(1) I don't recommend focusing disproportionately on one signal. They'll change, and are incredibly easy to optimize for. https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
(2) I do recommend taking one minute to dash a note off to [email protected] if you see suspicious patterns. Dang and our other intrepid mods are preternaturally responsive, and appear to appreciate the extra eyeballs on the problem.
I support this dashing recommendation.
This wasn't really intended as a "wow, dang is sure sleeping on the job" so much as an interesting observation on the new bot ecosystem.
I also feel like there's a missing discussion about the comment quality on HN lately. It feels like it's dropped like crazy. Wanted to see if I could find some hard data to show I haven't gone full Terry Davis.
Obvious AI-generated posts and articles make it to the front page on a daily basis, and I get the impression that neither the average user nor the moderation team see that as a problem at all anymore.
I know there are legitimate usecases for the em-dash, but a few paragraphs (at most) of text in an HN/Reddit comment? Into the trash it goes.
trying to remember last time I used it
Bot prevention is a very difficult constant game of cat and mouse, and a lot of bot operators have become very skilled at determining the hidden metrics used by platforms to bless accounts; that's their job, after all. I've become a big fan of lobste.rs' invitation tree approach, where the reputation of new accounts rides on the reputation of older accounts, and risks consequence up the chain. It also creates a very useful graph of account origin, allowing for scorched earth approaches to moderation that would otherwise require a serious (and often one-off) machine learning approach to connect accounts.
Additionally, lots of Chinese and Russian keyboard tools use the em dash as well, when they're switching to the alternative (en-US) layout overlay.
There's also the ideographic full stop (。) in Unicode, which Chinese users often type in place of a period, so that could be a nice indicator for legit human users.
edit: lol @ downvotes. Must have hit a vulnerable spot, huh?
That’s why the analysis was performed over time. All of those em dash sources you mentioned were present before LLM written content became popular.
I'm not trying to negate the fact. I'm just pointing out that a correlation without another indicator is not evidence enough that someone is a bot user, especially in the golden age of rebranded DDoS botnets as residential proxy services that everyone seems to start using since ~Q4 2024.
Is it possible to differentiate between a bot, and a human using AI to 'improve' the quality of their comment where some of the content might be AI written but not all? I don't think it is.
I want to hear people in their own voice, their own ideas, with their own words. I have no interest in reading AI generated comments with the same prose, vocabulary, and grammar.
I don't care if your writing is bad.
Additionally, I am sceptical that using AI to write comments on your behalf creates opportunities for self-improvement. I suspect this is all leading to a death of diversity in writing where comments increasingly have an aura of sameness.
hm, the whole internet really, youtube, reddit, twitter, facebook, blog posts, food recipes, news articles, it's getting more and more obvious
And bots reposting a trending post from like 12 years ago to farm internet points... with other bots reposting the top comments of the initial post
lets bring back Chrome's WEI while we're at it
/s
Brevity is the soul of wit.
I'm more worried about how many people reply to slop and start arguing with it (usually receiving no replies — the slop machine goes to the next thread instead) when they should be flagging and reporting it; this has changed in the last few months.
I'm never suspicious though. One of the strange, and awesome, and incredibly rare things about HN is that I put basically zero stock in who wrote a comment. It's such a minimal part of the UI that it entirely passes me by most of the time. I love that about this site. I don't think I'm particularly unusual in that either; when someone shared a link about the top commenters recently there were quite a few comments about how people don't notice or how they don't recognize the people in the top ranks.
The consequence of this is that a bot could merrily post on here and I'd be absolutely fine not knowing or caring if it was a bot or not. I can judge the content of what the bot is posting and upvote/downvote accordingly. That, in my opinion, is exactly how the internet should work - judge the content of the post, not the character of the poster. If someone posts things I find insightful, interesting, or funny I'll upvote them. It has exactly zero value apart from maybe a little dopamine for a human, and actually zero for a robot, but it makes me feel nice about myself that I showed appreciation.
What we think others around us think has a big effect on our own behavior
The incentives to use bots are many.
This use was one of the first things that occurred to me when LLMs started getting genuinely good at summarizing texts and conversations. And I assume a fair bit of this has always happened on HN too. I've never moderated here obviously so I have no first hand insight but the social conventions here are uniquely ripe for it and it has a disproportionate influence on society through the dominance of the tech industry, making it a good target.
To turn off Smart Punctuation: Home > Settings > General > Keyboard > Smart Punctuation > Off.
There is no real AI detection tool that works.
When we see something like em-dashes it's simply the average of the text the models trained on. If you fall into one of the averages of a model you're basically part of the model output. Yikes.
Our company is being attacked rn in tech media and at least some of it, gut feeling wise, seems obviously sponsored / promoted by competitors. I know that's not surprising, but never watched it happen from this side before.
AI use is similar. Ask it to do whatever writing or text wrangling you want, but please show the public the sanitized version.
It is also interesting to note that the comparison is between recent comments and recent comments by new users. So, I guess this would take care of the objection that em-dashes (a perfectly fine piece of punctuation) have just been popularized by bots, and now are used more often by humans as well.
Maybe there is a bot problem. Seems almost impossible to fix for a site like this…
Not sure which is scarier
Bye bye em-dash, we had a nice run together.
I might start using that⸻one (a bit long...)
Incidentally, some folks reported my stuff for potential AI generation and I had to respond to the mods about it. So that was kinda funny, if also sad to hear that some folks thought I was a bot.
I’m a dinosaur, not a robot dinosaur. I’m nowhere near that cool, alas.
The tell here is that you used a hyphen, not an em-dash.
This `-` is a hyphen, which I love, even if I'm fairly sure I'm not using it correctly in grammar a lot of the time.
This `--` is an EM-Dash, apparently, which is also what I never use but I also thought was just a hyphen in a different context (incorrect!).
1. We have the hyphen, which is most commonly used to create multi-part words, such as one-and-one-thousand.
2. We have the EN-DASH, which is most commonly used to denote spans of ranges. As an example, Barack Obama was President 2009–2017.
3. Then we have the recently maligned EM-DASH, which can be used in place of a variety of other punctuation marks, such as commas, colons, and parentheses. Very frequently, AI will use the em-dash as a way to separate two clauses and provide forward motion. AI uses it for the same reason that writers do: the em-dash is just a nicer punctuation mark compared to the colon.
4. Lastly, we have the minus sign, which is slightly different from the hyphen, though on most keyboards they're combined into the hyphen-minus.
By the by, they're called the em-dash and the en-dash because they match the width of an uppercase M or N, respectively.
And "--" is absolutely just two hyphen-minuses, not an em-dash (—).
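For anyone squinting at these look-alikes, the four characters in the list above really are distinct code points. A quick check (the labels mirror the numbered list, the characters are the standard Unicode ones):

```python
# The four dash look-alikes and their Unicode code points.
chars = {
    "hyphen-minus": "-",   # U+002D, the ordinary keyboard key
    "en dash":      "–",   # U+2013, ranges like 2009–2017
    "em dash":      "—",   # U+2014, the contested one
    "minus sign":   "−",   # U+2212, the mathematical minus
}
for name, ch in chars.items():
    print(f"{name}: U+{ord(ch):04X}")
```

Which is also why "--" (two U+002D characters) is trivially distinguishable from a true em dash in any text analysis.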
- Generate age so spamming a product/service is easier and the account appears more trustworthy
- Influence discussions in a particular direction for monetary gain, i.e. "I got rich on bitcoin, you'd be crazy not to invest".
- Influence discussions in a particular direction for political gain, i.e. "I went to Xinjiang and the Uyghurs couldn't be happier!"
Every time someone states they stop reading when they encounter proper typography, I feel attacked.
Show HN: Hacker News em dash user leaderboard pre-ChatGPT - https://news.ycombinator.com/item?id=45071722 - Aug 2025 (266 comments)
... which I'm proud to say originated here: https://news.ycombinator.com/item?id=45046883.
even though I used to like pointing out the difference between a hyphen and a period.
Spaces like HN then become a cacophony of clankers clanking as their numbers increase
[0] https://news.ycombinator.com/user?id=octoclaw
I just hope my writing carries enough voice and perspective that people respond, even if there's an em dash or two.
What could help is a careful clique hunting algorithm to accurately identify and delete the entire clique.
Of course, all of the above can be replaced by AI, but it would not significantly alter the status quo.
https://practicaltypography.com/hyphens-and-dashes.html
I will not allow my good practices to get co-opted as AI "smoke tests".
again with the conspiracy theories
But who knows, maybe even 17 year old accounts are being hijacked by AI now too.
Yeah, right? Not one ever actually turned out to be true!
That conspiracy about billionaires, who supposedly own all of western media, having deliberately created an environment in which anyone who expresses even the remote idea of a conspiracy gets discredited, is also not true!
None of them are true!
Not. A. Single. One.
*noms cheese pizza*
What will/can HN do about it?
If that's worth the cost... probably not?
For now maybe all forums should require some bloody swearing in each comment to at least prove you've got some damn human-borne annoyance in you? It might even work against the big players for a little bit, because they have an incentive to have their LLMs not swearing. The monetary reward is after all in sounding professional.
Easy enough for any groups to overcome of course, but at least it'd be amusing for a while. Just watching the swear-farms getting set up in lower paid countries, mistakes being made by the large companies when using the "swearing enabled" models and all that.
Perhaps there needs to be some sort of voluntary ethical disclosure practice to disclaim text as AI-generated with some sort of unusual signifiers. „Lower double quotes perhaps?„
I hate myself for saying this, but HN should consider closing new registrations for a while until we figure out what to do with this.
Maybe the em dash is the self censorship/deletion mechanism that we've all been waiting for. Better than having to write pill subscription ads, I suppose.