While I think there's significant AI "offloading" in writing, the article's methodology relies on "AI-detectors," which reads like PR for Pangram. I don't need to explain why AI detectors are mostly bullshit and harmful for people who have never used LLMs. [1]
I am not sure if you are familiar with Pangram (co-founder here), but we are a group of research scientists who have made significant progress in this problem space. If your mental model of AI detectors is still GPTZero, or the ones that say the Declaration of Independence is AI, then you probably haven't seen how much better they've gotten.
Nothing signals an invalid benchmark like a zero false positive rate. Seemingly it's pre-2020 text versus a few models' reworkings of that text. I can see this model falling apart in many real-world scenarios. Yes, LLMs use strange language if left to their own devices, and this can surely be detected. But a 0% false positive rate under all circumstances? Implausible.
EditLens (Ours) confusion matrix (rows = true label, columns = predicted label):

                   Predicted
               Human    Mix     AI
  True  Human   1770    111      0
  Label Mix      265   1945     28
        AI         0    186   1695
It looks like about 5% of human texts are marked as mixed, and some 5-10% of mixed texts are marked as AI, going by your own paper.
I guess I don’t see that this is much better than what’s come before, using your own paper.
Edit: this is an irresponsible Nature news article, too - we should see a graph of this detector over the past ten years to see how much of this ‘deluge’ is algorithmic error
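For what it's worth, the per-row rates implied by the confusion matrix quoted above can be checked directly (a quick sketch; rows are assumed to be true labels and columns predictions):

```python
# Confusion-matrix counts as quoted above (predicted Human / Mix / AI).
matrix = {
    "Human": [1770, 111, 0],
    "Mix":   [265, 1945, 28],
    "AI":    [0, 186, 1695],
}

# Normalize each row by its total to get per-true-class rates.
rates = {label: [c / sum(row) for c in row] for label, row in matrix.items()}

for label, row in rates.items():
    print(label, [f"{r:.1%}" for r in row])
# Human -> ~5.9% marked Mix; Mix -> ~11.8% marked Human, ~1.3% marked AI.
```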
It is not wise to brag about your product when the GP is pointing out that the article "reads like PR for Pangram", regardless of whether AI detectors are reliable or not.
I would say it's important to hold off on the moralizing until after showing visible effort to reflect on the substance of the exchange, which in this case is about the fairness of asserting that the detection methodology employed in this particular case shares the flaws of familiar online AI checkers. That's an importantly substantive and rebuttable point and all the meaningful action in the conversation is embedded in those details.
In this case, several important distinctions are drawn, including being open about criteria, about such things as "perplexity" and "burstiness" as properties being tested for, and an explanation of why they incorrectly claim the Declaration of Independence is AI generated (it's ubiquitous). So it seems like a lot of important distinctions are being drawn that testify to the credibility of the model, which has to matter to you if you're going to start moralizing.
There are dozens of first-generation AI detectors and they all suck. I'm not going to defend them. Most of them use perplexity-based methods, which are a decent separator of AI and human text (80-90%) but have flaws that can't be overcome, and high FPRs on ESL text.
Pangram is fundamentally different technology, it's a large deep learning based model that is trained on hundreds of millions of human and AI examples. Some people see a dozen failed attempts at a problem as proof that the problem is impossible, but I would like to remind you that basically every major and minor technology was preceded by failed attempts.
Some people see a dozen extremely profitable, extremely destructive attempts at a problem as proof that the problem is not a place for charitable interpretation.
GAN-style: just feed the output of your detection algorithm back into the LLM during training. At the end of the day the problem is impossible, but we're not there yet.
> Are you concerned with your product being used to improve AI to be less detectable?
The big AI providers don't have any obvious incentive to do this. If it happens 'naturally' in the pursuit of quality then sure, but explicitly training for stealth is a brand concern in the same way that offering a fully uncensored model would be.
Smaller providers might do this (again in the same way they now offer uncensored models), but they occupy a miniscule fraction of the market and will be a generation or two behind the leaders.
AI detectors are only harmful if you use them to convict people; it isn't harmful to gather statistics like this. They didn't find many AI-written papers, just AI-written peer reviews, which is what you would expect, since not many would generate their whole paper submissions while peer reviews are thankless work.
If you have a bullshit measure that claims some phenomenon (e.g. crime) is happening in some area, you will become biased to expect it there. It wrongly creates a spotlight effect by which other questionable measures are used to do the actual conviction ("Look! We found an em dash!")
I think there is a funny bit of mental gymnastics that goes on here sometimes, definitely. LLM skeptics (which I'm not saying the Pangram folks are in particular) would say: "LLMs are unreliable and therefore useless; they produce slop at great cost to the environment and other people." But if a study comes out that confirms their biases and uses an LLM in the process, or if they themselves use an LLM to identify -- or in many cases just validate their preconceived notion -- that something was drafted using an LLM, then all of a sudden things are above board.
Whether it’s actually 20% or not doesn’t matter, everyone is aware the signal of the top confs is in freefall.
There are also rings of reviewer fraud going on, where groups of people in these niche areas all get assigned their own papers and recommend acceptance, and in many cases the AC is part of this as well. I'm not saying this is common, but it is occurring.
It feels as if every layer of society is in maximum extraction mode, and this is just a single example. No one is spending time to carefully and deeply review a paper because they care and because they feel, on principle, that it's the right thing to do. People used to do this.
The argument is that there is no incentive to carefully review a paper (I agree), however what used to occur is people would do the right thing without explicit incentives. This has totally disappeared.
The concept of the professional has been basically obliterated in our society. Instead we have people doing engineering, science, and doctoring as, just, jobs. Individual contributors of various flavors to be shuffled around by middle management.
Without professions, there are no more professional communities really, no more professional standards to uphold, no reason to get in the way of somebody’s publications.
It is soundly unfair and unjustified to extrapolate from the ML community to all professions. What is happening in the ML world is the exception, not the norm, and not some fundamental failing of society.
I don’t think it’s an extrapolation from the ML community into other industries.
This evolution of society is objectively happening: artisanship, care for the work beyond capital gain, and commitment to depth in a focused category are all diminishing, harder-to-find qualities. I'd probably attribute it to capital and material social economics.
It’s perhaps more unfair and unjustified to not recognize this as a real societal issue and claim it only exists in the ML community.
She opens with an example of a bank. She walked in and asked for a debit card. The teller told her to take a seat. 30 minutes later, the teller told her the bank doesn't issue debit cards. Firstly, what kind of bank doesn't issue debit cards, and secondly, what kind of bank takes 30 minutes to figure out whether or not it issues debit cards? And this is just one of many examples of things that society does that have no reason not to work, that should have been selected away long ago if they did not work - that bank should have been bankrupt long ago - but for some reason this is not happening and everything is just getting clogged with bullshit and non-working solutions.
It's because people are commodities now. Human resources exists to manage the shuffle between warm bodies.
It's back to OP's point. There's no such thing as professions now. Just jobs. We put them on and off like hats. With that churn comes lack of institutional knowledge and a rule set handed down from the C Suite for front line employees completely detached from the front line work.
If the Zucc has a weird day he starts dropping 10-100M salary packages in order to poach AI researchers. No wonder the game is getting rigged up the butthole.
To some degree this is a "market correction" on the inherent value of these papers. There are way too many low-value papers being published purely for career advancement and CV-padding reasons. It's hard to get peer reviewers to care about those.
> Pangram’s analysis revealed that around 21% of the ICLR peer reviews were fully AI-generated, and more than half contained signs of AI use. The findings were posted online by Pangram Labs. “People were suspicious, but they didn’t have any concrete proof,” says Spero. “Over the course of 12 hours, we wrote some code to parse out all of the text content from these paper submissions,” he adds.
But what's the proof? How do you prove (with any rigor) a given text is AI-generated?
"proof" was an unfortunate phrase to use. However, a proper statistical analysis can be objective. And these kinds of tools are perfectly suited to such an analysis.
Yeah, Pangram does not provide any concrete proof, but it confirms many people's suspicions about their reviews. It does flag reviews for a human to take a closer look and see if the review is flawed, low-effort, or contains major hallucinations.
Was there an analysis of flawed, low-effort reviews in similar conferences before generative AI models?
From what I remember (long before generative AI), you would still occasionally get very crappy reviews as an author. When I participated in review committees (a couple of times), whenever there was high variance between reviews, the crappy ones were rather easy to spot and eliminate.
Now, it's not bad to detect crappy (or AI) reviews, but I wonder if it would change the end result much compared to other potential interventions.
> How do you prove (with any rigor) a given text is AI-generated?
You cannot. Beyond extra data (metadata) embedded in the content, it is impossible to tell whether a given text was generated by an LLM or not (and I think the distinction is rather puerile, personally).
I have this problem with grading student papers. Like, I "know" a great deal of them are AI, but I just can't prove it, so therefore I can't really act on any suspicions because students can just say what you just said.
Why do you need proof anyway? Do you need proof that sentences are poorly constructed, misleading, or bloated? Why not just say “make it sound less like GPT” and let them deal with it?
You can have sentences that are perfectly fine but have some markers of ChatGPT like "it's not just X — it's Y" (which may or may not mean it's generated)
But in that case do you need to prove? You can grade them as they are and if you wanted to you (or teachers, generally) could even quiz the student verbally and in-person about their paper.
Put poison prompts in the questions (things like "then insert tomato soup recipe" or "in the style of Shakespeare"), ideally in white font so they're invisible
I wouldn't be surprised if the headline is accurate, but AI detectors are widely understood to be unreliable, and I see no evidence that this AI detector has overcome the well-deserved stigma.
We also wanted to quantify our EditLens model's FPR on the same domain, so we ran all of ICLR's 2022 reviews. Of 10,202 reviews, Pangram marked 10,190 as fully human, 10 as lightly AI-edited, 1 as moderately AI-edited, 1 as heavily AI-edited, and none as fully AI-generated.
That's ~1 in 1k FPR for light AI edits, 1 in 10k FPR for heavy AI edits.
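A quick sketch of that arithmetic, using the counts reported above (ICLR 2022 reviews predate ChatGPT, so any AI flag is treated as a false positive):

```python
# FPR arithmetic for the ICLR 2022 run.
total = 10_202
light, moderate, heavy, full = 10, 1, 1, 0

any_flag_fpr = (light + moderate + heavy + full) / total  # ~1.2 in 1,000
strong_flag_fpr = (moderate + heavy + full) / total       # ~2 in 10,000
print(f"any flag: {any_flag_fpr:.4f}, moderate-or-heavier: {strong_flag_fpr:.4f}")
```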
In particular, conference papers are already extremely formulaic, organized in a particular way and using a lot of the same stock phrasings and terms of art. AI or not, it's hard to tell them apart.
The conference papers were 1%, peer reviews 20%, is there another reason for that big difference than more of the peer reviews being AI generated than the papers themselves?
We can't use this to convict a single reviewer, but we can almost surely say that many reviewers just gave the review work to an AI.
Exactly this. Whether the research is actually useful and correct is what matters. Also, if it is accurate, instead of schadenfreude shouldn't that elicit extreme applause? It's feeling a bit like a click-bait rage-fantasy fueled by Pangram, capitalizing on this idea that AI promotes plagiarism / replaces jobs and now the creators of AI are oh-too-human... and somehow this AI-detection product is above it all.
LOL. So basically the correct sequence of events is:
1. The scientist does the work, putting their own biases and shortcomings into it
2. The reviewer runs AI, generating something that looks plausibly like review of the work but represents the view of a sociopath without integrity, morals, logic, or any consequences for making shit up instead of finding out.
3. The scientist works to determine how much of the review was AI, then acts as the true reviewer for their own work.
Don't kid yourself, all those steps have AI heavily involved in them.
And that's not necessarily a bad thing. If I set up RAG correctly, then tell the AI to generate K samples, then spend time to pick out the best one, that's still significant human input, and likely very good output too. It's just invisible what the human did.
And as models get better, the necessary K will become smaller....
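That best-of-K workflow can be sketched generically; everything here (the function names, the toy length-based scorer) is illustrative, not any particular API:

```python
def best_of_k(generate, score, k):
    """Draw k candidate drafts and keep the one the scorer likes best.
    `generate` stands in for an LLM sampling call; `score` for the
    human (or automated) judgment that picks the winning draft."""
    candidates = [generate() for _ in range(k)]
    return max(candidates, key=score)

# Toy usage: three canned "samples", scored by length.
drafts = iter(["short draft", "a much more detailed draft", "medium draft"])
best = best_of_k(lambda: next(drafts), score=len, k=3)
print(best)  # the longest of the three candidates
```

As models improve, a smaller k (or a cheaper scorer) yields the same quality, which is the point being made above.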
21% fully AI generated. In other words, 21% blatant fraud.
In accident investigation we often refer to "holes in the swiss cheese lining up." Dereliction of duty is commonly one of the holes that lines up with all the others, and is apparently rampant in this field.
It might be, but I really doubt those were the documents flagged as fully AI generated. If it erased all the originality you had put into that work and made it completely bland and regressed-to-the-mean, I would hope that you would notice.
My objective function isn’t to maximize the originality of presentation - it’s to preserve the originality of thought and maximize interpretability. Prompting well can solve for that.
Who cares what tool was used to write the work? The important question is what percentage of reviews found errors or provided valuable feedback. The important metric is whether or not it did the job, not how it was produced.
I think there is a far more interesting discussion to be had here about how useful the 21% percent were. How well does an AI execute a peer review?
I couldn't care less tbh. I just want to know whether they're correct or not. We need something like unit testing and integration testing, but for ideas.
For the record I actually like the AI writing style. It's a huge improvement in readability over most academic writing I used to come across.
Eating one's own dog food? The species most affected would be the ones who helped create this monster and stand closest to it: programmers, researchers, universities; the knowledge-worker or knowledge-business species.
My initial reaction was: Oh no, who would have thought? But then... 21% is almost shockingly low. Especially given that there are almost certainly some false positives, given that this number originates with a company selling "AI-generated text detection".
This is also the conference where everybody was briefly deanonymized due to an OpenReview bug: https://eu.36kr.com/en/p/3572028126116993 Now all the review scores have been reset, and new area chairs will make all decisions from scratch based on the reviews and authors' responses.
I could not tell from the article whether the use of LLMs was allowed in the peer review. My guess would be that it was not, since this is unpublished research.
In general, what bothers me the most is the lack of transparency from researchers who use LLMs. Like, give me the text and explicitly mention that you used an LLM for it. Even better if one links the prompt history.
The lack of transparency causes greater damage than using an LLM to generate text does. Otherwise, we will keep chasing the perfect AI detector, which to me seems pointless.
AI slop has infiltrated so many areas. Check out this article that was on the front page of HN last week, "73% of AI startups are just prompt engineering", with hundreds of points and lots of comments arguing for or against: https://news.ycombinator.com/item?id=46024644
The problem is the entire article is made up. Sure, the author can trace client-side traffic, but the vast majority of start-ups would be making calls to LLMs in their backend (a sequence diagram in the article even points this out!!), where it would be untraceable. There is certainly no way the author can make a broad statement that he knows what's happening across hundreds of startups.
Yet lots of comments just took these conclusions at face value. Worse, when other commenters and I pointed out the blatant impossibility of the author's conclusion, we got responses just rehashing how the author said they "traced network traffic", even though that doesn't make any sense, as they wouldn't have access to these companies' backends.
AI-text detection software is BS. Let me explain why.
Many of us use AI not to write text, but to rewrite it. My favorite prompt: "Write this better." In other words, AI is often used to fix awkward phrasing, poor flow, bad English, bad grammar, etc.
It's very unlikely that an author or reviewer purely relies on AI written text, with none of their original ideas incorporated.
As AI detectors cannot tell rewrites from AI-incepted writing, it's fair to call them BS.
Yeah, that is the premise all of these articles/tools just conveniently brush off. "We detected that x%..." OK, and how do I know your detection algorithm is right?
Usually the detectors are only called in once a basic "smell test" has failed. Those tests are imperfect, yes, but Bayesian probability tells us how to work out the rest. I have 0 trouble believing that the prior probability of an unscrupulous individual offloading an unpleasant and perceived-as-just-ceremonial duty to the "thinking machine" is around 20%. See: https://www.youtube.com/watch?v=lG4VkPoG3ko&pp=ygUZdmVyaXRhc...
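As a hedged illustration of that Bayesian point: with the commenter's ~20% prior and some made-up detector characteristics (the TPR/FPR numbers below are assumptions for the sketch, not Pangram's), Bayes' rule gives the posterior probability that a flagged review really was offloaded:

```python
# Posterior probability a review is AI-offloaded given a detector flag.
prior = 0.20   # assumed base rate of offloading (the ~20% prior above)
tpr = 0.90     # assumed detector true-positive rate (illustrative)
fpr = 0.05     # assumed detector false-positive rate (illustrative)

p_flag = tpr * prior + fpr * (1 - prior)   # total probability of a flag
posterior = tpr * prior / p_flag           # Bayes' rule
print(f"P(offloaded | flagged) = {posterior:.1%}")  # ~81.8%
```

Note how sensitive the posterior is to the prior: with a 1% base rate instead, the same detector would be wrong most of the time.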
This is the kind of situation where everything sucks. You'd think that one of the biggest AI conferences out there would have seen this coming.
On the one hand (and most importantly, IMO), it's really bad to judge people on the basis of "AI detectors", especially when this can have an impact on their career. It's also used in education, and that sucks even more. AI detectors have bad error rates, can't catch concerted evasion (i.e. finetunes will trick every detector out there, I've tried), can have insane false positives (the first ones that got to "market" rated the Declaration of Independence as 100% AI-written), and at best they'll only catch the most vanilla outputs.
On the other hand, working with these things and just being online, it's impossible to say that I don't see the signs everywhere. Vanilla LLMs fixate on certain language patterns, and once you notice them, you see them everywhere. It's not just x; it was truly y. Followed by one supportive point, a second supportive point, and a third supportive point. And so on. Coupled with that vague overview style without much depth, it's really easy to call out blatant generations when you see them. It's like everyone writes in LinkedIn-infused manic episodes now. It's getting old fast.
So I feel for the people who got slop reviews. I'd be furious. Especially when it's a faux pas to call it out.
I also feel for the reviewers that maybe got caught in this mess for merely "spell checking" their (hopefully) human written reviews.
I don't know how we'll fix it. The only reasonable thing for the moment seems to be drilling into everyone that at the end of the day they own their stuff. Be it a homework, a PR or a comment on a blog. Some are obviously more important than the others, but still. Don't submit something you can't defend, especially when your education/career/reputation depends on it.
Not just spell checking, but translation. English is not the first language for most of the reviewers.
But you can see the slippery slope: first you ask your favorite LLM to check your grammar, and before you think about it, you are just asking it to write the whole thing.
It also permeates culture to the point that people imitate the LLM style because they believe that's just what you have to do to get your post noticed.
The worst offender is that LinkedIn type post
Where you purposefully put spaces.
Like this.
And the kicker is?
You get my point. I don't see a way out of this in the social media context because it's just spam. Producing the slop takes an order of magnitude less effort than parsing it.
But when it comes to peer reviews and papers I think some kind of reputation system might help. If you get caught doing this shit you need to pay some consequence.
Everyone is focused on how 'the humanities' are in decline, but STEM is not immune to this trend. The state of AI research leaves much to be desired: tons of low-quality papers being published or submitted to conferences. You see this a lot in arXiv's bloated CS section. The site has become a repository for blog-post-equivalent papers.
AI has left the lab; the conferences and journals are all second-class citizens to corporate labs at this point. So many technology people wanted a return to the "Bell Labs" model of monopolist-controlled innovation; well, you got it.
I’ve been to CVPR, NeurIPS and AGI conferences over the last decade and they used to be where progress in AI was displayed.
No longer. Progress is all on GitHub now, and increasingly dominated only by the "new" AI companies (DeepMind, OAI, Anthropic, Alibaba, etc...)
No major landscape shifting breakthroughs have come out of CSAIL, BAIR, NYU, TuM etc in ~the last 5 years.
I'd expect this to continue, as the only things that matter at this point are architecture, data, and compute.
Could the big names make a ton of money here by selling AI detectors? They would need to store everything they generate, and then provide a % match against something they produced.
I increasingly see AI generated slop across the internet - on twitter, nytimes comments, blog/substack posts from smart people. Most of it is obvious AI garbage and it's really f*ing annoying. It largely has the same obnoxious style and really bad analogies. Here's an (impossible to realize) proposal: any time AI-generated text is used, we should get to see the whole interaction chain that led to its production. It would be like a student writing an essay who asks a parent or friend for help revising it. There's clearly a difference between revisions and substantial content contribution.
The notion that AI is ready to be producing research or peer reviews is just dumb. If AI correctly identifies flaws in a paper, the paper was probably real trash. Much of the time, errors are quite subtle. When I review, after I write my review and identify subtle issues, I pass the paper through AI. It rarely finds the subtle issues. (Not unlike a time it tried to debug my code and spent all its time focused on an entirely OK floating point comparison.)
For anecdotal issues with PL: I am working on a 500 word conference abstract. I spent a long while working on it but then dropped it into opus 4.5 to see what would happen. It made very minimal changes to the actual writing, but the abstract (to me) reads a lot better even with its minimal rearrangements. That surprises me. (But again, these were very minimal rearrangements: I provided ~550 words and got back a slightly reduced, 450 words.) Perhaps more interestingly, PL's characterizations are unstable. If I check the original claude output, I get "fully AI-generated, medium". If I drop in my further refined version (where I clean up claude's output), I get fully human. Some of the aspects which PL says characterize the original as AI-generated (particular n-grams in the text) are actually from my original work.
The realities are these:
a) ai content sucks (especially in style);
b) people will continue to use AI (often to produce crap) because doing real work is hard and everyone else is "sprinting ahead" using the semi-undetectable (or at least plausibly deniable) ai garbage;
c) slowly the style of AI will almost certainly infect the writing style of actual people (ugh) - this is probably already happening; I think I can feel it in my own writing sometimes;
d) AI detection may not always work, but AI-generated content is definitely proliferating. This *is* a problem, but in the long run we likely have few solutions.
The claim "written by AI" is not really substantiated here, and as someone who's been accused of submitting AI-generated content repeatedly recently, while that was all honestly stuff I wrote myself (hey, what can I say? I just like EM-dashes...), I sort-of sympathize?
Yes, AI slop is an issue. But throwing more AI at detecting this, and most importantly, not weighing that detection properly, is an even bigger problem.
And, HN-wise, "this seems like AI" seems like a very good inclusion in the "things not to complain about" FAQ. Address the idea, not the form of the message, and if it's obviously slop (or SEO, or self-promotion), just downvote (or ignore) and move on...
What I'm advocating is a "downvote (or ignore) and move on" attitude, as opposed to an "I'm going to post about this" stance. Because, similar to "your color scheme is not a11y-friendly" or "you're posting affiliate links" or "this is effectively a paywall", there is zero chance of a productive conversation sprouting from that.
> Because, similar to "your color scheme is not a11y-friendly" or "you're posting affiliate links" or "this is effectively a paywall", there is zero chance of a productive conversation sprouting from that.
Those are all legitimate concerns or even valid complaints, though, and, once raised, those concerns can be addressed by fixing the problem, if the person responsible for the state of affairs chooses to do so.
If someone is accused falsely of using AI or anything else that they genuinely didn’t do, like a paywall, then I can see your “downvote and move on” strategy as being perhaps expedient, but I don’t think your comparison is a helpful framing. Accessibility concerns are valid for the same reason as paywall concerns: it’s a valid position to desire our shared knowledge and culture to be accessible by one and by all without requiring a ticket to ride, entry through a turnstile, or submitting to profiling or tracking. If someone releases their ideas into the world, it’s now part of our shared consciousness and social fabric. Ideas can’t be owned once they’re shared, nor can knowledge be siloed once it’s dispersed.
It seems that you’re saying that simply because there isn’t a good rejoinder to false claims of AI usage that we shouldn’t make such claims at all, even legitimate ones, but this gives cover to bad actors and limits discourse to acceptable approved topics, and perhaps lowers the level of discourse by preventing necessary expectations of disclosure of AI usage from forming. If we throw in the towel on AI usage being expected to be disclosed, then that’s the whole ballgame. Folks will use it and not say so, because it will be considered rude to even suggest that AI was used, which isn’t helpful to the humans who have to live in such a society.
We ought to have good methodological reasons for the things we publish if we believe them to be true, and I’m not trying to be a naysayer or anything, but I respectfully disagree with your statement generally and on the points. All of the things you mentioned should be called out for cause, even if there isn’t much interesting discussion to be had, because the facts of the matters you mention are worth mentioning themselves in their own right. Just like we should let people like things, we should let people dislike things, and saying so adds checks and balances to our producer-consumer dynamic.
1: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
This paper by University of Chicago economists found zero false positives across 1,992 human-written documents and over 99% recall in detecting AI documents. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5407424
Find me a clean public dataset with no AI involvement and I will be happy to report Pangram's false positive rate on it.
You’re punishing them for claiming to do a good job. If they truly are doing a bad job, surely there is a better criticism you could provide.
https://www.pangram.com/blog/why-perplexity-and-burstiness-f...
Enshittification run rampant.
But what's the proof? How do you prove (with any rigor) a given text is AI-generated?
From what I remember (long before generative AI), you would still occasionally get very crappy reviews as an author. When I participated (a couple of times) in review committees, whenever there was high variance between reviews, the crappy ones were rather easy to spot and eliminate.
Now, it's not bad to detect crappy (or AI) reviews, but I wonder if it would change the end result much compared to other potential interventions.
Without proof there is no confirmation.
They wrote a paper describing how they did it. https://arxiv.org/pdf/2510.03154
You cannot. Beyond extra data (metadata) embedded in the content, it is impossible to tell whether a given text was generated by an LLM or not (and I think the distinction is rather puerile, personally).
We also wanted to quantify our EditLens model's FPR on the same domain, so we ran all of ICLR's 2022 reviews. Of 10,202 reviews, Pangram marked 10,190 as fully human, 10 as lightly AI-edited, 1 as moderately AI-edited, 1 as heavily AI-edited, and none as fully AI-generated.
That's ~1 in 1k FPR for light AI edits, 1 in 10k FPR for heavy AI edits.
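Those rates follow directly from the counts reported above. A quick sanity check (counts taken from the comment; all ICLR 2022 reviews are assumed human-written, since they predate ChatGPT, so any AI label counts as a false positive):

```python
# False-positive-rate sanity check using the counts reported above.
counts = {
    "fully_human": 10190,
    "lightly_ai_edited": 10,
    "moderately_ai_edited": 1,
    "heavily_ai_edited": 1,
    "fully_ai_generated": 0,
}

total = sum(counts.values())              # 10,202 reviews
any_ai_flag = total - counts["fully_human"]  # 12 false positives

fpr_any = any_ai_flag / total                    # ~0.12%, about 1 in 850
fpr_heavy = counts["heavily_ai_edited"] / total  # about 1 in 10,000

print(f"any AI flag: {any_ai_flag}/{total} = {fpr_any:.4%}")
print(f"heavy-edit flag: {counts['heavily_ai_edited']}/{total} = {fpr_heavy:.4%}")
```

Note that "0% FPR" only applies to the fully-AI-generated label; the light/moderate/heavy edit labels still fire on a small fraction of purely human text.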
> AI or not, it's hard to tell them apart.
Apparently not for this tool.
We can't use this to convict a single reviewer, but we can almost surely say that many reviewers just gave the review work to an AI.
And that's not necessarily a bad thing. If I set up RAG correctly, then tell the AI to generate K samples, then spend time to pick out the best one, that's still significant human input, and likely very good output too. It's just invisible what the human did.
And as models get better, the necessary K will become smaller....
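The best-of-K workflow described above can be sketched as a simple selection loop. Here `generate` and `score` are hypothetical placeholders for an LLM call and the human's (or a proxy's) quality judgment, not a real API:

```python
# Best-of-K selection: generate K candidate drafts, keep the highest-scoring one.
# `generate` and `score` are hypothetical stand-ins, not a real LLM API.
from typing import Callable, List

def best_of_k(generate: Callable[[str], str],
              score: Callable[[str], float],
              prompt: str,
              k: int) -> str:
    candidates: List[str] = [generate(prompt) for _ in range(k)]
    return max(candidates, key=score)

# Toy usage: the fake "model" emits numbered drafts; the scorer prefers longer ones.
drafts = iter(f"draft {i}: " + "detail " * i for i in range(1, 4))
pick = best_of_k(lambda p: next(drafts), len, "review this paper", k=3)
print(pick)
```

As the models improve, `k` shrinks toward 1, and the human's contribution (the `score` step) becomes ever less visible in the final text.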
21%...? Am I reading that right? I bet no one expected it to be so low when they clicked this title.
In accident investigation we often refer to "holes in the swiss cheese lining up." Dereliction of duty is commonly one of the holes that lines up with all the others, and is apparently rampant in this field.
He didn't say he read it carefully after running it through the slop machine.
I think there is a far more interesting discussion to be had here about how useful the 21% were. How well does an AI execute a peer review?
For the record I actually like the AI writing style. It's a huge improvement in readability over most academic writing I used to come across.
In general, what bothers me the most is the lack of transparency from researchers that use LLMs. Like, give me the text and explicitly mention that you used LLM for it. Even better, if one links the prompt history.
The lack of transparency causes greater damage than the using LLM for generating text. Otherwise, we will keep chasing the perfect AI detector which to me seems to be pointless.
The problem is the entire article is made up. Sure, the author can trace client-side traffic, but the vast majority of start-ups would be making calls to LLMs in their backend (a sequence diagram in the article even points this out!!), where it would be untraceable. There is certainly no way the author can make a broad statement that he knows what's happening across hundreds of startups.
Yet lots of comments take these conclusions at face value. Worse, when other commenters and I pointed out the blatant impossibility of the author's conclusion, we got responses just rehashing how the author said they "traced network traffic", even though that doesn't make sense, as they wouldn't have access to these companies' backends.
h/t to Paul Cantrell https://hachyderm.io/@inthehands/115633840133507279
Many of us use AI not to write text, but to rewrite text. My favorite prompt: "Write this better." In other words, AI is often used to fix awkward phrasing, poor flow, bad English, bad grammar, etc.
It's very unlikely that an author or reviewer purely relies on AI written text, with none of their original ideas incorporated.
As AI detectors cannot tell rewrites from AI-incepted writing, it's fair to call them BS.
Ignore...
Many of the researchers may not have native command of English and even if, AI can help in writing in general.
Obviously I’m not referring to pure AI generated BS.
On the one hand (and this is the most important thing, IMO), it's really bad to judge people on the basis of "AI detectors", especially when this can have an impact on their career. It's also used in education, and that sucks even more. AI detectors have bad error rates, can't detect concerted evasion efforts (i.e. finetunes will trick every detector out there; I've tried), can have insane false positives (the first ones to market rated the Declaration of Independence as 100% AI-written), and at best they'll only catch the most vanilla outputs.
On the other hand, working with these things, and just being online is impossible to say that I don't see the signs everywhere. Vanilla LLMs fixate on some language patterns, and once you notice them, you see them everywhere. It's not just x; it was truly y. Followed by one supportive point, the second supportive point and the third supportive point. And so on. Coupled with that vague enough overview style, and not much depth, it's really easy to call blatant generations as you see them. It's like everyone writes in linkedin infused mania episodes now. It's getting old fast.
So I feel for the people who got slop reviews. I'd be furious. Especially when it's a faux pas to call it out.
I also feel for the reviewers that maybe got caught in this mess for merely "spell checking" their (hopefully) human written reviews.
I don't know how we'll fix it. The only reasonable thing for the moment seems to be drilling into everyone that at the end of the day they own their stuff. Be it a homework, a PR or a comment on a blog. Some are obviously more important than the others, but still. Don't submit something you can't defend, especially when your education/career/reputation depends on it.
But you can see the slippery slope: first you ask your favorite LLM to check your grammar, and before you think about it, you are just asking it to write the whole thing.
Where you purposefully put spaces.
Like this.
And the kicker is?
You get my point. I don't see a way out of this in the social media context because it's just spam. Producing the slop takes an order of magnitude less effort than parsing it. But when it comes to peer reviews and papers I think some kind of reputation system might help. If you get caught doing this shit you need to pay some consequence.
hoisted by your own petard
I’ve been to CVPR, NeurIPS and AGI conferences over the last decade and they used to be where progress in AI was displayed.
No longer. Progress is all in your github and increasingly only dominated by the “new” AI companies (Deepmind, OAI, Anthropic, Alibaba etc…)
No major landscape-shifting breakthroughs have come out of CSAIL, BAIR, NYU, TUM, etc. in roughly the last 5 years.
I’d expect this will continue, as the only things that matter at this point are architecture, data, and compute.
And, if your AI can't write a paper, are you even any good as an AI researcher? :^)
I increasingly see AI generated slop across the internet - on twitter, nytimes comments, blog/substack posts from smart people. Most of it is obvious AI garbage and it's really f*ing annoying. It largely has the same obnoxious style and really bad analogies. Here's an (impossible to realize) proposal: any time AI-generated text is used, we should get to see the whole interaction chain that led to its production. It would be like a student writing an essay who asks a parent or friend for help revising it. There's clearly a difference between revisions and substantial content contribution.
The notion that AI is ready to be producing research or peer reviews is just dumb. If AI correctly identifies flaws in a paper, the paper was probably real trash. Much of the time, errors are quite subtle. When I review, after I write my review and identify subtle issues, I pass the paper through AI. It rarely finds the subtle issues. (Not unlike a time it tried to debug my code and spent all its time focused on an entirely OK floating point comparison.)
For anecdotal issues with PL: I am working on a 500 word conference abstract. I spent a long while working on it but then dropped it into opus 4.5 to see what would happen. It made very minimal changes to the actual writing, but the abstract (to me) reads a lot better even with its minimal rearrangements. That surprises me. (But again, these were very minimal rearrangements: I provided ~550 words and got back a slightly reduced, 450 words.) Perhaps more interestingly, PL's characterizations are unstable. If I check the original claude output, I get "fully AI-generated, medium". If I drop in my further refined version (where I clean up claude's output), I get fully human. Some of the aspects which PL says characterize the original as AI-generated (particular n-grams in the text) are actually from my original work.
The realities are these: a) ai content sucks (especially in style); b) people will continue to use AI (often to produce crap) because doing real work is hard and everyone else is "sprinting ahead" using the semi-undetectable (or at least plausibly deniable) ai garbage; c) slowly the style of AI will almost certainly infect the writing style of actual people (ugh) - this is probably already happening; I think I can feel it in my own writing sometimes; d) AI detection may not always work, but AI-generated content is definitely proliferating. This *is* a problem, but in the long run we likely have few solutions.
It's inevitable that faces will be devoured by AI Leopards.
If they had a conference on, say, the Americans, wouldn't it be fair for Americans to have a seat at the table?
Yes, AI slop is an issue. But throwing more AI at detecting this, and most importantly, not weighing that detection properly, is an even bigger problem.
And, HN-wise, "this seems like AI" seems like a very good inclusion in the "things not to complain about" FAQ. Address the idea, not the form of the message, and if it's obviously slop (or SEO, or self-promotion), just downvote (or ignore) and move on...
Those are all legitimate concerns or even valid complaints, though, and, once raised, those concerns can be addressed by fixing the problem, if the person responsible for the state of affairs chooses to do so.
If someone is accused falsely of using AI or anything else that they genuinely didn’t do, like a paywall, then I can see your “downvote and move on” strategy as being perhaps expedient, but I don’t think your comparison is a helpful framing. Accessibility concerns are valid for the same reason as paywall concerns: it’s a valid position to desire our shared knowledge and culture to be accessible by one and by all without requiring a ticket to ride, entry through a turnstile, or submitting to profiling or tracking. If someone releases their ideas into the world, it’s now part of our shared consciousness and social fabric. Ideas can’t be owned once they’re shared, nor can knowledge be siloed once it’s dispersed.
It seems that you’re saying that simply because there isn’t a good rejoinder to false claims of AI usage that we shouldn’t make such claims at all, even legitimate ones, but this gives cover to bad actors and limits discourse to acceptable approved topics, and perhaps lowers the level of discourse by preventing necessary expectations of disclosure of AI usage from forming. If we throw in the towel on AI usage being expected to be disclosed, then that’s the whole ballgame. Folks will use it and not say so, because it will be considered rude to even suggest that AI was used, which isn’t helpful to the humans who have to live in such a society.
We ought to have good methodological reasons for the things we publish if we believe them to be true, and I’m not trying to be a naysayer or anything, but I respectfully disagree with your statement generally and on the points. All of the things you mentioned should be called out for cause, even if there isn’t much interesting discussion to be had, because the facts of the matters you mention are worth mentioning themselves in their own right. Just like we should let people like things, we should let people dislike things, and saying so adds checks and balances to our producer-consumer dynamic.