Their design approach wasn’t particularly unusual, so I’m not sure what that sentence means.
I do miss the days when technical reports were clear and concise. This one has some interesting information, but it’s buried under a mountain of empty AI-written bloat.
It's annoying because it's a super common widget and it's interesting work. The first draft, or literally even the prompt they gave the AI, probably would've been a great post; all they had to do was not ensloppify it...
Yeah, it’s basically the prose equivalent of getting too much radio play. Hilarious how the breakthrough of LLM content has ‘ruined’ “it’s not X—it’s Y” for so many of us now.
Maybe, like overplayed pop songs, in 20 years or so we’ll come around to viewing the phrase fondly.
> "Not just X -- it's Y" is one of the more irritatingly common signs ...
It's a bit of a "Karen AI" telltale sign. It's probably been trained on a lot of "I-know-it-all-Karen" posts and as a result we're bombarded with Karen-slop.
I remember back around 2011, when CF was new, I was testing it on some vBulletin forum. All the email communication was with the cofounder, if I recall correctly, and the UI had only the DNS settings back then. Now they write a whole article about some text redesign. Time flies.
That's why I say most AI content isn't just slop—it's fundamentally about deception. It's about tricking someone into believing that a text was written by a human, or that a photo or video is a true recording of a real event.
Like this, its purpose is to fly under the radar unless your figurative ears are pricked up and primed to detect the telltale signs. Fuck this shit.
Am I reading it right, the widget is seen 5B times per day, and they recruited 8 people for testing to make sure their “redesign would work for everyone”…?
Why? Genuinely, who cares? Is some demographic group not caught in the 8 going to be offended by a basic checkbox screen? Is someone with a niche form of colorblindness going to have difficulty navigating the UI?
How can you seriously pretend to do any study with only eight people involved? Especially when your company is worth billions. It just invites bad press and accusations of amateurism.
I mean, yes? A very broad spectrum of people need to use the internet, and Cloudflare has inserted itself in the middle of it.
I don't necessarily have a problem with them, but it's weird how they boasted about the massive scale and importance of this, and then went with only 8 tests.
The process described in the article is literally just blindly checking the boxes of what passes for a design process these days. The gurus say to interview customers, so they have done just that without really understanding why. Given it's AI, it's also possible the whole thing is entirely made up and someone just tweaked the design over an afternoon and shipped it.
As a user of an unsigned Firefox fork, Turnstile has ruined a moderate portion of the Internet for me. The way Cloudflare doesn’t think twice about eroding user freedoms, for the sake of a gate that can be trivially bypassed with solvarr or similar, is deeply disturbing. They are no longer a force for good on the web.
As bad as Cloudflare is, there is a reason people use it.
If you try to run a site with content that LLMs want, or with expensive calls that require a lot of compute and can exhaust resources if overused, the attacks are relentless. It can be a full-time job trying to stop people who are dedicated to scraping the shit out of your site.
Even CF doesn't really stop it anymore. The agent-run browsers seem to bypass it with relative ease.
One of the things a lot of LLM scrapers fetch is git repositories. They could just use git clone to fetch everything at once. But instead, they fetch them commit by commit. That's about as far from static as you can get, and it is absolutely NOT a non-issue.
No... Basically all git servers have to generate the file contents, diffs etc. on-demand because they don't store static pages for every single possible combination of view parameters. Git repositories also typically don't store full copies of all versions of a file that have ever existed either; they're incremental. You could pre-render everything statically, but that could take up gigabytes or more for any repo of non-trivial size.
I see people saying that a lot, but I use Zen which is a fork of Firefox and I don't think I've ever had an issue with Turnstile, at least not noticeably more than I had on mobile Chrome.
Isn't it the opposite? They allow you to still use it, when it would almost certainly be better for Cloudflare and the website behind them to just block you.
Will this also be accompanied by a global Turnstile outage like all the other Cloudflare services that get touched? If they end up vibeslopping the redesign like they did with this article, it may just happen.
If this truly was written with AI, it's really quite poor. Some of the employees at Cloudflare seem to be negligent, tbh, based on the fact they've been down so many times recently.
> We recruited 8 participants across 8 different countries, deliberately seeking diversity in age, digital savviness, and cultural background.
> 5 out of 8 points versus just 3 for "I am human." For the verifying state, it was even more dramatic — 7.5 versus 0.5.
n × p >= 5? (Sample size and margins of error: is 5:3 even meaningful, or is this just random personal preference?) Apparent splitting of missing or inconclusive data points? (7.5 vs. 0.5 out of a total of 8 subjects.) What kind of (social) research is this supposed to be?
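To put a rough number on the sample-size objection, here is a back-of-the-envelope sketch (my own illustration, not from the article) of the normal-approximation 95% confidence interval for the 5-of-8 preference split. Note that n = 8 already violates the n × p >= 5 rule of thumb, so even this interval is optimistic:

```python
import math

def preference_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a binomial proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the sample proportion
    return (p - z * se, p + z * se)

lo, hi = preference_ci(5, 8)
print(f"95% CI for 5/8 preference: {lo:.2f} .. {hi:.2f}")
```

The interval runs from roughly 0.29 to 0.96: it comfortably contains 0.5, so a 5-to-3 split among eight participants is statistically indistinguishable from a coin flip.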
I'm not reading this.
The Wikipedia article on detecting AI writing is a big help if you need to calibrate your sensors: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
This is wrong. Git does store full copies.
Prebuild the most common commits statically (the last XX) and heavily rate-limit the deeper ones.
It doesn't... look very new?
I can't be the only one.
It's slow and annoying. The AI overview is good enough for me most of the time, so I bet that added time makes websites lose a lot of visits.