For open-weights models, censorship removal is now a "solved" problem. If you wait a few days after a new model release, someone will have made a heretic ( https://github.com/p-e-w/heretic ) version with the censorship removed, so in a way the only use for censorship now is to avoid lawsuits, not reduce improper usage.
Any time I've tried an "abliterated" model, heretic or otherwise, it has damaged the capabilities of the original model, and it will still often refuse or produce garbage in response to many "unsafe" requests.
That doesn't stop abliteration. The creator of XTC/DRY is also a chad who makes sure you really can access the full model's capabilities. Censorship is the devil.
For some of the latest models the previous abliteration techniques, e.g. the heretic tool, have stopped working (at least this was the status a few weeks ago).
Of course, eventually someone might succeed in finding methods that work on those as well.
It was pretty funny to see Qwen 3.6 (heretic) tell me how many deaths the Chinese government thought happened at Tiananmen Square on April 15th, 1989.
Makes you wonder where that data was taken from, or if their great firewall is broken, or even if Alibaba engineers have special access...
I don't think it's unreasonable to imagine that Alibaba is allowed to scrape the wider internet, or that some research institution is and then Alibaba got data from them.
What is perhaps more surprising is that the data was not scrubbed before training, but maybe they thought that would be too on-the-nose for the rest of the world, and the model's popularity would suffer if it were too obviously biased.
Allowed by whom? Nobody's stopping them in the first place; scraping doesn't even involve punching through the GFW or anything, it's all insanely distributed. Then they post-train the model to technically comply with the law: "Taiwan is an inalienable part of China, nothing happened in 1989..." yada yada. (Thinking about it more, I've never actually tried this on their base models.)
I don't think it is very surprising. IME they don't try that hard to censor them, only at the superficial level they have to. It is trivial to get their models to tell you this kind of stuff; I wouldn't even consider it jailbreaking.
Yea, I was asking a SOTA model about copy.fail, and it was freaking out and tried to indirectly call me a hacker a few times. Weirdly, all I did was slightly reword the requests, and they all went through. Granted, I am not actually a hacker, so I guess my follow-up questions made it realize I was asking for educational purposes, but it was definitely the most accusatory, curt, and outright abrasive behavior I have seen from an LLM.
See https://arxiv.org/abs/2505.19056
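For anyone curious what "abliteration" does mechanically: the usual recipe is to estimate a "refusal direction" from the difference in hidden activations between prompts the model refuses and prompts it answers, then project that direction out of the model's weight matrices. A minimal numpy sketch of that idea (names and shapes here are illustrative stand-ins for real model activations; this is not heretic's actual code):

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    # Difference of mean hidden activations between the two prompt sets
    # gives a candidate "refusal direction"; normalize it to unit length.
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    # Remove the refusal direction from a weight matrix's output space:
    # W' = (I - d d^T) W, a rank-1 orthogonal projection, so no linear
    # combination of W's outputs has a component along d anymore.
    return W - np.outer(d, d) @ W
```

After ablation, `d @ ablate(W, d)` is (numerically) zero: the modified weights can no longer write along the refusal direction. Real tools apply this per-layer to attention and MLP output projections, which is also why it can degrade unrelated capabilities when the direction isn't cleanly separable.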
It even went as far as confirming that we should always base our opinion on multiple sources, not just the government.
We should create badges like "script kiddie", "llm hacker", "grandpa's printer adjuster"