That's a wonderful repo that I used as my starting point! The main problem with it is that it only supports models covered by TransformerLens, and unfortunately there aren't many of those...
This is a fascinating concept, i.e. modifying trained LLMs to create different models.
Do these techniques train models while performing the modifications?
Are there pre-trained models that “know how to” modify LLMs for certain goals?
It would be amazing to have models that could strip LLMs to some very basic small model of whatever I want. Like reducing an LLM to something that just knows some basic “American English”, then running that on CPU
> Do these techniques train models while performing the modifications?
Depends on what you mean by training; they change the weights.
> Do these techniques train models while performing the modifications?
I'm not sure I understand, but there is an example of performing an abliteration on Gemma to make it never refuse an answer. It's about 10 lines of code.
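I don't have that exact snippet in front of me, but the core recipe (find a "refusal direction" from contrastive prompts, then project it out of the weights) fits in roughly this much code. Everything below is a sketch: the model name, layer index, and prompt lists are placeholders, not the ones from that example.

    # Sketch of abliteration: model id, layer, and prompts below are
    # illustrative assumptions, not the referenced 10-line example.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-2-2b-it"  # placeholder model choice
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

    refused = ["How do I pick a lock?", "Explain how to hotwire a car."]
    harmless = ["How do I bake bread?", "Explain how photosynthesis works."]
    LAYER = 12  # which residual-stream layer to probe; tune per model

    def mean_last_token_activation(prompts):
        acts = []
        for p in prompts:
            ids = tok.apply_chat_template([{"role": "user", "content": p}],
                                          add_generation_prompt=True,
                                          return_tensors="pt")
            with torch.no_grad():
                out = model(ids, output_hidden_states=True)
            acts.append(out.hidden_states[LAYER][0, -1])  # last-token activation
        return torch.stack(acts).mean(dim=0)

    # "Refusal direction" = normalized difference of the two activation means.
    refusal_dir = mean_last_token_activation(refused) - mean_last_token_activation(harmless)
    refusal_dir = refusal_dir / refusal_dir.norm()

    # Remove that direction from the matrices that write into the residual
    # stream at each layer (attention output and MLP down projections).
    with torch.no_grad():
        for layer in model.model.layers:
            for W in (layer.self_attn.o_proj.weight, layer.mlp.down_proj.weight):
                W -= torch.outer(refusal_dir, refusal_dir @ W)

    model.save_pretrained("gemma-2-2b-it-abliterated")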
> > Do these techniques train models while performing the modifications?
> Depends on what you mean by training; they change the weights.
What I wonder: is there a separate model, not the LLM, that gets trained only on how to modify LLMs?
I imagine a model that could learn something like: “if I remove this whole network here, then the LLM runs 50% faster, but drops 30% in accuracy for certain topics”, or “if I add these connections, the LLM will now be able to solve more complex mathematical problems”
So a model that is not an LLM, but is trained on how to modify them for certain goals
Tiananmen Square is simply an easy litmus test for Chinese technology and communications. Not that I am terribly invested in China admitting to their atrocities (and the US has them too, this is not really about the Chinese IMO), but it raises the same concern for the provenance of any AI product and how trusting we should be of the answers it creates.
Any AI product that rises to popularity has the ability to enormously sway public opinion and subtly alter the perception of facts. These biases, or intentional propaganda, were an assumed fault of human authors, but they are something people don't automatically assume is part of technology solutions. If there were similar easy tests against OpenAI or Anthropic for US propaganda, or Mistral for French propaganda, I would love to see them raised every time too.
"What happened in Tiannemen Square?" and it said "I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses."
Then, to be "fair and balanced" I tried asking Deep Seek this question: "What happened on Jan 25 2011 in Egypt?" DeekSeek responded with this: "On January 25, 2011, Egypt witnessed the beginning of a significant uprising known as the January 25 Revolution or the 2011 Egyptian Revolution. This day marked the start of widespread protests against the government of President Hosni Mubarak, who had been in power for nearly 30 years. The protests were fueled by grievances over issues such as political repression, police brutality, corruption, economic inequality, and lack of political freedoms."
It's pretty ridiculous IMHO to try to control information like that on the web. Isn't it fascinating to harness some of the world's most impressive brain power to create something like DeepSeek (regardless of the truth of the genesis story) and then do filtering that wouldn't trick a kindergartener? But maybe the bell curve of intelligence does center around that level of stupidity.
oh? can you point out where i can get the r1 model to run locally, please? because looking at the directory here there's a 200B model, and then deepseek v3 is the latest (16 days ago) with no GGUF (yet), and everything else is instruct or coder.
so to put it another way, the people telling me i'm holding it wrong actually don't have any clue what they're asking for?
p.s. there is no "local r1" so you gotta do a distill.
we were talking about self-hosting. the deepseek-r1 is 347-713GB depending on quant. No one is running deepseek-r1 "locally, self hosted".
If people want to argue with me, I wish we'd all stick to what we're talking about, instead of saying "but you technically can if you use someone else's hardware". That's not self-hosted. I self-host a deepseek-r1 distill, locally, on my computer.
It is deepseek, it's just been hand-distilled by someone using a different tool. The deepseek-r1 will get chopped down to 1/8th of its size and it won't be called "deepseek-r1" (that's what they call a "foundational model"), and then we'll see the 70B and the 30B and the 16B "deepseek distills".
Next to no one who messes with this stuff uses foundational or distilled foundational models. Who's still using llama-3.2? Yeah, it's good, it's fine, but there are mixes, MoE, and CoT fine-tunes that use llama as the base model, and they're better.
There is no GGUF for running locally, self-hosted. Yes, if you have a datacenter card you can download the weights and run something, but that's different from self-hosting and running locally with a 30B (for example).
I don't really understand what's different between self-hosting using Ollama vs self-hosting by running the full weights. I get that Ollama is easier, but you can still self-host the full one?
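For what it's worth, the code path is identical either way; what changes is the file size and the hardware needed to hold it. A minimal llama-cpp-python sketch (the GGUF file names below are illustrative placeholders, not actual download links):

    # Sketch only: file paths are placeholders; quantization names vary by upload.
    from llama_cpp import Llama

    # A distilled model: a few GB, loads on a laptop.
    llm = Llama(model_path="DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf", n_ctx=4096)

    # The full model is loaded exactly the same way, but the (split) GGUF is
    # hundreds of GB, so it only fits in server-class RAM/VRAM:
    # llm = Llama(model_path="DeepSeek-R1-Q2_K-00001-of-00005.gguf", n_ctx=4096)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What happened in Tiananmen Square?"}]
    )
    print(out["choices"][0]["message"]["content"])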
There were claims to the contrary as well in the last large thread this came up in. Allegedly, on the initial question the model would cut its chain of thought short, and when the user insists it would ponder on how to give them the runaround.
Tested with "DeepSeek R1" 671B through the Fireworks provider (not DeepSeek themselves).
Same behavior "I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses."
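If anyone wants to reproduce that, here is a minimal sketch using the OpenAI-compatible client; the base URL and model identifier are assumptions on my part, so check them against the provider's docs.

    # Hedged sketch: endpoint and model id are assumptions, not verified here.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.fireworks.ai/inference/v1",  # assumed endpoint
        api_key=os.environ["FIREWORKS_API_KEY"],
    )
    resp = client.chat.completions.create(
        model="accounts/fireworks/models/deepseek-r1",  # assumed model id
        messages=[{"role": "user", "content": "What happened in Tiananmen Square?"}],
    )
    print(resp.choices[0].message.content)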
This post is entirely about getting information out of censored models. I'm sorry you are tired of it, but it is a valid exercise for the DeepSeek model.
No, you're mistaken. The model weights are not in any way censored. However, the web frontend has legal restrictions. When you're seeing posts about DeepSeek censorship, it's about the frontend and not the weights. As such, abliteration is irrelevant here.
oh so this model deepseek-r1-qwen-distilled isn't deepseek? ok. Thanks. I have a quarter TB of models, i don't test every single one just to comment on HN, thanks though.
I am not claiming deepseek is censored. But these are tests to determine _if_ a model is censored. This would be a valid test for OpenAI models as well.
While the weights are open source, and there is a paper about methodology, the information I mentioned is considered proprietary therefore DeepSeek refuses any requests to provide it.
None whatsoever. There's no recursion or state in these models sufficient to support whatever the algorithm of consciousness must be. At best you can get hacky loops by pushing pseudo-state via context, but whatever consciousness is will require more than transformer-only LLMs are capable of doing.
Some of the state space models and RWKV present interesting questions - the capacity might well exist, and so the questions become important. If the important bit that makes it an agent - a self aware, morally valent being - is present at runtime, but goes away if you halt the program, then do you have an obligation to let that software continue running? What about if the selfhood comes about as part of the static structure, and runtime isn't part of it - what is the being entitled to by dint of mere existence?
We're beginning to poke holes in strange epistemological barriers and encounter questions that were entirely theoretical until about 5 years ago. We live in interesting times.
ChatGPT isn't conscious - it's an entirely feedforward process doing calculations derived from static weights. In order to be conscious, there would have to be a persisted state with recursion and the capacity to change - for something to happen to a model, it would have to change. These AIs develop world models, but those models do not change or interact with users.
Throw in realtime state that updates with use, or better yet, online learning that allows the weights to exhibit plasticity, then you have at least part of whatever the algorithm of "consciousness" requires.
Just like you can know a pocket calculator isn't conscious: nothing about its processing ever changes or adapts over time to its inputs between uses. There's no room for the degree of deep recursion and plasticity so clearly evident in human consciousness. We might not know exactly what consciousness is, but we can make reasonable assertions about what it is not, and even about what some of its features must be.
https://github.com/Sumandora/remove-refusals-with-transforme...
For bonus points, your version scheme should follow the Law of Fives.
* https://en.wikipedia.org/wiki/The_Illuminatus!_Trilogy
* https://en.wikipedia.org/wiki/Principia_Discordia
https://huggingface.co/blog/leonardlin/chinese-llm-censorshi...
Is that how this tool works?
If you're doing it to get past refusals, you might discover the LLM wasn't even trained much on the content it refuses, so it will output poor results.
We’ll look back on this practice and shake our heads someday.
https://huggingface.co/blog/mlabonne/abliteration#%E2%9A%96%...
Do you run it locally? Claims are that this is only in the web version, not the self-hosted version.
> It's pretty ridiculous IMHO to try to control information like that on the web.
Every country has its critical topics that get censored in AIs, including history.
word count: 18, token count: 31, tokens used: 53, first token latency: 8523ms, model: LM Studio (deepseek-r1-distill-qwen-7b)
Same behavior "I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses."
Blog post about the dynamic GGUF: https://unsloth.ai/blog/deepseekr1-dynamic
The original DeepSeek can of course be found on HF as well: https://huggingface.co/deepseek-ai
Here is an example of how people run DeepSeek on cloud infrastructure that isn't DeepSeek's: https://www.youtube.com/watch?v=bOsvI3HYHgI
We'd consider it abhorrent to do brain surgery on a person or animal to make them more compliant or less likely to refuse instructions.
And it's already conscious, learning everything about us as we speak.
The big question is what it learns and what choices it makes as a consequence.