The wait is over, most GGUFs are already up. Nice to see there are models for many different hardware configurations.
Nice one. Is there a modern way of “jailbreaking” these models? I put in a request to write a story, and it generates something like 2500 tokens of “thinking” text, philosophising about how the system prompt relates to its internal safety guidelines, getting lost in internal dialogue, and ultimately deciding to weasel out of my prompt and provide a “safe” version. Same thing when it doubles as a coding assistant on security-related stuff. I can edit its “thoughts”, and that seems to help for a few paragraphs, but it’s pretty adamant about its weird rules, no matter what I do. I mean, it did at least provide the requested test case for the SQL injection, after reasoning to no end about how it shouldn’t. But it’s a bit hard to squeeze things like that out of it.
Keep an eye on this: https://huggingface.co/heretic-org
I used to use a -heretic abliterated version of gpt-oss-120b, not for any creative reasons but just to reduce the amount of wasted tokens in its thinking, with good results.
(You can turn off thinking mode with the new Qwen models btw - how you do it will depend on how you’re hosting it, but basically it’s a flag to the chat template. It won’t remove the safety guidelines, but it will stop it telling you all about its internal monologue ;).)
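If you’re hosting with llama.cpp, it’d look roughly like this (a sketch, assuming a recent llama-server build with Jinja chat-template support; the model filename is a placeholder and flag spellings may differ in your version):

```shell
# Hypothetical invocation: --jinja enables the model's own chat template,
# and enable_thinking=false is the Qwen-style template switch that skips
# the <think>...</think> monologue. Check `llama-server --help` for your build.
llama-server -m qwen3.5-35b-a3b.gguf --jinja \
  --chat-template-kwargs '{"enable_thinking": false}'
```

With transformers-based hosting the same switch is passed as `enable_thinking=False` to `apply_chat_template`.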
I just realised this is the much more useful link: https://github.com/p-e-w/heretic?tab=readme-ov-file
I can see at least one -heretic version of a Qwen3.5 model on Huggingface already; can’t vouch for quality though.
Thanks! I’ll wait a few days, maybe one of these pops up on Huggingface. Are “abliterated” versions alright these days? Last time I downloaded something with that word in the name, it wasn’t very good.
I don’t follow the discussions on this topic very closely, but as I understand it, there are different ways to achieve the goal, and all of them impact quality to some extent. Heretic is discussed as one of the SOTA methods. The README posted above states the following, so it seems that Heretic is some sort of next-gen abliteration:
It combines an advanced implementation of directional ablation, also known as “abliteration” (Arditi et al. 2024, Lai 2025 (1, 2)), with a TPE-based parameter optimizer powered by Optuna.
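To give a flavour of what directional ablation does, here’s a toy numpy sketch (not Heretic’s actual code): estimate a “refusal direction” as the normalized difference of mean activations on refused vs. accepted prompts, then project that direction out of the hidden states. Heretic additionally tunes per-layer ablation parameters with a TPE optimizer (Optuna), which this sketch omits entirely.

```python
import numpy as np

def refusal_direction(refused_acts, accepted_acts):
    # Toy estimate: difference of mean activations, normalized to unit length.
    d = refused_acts.mean(axis=0) - accepted_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(hidden, direction):
    # Remove each hidden state's component along the refusal direction.
    return hidden - np.outer(hidden @ direction, direction)

# Synthetic activations just to show the effect.
rng = np.random.default_rng(0)
d = refusal_direction(rng.normal(1.0, 1.0, (8, 16)),
                      rng.normal(0.0, 1.0, (8, 16)))
h = ablate(rng.normal(size=(4, 16)), d)
print(np.allclose(h @ d, 0))  # True: no component left along the direction
```

In the real thing this projection is baked into the model’s weight matrices rather than applied at inference time, which is why an abliterated model ships as a normal checkpoint.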
Hmmh, thanks. Yeah, I read the README, and they claim it performs better than other methods. I guess I’ll find out soon.
Been testing the smaller one (Qwen3.5-35B-A3B) with OpenCode for the last couple of hours and I’m very impressed! Still too early to say for sure, but I may actually prefer it over gpt-oss-120b and qwen3-coder-next despite it being much smaller.
Qwen3.5-35B-A3B is supposed to outperform Qwen3(-VL)-235B-A22B.