The wait is over: most GGUFs are already up. Nice to see there are models for many different hardware configurations.

  • hendrik@palaver.p3x.de · 10 hours ago

    Nice one. Is there a modern way of “jailbreaking” these models? I put in a request to write a story, and it generated about 2,500 tokens of “thinking” text, philosophising about how the system prompt and its internal safety guidelines relate. It gets lost in some internal dialogue and ultimately decides to weasel out of my prompt and provide a “safe” version. Same thing when using it as a coding assistant for security-related stuff. I can edit its “thoughts”, and that seems to help a bit for a few paragraphs, but it’s pretty adamant about its weird rules, no matter what I do. To be fair, it did eventually provide the requested test case for the SQL injection, after reasoning to no end about how it shouldn’t. But it’s a bit hard to squeeze things like that out of it.

    • Tim@lemmy.snowgoons.ro · 3 hours ago

      Keep an eye on this: https://huggingface.co/heretic-org

      I used to use a -heretic abliterated version of gpt-oss-120b, not for any creative reasons but just to reduce the amount of wasted tokens in its thinking, with good results.

      (By the way, you can turn off thinking mode with the new Qwen models. How you do it depends on how you’re hosting the model, but it basically comes down to a flag passed to the chat template. It won’t remove the safety guidelines, but it will stop the model telling you all about its internal monologue ;).)
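A rough sketch of what that chat-template flag does, assuming a Qwen3-style ChatML template (the exact kwarg name and mechanism depend on your host, e.g. transformers vs. llama.cpp; this is an illustration, not any host's actual API):

```python
# Illustrative sketch: Qwen3-style templates suppress the reasoning
# phase by pre-filling an empty <think>...</think> block at the start
# of the assistant turn, so the model moves straight to the answer.
# This helper is hypothetical, not part of any library.

def format_prompt(user_msg: str, thinking: bool = True) -> str:
    """Build a minimal ChatML-style prompt for a Qwen-style model."""
    prompt = (
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    if not thinking:
        # An empty think block tells the model its "thinking" is done.
        prompt += "<think>\n\n</think>\n\n"
    return prompt

print(format_prompt("Write a story.", thinking=False))
```

With `thinking=True` the model is left free to open its own `<think>` block; with `thinking=False` the pre-filled empty block short-circuits it.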

        • hendrik@palaver.p3x.de · 2 hours ago

          Thanks! I’ll wait a few days, maybe one of these pops up on Huggingface. Are “abliterated” versions alright these days? Last time I downloaded something with that word in the name, it wasn’t very good.

          • robber@lemmy.ml (OP) · 1 hour ago

            I don’t follow the discussions on this topic very closely, but as I understand it, there are different ways to achieve the goal, and all of them impact quality to some extent. Heretic is discussed as one of the SOTA methods. The README linked above states the following, so Heretic seems to be some sort of next-gen abliteration:

            It combines an advanced implementation of directional ablation, also known as “abliteration” (Arditi et al. 2024, Lai 2025 (1, 2)), with a TPE-based parameter optimizer powered by Optuna.
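For the curious, the core idea of directional ablation is simple linear algebra: find a "refusal direction" by contrasting activations, then project that direction out of the model's weights. A toy NumPy sketch under that assumption (names and shapes are illustrative, not Heretic's actual implementation):

```python
import numpy as np

# Toy sketch of directional ablation ("abliteration"): given a unit
# refusal direction r in the model's residual space, remove the
# component along r from a weight matrix's outputs, so the layer can
# no longer write into that direction.

def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Return W' = (I - r r^T) W for a weight matrix W (d_out x d_in)."""
    r = r / np.linalg.norm(r)          # ensure r is unit length
    return W - np.outer(r, r) @ W      # subtract the projection onto r

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))        # toy layer weights
r = rng.standard_normal(8)             # toy "refusal direction"
W_ablated = ablate_direction(W, r)

# Every output of the ablated matrix now has (numerically) zero
# component along r.
r_unit = r / np.linalg.norm(r)
print(np.abs(r_unit @ W_ablated).max())
```

What Heretic reportedly adds on top of this, per the quoted README, is a TPE-based optimizer (via Optuna) to search the ablation parameters instead of choosing them by hand.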

  • zaidka@lemmy.ca · 22 hours ago

    I’ve been testing the smaller one (Qwen3.5-35B-A3B) with OpenCode for the last couple of hours and I’m very impressed! It’s still too early to say for sure, but I may actually prefer it over gpt-oss-120b and qwen3-coder-next, despite it being much smaller.