The Qwen3.5 models are still the best local models I’ve used, so I’m excited to see how this updated version performs.

  • venusaur@lemmy.world · 4 hours ago

    Thanks! That sounds expensive. Hopefully 24GB VRAM gets cheaper or models get more efficient soon.

      • venusaur@lemmy.world · 2 hours ago

        Thanks! I’m hoping to run at least a 20B model. Not sure I can do that fast enough without 24GB, which seems to be the sweet spot.