GLM-4.5-Air is the lightweight variant of our latest flagship model family, also purpose-built for agent-centric applications. Like GLM-4.5, it adopts the Mixture-of-Experts (MoE) architecture but with a more compact parameter size. GLM-4.5-Air also supports hybrid inference modes, offering a “thinking mode” for advanced reasoning and tool use, and a “non-thinking mode” for real-time interaction. Users can control the reasoning behaviour with the reasoning-enabled boolean. Learn more in our docs.
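
As a rough sketch, toggling the reasoning behaviour could look like the following through an OpenAI-compatible client (the endpoint URL, model id, and exact shape of the reasoning flag are illustrative placeholders; see the docs for the authoritative parameter names):

```python
# Minimal sketch: toggling GLM-4.5-Air's thinking mode via an OpenAI-compatible client.
# The base_url, model id, and the exact shape of the reasoning flag are placeholders
# for illustration -- check the official docs for the real parameter names.
from openai import OpenAI

client = OpenAI(base_url="https://example-endpoint/v1", api_key="YOUR_KEY")

resp = client.chat.completions.create(
    model="glm-4.5-air",
    messages=[{"role": "user", "content": "Plan a three-step refactor of this module."}],
    # Hypothetical field based on the "reasoning enabled boolean" described above;
    # set it to False for the non-thinking (real-time) mode.
    extra_body={"reasoning": {"enabled": True}},
)
print(resp.choices[0].message.content)
```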

Blog post: https://z.ai/blog/glm-4.5

Hugging Face:

https://huggingface.co/zai-org/GLM-4.5

https://huggingface.co/zai-org/GLM-4.5-Air

  • doodlebob@lemmy.world · 9 days ago

    Just read the linked post and I’m just now realizing this is mainly for running quants, which I generally avoid

    I guess I could spin it up just to mess around with it, but it probably wouldn’t replace my main model

    • brucethemoose@lemmy.world · 9 days ago

      Just read the linked post and I’m just now realizing this is mainly for running quants, which I generally avoid

      ik_llama.cpp supports special quantization formats incompatible with mainline llama.cpp. You can get better performance out of them than regular GGUFs.

      That being said… are you implying you run LLMs in FP16? If you’re on a huge GPU (or running a small model fast), you should be running sglang or vllm instead, not llama.cpp (which is basically designed for quantization and non-enterprise hardware), especially if you are making parallel calls.

      • doodlebob@lemmy.world · 8 days ago

        Yeah, I’m currently running the Gemma 27B model locally. I recently took a look at vllm, but the only reason I didn’t want to switch is that it doesn’t have automatic offloading (seems that it’s a manual thing right now)

        • brucethemoose@lemmy.world · 8 days ago

          Gemma3 in particular has basically no reason to run unquantized since Google did a QAT (quantization-aware training) finetune of it. The Q4_0 is, almost objectively, indistinguishable from the BF16 weights. Llama.cpp also handles its SWA (sliding window attention) well (whereas last I checked vllm does not).

          vllm does not support CPU offloading well like llama.cpp does.
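
          If you do want a quantized Gemma with partial offload, a rough llama-cpp-python sketch looks something like this (the GGUF filename is just a stand-in for the QAT Q4_0 file; point it at whatever you actually downloaded):

          ```python
          # Rough sketch: Gemma 3 27B QAT Q4_0 GGUF with partial GPU offload via llama-cpp-python.
          # The filename is a stand-in -- point model_path at whatever GGUF you actually have.
          from llama_cpp import Llama

          llm = Llama(
              model_path="gemma-3-27b-it-qat-q4_0.gguf",
              n_gpu_layers=40,  # layers that fit in VRAM; the rest stay in CPU RAM
              n_ctx=8192,       # context length
          )

          out = llm.create_chat_completion(
              messages=[{"role": "user", "content": "Summarize sliding window attention in two sentences."}]
          )
          print(out["choices"][0]["message"]["content"])
          ```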

          …Are you running FP16 models offloaded?

          • doodlebob@lemmy.world · 8 days ago

            Omg, I’m an idiot. Your comment made me start thinking about things and… I’ve been using Q4 without knowing it… I assumed ollama ran the FP16 by default 😬

            About vllm: yeah, I see that you have to specify how much to offload manually, which I wasn’t a fan of. I have 4x 3090s in an ML server at the moment, but I’m using those for all AI workloads, so the VRAM is shared for TTS/STT/LLM/image gen

            That’s basically why I kind of really want auto offload

            • brucethemoose@lemmy.world · 8 days ago

              Oh jeez. Do they have nvlink?

              Also I don’t know if ollama even defaults to QAT heh. It has a lot of technical issues.

              My friend, you need to set up something different. If you want pure speed, run vllm and split Gemma evenly across three or four of those 3090s, and manually limit its VRAM to like 30% each or whatever it will take. Vllm will take advantage of nvlink for splitting and make it extremely fast. Use an AWQ made from Gemma QAT.
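
              Roughly, with vllm’s Python API (the AWQ repo name below is a placeholder, not a specific recommendation):

              ```python
              # Rough sketch of the setup described above: split across four 3090s, cap each GPU
              # at ~30% VRAM to leave room for TTS/STT/image gen, and load an AWQ quant.
              # The model id is a placeholder for an AWQ conversion of the Gemma QAT weights.
              from vllm import LLM, SamplingParams

              llm = LLM(
                  model="your-org/gemma-3-27b-it-qat-awq",  # placeholder repo
                  quantization="awq",
                  tensor_parallel_size=4,       # split evenly across the 3090s
                  gpu_memory_utilization=0.30,  # roughly 30% of each GPU's VRAM
              )

              outputs = llm.generate(
                  ["Explain why tensor parallelism benefits from fast GPU interconnects."],
                  SamplingParams(max_tokens=256, temperature=0.7),
              )
              print(outputs[0].outputs[0].text)
              ```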

              But you can also run much better models than that. I tend to run Nemotron 49B on a single 3090 via TabbyAPI, for instance, but you can run huge models with tons of room to spare for other ML stuff. You could probably run a huge MoE like Deepseek on that rig, depending on its RAM capacity. Some frameworks like TabbyAPI can hot swap them too, as different models have different strengths.

              • doodlebob@lemmy.world · 8 days ago

                Unfortunately I didn’t set up nvlink, but ollama auto-splits things for models that require it

                I really just want a “set and forget” model server lol (that’s why I keep mentioning the auto offload)

                Ollama integrates nicely with OWUI

                • brucethemoose@lemmy.world · 8 days ago

                  Basically any backend will integrate with Open WebUI since they all support the OpenAI API. In fact, some will support more sampling options, embedding models for RAG and such.
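
                  For example, the same few lines of client code work against any of them; only the base URL changes (the port and model name here are made up):

                  ```python
                  # Any OpenAI-compatible backend (TabbyAPI, vllm, llama.cpp's server, ...) can be
                  # queried the same way; only the base_url differs. Port and model name are made up.
                  from openai import OpenAI

                  client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

                  resp = client.chat.completions.create(
                      model="whatever-the-server-has-loaded",
                      messages=[{"role": "user", "content": "Hello from Open WebUI land"}],
                  )
                  print(resp.choices[0].message.content)
                  ```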

                  They will mostly all do auto split too. TabbyAPI (which I would specifically recommend if you don’t have nvlink set up) does completely automated splitting across GPUs, and vllm is completely automated too. ik_llama.cpp (for the absolute biggest models) needs a more specific launch command, but there are good guides for it.

                  IMO it’s worth it to drill down some and make one “set it and forget it” config, as these 200B MoEs that keep coming out will blow Gemma 27B on ollama away, depending on what you use it for.

                  • doodlebob@lemmy.world · 8 days ago

                    I’ll take a look at both tabby and vllm tomorrow

                    Hopefully there’s CPU offload in the works so I can test those crazy models without too much fiddling in the future (the server also has 128 GB of RAM)