Running large language models (LLMs) on your local machine has become increasingly popular, offering privacy, offline access, and customization. Ollama is a ...
Thanks. I’ll factor that in next time someone asks me for a recommendation. I personally have Kobold.CPP on my machine, which seems to be more transparent about such things.
Kobold.cpp is fantastic. Sometimes there are more optimal ways to squeeze models into VRAM (depends on the model/hardware), but TBH I have no complaints.
I would recommend croco.cpp, a drop-in fork: https://github.com/Nexesenex/croco.cpp
It has support for the more advanced quantization schemes of ik_llama.cpp. Specifically, you can get really fast performance offloading MoEs, and you can also use much higher quality quantizations, with even ~3.2bpw being relatively low loss. You’d have to make the quants yourself, but it’s quite doable… just poorly documented, heh.
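The "make the quants yourself" workflow, as I understand it, looks roughly like this. This is a hedged sketch, not exact documentation: the binary names (`llama-quantize`, `llama-server`), the `IQ3_K` quant type, and the `-ot "exps=CPU"` tensor-override flag are from ik_llama.cpp's fork as I recall them, so double-check against the repo before running.

```shell
# Sketch: build an ik_llama.cpp-style quant from an fp16 GGUF.
# IQ3_K is one of the fork's ~3.4bpw quant types; pick per your VRAM budget.
./llama-quantize model-f16.gguf model-iq3_k.gguf IQ3_K

# Run a MoE with expert tensors kept on CPU while attention/shared layers
# go to the GPU (-ngl 99 = offload all layers that aren't overridden):
./llama-server -m model-iq3_k.gguf -ot "exps=CPU" -ngl 99
```

The `-ot` override is what makes MoE offloading fast: the big, rarely-hot expert weights stay in system RAM while the dense parts of every layer sit in VRAM.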
The other warning I’d have is that some of its default sampling presets are funky, if only because they’re from the old days of Pygmalion 6B and Llama 1/2. Newer models like much, much lower temperature and rep penalty.
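Concretely, you can just override the presets per request. Here's a small sketch against KoboldCpp's `/api/v1/generate` endpoint; the field names (`temperature`, `rep_pen`, `top_p`, `max_length`) match its API, but the specific values are my assumptions for newer instruct models, not anything KoboldCpp ships:

```python
import json

def build_payload(prompt: str) -> dict:
    """Sampler settings for a KoboldCpp /api/v1/generate request,
    overriding the Pygmalion/Llama-1-era defaults."""
    return {
        "prompt": prompt,
        "max_length": 256,
        # Newer models want far gentler sampling than the old presets:
        "temperature": 0.7,   # rather than ~1.0 or higher
        "rep_pen": 1.05,      # rather than ~1.1-1.2
        "top_p": 0.95,
    }

print(json.dumps(build_payload("Hello"), indent=2))
```

POST that as JSON to `http://localhost:5001/api/v1/generate` (or just set the same values in the UI's sampler panel) and the launch-time presets stop mattering.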
Thanks for the random suggestion! Installed it already. Sadly, as a drop-in replacement it doesn’t provide any speedup on my old machine, it’s exactly the same number of tokens per second… Guess I have to learn about ik_llama.cpp and pick a different quantization of my favourite model.
What model size/family? What GPU? What context length? There are many different backends with different strengths, but I can tell you the optimal way to run it and the quantization you should run with a bit more specificity, heh.