Oh, that part is. But the splitting tech is built into llama.cpp
With modern methods, running a larger model split between GPU and CPU can sometimes be fast enough. Here's an example: https://dev.to/maximsaplin/llamacpp-cpu-vs-gpu-shared-vram-and-inference-speed-3jpl
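For reference, here's a minimal sketch of that split using the llama-cpp-python bindings (the model path and layer count are hypothetical; tune n_gpu_layers to your VRAM):

```python
# Minimal sketch of GPU/CPU splitting via llama-cpp-python.
# Assumptions: a local GGUF file at ./model.gguf (hypothetical path)
# and enough VRAM for ~20 layers; the rest stays in system RAM on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # hypothetical path to your GGUF quant
    n_gpu_layers=20,            # layers offloaded to the GPU; -1 offloads everything
    n_ctx=4096,                 # context window
)

out = llm("Write a haiku about VRAM.", max_tokens=64)
print(out["choices"][0]["text"])
```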
fp8 would probably be fine, though the method used to make the quant would greatly influence that.
I don’t know exactly how Ollama works, but I’d think a better fit would be one of these quants:
https://huggingface.co/bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF
A GGUF model would also allow some overflow into system RAM, if Ollama supports that like some other inference backends do.
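For what it's worth, here's a rough sketch of hitting Ollama's local REST API once a model is pulled (default port 11434; the model tag and prompt are just illustrative):

```python
# Sketch of a call to Ollama's local REST API (default port 11434).
# Assumes the model has already been pulled; the tag below is illustrative.
import json
import urllib.request

payload = {
    "model": "qwen2.5-coder:1.5b",  # illustrative tag; use whatever you pulled
    "prompt": "Write a Python one-liner to reverse a string.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```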
Quantisation technology has improved a lot this past year, making very small quants viable for some uses. I think the general consensus is that an 8-bit quant will be nearly identical to the full model, and a 6-bit quant can feel so close that you may not even notice any loss of quality.
Going smaller than that is where the real trade-off starts. 2-3 bit quants of much larger models can absolutely surprise you, though they will probably be inconsistent.
So it comes down to the task you’re trying to accomplish. If it’s programming-related, go 6-bit and up for consistency, on the largest coding model you can fit. If it’s creative writing or something similar, a much lower quant of a larger model is the way to go in my opinion.
Oh shit, I thought he was running for president or something
There’s tons on huggingface
https://huggingface.co/datasets/sayakpaul/poses-controlnet-dataset
Use kobold.cpp instead of all of those backends. It also does text-to-speech. https://github.com/LostRuins/koboldcpp
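Once it's running, it serves a local KoboldAI-compatible API, on port 5001 by default. Rough sketch (the generation parameters here are just illustrative):

```python
# Sketch of a request to kobold.cpp's local KoboldAI-compatible API
# (default port 5001). Generation parameters below are illustrative.
import json
import urllib.request

payload = {
    "prompt": "Once upon a time",
    "max_length": 80,    # tokens to generate
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["results"][0]["text"])
```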
About a hundred years ago you could buy a “radio flyer”. It’s a red wagon. People don’t change.
YOU COULD JUST LEAVE??? WE HAD TO RENT IT THREE TIMES IN A ROW TO BEAT THAT TUTORIAL!!
I think that points to it being Chinese propaganda. Russians don’t bring up the Huawei thing; they don’t understand it well enough.
They’re bots that hooked onto the wrong video