Yeah for sure. Ollama makes all of this way easier, including downloading models at runtime (assuming your query can wait that long, lol). I've been very pleased so far with the functionality it gives me. That said, if I were building a very tight integration or a desktop app, I would probably use llama.cpp directly. It just depends on the use case and scale. I do wish they (EDIT: ollama) would be better netizens and upstream their changes to llama.cpp. Also, it is unfortunate that at some point ollama will get enshittified (no more easy model downloads from their library without an account, etc.) if only because they are building a company around it. So I am really thankful that llama.cpp continues to be such a foundational piece of FOSS LLM infra.
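For anyone curious about the runtime-download bit: here's a rough sketch in Python against Ollama's documented /api/pull endpoint (assumes a local server on the default port 11434; "llama3" is just a placeholder model name, and the request field name may differ slightly between Ollama versions):

    import json
    import requests  # third-party: pip install requests

    # Ask the local Ollama server to pull a model from its library at runtime.
    # The endpoint streams newline-delimited JSON status objects as it downloads.
    resp = requests.post(
        "http://localhost:11434/api/pull",
        json={"model": "llama3"},
        stream=True,
    )
    for line in resp.iter_lines():
        if line:
            print(json.loads(line).get("status", ""))

Once the pull finishes, the same model name works immediately in a generate/chat request, which is what makes the "download at query time" workflow possible in the first place.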