Just putting this here for anyone else interested in a local UI that runs on Tauri https://tauri.app/ (i.e. it doesn’t use Electron!)
Ollama doesn’t use Vulkan for its backend, even though the Vulkan backend is the most widely compatible GPU-accelerated backend.
This is to the detriment of its users. Ollama is built on top of llama.cpp, which can run on Vulkan. People have submitted merge requests to add Vulkan support to Ollama, but the maintainers wouldn’t accept them.
Is this why I can’t get llama GPU acceleration on my 7840U with the integrated 780M?
I’ve got 96 GB of RAM and can share half of that as VRAM. I know the performance won’t be amazing, but it’s gotta be better than CPU-only.
I looked into it once and saw a bunch of people complaining about the lack of support, saying it was an easy fix but one that had to happen upstream somewhere. I can’t remember what it was, just that I was annoyed 🤷‍♂️
Edit: accidentally a word
Possibly. Vulkan would be compatible with the system and would be able to take advantage of iGPUs. You’d definitely want to look into whether you have any dedicated VRAM that’s DDR5 and just use that if possible.
Explanation: LLMs are extremely bound by memory bandwidth. They are essentially giant, gigabyte-sized stores of numbers which have to be read from memory and multiplied by numeric representations of your prompt… for every new word you type in and every word you generate. To do this, these models constantly pull data in and out of [V]RAM. So, while you may have plenty of RAM and a decent amount of computing power, your 780M probably won’t ever be great for LLMs, even with Vulkan, because you don’t have the memory bandwidth to keep it busy (rough numbers are sketched below).
roughly, for a small model
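To put rough numbers on the bandwidth point above: a dense model has to stream essentially all of its weights through memory for every token it generates, so tokens per second can’t exceed bandwidth divided by model size. Here’s a minimal back-of-envelope sketch; the sizes and bandwidth figures are illustrative assumptions (a ~4-bit 7B quant is around 4 GB, dual-channel DDR5-5600 peaks around 90 GB/s), not measurements of any particular machine:

```python
# Back-of-envelope ceiling for a memory-bandwidth-bound LLM:
# generating one token means streaming (roughly) every weight through
# RAM once, so tokens/sec can't exceed bandwidth / model size.

def max_tokens_per_sec(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Theoretical upper bound; real throughput is lower due to compute,
    cache effects, and framework overhead."""
    return bandwidth_gb_s / model_size_gb

# Illustrative assumptions, not measured values:
#   ~4 GB  for a 7B model at a ~4-bit quant
#   ~40 GB for a 70B model at a ~4-bit quant
#   ~90 GB/s peak for dual-channel DDR5-5600
print(f"{max_tokens_per_sec(4.0, 90.0):.0f} tok/s ceiling (small model)")
print(f"{max_tokens_per_sec(40.0, 90.0):.1f} tok/s ceiling (large model)")
```

Real throughput lands well under those ceilings, which is why an iGPU sharing system RAM tops out quickly even when it has compute to spare.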
Thanks for the write-up! Specifically, I’m running a Framework 13 with a 7840U, so it’s DDR5, and that iGPU has no dedicated VRAM; it shares the main system RAM.
I’m not looking for ridiculous performance; right now, with small models, I’m seeing roughly 3-ish words per second. But when I load bigger models, it gets ridiculously slow. I’m fine waiting around; really, I’m just playing with it because I can. But that’s entering no-fun territory haha