Just putting this here for anyone else interested in a local UI that runs using Tauri https://tauri.app/ (i.e., it doesn’t use Electron!)

  • beastlykings@sh.itjust.works · 1 month ago

    Is this why I can’t get llama.cpp GPU acceleration on my 7840U with its integrated 780M?

    I’ve got 96 GB of RAM, and I can share half of that as VRAM. I know the performance won’t be amazing, but it’s gotta be better than CPU only.

    I looked into it once and saw a bunch of people complaining about the lack of support, saying it was an easy fix but one that had to happen upstream somewhere. I can’t remember what it was, just that I was annoyed 🤷‍♂️

    Edit: accidentally a word

    • afk_strats@lemmy.world · 1 month ago

      Possibly. Vulkan would be compatible with the system and can take advantage of iGPUs. You’d definitely want to look into whether you have any dedicated VRAM that’s DDR5 and just use that if possible.
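
      If it helps, here’s a minimal sketch of what layer offload looks like through llama.cpp’s Python bindings (`pip install llama-cpp-python`), assuming the package was built with the Vulkan backend enabled; the model path is a placeholder:

      ```python
      # Offload all layers to the GPU via llama.cpp's Python bindings.
      # Assumes a Vulkan-enabled build; the model path is hypothetical.
      from llama_cpp import Llama

      llm = Llama(
          model_path="models/llama-7b.Q4_K_M.gguf",  # placeholder path
          n_gpu_layers=-1,  # -1 = offload every layer to the GPU
          n_ctx=2048,
      )

      out = llm("The quick brown fox", max_tokens=32)
      print(out["choices"][0]["text"])
      ```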

      Explanation: LLMs are extremely bound by memory bandwidth. They are essentially giant gigabyte-sized stores of numbers which have to be read from memory and multiplied by a numeric representation of your prompt… for every new word you type in and every word you generate. To do this, these models constantly pull data in and out of [v]RAM. So, while you may have plenty of RAM and a decent amount of computing power, your 780M probably won’t ever be great for LLMs, even with Vulkan, because you don’t have the memory bandwidth to keep it busy.

      Roughly, for a small model (a back-of-envelope sketch follows the list):

      • CPU, dual-channel DDR4 - 1.7 words per second
      • CPU, dual-channel DDR5 - 3.5 words per second
      • VRAM, GTX 1060 - 10+ words per second …
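
      Back-of-envelope, those numbers are roughly peak memory bandwidth divided by model size (the bandwidth figures below are theoretical peaks, and the 4 GB model size is an assumption; real throughput lands well under the bound):

      ```python
      # Token generation is memory-bandwidth bound: each new token streams
      # roughly all model weights through the memory bus once, so
      # tokens/s is capped at about bandwidth / model size.
      configs_gbps = {
          "Dual-channel DDR4-3200": 51.2,  # 2 x 25.6 GB/s theoretical peak
          "Dual-channel DDR5-5600": 89.6,  # 2 x 44.8 GB/s theoretical peak
          "GTX 1060 (GDDR5)": 192.0,       # spec-sheet bandwidth
      }
      model_size_gb = 4.0  # e.g. a ~7B model at 4-bit quantization

      for name, bw in configs_gbps.items():
          print(f"{name}: <= {bw / model_size_gb:.0f} tokens/s upper bound")
      ```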
      • beastlykings@sh.itjust.works · 1 month ago

        Thanks for the write-up! Specifically, I’m running a Framework 13 with a 7840U, so it’s DDR5, and that iGPU has no dedicated VRAM; it shares main system RAM.

        I’m not looking for ridiculous performance; right now, with small models, I’m seeing probably 3-ish words per second. But when I load bigger models, it gets ridiculously slow. I’m fine waiting around, really I’m just playing with it because I can. But that’s entering no-fun territory haha
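
        That slowdown matches the bandwidth math above: speed falls roughly as one over model size, so doubling the model halves the words per second. A rough sketch, with both constants below as loose assumptions (peak dual-channel DDR5-5600 bandwidth and a sustained-fraction guess calibrated to my ~3 words per second):

        ```python
        # Rough scaling of generation speed with model size on shared DDR5.
        # Both constants are assumptions, not measurements.
        bandwidth_gbps = 89.6  # theoretical peak, dual-channel DDR5-5600
        efficiency = 0.15      # guessed fraction of peak actually sustained

        for size_gb in (2, 4, 8, 16, 32):  # approximate GGUF file sizes
            tps = bandwidth_gbps * efficiency / size_gb
            print(f"~{size_gb:>2} GB model: ~{tps:.1f} words/s")
        ```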