I’ve recently played with the idea of self-hosting an LLM. I am aware that it will not reach GPT-4 levels, but being free to put confidential data into my prompts without restraint would be a very nice thing to have.

Does anyone have experience with this? Any recommendations? I have downloaded the full Reddit dataset so I could retrain the model on it, since selected communities provide immense value and knowledge (hehe, this is exactly what Reddit, Twitter etc. are trying to prevent…)
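
From what I’ve read, “retraining” in practice would mean a parameter-efficient fine-tune (e.g. LoRA) rather than training from scratch, which isn’t feasible on consumer hardware. A rough sketch with Hugging Face’s peft library, where the base model and data handling are placeholders:

```python
# Rough LoRA fine-tuning sketch using Hugging Face transformers + peft.
# The base model name and the Reddit text handling are placeholders --
# adapt them to whatever model and data slice you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of all base weights,
# which is what makes this feasible on a single GPU.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights

# From here: tokenize the selected subreddit text and train with
# transformers.Trainer (or the trl library's SFTTrainer).
```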

  • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · 1 year ago

    I personally use llama.cpp in a VM; however, if you have an Nvidia GPU with lots of VRAM you’ve got more options available, as well as much faster inference (text generation) speed; see the rough sketch below.

    Check out the community at !localllama@sh.itjust.works; they’re pretty experienced with running LLMs locally.
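
    In case it helps, a minimal sketch of driving llama.cpp through the llama-cpp-python bindings. The model path and layer count are placeholders, and n_gpu_layers only does anything if the library was built with GPU support:

    ```python
    # Minimal llama-cpp-python sketch (pip install llama-cpp-python).
    # Any local GGUF-format model file works; this path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/7b-chat.Q4_K_M.gguf",  # placeholder model file
        n_ctx=2048,       # context window
        n_gpu_layers=35,  # layers to offload to the GPU; 0 = CPU-only
    )

    out = llm("Q: Why self-host an LLM? A:", max_tokens=128, stop=["Q:"])
    print(out["choices"][0]["text"])
    ```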

      • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · 1 year ago
        At the moment most LLM libraries use CUDA for acceleration, which is Nvidia’s proprietary GPU compute platform, so it only works on their hardware (a quick way to check your setup is sketched below).

        I believe llama.cpp can make use of AMD GPUs, but double-check the project’s GitHub discussions first to confirm this and see how people set it up.
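
        To make both points concrete, here’s a hedged sketch. The PyTorch check is standard; the AMD build flag for llama-cpp-python has changed between releases, so treat it as a starting point and confirm against the project README:

        ```python
        # Quick sanity check for CUDA acceleration, assuming PyTorch is installed.
        import torch

        if torch.cuda.is_available():
            print("CUDA device:", torch.cuda.get_device_name(0))
        else:
            print("No CUDA device; inference falls back to CPU or another backend.")

        # For AMD GPUs, llama-cpp-python can reportedly be built against
        # ROCm/hipBLAS instead of CUDA. The exact CMake flag varies by
        # version -- verify in the project README before relying on it:
        #
        #   CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
        ```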