Hungary 🇭🇺🇪🇺
Developer behind the Eternity for Lemmy Android app.
@bazsalanszky@lemmy.ml is my old account, migrated to my own instance in 2023.
Yes, you can find it here.
Are you using Mistral 7B?
I also really like that model and their fine-tunes. If licensing is a concern, it’s definitely a great choice.
Mistral also has a new model, Mistral Nemo. I haven’t tried it myself, but I heard it’s quite good. It’s also licensed under Apache 2.0 as far as I know.
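If you want to give it a try with Ollama, something like this should work — I'm assuming it's published in the Ollama library under the `mistral-nemo` tag, so double-check the exact name on the library page:

```
# Download the model (the default tag is a pre-quantized build)
ollama pull mistral-nemo

# Start an interactive chat with it
ollama run mistral-nemo
```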
I haven’t tested it extensively, but Open WebUI also has RAG functionality (chat with documents).
The UI itself is also kinda cool, and it has other useful features like commands (for common prompts) and searching online (e.g. with searx). It works quite well with Ollama.
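For anyone who wants to try it, this is roughly how you’d start it with Docker next to a local Ollama install (the port mapping, volume, and container name below are just the ones from their docs; adjust as needed):

```
# Run Open WebUI; host.docker.internal lets the container
# reach the host's Ollama on its default port (11434)
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

After that, the UI should be reachable at http://localhost:3000.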
Currently, I only have a free account there. I tried Hydroxide first and had no problem logging in; I was also able to fetch some emails. I’ll try hydroxide-push later as well.
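In case it helps anyone, the setup was roughly this (the address below is a placeholder, and the subcommands may differ between versions, so check hydroxide’s README):

```
# Log in once; this prints a bridge password to use in your mail client
hydroxide auth you@example.com

# Start the local IMAP bridge (SMTP and CardDAV are separate subcommands)
hydroxide imap
```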
I hadn’t heard of Hydroxide before; thank you for highlighting it! Just one question: does it require a premium account like the official bridge, or does it work with free accounts too?
The latest version of Eternity is compatible with Lemmy 0.19. You might need to log out and log back in, though.
However, the new features aren’t implemented yet (e.g. the new sorting methods).
From what I’ve seen, it’s definitely worth quantizing. I’ve used Llama 3 8B (fp16) and Llama 3 70B (q2_XS). The 70B version was way better, even with this quantization, and it fits perfectly in 24 GB of VRAM. There’s also this comparison showing the quantization options and their benchmark scores:
Source
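If you’re on Ollama, you can usually grab a specific quant straight from the model tag. The tag below is just an example of the naming scheme, so check the library page for what’s actually published:

```
# The default tag is typically a q4_0 build; other quants are separate tags
ollama run llama3:70b-instruct-q2_K
```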
To run this particular model, though, you would need about 45 GB of RAM just for the q2_K quant, according to Ollama. I think I could run it on my GPU and offload the rest of the layers to the CPU, but the performance wouldn’t be great (e.g. less than 1 t/s).
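With llama.cpp (which Ollama uses under the hood) you can set that GPU/CPU split explicitly. The model path and layer count below are made up for illustration, and on older builds the binary is called ./main instead of llama-cli:

```
# -ngl sets how many layers are offloaded to the GPU; the rest run on the CPU
./llama-cli -m ./model-q2_K.gguf -ngl 40 -p "Hello"
```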