When using llama.cpp, does it pass your prompts through a web server to process? Any privacy concerns?
Sounds like I’m looking at a few grand to run something decent. I’ll need to do more research before I commit to that big of a purchase, but your machine sounds nice!
Are there any small models you recommend that can run on 16GB DDR4 and an i7? No dedicated graphics card with separate VRAM. Maybe I'll just experiment with something very small first.
No, it absolutely does NOT pass your prompts to any remote server; inference runs entirely on your own hardware, and even the optional server component only listens on your machine. llama.cpp has thousands of eyes on the code; there'd be an uproar if any sneaky telemetry were built in.
PS: llama.cpp has its own built-in web UI (think ChatGPT, but local on your machine) that's really nice and worth considering as your daily chat front end.
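For reference, here's a minimal sketch of launching that built-in web UI with `llama-server` (the model filename is a placeholder; assuming you've already downloaded a quantized GGUF model):

```shell
# Start llama.cpp's bundled web UI entirely on your own machine.
# --host 127.0.0.1 binds to localhost only, so nothing is reachable
# from outside your computer, let alone the internet.
llama-server -m ./models/your-model.gguf --host 127.0.0.1 --port 8080

# Then open http://127.0.0.1:8080 in your browser for the chat UI.
```

You can verify the privacy claim yourself: disconnect from the internet after the model is downloaded and it keeps working exactly the same.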
Small models in the 16GB range: sure. What would you like to do with your LLM? General use or something specific?
Thank you!!! This is awesome!
Thanks again!
You'd be surprised by the smaller 7-12B LLMs. Give them tools and they can punch well above their weight, even quantized on CPU-only hardware.
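As a hedged sketch of what "giving them tools" looks like: `llama-server` exposes an OpenAI-compatible chat endpoint, so you can pass function/tool definitions in the request (the `get_weather` function here is a made-up example, and tool-calling quality depends on the model and its chat template):

```shell
# Assumes llama-server is already running locally on port 8080.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

If the model supports it, the response contains a tool call instead of a plain answer; your own code then runs the function and feeds the result back in a follow-up message.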