penguin@lemmy.pixelpassport.studio to Selfhosted@lemmy.world • Self-hosted voice assistant with mobile app • 3 hours ago
Home Assistant can do that; the quality will really depend on what hardware you have available to run the LLM. If you only have a CPU, you could be waiting 20 seconds for a response, and the response itself could be pretty poor if you have to run a small quantized model.