cm0002@piefed.world to LocalLLaMA@sh.itjust.works · English · 5 days ago
ollama 0.11.9 Introducing A Nice CPU/GPU Performance Optimization (www.phoronix.com)
afaix@lemmy.world · 4 days ago
Doesn't llama.cpp have a -hf flag to download models from huggingface instead of doing it manually?
panda_abyss@lemmy.ca · 4 days ago
It does, but I've never tried it; I just use the hf CLI.
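For anyone following along, here is a minimal sketch of both routes the thread mentions: llama.cpp's -hf flag versus downloading with the Hugging Face CLI first. The repo name and file paths below are illustrative placeholders (not from the thread), and it assumes a recent llama.cpp build plus the huggingface_hub CLI installed.

```sh
# Option 1: let llama.cpp fetch the GGUF straight from Hugging Face and serve it
# (repo below is a placeholder example; substitute the model you actually want)
llama-server -hf ggml-org/gemma-3-1b-it-GGUF

# Option 2: download manually with the Hugging Face CLI, then point llama.cpp at the file
hf download ggml-org/gemma-3-1b-it-GGUF --local-dir ./models
llama-server -m ./models/<downloaded-file>.gguf
```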