TheCornCollector@piefed.zip to LocalLLaMA@sh.itjust.works · English · 2 days ago
Qwen3.6 27B released (huggingface.co)
thedeadwalking4242@lemmy.world · 2 days ago
I can run models locally super easily in the CLI with a tool called ollama.
venusaur@lemmy.world · 9 hours ago
Cool, I've heard of it, but I know there are a lot of variables. What model and size are you running with what hardware?
thedeadwalking4242@lemmy.world · 2 hours ago
I've only run super small models. I have a cheap gaming laptop with an Nvidia 3060 with like 8 GB of VRAM. Gemma4 will probably be a good model to try on your hardware.
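For anyone wondering why an 8 GB card limits you to "super small" models: a common back-of-the-envelope rule is parameter count times bytes per parameter at your quantization level, plus some headroom for the KV cache and buffers. This is a rough heuristic sketch, not an ollama-specific formula, and the 20% overhead factor is an assumption:

```python
def est_vram_gb(params_billion: float,
                bytes_per_param: float = 0.5,
                overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB for running an LLM locally.

    bytes_per_param ~0.5 corresponds to 4-bit quantization;
    overhead (assumed ~20%) covers KV cache and runtime buffers.
    Heuristic only -- real usage varies by runtime and context length.
    """
    return params_billion * bytes_per_param * overhead

# The 27B model from the post, 4-bit quantized: well over 8 GB.
print(f"27B @ Q4: ~{est_vram_gb(27):.1f} GB")
# A small ~4B model at 4-bit fits comfortably on an 8 GB 3060.
print(f"4B  @ Q4: ~{est_vram_gb(4):.1f} GB")
```

By this estimate the 27B release needs roughly 16 GB even at 4-bit, which is why a small model in the few-billion-parameter range is the realistic choice for an 8 GB laptop GPU.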