- cross-posted to:
- fosai@lemmy.world
cross-posted from: https://lemmy.ml/post/43700680
“A terminal tool that right-sizes LLM models to your system’s RAM, CPU, and GPU. Detects your hardware, scores each model across quality, speed, fit, and context dimensions, and tells you which ones will actually run well on your machine.”
I would greatly prefer a simple calculator app or website that you give a model to and it tells you how it would run, but I suppose in the era of vibecoding, making small, functional, well-designed software has become a distant dream.
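The back-of-the-envelope math such a calculator would do is simple enough to sketch. A minimal version, assuming the usual "parameter count × bits per weight" estimate plus a fudge factor for KV cache and runtime buffers (the function name and the 1.2 overhead multiplier here are illustrative assumptions, not taken from the tool):

```python
def estimate_model_ram_gb(params_billions, bits_per_weight=4, overhead=1.2):
    """Rough RAM needed to run an LLM locally.

    params_billions: parameter count in billions (e.g. 7 for a 7B model)
    bits_per_weight: quantization level (16 = fp16, 4 = a Q4 GGUF, etc.)
    overhead: illustrative multiplier for KV cache and runtime buffers
    """
    weight_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb * overhead


# A 7B model at 4-bit quantization needs roughly 4 GB;
# compare that against your free RAM/VRAM and you have your answer.
print(round(estimate_model_ram_gb(7, 4), 1))
```

That's the whole "will it run" check at its core; the rest (hardware detection, quality scoring) is polish on top.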
(and the readme tells you to curl and execute a shell script? no thanks)
feels like this could be made a lot safer by just making it a website where you enter your specs
True, I don’t like the `curl [something] | sh` pattern for installation. Running it is just like giving a random guy from the internet control of your PC to download some binaries. I’m seeing this trend more and more in GitHub repos, and it’s just going to get more common with vibe coding taking over, since if you ask a model to write a readme with an installation section, that’s what you’ll get.
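The safer habit is to save the script, read it, and only then run it. A minimal demonstration of the pattern (using a locally created stand-in script, since I'm not vouching for any real installer URL):

```shell
# Stand-in for: curl -fsSL <installer url> -o install.sh
cat > install.sh <<'EOF'
echo "installing..."  # whatever the real installer would do
EOF

# Inspect it before trusting it
cat install.sh

# Only run it after you've read it
sh install.sh
```

Same end result as piping to `sh`, but with a chance to catch anything sketchy first.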