

cross-posted to:
- localllama@sh.itjust.works

Small, fast model with an MIT license for local use. Benchmarks look good for the size, but IMO these smaller models aren't consistent enough to live up to their promises.

Will be interesting to see how it stacks up against Nemotron 3 nano. I'm hoping for somewhat more reliable models at this size that I can run entirely locally on my 3090. Really hoping that, with MCP servers, tools, etc., they can be functional enough for use cases that require more security and local operation (rough sketch of what I mean below). For most things, though, I've switched to using larger models via OpenRouter.
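
For context, the fully local setup I have in mind looks roughly like this: point an OpenAI-compatible client at whatever server is hosting the model (llama.cpp, Ollama, and vLLM all expose this style of endpoint) and hand it tool schemas, e.g. ones exported by an MCP server, so nothing leaves the box. This is only a sketch; the URL, port, model name, and tool are placeholders I made up, not anything from the model's release:

```python
# Minimal sketch: call a locally hosted model through an OpenAI-compatible
# chat completions endpoint and offer it one tool. The base URL, port, and
# model id below are assumptions -- adjust for whatever server you run.
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1/chat/completions"  # assumed local server
MODEL = "local-small-model"                              # placeholder model id

# One hypothetical tool the model may call; a real MCP server would expose
# its own schemas instead.
tools = [{
    "type": "function",
    "function": {
        "name": "read_local_file",
        "description": "Read a file from the local machine (data never leaves the box).",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Summarize ~/notes/meeting.txt"}],
    "tools": tools,
}

req = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

# If the model chose to call the tool, the tool call (and the file) stay local.
print(json.dumps(reply["choices"][0]["message"], indent=2))
```

Whether a model this size can reliably pick and fill in the right tool calls is exactly the consistency question above.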