Recently (here and elsewhere) I have seen a lot of LLM discussion centre around the idea of coding. That may be selection bias, but according to a Gallup poll, only about 14% of AI users report using coding assistants at work. In another study (conducted by OpenAI/NBER), coding was only 4.2% of messages (PDF here).
I think we’re all tired of the dismissive “wHaT’s yOuR uSE cASE” framing some questions receive…but I actually am curious about what folks are doing with their local models (and LLMs in general).
Myself, I code because there are certain features I'm trying to build as part of a larger stack, but coding itself is not my end goal.
So…uh…what’s your use case for this junk? (gak, I feel sullied and unusual typing that).


I deal with a lot of scientific imagery for work, and I’ve recently started experimenting with what I can do with local vision-capable LLMs (e.g. qwen3.6, gemma4) to cut down on some of the really tedious parts of the work and improve maintenance processes. The fact that they can automatically OCR the labels burned into an image, then compare it against additional images and output a judgement, is very useful…
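For anyone curious what that kind of pipeline looks like in practice, here's a minimal sketch of the OCR step against a locally running Ollama server. This is just my guess at a starting point, not the poster's actual setup: the model tag, prompt wording, and default host are all assumptions you'd swap for whatever vision model you have pulled.

```python
# Hedged sketch: OCR a label burned into a scientific image using a local
# vision model served by Ollama's /api/chat endpoint. Model tag, prompt,
# and host are placeholders -- adjust for your own local setup.
import base64
import json
from urllib import request

def build_ocr_request(image_bytes: bytes, model: str = "qwen2.5vl") -> dict:
    """Build the JSON payload Ollama expects for a vision chat turn.

    Ollama attaches images to a message as a list of base64 strings
    under the "images" key.
    """
    return {
        "model": model,          # placeholder model tag -- use whatever you've pulled
        "stream": False,         # one complete JSON response instead of a stream
        "messages": [{
            "role": "user",
            "content": "Transcribe any text burned into this image. "
                       "Reply with the text only.",
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
    }

def ocr_label(image_path: str, host: str = "http://localhost:11434") -> str:
    """Send one image to the local server and return the transcribed label."""
    with open(image_path, "rb") as f:
        payload = build_ocr_request(f.read())
    req = request.Request(
        f"{host}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"].strip()
```

The comparison/judgement step would just extend the `images` list with the additional frames and change the prompt to ask for a verdict, since Ollama accepts multiple images per message.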