Recently (here and elsewhere) I have seen a lot of LLM discussion centre around the idea of coding. That may be selection bias, but according to a Gallup poll, only about 14% of AI users report using coding assistants at work. In another study (conducted by OpenAI/NBER), coding accounted for only 4.2% of messages. PDF here
I think we’re all tired of the dismissive “wHaT’s yOuR uSE cASE” framing some questions receive…but I actually am curious about what folks are doing with their local models (and LLMs in general).
Myself, I code because there are certain features I'm trying to bring about as part of a larger stack, but coding itself is not my end goal.
So…uh…what’s your use case for this junk? (gak, I feel sullied and unusual typing that).


The most useful use case for me is querying a knowledge base in NotebookLM. I work on CPU emulation, and it does a very good job of extracting the relevant information from thousands of pages of dry technical specs and preparing the requirements for implementing a particular feature.
Gemini's Deep Research mode is pretty good at generating briefing notes (with links), and it can run in the background once you kick it off.