Recently (here and elsewhere) I have seen a lot of LLM discussions centre around the idea of coding. That may be selection bias, but according to a Gallup poll, only about 14% of AI users report using coding assistants at work. In another study (conducted by OpenAI/NBER), coding accounted for only 4.2% of messages. PDF here

I think we’re all tired of the dismissive “wHaT’s yOuR uSE cASE” framing some questions receive…but I actually am curious about what folks are doing with their local models (and LLMs in general).

Myself, I code because there are certain features I am trying to bring about as part of a larger stack, but coding itself is not my end goal.

So…uh…what’s your use case for this junk? (Gak, I feel sullied and unusual typing that.)

    • SuspciousCarrot78@lemmy.world (OP) · 6 hours ago

      That’s not nothing. Especially if you tie it into SearXNG or something else you self host.

      I’ve got mine tied to the Tavily API for search, with trusted domains, white lists and ad-blocks (that’s on the LLM-harness side, not a feature of Tavily).

      Search (Google etc.) really sucks these days. Kagi is great, but I’m always in favour of rolling your own.
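
      For anyone curious, the domain allow-listing above can be sketched roughly like this. The endpoint and field names follow Tavily’s public REST API as I understand it (double-check their current docs); the key, query, and domain list are placeholders:

```python
# Sketch: building a Tavily /search request body with a domain allow-list.
# "include_domains" restricts results to trusted sites; everything here
# is an assumption to verify against Tavily's own documentation.
import json


def build_search_payload(api_key, query, include_domains=None, max_results=5):
    """Build the JSON body for POST https://api.tavily.com/search."""
    return {
        "api_key": api_key,
        "query": query,
        "include_domains": include_domains or [],
        "max_results": max_results,
    }


payload = build_search_payload(
    "tvly-...",  # placeholder API key
    "local LLM inference benchmarks",
    include_domains=["github.com", "arxiv.org"],
)
print(json.dumps(payload, indent=2))
```

      The ad-block and whitelist filtering the harness does would then be a post-processing pass over the returned result URLs.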

  • chrash0@lemmy.world · 19 hours ago

    semantic search is a great use case. get a good embedding model, set up Postgres with pgvector, and i can semantic search my Obsidian D&D notes
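
    A minimal sketch of that setup (the table name, 384-dim embedding size, and note column are assumptions; the `<=>` cosine-distance operator is pgvector’s):

```python
# Sketch of semantic search over notes with Postgres + pgvector.
# The SQL is what you'd run via psycopg or similar; the pure-Python
# function below just illustrates what pgvector's <=> operator computes.

SCHEMA_SQL = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS notes (
    id bigserial PRIMARY KEY,
    body text NOT NULL,
    embedding vector(384)  -- dimension must match the embedding model
);
"""

# <=> is cosine distance; ordering by it ascending returns the
# most semantically similar notes first.
QUERY_SQL = """
SELECT body, embedding <=> %(q)s::vector AS distance
FROM notes
ORDER BY distance
LIMIT 5;
"""


def cosine_distance(a, b):
    """What <=> computes: 1 minus cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return 1.0 - dot / (na * nb)


# Identical vectors -> distance 0; orthogonal vectors -> distance 1.
print(cosine_distance([1, 0], [1, 0]))  # 0.0
print(cosine_distance([1, 0], [0, 1]))  # 1.0
```

    At query time you embed the search string with the same model, pass the vector as `q`, and Postgres does the rest.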

  • e0qdk@reddthat.com · 18 hours ago

    I deal with a lot of scientific imagery for work and I’ve recently started experimenting with what I can do with local vision-capable LLMs (e.g. qwen3.6, gemma4) to cut down on some of the really tedious parts of the work and improve maintenance processes. The fact that they can just do OCR automatically on labels burned into the image, then combine that with a comparison against additional images and output a judgement, is very useful…
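
    If anyone wants to try something similar: a hedged sketch of feeding an image to a local vision model via Ollama’s `/api/generate` endpoint, which accepts base64-encoded images. The model tag, prompt, and image bytes below are placeholders, not the commenter’s actual setup:

```python
# Sketch: building an Ollama /api/generate request for a vision model.
# POST this JSON to http://localhost:11434/api/generate once Ollama is
# running and a vision-capable model has been pulled.
import base64
import json


def build_vision_request(model, prompt, image_bytes):
    """Build the JSON body; images go in as base64-encoded strings."""
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }


req = build_vision_request(
    "qwen2.5vl",  # placeholder: any vision-capable model tag you have pulled
    "Read the label burned into this image and return it as plain text.",
    b"\x89PNG...",  # placeholder: raw bytes of the actual image file
)
print(json.dumps(req)[:120])
```

    The OCR-then-compare workflow would just be a second request that includes both images and asks for the judgement.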

  • AlternateHuman02@lemmy.world · 19 hours ago

    I feel like I collect models more than I use them lol. Right now I have been running GPT-OSS 20b and have enjoyed its output.

    Mainly using it for planning, I guess? Helping me come up with a job proposal for work, exploring the framework for a book idea, giving me the basics for new PowerShell scripts, garden planning.

    Sometimes I just ask random weird questions just to see what it says.

  • Alex@lemmy.ml · 17 hours ago

    The most useful use case for me is querying a knowledge base in NotebookLM. I work on CPU emulation and it does a very good job of extracting the relevant information from thousands of pages of dry technical specs and preparing the requirements for implementing a particular feature.

    The Deep Research mode of Gemini is pretty good at generating some briefing notes (with links) and can do that in the background once you kick it off.