For coding AI, it could make sense to specialize models by architecture, or by functional/array solutions versus loop-based ones, or simply to ask 4 separate small models and then use a judge model to pick the best parts of each.
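A minimal sketch of that ensemble-and-judge idea. Everything here is hypothetical: the model functions are stand-in stubs rather than real LLM calls, and the judge just uses a crude heuristic where a real setup would score candidates with another model.

```python
# Hypothetical ensemble-and-judge sketch; the "models" below are stubs,
# not a real LLM API.

def functional_model(task: str) -> str:
    # Stub standing in for a model specialized in functional/array style.
    return f"# functional solution to: {task}\nresult = sum(range(10))"

def loopy_model(task: str) -> str:
    # Stub standing in for a model specialized in explicit loops.
    return (
        f"# loop-based solution to: {task}\n"
        "total = 0\n"
        "for i in range(10):\n"
        "    total += i"
    )

def judge(task: str, candidates: list[str]) -> str:
    # Stand-in judge: prefers the shortest candidate as a crude proxy
    # for simplicity; a real judge model would assess correctness and style.
    return min(candidates, key=len)

def ensemble_solve(task: str) -> str:
    # Ask each specialized model, then let the judge pick one answer.
    candidates = [m(task) for m in (functional_model, loopy_model)]
    return judge(task, candidates)

print(ensemble_solve("sum the first ten integers"))
```

In practice the judge could also splice together the best parts of several candidates rather than picking a single winner, which is closer to what the comment above describes.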

  • big_slap@lemmy.world
    17 hours ago

    I haven’t watched the video yet, but I have to say: running a personal LLM on my computer using products like gpt4all produces some really awesome results I’m very happy with.

    I can totally envision everyone easily running their own local AI within the next ten years.