• 0 Posts
  • 39 Comments
Joined 1 year ago · Cake day: June 13th, 2023


  • AbouBenAdhem@lemmy.world to AI@lemmy.ml · Do I understand LLMs?
    6 points · edited 10 days ago

    There’s a part of our brain called the salience network, which continually models and predicts our environment and directs our conscious attention to things it can’t predict. When we talk to each other, most of the formal content is predictable, and the salience network filters it out; the unpredictable part that’s left is the actual meaningful part.

    LLMs basically recreate the salience network. They continually model and predict the content of the text stream the same way we do—except instead of modeling someone else’s words so they can extract the unpredictable/meaningful part, they model their own words so they can keep predicting the next ones.

    This raises an obvious issue: when our salience networks process the stream of words coming out of such an LLM, it’s all predictable, so our brains tell us there’s no actual message. When AI developers ran into this, they added a feature called “temperature” that basically injects randomness into the generated text—enough to make it unpredictable, but not obvious nonsense—so our salience networks will get fooled into thinking there’s meaningful content.
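    The “temperature” mechanism described above can be sketched as temperature-scaled softmax sampling, a standard technique in text generation (the function name and the example logits here are illustrative, not from any particular LLM implementation):

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature=1.0, rng=random):
        # Divide logits by the temperature before the softmax:
        # T -> 0 approaches a deterministic argmax (fully predictable),
        # larger T flattens the distribution (more randomness).
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw one index according to the resulting probabilities.
        r = rng.random()
        cum = 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1
    ```

    With a very low temperature the highest-scoring token is picked almost every time; raising the temperature spreads the choices out, which is the injected unpredictability the comment refers to.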

  • AbouBenAdhem@lemmy.world to Selfhosted@lemmy.world · *Permanently Deleted*
    5 points · edited 5 months ago

    If you didn’t map a local config file into the container, it’s using the default version inside the container at /app/public/conf.yml (and any changes will get overwritten when you rebuild the container). If you want to make changes to the configuration for the widget, you’ll want to use the -v option with a local config file so the changes you make will persist.
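
    The bind mount described above might look like this (a minimal sketch: the image name, host port, and local config filename are assumptions, since the original post naming the container was deleted; only the in-container path /app/public/conf.yml comes from the comment):

    ```shell
    # Bind-mount a local conf.yml over the container's default config
    # so edits persist across container rebuilds.
    docker run -d \
      -p 8080:80 \
      -v "$(pwd)/conf.yml:/app/public/conf.yml" \
      your-image-name   # hypothetical image name
    ```

    Without the -v mapping, edits made inside the container live only in that container's writable layer and are lost when the image is rebuilt or the container is recreated.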