While studying artificial intelligence “behavior,” researchers confirmed that yes, OpenAI’s GPT-4 appeared to be getting dumber.

    • heartlessevil@lemmy.one · 4 points · 1 year ago (edited)

      The problem in this case is specifically that it is not trained on humans. The LLM hallucinates nonsense and then recursively reads its own nonsense back from the internet in some kind of shit Ouroboros. This problem doesn’t seem solvable with LLMs, and it’s what separates them from AGI (artificial general intelligence — the thing LLMs claim to be). Since the model needs more and more data to build more satisfying responses, it’s susceptible to ingesting output from other LLMs.

      An LLM must ONLY be trained on humans, because it doesn’t actually understand reasoning or linguistic structure. You end up with a “colorless green ideas sleep furiously” response very quickly. But the LLM also can’t tell whether the text it’s ingesting came from a human or from another LLM.
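      The feedback loop described above can be sketched with a toy simulation (my own illustration, not from the thread — the unigram “model” and token setup are invented for the example). Each generation “trains” on the previous generation’s output by estimating a token distribution, then samples from it. Any token that fails to appear in one generation can never come back, so the vocabulary can only shrink:

```python
import random
from collections import Counter

random.seed(42)

# "Human" corpus: 26 distinct tokens, drawn uniformly.
vocab = [chr(ord('a') + i) for i in range(26)]
corpus = [random.choice(vocab) for _ in range(100)]

def train_and_generate(corpus, n=100):
    # "Train": estimate the empirical token distribution.
    # "Generate": sample a new corpus from that distribution.
    counts = Counter(corpus)
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=n)

# Each generation is trained only on the previous generation's output.
gens = [corpus]
for _ in range(20):
    gens.append(train_and_generate(gens[-1]))

print(len(set(gens[0])), "->", len(set(gens[-1])))
```

      Because a model here can only emit tokens it saw during “training,” rare tokens drop out over generations and the distribution narrows — a crude analogue of the recursive-ingestion problem the comment describes.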

      • Sinnerman@kbin.social · 1 point · 1 year ago

        artificial general intelligence - the thing LLMs claim to be

        The major GPT-based systems will deny being AGIs, and most companies deny it as well. Is anyone reputable actually claiming LLMs are AGIs?