• Hirom@beehaw.org

    It shows LLMs can do significant harm without the capabilities of an AGI.

    Overhyping LLMs and overinflating their capabilities only makes things worse: it leaves people less skeptical of LLM output.