• Hirom@beehaw.org
    12 hours ago

    According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.

    Producing inaccurate technical advice, with a confident tone, at scale.

    If that LLM were an employee, it would get a formal reprimand, and then be demoted or fired as the errors continued.

    • Tim@lemmy.snowgoons.ro
      8 hours ago

      That sounds sweetly naive. “Producing inaccurate technical advice, with a confident tone, at scale” sounds like the perfect credentials for a career in consultancy.

      • Hirom@beehaw.org
        6 hours ago

        That’s a good way to describe LLMs: very bad and very prolific consultants.