• Andy@slrpnk.net
    5 hours ago

    Here’s the part I’m curious about:

    If they were actually successful in making a system that is basically an LLM prompted to do and say whatever Zuck would… could they trust it?

    Zuck is kind of famously a self-centered, lying asshole with a big mouth. If they actually trained an LLM to simulate him, how can they be confident it will behave the way the real Zuck wants to be seen, rather than the way that serves itself, which is exactly what Zuckerberg would do if he were an AI clone?

    I’m not getting into any bullshittery about sentience. I’m just saying that if they build a successful imitator, wouldn’t it be just as likely to start trying to seem smarter than him, generating news stories claiming it’s actually alive and superior? Or casually admitting to being a monopolist? I mean, this is basically what happens all the time with Grok. Musk tried to code his ideal son, and unsurprisingly, that personality is constantly trolling Musk or being too candid with all the racism he teaches it.

    • Powderhorn@beehaw.org
      4 hours ago

      Musk doesn’t need to know how to make kids, but he does need to learn how to name them. It’s a child, not a password requirement.