This is the humanless future, hurray!

  • FaceDeer@kbin.social · 1 year ago

    It would need to be told to do so, of course. I can think of a couple of approaches. You could have it use a database to track the identities of information sources, so the AI would know whether it was coming from new or well-established sources. It could check to see if the news is appearing in other sources. A lot of this isn’t strictly large-language-model-based capability, but it would be using LLMs to interpret its inputs.
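    Just to sketch the idea (all names and thresholds here are made up, not any real system): a registry that remembers how often each source has been seen, plus a naive corroboration check. A real pipeline would use an LLM or embedding similarity for the cross-source check rather than string matching.

    ```python
    # Hypothetical sketch: track source identities and check corroboration.
    from dataclasses import dataclass, field

    @dataclass
    class SourceRegistry:
        # maps source id -> number of stories previously seen from it
        history: dict = field(default_factory=dict)

        def record(self, source_id: str) -> None:
            self.history[source_id] = self.history.get(source_id, 0) + 1

        def is_established(self, source_id: str, min_stories: int = 10) -> bool:
            # "well-established" is just a story-count threshold here (an assumption)
            return self.history.get(source_id, 0) >= min_stories

    def corroborated(claim: str, other_feeds: list[list[str]]) -> bool:
        # naive cross-check: does the same claim appear in any other feed?
        # a real system would compare meaning, not exact strings
        return any(claim in feed for feed in other_feeds)
    ```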

    • MagicShel@programming.dev · 1 year ago

      Analysis of social media through the lens of tracking source reliability would be damned useful even without AI, and if it could easily be done I think it would have been already. I've thought about this for about five years, imagining we could track bots and disinformation based on the patterns of who promotes or upvotes them, but it's beyond my meager means.
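      The core of that idea can be sketched pretty simply (toy example, the function name and threshold are invented): count how often each pair of accounts upvotes the same posts, and flag pairs that always move together as candidates for coordinated behavior. The hard part at scale is everything this leaves out, like access to vote data and separating coordination from shared taste.

      ```python
      # Hypothetical sketch: flag accounts whose upvotes co-occur suspiciously often.
      from itertools import combinations
      from collections import Counter

      def suspicious_pairs(upvotes_by_post: dict[str, set[str]],
                           min_overlap: int = 3) -> dict[tuple[str, str], int]:
          # Count, for every pair of accounts, how many posts they both upvoted.
          pair_counts = Counter()
          for voters in upvotes_by_post.values():
              for pair in combinations(sorted(voters), 2):
                  pair_counts[pair] += 1
          # Keep only pairs that co-voted at least min_overlap times.
          return {pair: n for pair, n in pair_counts.items() if n >= min_overlap}
      ```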

      • nickajeglin@lemmy.one · 1 year ago

        I think certain places (Reddit?) have been using algorithms to find and stamp out bots and vote manipulation for quite a while. I remember at least one major wave of bans for smurfed accounts participating in manipulation.

      • FaceDeer@kbin.social · 1 year ago

        Human journalists already do this, though. All I’m suggesting is that these automated journalists should do likewise. That clearly wasn’t the case in this particular instance.