This is pretty amusing to see. It's not really related to Linux / Steam Deck gaming, but more of a state-of-the-industry post that I thought you might also find fun: Redditors managed to trick an AI-powered news scraper.
It would need to be told to do so, of course. I can think of a couple of approaches. You could have it use a database to track the identities of information sources, so the AI would know whether a story was coming from a new or a well-established source. It could also check whether the news is appearing in other outlets. A lot of this isn't strictly a large-language-model capability, but it would use LLMs to interpret its inputs.
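To make that concrete, here's a rough Python sketch of what I mean. Everything in it is made up for illustration: the schema, the trust cutoff, and the `corroborated` stub, which is where a real pipeline would put an LLM to actually compare the claim against stories fetched from other outlets.

```python
import sqlite3

# Sketch of a reputation database the scraper consults before trusting
# a claim. Schema and numbers are hypothetical, not from any real tool.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sources (
        name TEXT PRIMARY KEY,
        first_seen TEXT,            -- date the scraper first saw this source
        stories_confirmed INTEGER,  -- stories later corroborated elsewhere
        stories_retracted INTEGER   -- stories that turned out to be false
    )
""")
conn.execute("INSERT INTO sources VALUES ('reddit.com/r/example', '2024-01-02', 1, 5)")

def reputation(source: str) -> float:
    """Crude trust score from confirmed vs. retracted history; 0.0 for unknowns."""
    row = conn.execute(
        "SELECT stories_confirmed, stories_retracted FROM sources WHERE name = ?",
        (source,),
    ).fetchone()
    if row is None:
        return 0.0  # never seen before: treat as untrusted, not neutral
    confirmed, retracted = row
    return confirmed / (confirmed + retracted + 1)

def corroborated(claim: str, other_outlets: list[str]) -> bool:
    # Placeholder for the cross-checking step: in a real pipeline an LLM
    # would judge whether other outlets are reporting the same claim.
    return len(other_outlets) >= 2

def should_publish(claim: str, source: str, other_outlets: list[str]) -> bool:
    return reputation(source) > 0.5 or corroborated(claim, other_outlets)

# A claim from a low-reputation source with no corroboration gets held back.
print(should_publish("Studio X cancels game Y", "reddit.com/r/example", []))  # False
```

The point isn't the specific thresholds, just that the gating logic is ordinary database-and-rules plumbing; only the interpretation step needs a model at all.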
Analysis of social media through the lens of tracking source reliability would be damned useful even without AI, and if it could easily be done, I think it already would be. I've thought about this for about five years, figuring we could track bots and disinformation based on the patterns of who promotes/upvotes them, but it's beyond my meager means.
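The core of the pattern idea is small enough to sketch, though: flag accounts whose upvote histories overlap far more than chance would suggest. The data and the cutoff below are invented; the hard part at real scale is getting the data, not the math.

```python
from itertools import combinations

# Toy upvote histories: which posts each account has upvoted.
upvotes = {
    "acct_a": {"post1", "post2", "post3", "post4"},
    "acct_b": {"post1", "post2", "post3", "post4"},
    "acct_c": {"post2", "post9"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap of two sets as a fraction of their union (0.0 to 1.0)."""
    return len(a & b) / len(a | b)

# Compare every pair of accounts and flag near-identical voting patterns.
for u, v in combinations(upvotes, 2):
    sim = jaccard(upvotes[u], upvotes[v])
    if sim > 0.8:  # arbitrary cutoff; a real system would cluster at scale
        print(f"{u} and {v} vote together suspiciously often ({sim:.0%})")
```

Run on real data this naive pairwise comparison would be far too slow, which is presumably why only the platforms themselves, with full vote logs and the compute to cluster them, can actually do it.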
I think certain places (Reddit?) have been using algorithms to find and stamp out bots/vote manipulation for quite a while. I remember at least one major wave of bans for smurfed accounts participating in vote manipulation.
Human journalists already do this, though. All I’m suggesting is that these automated journalists should do likewise. That clearly wasn’t the case in this particular instance.