AI is everywhere. It’s in your phones, in your Internet searches, in defense software. And it’s expanding. The big tech giants—Alphabet, Microsoft, Meta and Amazon—are planning on spending nearly $700 billion this year alone on building out AI infrastructure.

And more recently, Thomas Germain, a tech reporter at the BBC, ran a personal experiment to see how a motivated individual—or business—can get ChatGPT and Google Search’s “AI Overview” to spread lies. We talked to Thomas to find out just how easy it is to manipulate these common AI tools and what the consequences could be.

Pierre-Louis: Hi, Thomas. Thanks for taking the time to join us today.

Thomas Germain: Thanks for having me on.

Pierre-Louis: So my understanding is you hacked ChatGPT.

Germain: That’s right. So I got a tip a couple of weeks ago that manipulating what AI tools like ChatGPT, Google Gemini or the little “AI Overview” at the top of Google Search say to other people can be as easy as publishing an article on your own website, like a blog post. And apparently, people are doing this across the whole Internet.

So for example, there was a study recently that found, when you’re looking for the best whatever it is, in something like 44 percent of cases ChatGPT is citing a blog post from a company’s own website where they listed themselves as the No. 1 best option and then 10 competitors, and ChatGPT is just spitting this out to other people.

It’s different than it used to be, right? People have been tricking search engines forever, but with a search engine it shows you the web page where the information came from. If you go to my website and it says, “I’m the world’s greatest hot dog eating journalist,” you go, “Well, maybe he’s biased,” right?