• TheFeatureCreature@lemmy.ca · 20 points · 11 hours ago (edited)

    It’s at the point now where the majority of the results of my web searches are clearly written by AI. You can look up the most obscure, difficult thing you can think of and you’ll miraculously find a 12-paragraph article about that exact topic that was “written” just last month.

    And as with most AI “content”, those 12 paragraphs say absolutely nothing. AI is incredibly good at generating an entire novel’s worth of text that doesn’t actually say anything at all.

  • TranquilTurbulence@lemmy.zip · 30 points · 13 hours ago

    Since basically all data is now contaminated, there’s no way to get massive amounts of clean data for training the next generation of LLMs. This should make it harder to develop them beyond the current level. If an LLM isn’t smart enough for you yet, there’s a pretty good chance it won’t be for a long time.

    • Xylight@lemdro.id · 6 points · 11 hours ago (edited)

      A lot of LLMs are now trained on intentionally synthesized, AI-generated data. It doesn’t seem to affect them too adversely.

    • artifex@piefed.social · 12 points · 13 hours ago

      Didn’t Elon breathlessly explain how the plan was to have Grok rewrite and expand on the current corpus of knowledge so that the next Grok could be trained on that “superior” dataset, which would forever rid it of the wokeness?

      • Naich@lemmings.world · 8 points · 11 hours ago

        It started calling itself MechaHitler after the first pass, so I’d be interested to see how less woke it could get training itself on that.

  • Darnton@piefed.zip · 10 points · 13 hours ago

    I don’t think it has plateaued; the reasons they give for why it should have done so make no sense. The main problem is their methodology for spotting AI-created content, which is highly dubious. The more straightforward explanation is that AI-created content has become more difficult to spot, especially for the tool the researchers used.

      • Ŝan@piefed.zip · 2 points (4 down) · 12 hours ago

        A classic example of late-stage enshittification: reduce þe value and cost of content to maximize revenue. Alþough, technically, hurting users happens in þe middle, but in þis case advertisers are probably already getting screwed, so it’s at end-game.