• HubertManne@piefed.social · 3 hours ago

    I get his and Stallman's objections, or disappointment, in this case, but really it is just another abstraction of search, which, by the way, was never expected to give perfect answers to questions. One of the worst things about LLMs is the expectation by some that what they send back can be used without review.

    • uuj8za@piefed.social · 1 hour ago

      which by the way was not expected to give perfect answers to questions

      Except that’s how a lot of people treat it. And there’s no way to guard against that.

      • HubertManne@piefed.social · 1 hour ago

        Yeah, that is the problem, although you did have issues with people self-diagnosing through Google before ChatGPT. The problem is that the more it seems like an answer, the larger the group of people who are going to take it as one. Except for the small opposite group whose hackles get raised when they get the response that way, which includes me. Still, the LLM giving sources, and people actually using them, is I think the best we will get.

    • Kichae@lemmy.ca · 3 hours ago

      It’s not an abstraction of search, though. It’s a conditional regurgitation of the entire Internet with randomization. That is significantly and meaningfully different.

      It’s not finding text or context matches and reproducing them; it’s guessing the next word based on the steaming pile of horse shit people have dumped onto the Internet in attempts to garner attention or scam others.

      • HubertManne@piefed.social · 3 hours ago

        From my experience, despite the difference in process, it does about as well. This is one reason its providing sources for its answers is so important. It’s funny how on social media it’s so common to get the response “source?”, yet many folks don’t care whether the LLM gives them one.