• 0xtero@beehaw.org · 8 points · 2 hours ago

    ChatGPT in its PhD thesis defense: “Oh, I’m sorry for the misinformation, let me try this again…”

  • TehPers@beehaw.org · 20 points · edited · 3 hours ago

    Ph.Deez nutz.

    I have friends who actually have a Ph.D. It takes many years to get one, and it’s an attempt to actually better a field. People tend to trust your opinion on a subject when you have a doctorate in that field.

    I can’t even trust ChatGPT to answer a basic question without fucking up and apologizing to me, only to fuck up again.

    Maybe stop treating language models like AGI? They’re awesome at recognizing semantic similarities between words and phrases (embeddings) as well as generating arbitrary but reasonable looking output that matches an expected output (structured outputs). That’s cool enough. Stop pretending like it isn’t and falsely advertising it as being able to cure cancer and world hunger, especially when you wouldn’t even be happy if it did.
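The embedding point above is easy to see in miniature. This is a toy sketch of semantic similarity via cosine distance; the vectors here are made up for illustration (real embeddings come from a model and have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional vectors standing in for real embedding outputs
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
spreadsheet = [0.1, 0.2, 0.95]

print(cosine_similarity(cat, kitten))       # high: semantically close
print(cosine_similarity(cat, spreadsheet))  # lower: unrelated concepts
```

In a real pipeline the vectors would come from an embedding model rather than being hand-written, but the comparison step is exactly this.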

    • bobs_monkey@lemmy.zip · 3 points · 2 hours ago

      AI as it sits is a tool that has specific use cases. It is absolutely not intelligence, as it’s commonly marketed. It may seem intelligent to the uninformed, but boy howdy is that a mistake.

      • t3rmit3@beehaw.org · 2 points · edited · 1 hour ago

        It’s a sad reflection of our current state that being able to string together coherent sentences is impressive enough that many confuse it with truth and/or intelligence.

  • shnizmuffin@lemmy.inbutts.lol · 20 points · 3 hours ago

    If I asked a PhD, “How many Bs are there in the word ‘blueberry’?” they’d call an ambulance for my obvious, severe concussion. They wouldn’t answer, “There are three Bs in the word blueberry! I know, it’s super tricky!”

    • GissaMittJobb@lemmy.ml · 1 point · 28 minutes ago

      LLMs are fundamentally unsuitable for character counting on account of how they ‘see’ the world - as a sequence of tokens, which can split words in non-intuitive ways.

      Regular programs already excel at counting characters in words, and LLMs can be used to generate such programs with ease.
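For instance, the “blueberry” question from the comment above is a one-liner in ordinary code (shown in Python here, but any language works), because plain string handling has no token boundaries to trip over:

```python
# Count occurrences of the letter "b" in "blueberry"
word = "blueberry"
print(word.count("b"))  # → 2
```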

  • ook@discuss.tchncs.de · 6 points · 3 hours ago

    I mean, that doesn’t really mean much, given that you don’t have to be very intelligent to get one. It’s mostly an endurance exercise, and often a test of how much frustration and uncertainty you can take in your life.