• rho50@lemmy.nz · 8 months ago

    I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone’s large bowel as “likely to be an aggressive malignancy,” leading that person to fully expect they’d be dead by July, when in fact they were perfectly healthy.

    These things are not ready for primetime, and certainly not capable of doing the stuff that most people think they are.

    The misinformation is causing real harm.

    • JohnEdwa@sopuli.xyz · edited · 8 months ago

      This is nothing but a modern spin on “hey internet, what’s wrong with me? WebMD: it’s cancer.”

    • B0rax@feddit.de · 8 months ago

      To be honest, it isn’t made to diagnose medical scans, and it isn’t supposed to be used that way. There are different AIs trained exactly for that purpose, and they are usually not public.

      • rho50@lemmy.nz · 8 months ago

        Exactly. So the organisations creating and serving these models need to be clearer about the fact that they’re not general-purpose intelligences; they’re contextual language generators.

        I’ve seen demos of the models used as actual diagnostic aids, and they’re not LLMs (and they require a doctor to verify the result).