• nonentity@sh.itjust.works · 15 points · 11 hours ago

    No one who is impressed by LLMs should ever be permitted to make decisions which affect anyone not similarly cognitively impaired.

    • All Ice In Chains@lemmy.ml · 8 points · 9 hours ago

      On their own, as an advancement in computing, LLMs are impressive, but tech and finance bros have inflated the perception of their performance well beyond what’s reasonable in order to discipline workers for wanting more rights.

      From a computing perspective, the thing I find saddest is that everyone will hate anything having to do with AI in the future because of this bullshit. Given ownership by the people, more research into the field could actually liberate us from tedious labor.

      • DacoTaco@lemmy.world · 4 points · 7 hours ago

        I agree. The tech of an LLM is really cool and impressive. But what the tech and finance markets have made of it is just really fucking sad. I really hope the bubble fucking bursts.

  • tonyn@lemmy.ml · 8 points · 10 hours ago

    ChatGPT and other LLMs need access to tools for things like this just like you and I do. If you ask me how many seconds have elapsed since I started typing this, I would give you a convincing estimate. I would need a Casio watch to give you an exact answer.

    • Flyberius [comrade/them]@hexbear.net · 10 points · 9 hours ago

      Read the article. The AI can’t even give a convincing estimate.

      The point here is that LLMs will never be AIs. They are just text extruders. They are hideously overvalued, and they are upending society for all the wrong reasons.

    • DacoTaco@lemmy.world · 3 points · 6 hours ago

      This is correct. LLMs are just the knowledge and information processing bit of our brain. To actually do things, we need access to things like our limbs, eyes, ears, watch, computer, …
      Which is why my comment in this thread spoke of an MCP tool and a webhook, which is all that’s needed. So a year for that? Fuck off, that’s absurdly long for two things that already exist and just need to be plugged into the source…

    • All Ice In Chains@lemmy.ml · 5 points · 9 hours ago

      C’mon bro, just one more trillion, that’s all. Then we’ll have a paradise with a new Epstein island and everything.

  • pHr34kY@lemmy.world · 15 points · 13 hours ago

    So, ChatGPT can’t match any function of a Casio wristwatch. I’m concerned that when it can, it will consume the power of microwaving a turkey just to tell a user what time it is.

    • bountygiver [any]@lemmy.ml · 5 points · 6 hours ago

      A better comparison: if you had asked Siri to time you like this 10 years ago, it would have correctly started the timer app.

  • Tangentism@lemmy.ml · 12 points · 13 hours ago

    There’s a guy on TikTok called Huskistaken (yes, I know) who demonstrates repeatedly just how useless ChatGPT is.

    The first video of his I saw was him playing a clip of Altman stating that it doesn’t have a timer, and ChatGPT countering that it does.

    He then gets it to start a timer to time how long it takes him to run a mile and almost instantly tells it to stop. It tells him it was over 7 minutes!

      • MentalEdge@sopuli.xyz · 5 points · 11 hours ago

        It’s because he knows how screwed OpenAI actually is.

        He acts like he’s surfing the wave, but he’s exactly as deep in the hole as he looks.

        ChatGPT is the next Theranos.

        He hasn’t just scammed consumers. He’s scammed investors. And that’s the one crime that actually lands people like him in prison.

  • Ephera@lemmy.ml · 24 points · 16 hours ago

    Okay, but just to be clear, the problem is not that it can’t do a timer. The problem is that it claims to be able to and even produces a result which looks plausible. It means you cannot trust it to do anything that you can’t easily verify. If they could fix that overconfidence in a year, it would be much better.

    • fox [comrade/them]@hexbear.net · 14 points · 15 hours ago

      The overconfident tone is baked in. LLMs don’t have knowledge or world models, and all text they produce is nothing more than statistical relation of input to output based on frequency of appearance and semantic closeness. Therefore you can train the things to lean towards doubtfulness (nobody will use them) or confidence (wow, it must be true if it’s this certain). It’s abusing the human tendency to anthropomorphize to sell a really shitty product.

      • wheezy@lemmy.ml · 7 points · 14 hours ago

        What if we just, idk, handled those corner cases with something like a human created control system that follows a set of very specific instructions that always produce the same result.

        Stick with me here. I know this is a radical idea. But, say you were able to parse the input from the user and map it to the same resulting, let’s call it, function.

        So, the user says something like “start a timer for 60 seconds” or “60 second timer please”. Using a basic word mapping we could infer the intent of English sentences and produce results.

        We could even improve our results through automatic user feedback based on behavior and popularity of their mapping choices. Yes.

        We could even do this for like multiple “features”. Like have one “function” that maps requests to timers, another to setting an alarm, maybe even something radical like doing mathematical computations.

        But, again, instead of throwing the input into a black box that burns massive compute power that we have no control of. We just. Write the box ourselves for very common tasks.

        Idk, maybe I’m crazy. It probably wouldn’t work. I’m probably just oversimplifying it.
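
        The “radical idea” above is classic intent matching, and it really does fit in a few deterministic lines. A minimal sketch (every name and pattern here is made up for illustration, not any real assistant’s code):

```python
import re

# Hypothetical handlers the router dispatches to -- always the same
# result for the same input, no LLM and no GPU required.
def start_timer(seconds: int) -> str:
    return f"timer started for {seconds} seconds"

def set_alarm(hour: int, minute: int) -> str:
    return f"alarm set for {hour:02d}:{minute:02d}"

# Word mapping: a regex per "feature", paired with the function it maps to.
PATTERNS = [
    # matches "start a timer for 60 seconds" and "60 second timer please"
    (re.compile(r"(\d+)\s*seconds?\s*timer|timer\s*for\s*(\d+)\s*second", re.I),
     lambda m: start_timer(int(m.group(1) or m.group(2)))),
    # matches "set an alarm for 7:30"
    (re.compile(r"alarm\s*(?:for|at)\s*(\d{1,2}):(\d{2})", re.I),
     lambda m: set_alarm(int(m.group(1)), int(m.group(2)))),
]

def route(utterance: str) -> str:
    """Map a request to the first matching handler; fail honestly otherwise."""
    for pattern, handler in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return handler(match)
    return "sorry, I don't know that one"
```

        The key property, unlike the black box: when nothing matches, it says so instead of confidently making something up.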

  • DacoTaco@lemmy.world · 3 points · 14 hours ago

    A year? To make an MCP tool that starts a timer and a website hook that listens for the timer?
    Alright, that’s kinda fucked lol
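
    For what it’s worth, the timer half really is that small. A hypothetical plain-Python stand-in (a real MCP server would expose this as a tool and POST to the webhook URL instead of invoking a local callable):

```python
import threading

def start_timer(seconds: float, on_done, label: str = "timer") -> threading.Timer:
    """Fire on_done(label) after `seconds` seconds, without blocking.

    Stand-in for the webhook: on_done is any callable; an MCP server
    would do an HTTP POST to the registered hook URL here instead.
    """
    t = threading.Timer(seconds, on_done, args=(label,))
    t.start()
    return t
```

    Everything else (parsing “set a timer for N seconds”, registering the hook) is glue around this one call.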