Just a shower thought I had when thinking about claims like “80% of all code will be written by AI”…

    • JayleneSlide@lemmy.world · 7 hours ago

      I feel like this says more about neuroscience than it does about LLMs. :D

      But seriously, my teams’ and my own experiences with LLMs have been mixed at best. Even with advanced prompt training, the tools are just not there yet for our work.

      • MangoCats@feddit.it · 5 hours ago

        I looked at LLM tools for software development a year ago, and they were clearly unhelpful then: showing some inklings of promise, but “just not there for our work” yet.

        I looked again six months ago and the advancement was dramatic. It was helpful sometimes and not others, but it was clearly improving at an impressive pace. Mind you, I’ve been dabbling with “AI” since the 1980s; I built a software neural net in 1991 and tried to make it do something useful back then, so… obviously what we’ve got now is DRAMATICALLY better, and improving faster, than it was waaay back then.

        Over the past six months it has become solidly “better” for a lot of uses than the methods it replaces. Now, I also notice that big players like Google have been “enshittifying” their previous services for a few years leading up to this, so a lot of the “good stuff” I get from AI now is just what I used to get from basic search or a “voice assistant” a few years back. But even ignoring that phenomenon, the frontier models really are better than anything that came before in a lot of ways.

        Also, starting six months ago, I actively engaged in learning how to use the LLM-based tools, and I believe much of the improvement I have experienced is due to me learning to use the tools better, in addition to the tools themselves improving.