from 10b0t0mized: I miss the days when I had to go through a humiliation ritual before getting my questions answered.

Nowadays you can just ask your questions to an infinitely patient entity. AI is really terrible.

  • Elgenzay@lemmy.ml · 1 day ago

    Think in the future LLMs will perform worse on modern problems due to the lack of recent StackOverflow training data?

    • Rexios@lemm.ee · 16 hours ago

      Maybe, but a lot of StackOverflow answers come straight from the documentation anyway, so it might not matter.

    • HelloRoot@lemy.lol · 1 day ago (edited)

      StackOverflow training data

      Q: detailed problem description, with research and links explaining how the problem differs from existing posts and why the mentioned solutions did not work in this case.

      A: Duplicate. (links to the same URL the question explicitly mentioned and ruled out)

    • atzanteol@sh.itjust.works · 24 hours ago

      I suspect it may be a self-balancing problem. For topics that LLMs don’t handle well, there will be discussions in forums. Then the AI will have training data and catch up.

    • ikt@aussie.zone (OP) · 1 day ago

      At the current rate, yeah, it simply isn’t good enough. My go-to question is to print Hello World in brainfuck, and once it passes that, to have it print Hello <random other place>.

      In this case I just asked it ‘I have a question about brainfuck’ and it gave an example of Hello World! Great!

      Unfortunately it just outputs “HhT”
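The quickest way to check an answer like that is to actually run it. Brainfuck is small enough that a usable interpreter fits in a few lines of Python; this is a sketch of my own (not from the thread), shown with the canonical Hello World program for reference:

```python
def bf_run(code: str, tape_len: int = 30000) -> str:
    """Interpret a brainfuck program and return everything it prints.
    The ',' (input) command is omitted; these programs read no input."""
    # Pre-match the brackets so loops can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    while pc < len(code):
        c = code[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256  # 8-bit cells with wraparound
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]  # skip the loop body
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]  # jump back to the loop start
        pc += 1
    return "".join(out)

# The canonical Hello World program (from Wikipedia's brainfuck article).
hello = (
    "++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
    ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++."
)
print(bf_run(hello))  # the program emits "Hello World!" plus a newline
```

Pasting a model’s answer into `bf_run` immediately shows whether you got “Hello World!” or “HhT”.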

      So I know that they are trying hard with synthetic data:

      https://www.youtube.com/watch?v=m1CH-mgpdYg

      but I think fundamentally they just need to be better at absorbing the data they’ve already got.

      • cevn@lemmy.world · 19 hours ago

        I think the disconnect we are experiencing is that the AI will write some code and never execute it. A really smart AI should absolutely be trying to compile and run it in some sandbox, by installing it on some box. Maybe someone has already come up with this.
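The basic loop is straightforward to sketch. Assuming generated Python and a local interpreter (the function name and details are my own illustration, not a real framework’s API), code can at least be executed in a separate process with a timeout before being trusted; a real sandbox would also restrict filesystem and network access:

```python
import os
import subprocess
import sys
import tempfile

def run_sandboxed(source: str, timeout: float = 5.0) -> tuple[int, str, str]:
    """Run generated Python code in a child process with a timeout.
    Returns (return code, stdout, stderr); -1 signals a timeout."""
    # Write the code to a temp file; delete=False so the child can open it.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode, proc.stdout, proc.stderr
    except subprocess.TimeoutExpired:
        return -1, "", "timed out"
    finally:
        os.unlink(path)  # clean up the temp file either way

rc, out, err = run_sandboxed("print(2 + 2)")
print(rc, out.strip())  # 0 4
```

Feeding a nonzero return code and the stderr text back into the model as a follow-up prompt is the obvious next step of the loop.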

    • markovs_gun@lemmy.world · 1 day ago

      I think so. I am legitimately worried about what happens in 10 years with everyone relying on LLMs to code, when nobody seems to be planning for how things will work once LLM coding is nearly universal.

      • vrighter@discuss.tchncs.de · 23 hours ago

        There’s nothing to plan for. Shit will be broken; shit is already expected to be broken nowadays. Business as usual. I hate what programming has become.

      • ikt@aussie.zone (OP) · 1 day ago

        I do wonder if a new programming language will be invented that is ‘AI friendly’ and far better integrated.

        • markovs_gun@lemmy.world · 1 day ago

          The main concern for me is how that would even work. LLMs struggle to come up with anything truly novel and mostly copy from their training set. What happens when 99% of the training corpus for a programming language is AI code, or at least partially AI code? Without human data to start with, how do LLMs continue to get better? This is kind of an issue with everything LLMs do, but especially programming.

          • ikt@aussie.zone (OP) · 22 hours ago (edited)

            I’m thinking more along the lines of a new programming language unlike any ever made, designed purely for an LLM to produce, like machine generation of machine code. (But who knows; LLMs are frankly magic to me, and the last thing I want is to be like someone in the early 1900s predicting that in the year 2000 we’d all get around in advanced hot air balloons.)

    • Kühlschrank@lemmy.world · 23 hours ago

      Do LLMs get the bulk of their training data from StackOverflow? Legitimately curious, as I am sure they do get at least some training from non-Q&A-style sources.