from 10b0t0mized: I miss the days when I had to go through a humiliation ritual before getting my questions answered.
Nowadays you can just ask your questions to an infinitely patient entity, AI is really terrible.
Think in the future LLMs will perform worse on modern problems due to the lack of recent StackOverflow training data?
Maybe, but a lot of StackOverflow answers come straight from documentation anyway, so it might not matter
Q: detailed problem description with research and links explaining how problem is different from existing posts and that the mentioned solutions did not work for this case.
A: duplicate. (links to same url Q explicitly mentioned and explained)
Don’t need eight billion parameters to go “But why do you want that?”
I suspect it may be a self-balancing problem. For topics that llms don’t do well there will be discussions in forums. Then the AI will have training data and catch up.
At the current rate, yeah, it simply isn't good enough. My go-to question is to print Hello World in Brainfuck, and once it passes that, have it print Hello <random other place>
In this case I just asked it ‘I have a question about brainfuck’ and it gave an example of Hello World! Great!
Unfortunately it just outputs “HhT”
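If you want to check that kind of model output yourself rather than eyeball it, a minimal Brainfuck interpreter is only a few lines of Python. This is a sketch (growable tape, 8-bit wrapping cells, which is the common convention, not a formal spec), run against the classic Hello World program:

```python
def brainfuck(code: str, input_data: str = "") -> str:
    """Minimal Brainfuck interpreter: 8-bit wrapping cells, growable tape."""
    # Pre-match brackets so loop jumps are O(1) at run time.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out, inp = [0], 0, 0, [], iter(input_data)
    while pc < len(code):
        c = code[pc]
        if c == ">":
            ptr += 1
            if ptr == len(tape):
                tape.append(0)  # grow the tape on demand
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == ",":
            tape[ptr] = ord(next(inp, "\0"))
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]  # skip loop body
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]  # loop back
        pc += 1
    return "".join(out)

hello = ("++++++++++[>+++++++>++++++++++>+++>+<<<<-]"
         ">++.>+.+++++++..+++.>++.<<+++++++++++++++."
         ">.+++.------.--------.>+.>.")
print(brainfuck(hello))  # Hello World!
```

Paste in whatever the LLM emits and you'll see immediately whether you got "Hello World!" or "HhT".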
So I know that they are trying hard with synthetic data:
https://www.youtube.com/watch?v=m1CH-mgpdYg
but I think fundamentally they just need to get straight-up better at absorbing the data they've already got
I think the disconnect we are experiencing is that the AI will write some code and never execute it. A really smart AI should absolutely be trying to compile and run it in some sandbox, even installing it on some box. Maybe someone has already come up with this.
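The run-before-answer loop isn't hard to sketch. Here's a toy version, assuming a hypothetical checker function around a plain subprocess with a timeout; a subprocess is only weak isolation, a real setup would want containers or seccomp, but even this catches the "never executed" class of failure:

```python
import subprocess
import sys
import tempfile

def check_generated_code(source: str, expected_stdout: str,
                         timeout: float = 5.0) -> bool:
    """Run candidate Python code in a subprocess and compare its stdout.

    Hypothetical helper for illustration: not a real sandbox, just
    process isolation plus a timeout to catch hangs and crashes.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False  # infinite loop or too slow: reject
    return result.returncode == 0 and result.stdout == expected_stdout

# e.g. gate model output on this before showing it to the user
ok = check_generated_code('print("Hello World!")', "Hello World!\n")
```

The point is just that the verdict comes from actually running the code, not from the model's own confidence.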
I think so. I am legitimately worried about what happens in 10 years with everyone relying on llms to code when nobody seems to be planning for how things will work when LLM coding is nearly universal
2005 post, s/LLM/Google/g.
there’s nothing to plan for. Shit will be broken, shit is already expected to be broken nowadays, business as usual. I hate what programming has become.
Do you realise what sub you’re in?
I do wonder if a new programming language will be invented that is 'AI friendly' and far better integrated
The main concern for me is how that would even work. LLMs struggle to come up with anything truly novel, and are mostly copying from their training set. What happens when 99% of the training corpus for a programming language is AI code or at least partially AI code? Without human data to start with how do LLMs continue to get better? This is kind of an issue with everything LLMs do but especially programming.
I’m thinking more along the lines of a new programming language unlike any ever made, designed purely for an LLM to produce, like machine generation of machine code (but who knows, LLMs are frankly magic to me; the last thing I want is to be like someone in the early 1900s predicting that in the year 2000 we’d all get around in advanced hot air balloons)
2035: BASIC supremacy.
Do LLMs get the bulk of their training data from Stack? Legitimately curious, as I am sure they get at least some training from non Q&A style sources