from 10b0t0mized: I miss the days when I had to go through a humiliation ritual before getting my questions answered.
Nowadays you can just ask your questions to an infinitely patient entity. AI is really terrible.
Is the drop all due to AI?
That and they cover up half the fucking page when you try to view it: Google login, giant cookie popup, etc.
Nah, that drop comes WELL before AI answers. Look at the dates. They’ve had a culture of people overly aggressively closing new questions for pointless/irrelevant reasons as well as being generally nasty to new users for ages. Sure, it started dropping way faster post 2020 because of AI, but the problem was already there.
The fast drop yes, but really it’s been in decline for around a decade before that.
Interesting! When I first read your comment, I looked at the chart and thought “it looks to me like the drop starts at the end of 2022. Isn’t that before LLMs started being used broadly?”
Nope. Looks like ChatGPT was released in November 2022. It doesn't feel like it's been around that long, but I guess it has.
They also announced their AI stuff in July 2023 https://stackoverflow.blog/2023/07/27/announcing-overflowai/
The drop starts in 2013, but people were certainly ready to all bail at once by the time LLMs came around.
That sucks. Is there an alternative people are using? Seems like it would still be a useful knowledge base to have.
The common alternative is to just ask ChatGPT your software questions, get false information from the AI, and then try and push that horrible code to production anyway if my past two jobs are any indicator.
Stack Overflow is still useful to find old answers, but fucking sucks to ask new questions on. If you aren’t getting an AI answer to your question, then you’re getting your question deleted for some made up reason.
The real answer that everyone hates is: If you have a question about something, read the documentation and experiment with it to figure that something out. If the documentation seems wrong, submit an issue report to the devs (usually on GitHub) and see what they say.
The secondary answer is that almost every FOSS project has a Slack channel or sometimes even a Discord server. Go to the channels and ask the people who use/make whatever tool you need help with.
If you have developers pushing bad and broken code to production your problem isn’t AI.
I believe it’s more of a generational shift.
The age groups who used to rely on SO are now skilled enough not to rely on it as much (or they more often have the types of questions SO can’t answer).
Younger age groups probably prefer other means of learning (like ChatGPT, Discord and YouTube videos).
Yeah, I'm working in some niche, and there is a Stack Overflow tag that newbies get referred to because there's "no developer support on their Discord." But if you ask a question there, no one will ever answer; OTOH, if you know where and how to ask, you'll actually get help on Discord. I feel like SO is pretty much dead for anything where change happens quickly.
There’s also only so many ways to ask how to sort a list or whatever and SO removes duplicate questions. So at some point the number of unique questions asked begins to plateau. I think that explains the slow drop before LLMs came on the scene.
I assumed it was because Stack Overflow already had all the answers I needed, except for the things too obscure to search for, which result in me crying and trying to piece it together from scraps of info on 50 different tabs.
Yes. But not just in the “obvious” way.
I first started to contribute back when LLMs first appeared. Then SO let itself become an LLM training ground, which made me stop contributing instantly.
I guess a not-insignificant number of people stopped answering questions, which means fewer search results, which ends in less traffic.
I’m sure the fall wouldn’t be as big as it is if they didn’t allow LLMs to train on their data.
How do you stop LLMs from training on their data while still allowing humans to learn from it?
If they can charge for it, they can block it. https://www.wired.com/story/stack-overflow-will-charge-ai-giants-for-training-data/
You can also rate-limit and blacklist known scraper IPs.
And if that doesn't work, you make signing in mandatory, which makes rate-limiting way easier.
The rate of human data consumption is much lower than an LLM's, so humans won't even notice that they have a rate limit. At most they'd notice the need to create a Stack Overflow account.
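For the curious, that kind of per-account limit is simple to implement. Here's a minimal sketch in Python, assuming mandatory sign-in gives you an account ID on every request; the numbers and names are made up for illustration, not anything SO actually uses:

```python
import time
from collections import defaultdict

# Per-account token-bucket rate limiter (illustrative sketch).
# ~60 page views banked, refilled at 1/second: far above human reading
# speed, far below what a bulk scraper needs.
RATE = 1.0   # tokens refilled per second
BURST = 60   # max tokens an account can bank

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(account_id: str) -> bool:
    """Return True if this account may view another page right now."""
    bucket = _buckets[account_id]
    now = time.monotonic()
    # Refill tokens for the time elapsed since the last request.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False  # over the limit: serve a 429 instead of the page
```

A real deployment would keep the buckets in Redis or similar instead of process memory, but the point stands: a ceiling of roughly one page per second is invisible to a human and crippling to a scraper.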
It probably started off when Reddit/Discord became a friendly place for troubleshooting (code among other things), then the AI dropped it off the cliff.
Rammed it off the cliff