I get his and Stallman's objections, or disappointment, in this case, but really it is just another abstraction of search, which, by the way, was never expected to give perfect answers to questions. One of the worst things about LLMs is the expectation by some that whatever they send back can be used without review.
Except that's how a lot of people treat it. And there's no way to guard against that.
Yeah, that is the problem. Although you did have issues with people self-diagnosing through Google before ChatGPT. The problem is, the more it seems like an answer, the larger the group of people who are going to take it as one. Except for the small opposite group whose hackles get raised when they get a response that way, which includes me. Still, the models giving sources, and people actually using them, is, I think, the best we will get.
It’s not an abstraction of search, though. It’s a conditional regurgitation of the entire Internet with randomization. That is significantly and meaningfully different.
It's not finding text or context matches and reproducing them; it's guessing the next word based on the steaming pile of horse shit people have dumped all over the Internet in attempts to garner attention or scam others.
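To make the contrast concrete, here is a toy sketch (the corpus, the follow-counts, and the temperature value are all made up for illustration): search reproduces text that literally matches, while a language model samples the next word from a weighted distribution, so different runs can give different answers.

```python
import random

# Toy next-word model: made-up counts of what follows "the" in a tiny corpus.
follow_counts = {"cat": 5, "dog": 3, "scam": 2}

def sample_next(counts, temperature=1.0, rng=None):
    """Sample the next word from count-based scores, with randomization."""
    rng = rng or random.Random()
    words = list(counts)
    # Temperature reshapes the distribution: >1 flattens it, <1 sharpens it.
    weights = [counts[w] ** (1.0 / temperature) for w in words]
    r = rng.random() * sum(weights)
    for word, weight in zip(words, weights):
        r -= weight
        if r <= 0:
            return word
    return words[-1]

def search(corpus, query):
    """Search, by contrast: return the lines that literally contain the query."""
    return [line for line in corpus if query in line]

corpus = ["the cat sat", "the dog ran", "the scam spread"]
print(search(corpus, "cat"))       # exact matches, reproducible every run
print(sample_next(follow_counts))  # a weighted guess; runs can differ
```

The point of the sketch is that `search` can only return text that exists in the corpus, while `sample_next` produces a word whether or not the resulting sentence was ever written anywhere.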
In my experience, despite the difference in process, it does about as well. This is one reason its providing sources for its answers is so important. It's funny how on social media it's so common to get the response "source?", but many folks don't care whether the LLM gives them one.