I think the issue is that most people don’t understand what an LLM is doing. It isn’t thinking about your question and finding the right answer. It’s doing a bunch of math to calculate the most probable response based on all of its training, plus or minus some random variation. If your question could be answered by a thorough Google search, then an LLM can probably give you a good answer. If it’s about something you’re not going to find on the internet, the LLM will just make up something that sounds equally convincing. And that’s the problem: it sounds convincing, but it’s a con man.
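To make the “most probable response plus some random variation” part concrete, here’s a minimal sketch of temperature sampling, which is the standard way that random variation gets injected at each step. The token names and scores are made up for illustration, not from any real model:

```python
import numpy as np

def sample_next_token(logits, temperature=0.8):
    """Pick one token: temperature-scaled softmax, then a random draw."""
    scaled = np.array(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # softmax (numerically stable)
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Hypothetical scores the model might assign to four candidate next tokens:
vocab = ["Paris", "London", "banana", "the"]
logits = [4.0, 2.5, 0.1, 1.0]
print(vocab[sample_next_token(logits)])  # usually "Paris", occasionally not
```

Note that nothing in that loop checks whether the high-probability token is *true*; “plausible given the training data” is the only criterion, which is exactly why it confabulates on topics the training data doesn’t cover.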