Look I’m not saying you’re wrong or anything just that I really don’t appreciate you stalking me.
That’s a coal-fired power plant. Specifically, the John Amos power plant in West Virginia.
I think it was more poking fun at the fact that the developers, not the LLM, basically didn’t do any checks for edible ingredients and just piped user input straight to an LLM. What I find kind of funny is you probably could’ve offloaded the input validation to the LLM itself by asking a few specific questions about whether each ingredient is safe for human consumption and/or traditionally edible. Aside from that, it seems like the devs would have had access to a database of food items to check against, since the app was developed by a grocery store…
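Just to be concrete about what I mean, something like this (the product list, model name, and OpenAI client usage are just my assumptions for the sake of the sketch, definitely not how their app actually works):

```python
# Rough sketch of cursory input validation before recipe generation.
# KNOWN_FOOD_ITEMS and the model name are placeholders, not real app data.
from openai import OpenAI

client = OpenAI()

# A grocery store presumably already has a product catalogue to draw from.
KNOWN_FOOD_ITEMS = {"bread", "cheddar cheese", "chicken breast", "rice"}

def is_probably_edible(ingredient: str) -> bool:
    # Cheap check first: is it something the store actually sells as food?
    if ingredient.strip().lower() in KNOWN_FOOD_ITEMS:
        return True
    # Otherwise ask the model a narrow yes/no question before it ever
    # sees the "write me a recipe" prompt.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                f"Answer YES or NO only. Is '{ingredient}' safe for human "
                "consumption and traditionally eaten as food?"
            ),
        }],
    )
    answer = reply.choices[0].message.content.strip().upper()
    return answer.startswith("YES")

user_ingredients = ["bleach", "ammonia", "bread"]
safe = [i for i in user_ingredients if is_probably_edible(i)]
# Only the items in `safe` would ever reach the recipe prompt;
# everything else gets rejected outright.
```

Obviously not bulletproof, but it’s the difference between “the bot suggested chlorine gas” and “sorry, we couldn’t make a recipe with those ingredients.”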
I do agree, people are trying to shoehorn LLMs into places they really don’t belong. There also seem to be a lot of developers just straight-up piping user input into a canned ChatGPT prompt and spitting the output back to the user. It really does turn into a garbage-in, garbage-out situation for a lot of those apps.
On the other hand, I think this might be a somewhat reasonable use for LLMs if you spent a lot of time training it and did even the most cursory input validation. I’m pretty sure it wouldn’t take a ton of work to avoid the completely horrendous results like the “aromatic water mix” or the “rat poison sandwich” called out in the article.
Why would you need to defend yourself for ordering a pizza and being shocked by the high price? Sometimes I think I’ve gotten too old for the internet. People should be allowed to order a pizza every once in a while and not have to formulate a five-point list of reasons why it’s okay for them to order pizza.