What if we just, idk, handled those corner cases with something like a human created control system that follows a set of very specific instructions that always produce the same result.
Stick with me here. I know this is a radical idea. But, say you were able to parse the input from the user and map it to the same resulting, let’s call it, function.
So, the user says something like “start a timer for 60 seconds” or “60 second timer please”. Using a basic word mapping we could infer the intent of English sentences with some confidence and produce results.
We could even improve our results through automatic user feedback based on behavior and popularity of their mapping choices. Yes.
We could even do this for like multiple “features”. Like have one “function” that maps requests to timers, another to setting an alarm, maybe even something radical like doing mathematical computations.
But, again, instead of throwing the input into a black box that burns massive compute power we have no control over. We just. Write the box ourselves for very common tasks.
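The “box” being described could be sketched as a plain rule-based intent parser. This is just a minimal illustration of the idea, assuming a regex-based word mapping; the intent names and patterns are made up for the example:

```python
import re

# A minimal sketch of the hand-written "box": a rule-based intent parser.
# Intent names and patterns are illustrative, not from any real assistant.
INTENTS = [
    ("set_timer", re.compile(
        r"(\d+)\s*(second|minute|hour)s?\s*timer"
        r"|timer\s*for\s*(\d+)\s*(second|minute|hour)s?")),
    ("set_alarm", re.compile(r"alarm\s*(?:for|at)\s*(\d{1,2})(?::(\d{2}))?")),
    ("calculate", re.compile(r"what\s*is\s*(\d+)\s*([+\-*/])\s*(\d+)")),
]

def parse(utterance: str):
    """Map a user utterance to (intent, captured values), or None."""
    text = utterance.lower()
    for intent, pattern in INTENTS:
        m = pattern.search(text)
        if m:
            return intent, [g for g in m.groups() if g is not None]
    return None

# Both phrasings from the example above land on the same "function":
print(parse("start a timer for 60 seconds"))  # ('set_timer', ['60', 'second'])
print(parse("60 second timer please"))        # ('set_timer', ['60', 'second'])
```

Deterministic, auditable, and it runs on a potato. The catch, of course, is that the pattern list never ends.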
Idk, maybe I’m crazy. It probably wouldn’t work. I’m probably just oversimplifying it.
I mean, that’s basically the idea behind neurosymbolic AI: have the LLM deal with natural language input, convert it to a formal spec, and hand it to a symbolic engine to execute https://arxiv.org/abs/2305.00813
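That pipeline can be sketched in a few lines. This is a toy illustration only: the neural half is stubbed out with a lookup table standing in for a model call, and all names and spec fields are made up for the example:

```python
# Toy sketch of the neurosymbolic pipeline: natural language -> formal
# spec -> symbolic execution. The "LLM" is faked with a lookup table;
# in a real system this would be a model call emitting the spec.

def llm_to_spec(utterance: str) -> dict:
    """Stand-in for the neural half: natural language -> formal spec."""
    fake_llm_outputs = {
        "start a timer for 60 seconds": {"op": "timer", "args": {"seconds": 60}},
        "what is 7 times 6": {"op": "multiply", "args": {"a": 7, "b": 6}},
    }
    return fake_llm_outputs[utterance]

def symbolic_execute(spec: dict):
    """The symbolic half: deterministic execution of the spec."""
    if spec["op"] == "multiply":
        return spec["args"]["a"] * spec["args"]["b"]
    if spec["op"] == "timer":
        return f"timer set for {spec['args']['seconds']}s"
    raise ValueError(f"unknown op: {spec['op']}")

print(symbolic_execute(llm_to_spec("what is 7 times 6")))  # 42
```

The division of labor is the whole point: the fuzzy part handles phrasing, the symbolic part guarantees the answer is always computed the same way.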
Some sort of coding language… By god!
Haven’t you just recreated Siri/Alexa/etc. now? I can’t tell if this comment is sarcastic
They are jokingly suggesting that we invent programming. It’s a good bit, you should upvote them
I’ve unfortunately read so much slop from people claiming to have discovered the next big thing that I couldn’t see this was an obvious joke. Now I feel like an idiot