As always, I use the term “AI” loosely. I’m referring to these scary LLMs coming for our jobs.
It’s important to state that I find LLMs helpful in very specific use cases, but overall this is clearly a bubble, and the promised advances have not materialized despite hundreds of billions in VC money thrown at the industry.
So as not to go full-on polemic, we’ll skip the knock-on effects on power grids and water supplies.
No, what I want to talk about is the idea of software in its current form needing to be as competent as the user.
Simply put: How many of your coworkers have been right 100% of the time over the course of your career? If N>0, say “Hi” to Jesus for me.
I started working in high school, as most of us do, and a 60% success rate was considered fine. At the professional level, I’ve seen even lower tolerated from people with tenure, given how much the job turns into internal politics past a certain level.
So what these companies are offering is not parity with senior staff (Ph.D.-level, my ass), but rather parity with the new blood who hasn’t yet had that one fuckup that refuses to leave their mind for weeks.
That crucible is important.
These tools are meant to replace inexperience with incompetence, and the beancounters at some clients are likely satisfied those words look similar enough to pass muster.
We are, after all, at this point, the “good enough” country. LLM marketing is on brand.
So, does the software actually need to be as competent as the user? No. Not really, anyway.
HOWEVER… the AI in question MUST BE Competent Enough. What counts as “competent enough” will likely be flexible, and possibly even debatable, depending on the situation.
What needs to be true isn’t that the AI can never make the same mistakes a human could; it’s that any mistake the AI COULD POSSIBLY MAKE must be one that any human could reasonably and very easily catch.
Unfortunately, the above IS NOT TRUE of current LLM-style AI implementations. These LLMs have no consciousness and no ability to reason beyond what any other computer program could manage. They have no creativity, despite being able to parse language and guess the next word.
If you learned only the rules, grammar, and vocabulary of a language, with absolutely zero context or cultural and historical grounding, an LLM is what that would look like. That alone is not enough to replace jobs.
Is that fact enough to stop heartless corporations from trying it? Hell. The. Fuck. No. They will try it anyway; they will ‘fuck around and find out’ on the off chance it saves them money. They don’t care that the company selling the ‘AI product’ has every incentive to lie in order to sell it. The fact that some companies are that desperate to save cash says plenty about the state of the world right now… but that’s another topic for another day and another thread in another subcommunity on Beehaw.