Sorry, it’s a bit long so I only skimmed your post. However, I sometimes use LLMs for Linux issues as well.
I think they’re pretty good at explaining concepts and giving you hints on which commands to use.
However, in some situations they tend to do things their way rather than the way the distribution you’re using intended. E.g. on a distro like Fedora that ships with podman, an LLM will likely have you install docker if you ask something about containers.
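For instance, before letting it talk you into installing docker, it’s worth checking what the distro already ships. A minimal sketch (podman is just the Fedora example from above; its CLI is largely docker-compatible):

```shell
# Check whether the distro already ships a container runtime
# before installing another one.
if command -v podman >/dev/null 2>&1; then
  echo "podman is already installed, no need for docker"
else
  echo "no podman found"
fi
```

On Fedora many people simply `alias docker=podman` rather than installing docker at all.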
Or in other cases it might basically vibe-code you a bunch of shell scripts for what you want, even though your distro already ships that feature (the LLM just didn’t know about it).
So it’s a bit of a double-edged sword. Explaining a concept usually works great. Otherwise, prompt it to use internet search for the latest, distribution-specific docs and to always prefer idiomatic solutions.
If the LLM’s answer to your problem looks like a whole lot of work, there’s a good chance there’s an easier way to do it. And if it doesn’t work and the LLM keeps trying different approaches, there’s a good chance it’s getting desperate and doing increasingly dangerous things, probably not the right ones.