

I’ve never heard that story. I think they might be hallucinating or trolling. Of course, if you pull random Docker containers or run some GitHub project to try a new AI, you’re running other people’s code, and that code can do arbitrary things…
But that’s not what we do. Usually, we download models in the safetensors or GGUF format, and those formats are specifically designed to prevent this very thing: they can’t contain executable code.
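To see why: a safetensors file is just an 8-byte little-endian header length, a JSON header describing each tensor’s dtype, shape, and byte offsets, and then raw tensor bytes. “Loading” one is plain parsing, nothing like unpickling. A rough sketch (building a tiny blob by hand rather than using the safetensors library, just to show the layout):

```python
import json
import struct

# Hand-rolled minimal safetensors-style blob: length-prefixed JSON
# header plus raw bytes. There is nowhere for code to hide.
header = {"weight": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
header_bytes = json.dumps(header).encode("utf-8")
data = struct.pack("<2f", 1.0, 2.0)  # raw little-endian float32 values
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + data

# "Loading" is pure parsing: read the length, decode JSON, slice bytes.
n = struct.unpack("<Q", blob[:8])[0]
meta = json.loads(blob[8:8 + n])
start, end = meta["weight"]["data_offsets"]
values = struct.unpack("<2f", blob[8 + n + start:8 + n + end])
print(meta["weight"]["shape"], values)
```

Contrast that with the old pickle-based .pt/.ckpt checkpoints, where unpickling can execute arbitrary code by design.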
Tools and MCP servers are a different story. Once you give your LLM access to the internet, it… well… has access to the internet. It mostly does what it’s supposed to do. But there are occasional stories about how someone’s AI agent deleted all their email. Or reproduced some sci-fi story tropes and tried to use the internet to blackmail its user. AI can also make mistakes. Say you tell it to write a software project, and it accidentally includes your password and API key. Or it shares private information about you with other people because you granted it generous access to everything. The news about OpenClaw is full of hilarious anecdotes about things going wrong.



Syncthing or Nextcloud. There’s a bunch of Linux sync software: https://awesome-selfhosted.net/tags/file-transfer--synchronization.html
Traditionally, you’d just put it on an NFS volume and be done with it. Or make it a boring, plain old independent laptop with nightly backups configured, if your users always work from the same machine and don’t, like… switch to a different computer in the middle of a task.
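For reference, the NFS route is about two lines (hypothetical hostname and export path, assuming the server side is already exporting the directory):

```shell
# One-off mount (server "fileserver" exporting /export/home is an
# assumption -- substitute your own host and path):
sudo mount -t nfs fileserver:/export/home /home

# Or make it permanent via an /etc/fstab entry:
# fileserver:/export/home  /home  nfs  defaults,_netdev  0  0
```

The `_netdev` option just tells the system to wait for the network before trying to mount at boot.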