Recently a user posted a comment on one of my posts claiming that Qwen secretly sends information over the internet even when run locally.
Is there any privacy concern that locally run models could share your conversations or data? What if they can connect to the internet via a tool or an MCP server?


I’ve never heard that story. I think they might be hallucinating or trolling. Of course, if you pull random Docker containers or execute some GitHub project to try a new AI, you’re running other people’s code, and that code could do arbitrary things…
But that’s not what we typically do. Usually we download models in safetensors or GGUF format, and those formats are specifically designed to contain no executable code, precisely to prevent this sort of thing.
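To see why these formats are safe, it helps to know how simple they are. A minimal sketch of the safetensors layout, built and parsed entirely in memory (the tiny one-tensor file here is hypothetical, just for illustration): the file is an 8-byte little-endian header length, a JSON header describing the tensors, then raw tensor bytes. Parsing needs only `json` and `struct`; nothing is ever executed.

```python
import json
import struct

# A .safetensors file is just:
#   [8-byte little-endian header length][JSON header][raw tensor bytes]
# The JSON header only describes tensor names, dtypes, shapes, and byte
# offsets -- there is no place for executable code.

# Build a hypothetical minimal file in memory: one float32 tensor of shape [2].
header = {"weight": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
header_bytes = json.dumps(header).encode("utf-8")
tensor_bytes = struct.pack("<2f", 1.0, 2.0)  # two float32 values

blob = struct.pack("<Q", len(header_bytes)) + header_bytes + tensor_bytes

# Parse it back: read the header length, decode the JSON, slice the data.
n = struct.unpack("<Q", blob[:8])[0]
parsed = json.loads(blob[8 : 8 + n])
values = struct.unpack("<2f", blob[8 + n : 8 + n + 8])

print(parsed["weight"]["shape"])  # [2]
print(values)                     # (1.0, 2.0)
```

Contrast this with Python pickle files (the old PyTorch `.bin`/`.pt` default), which can run arbitrary code on load; that risk is exactly what safetensors was created to eliminate.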
Tools and MCP servers are a different story. Once you give your LLM access to the internet, it… well… has access to the internet. It mostly does what it’s supposed to do, but there are occasional stories of someone’s AI agent deleting all their email, or reproducing sci-fi tropes and trying to use the internet to blackmail its user. AI can also make mistakes: you tell it to write a software project and it accidentally includes your password and API key, or it shares private information about you with other people because you granted it generous access to everything. The news about OpenClaw is full of hilarious anecdotes about things going wrong.