

… don’t forget about the backups.
And if your major issue is putting things in the wrong locations… Maybe learn about some abstraction layers, so that next time you can just move it instead of tearing it down?


Sure. I should have phrased it a bit differently. My point was, more or less: why is the curl developer’s review of the performance in a hypothetical scenario a decisive factor here? That feels like super random information. Same with the other two people. I’m fairly sure it’s all true… There’s just no context given, nor is any connection made between those statements and the rest of the article.


I usually start with the Wikipedia article when I’m interested in something new. It’ll have plenty of references at the bottom to read more about a concept.
Interestingly enough, there’s zero mention of Claude in there. And when I google it, there are a lot of very convoluted blog posts, and I can’t tell whether they’re above my head or hallucinated stories. They go on for like 20 pages but don’t really explain anything with all those words. Or what they actually found in Claude’s code.
Symbolic AI in itself isn’t too hard. That’s stuff from the 1980s, and it’s in every computer science textbook. I just have no clue how something like an expert system is supposed to be connected to a chatbot or programming agent.


Lmao. Just add a big RELEVANCE? I mean, why do they cite three random people’s opinions on random aspects of the whole concept? It’s supposed to be an encyclopedia, not a blog post…


Thanks for the link! But I’m afraid it doesn’t tell me much. a) FreeBSD isn’t even on the list, so I don’t have any numbers to compare it to. And b) there are things like survivorship bias. Looking at raw counts like this is literally the textbook example of how to do statistics the wrong way. You have to do it the proper way around. For all we know from those numbers, Linux could be the best battle-tested OS in the world. I mean, they fixed three times as many vulnerabilities as Microsoft did across all their products?!


Sometimes I wish people would back up their factual claims with numbers and studies.
Also: FreeBSD phone, when??


I don’t think a multi-billion-parameter LLM counts as proper machine translation… That’d be something like Argos Translate or the models from Mozilla’s Bergamot project. Seems those are the ones used in the open-source Android app linked by TheLeadenSea.


Sorry, I just saw the recommendations. I’m using a Matrix server myself. And it’s connected to the internet, since I use it 24/7 and on my phone, etc.
I guess technically, most protocols can be used in an internal network. But you might need to put in some extra effort, for example if a platform requires SSL certificates or something like that.
I mean, you could try… If it asks for a hostname, just put in a local hostname. Or the IP address. Or set up a DNS entry on the router. And see if it works.
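If a platform insists on TLS, a self-signed certificate for the internal hostname usually does the trick (clients just have to trust it manually). A rough sketch — the hostname chat.lan and the IP address are placeholders, substitute whatever you configured on the router or in /etc/hosts:

```shell
# Self-signed cert for a local-only hostname (OpenSSL 1.1.1+ for -addext).
# "chat.lan" and 192.168.1.10 are made-up examples.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout chat.lan.key -out chat.lan.crt \
  -subj "/CN=chat.lan" \
  -addext "subjectAltName=DNS:chat.lan,IP:192.168.1.10"

# Sanity check: print the subject and the SAN entries of the new cert
openssl x509 -in chat.lan.crt -noout -subject -ext subjectAltName
```

Then point the chat platform at chat.lan.crt/chat.lan.key and import the cert on the clients.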
Or try something like Rocket.Chat. Or, depending on your team’s workflow, maybe you don’t want a messenger at all, but some (online) collaboration platform more focused on documents, like Nextcloud.
I think the added benefit of an OpenWRT router is that you get three more ports (for your TV, PlayStation and PC), plus a Wi-Fi network. And it’s really hard to break. But a mini PC with OPNsense will of course be more powerful. And some more advanced things have been notoriously difficult to set up in OpenWRT; maybe OPNsense does them a bit better.


I think some people here recommended Snikket. It’s supposed to be easy to install and modern. I don’t know exactly what components it’s made up of, but it’s a dockerized XMPP server plus apps.


I dislike it. Usually I’d use packages from my Linux distribution. Or package it myself and maybe upstream the effort if my distro has a user repository. This way, it comes down to everybody downloading random files from the internet and executing them. Which is specifically what every Linux tutorial tells you not to do. Plus there are no updates, no security, no version control or transparency. It’s not licensed in any free way, so I can’t fix it or adapt it to my liking, and I can’t help you write better Python code…
But it’s your software project. You’re perfectly free to do whatever you want with it. And it’s certainly commendable to write software, whether you do it for yourself or put it out there in some way.


Yes. As far as I know, any gguf file should be completely safe. There were some bugs/security vulnerabilities early on in llama.cpp, but those got fixed, and I think overall they have a good track record.
Issues might come after that, if you run agents on top of it and give them access to your computer. But you don’t have to do that. If you just talk to it, I don’t see any reason to be alarmed. Other than the usual stuff: keep using your own brain once in a while, and don’t blindly trust what AI chatbots tell you; they give inaccurate information all the time 😅


Shouldn’t the upgrade also update the bootloader’s default entry to the new kernel? The way I’ve been doing it is apt update && apt dist-upgrade. And then a reboot once every 1 to 2 years, if I feel like it, am bored, or there are all these news articles about a severe bug in the kernel.


Syncthing or Nextcloud. There’s a bunch of Linux sync software: https://awesome-selfhosted.net/tags/file-transfer--synchronization.html
Traditionally, you’d just put it on an NFS volume and be done with it. Or make it a boring, plain old independent laptop with nightly backups configured, if your users always work from the same machine and don’t, like… switch to a different computer in the middle of a task.
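The NFS route is roughly two config lines. A sketch with made-up paths, hostname and subnet — adjust to your own network:

```
# /etc/exports on the server (then run: exportfs -ra)
/srv/shared  192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/fstab on each client (needs the nfs-common package on Debian/Ubuntu)
server.lan:/srv/shared  /home/shared  nfs  defaults,_netdev  0  0
```

The _netdev option just tells the client to wait for the network before trying to mount.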


I’ve never heard that story. I think they might be hallucinating or trolling. Of course, if you pull random Docker containers or execute some GitHub project to try new AI, you’re running other people’s code, and that could do arbitrary things…
But that’s not what we do. Usually we download models in safetensors format, or gguf. And those are specifically designed to prevent this very thing and not contain executable code.
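To illustrate why those formats count as plain data: a gguf file starts with a fixed 4-byte magic and a version number, and you can inspect it without running anything from it. A small Python sketch (the file names are made up for the demo):

```python
import struct

def looks_like_gguf(path: str) -> bool:
    """Check the GGUF magic bytes without loading (let alone executing) anything.

    GGUF files begin with the 4-byte magic b"GGUF" followed by a
    little-endian uint32 format version -- it's a pure data container.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return False
    (version,) = struct.unpack("<I", header[4:8])
    return version >= 1

# Demo: write a minimal fake header and check it
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))  # magic + version 3
print(looks_like_gguf("demo.gguf"))  # True
```

The loader then only ever parses metadata and raw tensor bytes out of the rest of the file — there’s no pickle-style “deserialize and execute” step like with old PyTorch .pth files.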
Tools and MCP servers are a different story. Once you give your LLM access to the internet, it… well… has access to the internet. It mostly does what it’s supposed to do. But there are occasional stories about someone’s AI agent deleting all their email. Or reproducing sci-fi story tropes and trying to use the internet to blackmail its user. AI can also make mistakes. Like, you tell it to write a software project and it accidentally includes your password and API key. Or it shares private information about you with other people if you grant it generous access to everything. The news about OpenClaw is full of hilarious anecdotes about things going wrong.


I didn’t have any luck with some uncensored Qwen 3.5 either. It always reasons about the guardrails. And it leans towards weaseling itself out of the situation. And the 3.5 version goes on for 1500 tokens anyway, just to think about how to respond to “Hello”.
I haven’t done a lot of LLM stuff lately. I’m also looking for a new local model which isn’t censored, nor a sycophant, nor overly verbose and repetitive. But I guess I see that with a lot of models. And lots of the supposedly uncensored ones will give you the kids’ version of a murder mystery story, because they’re still averse to violence, conflict, taboo and all kinds of things.
And a lot of internet recommendations are older models from at least a year ago?! At least I haven’t found a perfect fit (yet).


I have plain port forwarding, without any tunnel through third parties, plus WireGuard.
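In case it helps anyone: the server side of such a setup is only a few lines of WireGuard config, sketched here with placeholder keys, addresses and port (the ListenPort is the single port you forward on the router):

```
# /etc/wireguard/wg0.conf on the server -- placeholders throughout
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820            # the one UDP port forwarded on the router
PrivateKey = <server-private-key>

[Peer]                        # e.g. your phone or laptop
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Bring it up with wg-quick up wg0, and nothing but the clients whose public keys are listed can even get a reply from the port.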


The entire page is an advertisement for the AI tool that helped uncover it. I guess that’s the demonstration of how it augments a report.


I think there are pros and cons to everything. That way would have been less of a dickhead move towards the Forgejo developers. But a big letdown to admins, since they wouldn’t know what’s up with the software they’re running on their servers. The way the author chose gives admins some new intelligence, and they can now act on it, since it’s public knowledge. But it’s annoying to the devs.
I guess, as a Forgejo user, I’m kinda grateful they did it this way. Now I got to learn the story and can allocate 2h on the weekend to check whether my personal Forgejo container is isolated enough and whether the backups still work.
(But that’s just my opinion after reading one side of the story. Maybe there’s more to it and they’re being a dick nonetheless…)
Edit: And regarding just dropping the security team an informal mail… I don’t know if that’s clever. You’d normally either follow some security policy or not engage. Sending them other kinds of mails which violate their policy (an internal carrot) might not be the best choice.


Nice write-up. Thanks for also including all the numbers. If I may ask: what does the thermal/throttling behaviour you mention look like? Does it stay within the laptop’s thermal budget? Or does it reach throttling territory when doing inference on a long context window?