I have port forwarding without any tunnel through third parties, and Wireguard.
A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.
I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.


The entire page is an advertisement for an AI tool that helped uncover it. Guess that’s the demonstration of how it augments a report.
I think there are pros and cons to everything. That way would have been less of a dickhead move towards the Forgejo developers, but a big letdown to admins, as they wouldn’t know what’s up with the software they’re running on their servers. The way the author chose gives some new intelligence to admins, and they can now act on it, since it’s public knowledge. But it’s annoying to the devs.
I guess I as a Forgejo user am kinda grateful they did it this way. Now I got to learn the story and can allocate 2h on the weekend to see if my personal Forgejo container is isolated enough and whether the backups still work.
(But that’s just my opinion after reading one side of the story. Maybe there’s more to the story and they’re being a dick nonetheless…)
Edit: And regarding just dropping the security team an informal mail… I don’t know if that’s clever. You’d normally either follow some security policy, or not engage at all. Sending them other kinds of mails which violate their policy might not be the best choice.


Thanks. Maybe Agent Zero is a bit too close to the “usual” security model for my liking. Seems they also tell me to run it isolated and not connect it to private data or production systems… But that’s kind of what I want to do. I’d like it to screen my email inbox or move the remaining spam mails to the spam folder. But I thought there maybe was some sane approach where a human has programmed the email adapter. And I could just configure the agent to stick to read-only permissions, so it’d be fine.
Thanks for the other link. From reading the list, I think crewAI and smolagents are closest to what I want. I mean, I don’t have an exact use case. I just figured, since everyone and their grandma supposedly has AI agents these days, and AI is supposed to make my life better, I’d try it. Idk. Let it sift through my email inbox, some online RSS feeds and the changelogs of some open-source projects I follow, and alert me if there’s something interesting. Or if there’s something going on in a pull request I was part of… Maybe it can help with some other things. Or be a FAQ bot for all the knowledge I’ve stored on my computer… Or generate a cat picture and send it to me via chat at lunchtime to brighten up my day. Connect to my Home Assistant and ping me before I leave the house if the train is delayed or there’s ice on the road. That’s roughly what my needs could be.
But I want something more grounded than OpenClaw. It’s probably easy to build in some permission system and come up with separate agents for separate tasks, so the email agent can’t blackmail me with information from my knowledgebase, or delete the inbox. And sure, I could use LangChain. That’d do it. But I’ve tried, and that’s just a lot of work. I’ll end up coding all the workflows myself. Figure out the prompts or steal them from another project. Reinvent how planning and subdividing tasks works… Copy lots of boilerplate code to start a vector database and then do RAG. Memory, skills. I’ll have to write all the email, chat, RSS and webcrawler integrations myself. A scheduler, background tasks. Then code an entire UI, because what they have is more for testing and very straightforward chatbots. And it just escalates into a 100h+ Python project. For something I think must have been written several times already?!
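A rough Python sketch of what I mean by that permission system (all the class and method names are made up for illustration, not any real framework’s API):

```python
class ReadOnlyMailbox:
    """Gate between an agent and the mailbox: reading works, everything else fails."""

    def __init__(self, messages):
        self._messages = list(messages)

    # allowed: read-only operations
    def list_subjects(self):
        return [m["subject"] for m in self._messages]

    def read(self, index):
        return self._messages[index]["body"]

    # denied: any method we didn't explicitly allow (delete, send, move, ...)
    def __getattr__(self, name):
        raise PermissionError(f"mail agent may not call {name!r}")


inbox = ReadOnlyMailbox([{"subject": "Hi", "body": "Hello there"}])
print(inbox.list_subjects())  # ['Hi']
try:
    inbox.delete(0)           # blocked by the gate
except PermissionError as e:
    print(e)
```

The agent only ever sees the wrapper, so even a prompt-injected “delete everything” just raises an error instead of touching the inbox.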


Yeah, OpenClaw is the lunatic approach I mentioned. It’s hilarious. And a nice idea. But it might also delete all your data, write a nasty email to your boss/wife, leave you with a $200 a month API bill… It can also do a lot of things, and it is fun. Especially reading what crazy things are going on in that ecosystem. But it’s not what I’m looking for right now 😅


Thx very much. That’s valuable info. I edited my comment and crossed it off my list of software to evaluate for future projects. I had already picked up the vibe-coding and a bit of sketchiness by scrolling through the latest commits and the issue tracker.


Thanks for pointing it out. Yeah it does. I just copy-pasted what I found and didn’t check.


Laptops are designed to be fairly power-efficient. I don’t know what yours does, but mine goes down to just a few watts when idle with the display switched off. There’s the Linux tool “powertop” which shows power consumption, and it can also tune most components to enter low-power states. Sometimes there’s also a power profile setting; that shouldn’t be set to “performance” or anything like that. I don’t think Linux has more to offer, except sleep with wake-on-LAN.


Erotic roleplay and messing around 😊


Uh, it is a bit more involved. The Arch Wiki has a lengthy article on it: https://wiki.archlinux.org/title/Wake-on-LAN
Basically, you’d enable it in your BIOS/firmware. Make sure it’s enabled in the network driver. And then you need to figure out a way to send such a magic packet. You can make your router or another device in the network send it. Or do a port-forward or send it through a VPN.
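Sending the packet itself is trivial, it’s just UDP. A small Python sketch (the MAC address is a placeholder; run it from a machine in the same network):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """A WoL magic packet is 6 x 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet via UDP; ports 7 and 9 are the usual choices."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("aa:bb:cc:dd:ee:ff")  # placeholder MAC
print(len(pkt))  # 102 = 6 + 6*16 bytes
# send_wol("aa:bb:cc:dd:ee:ff")  # uncomment on a machine in the target's LAN
```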


I use my homeserver for it. It’s located in the broom closet and on 24/7. But there are ways to do it with a laptop. You can inhibit standby and let it run continuously. Or configure Wake-on-LAN and wake it up before you use it… I mean, a switched-off computer obviously can’t do any computation.


Mint is based on Ubuntu. It’s not strictly tied to any Debian release channel?! There’s LMDE as well; that’s based on Debian stable.


Probably because they play the same game as Mark Zuckerberg, the Chinese labs and, to some degree, OpenAI… They all release open-weights models.
They’ll generate some hype for their company that way, so it’s advertising. They build goodwill. They undercut the competition, or make it clear how they outperform it. Maybe they get some more investor money if they expand into the local-models market. I bet there are a million reasons why it makes sense from a business perspective.


Yes. I’ve been somewhat lucky as well. Upgraded my homeserver to 48GB to run a few virtual machines and maxed out my old laptop well before prices skyrocketed. Got to check if I still pay the ~8€ a month for my netcup VPS or if they increased the price for existing customers as well…


Hmmh. I’ve tried to do benchmarks early on, about when Llama 2 was a thing… Followed the Reddit discussions. And then at some point I wanted to replace Mistral-Nemo with something newer, but I disliked how every other model had turned to the ChatGPT / sycophant style of talking… But it’s a massively laborious undertaking. The official benchmarks don’t cover any of that. And there’s no good way to automate it either. So I spent half a day reading output manually and rating it in an Excel spreadsheet. With some success, but it’s way too complicated. So I mainly eyeball it these days. And sometimes there are recommendations somewhere on the internet. And I’ve learned to accept how chatbots always go on and on with redundant information unless I tell them to skip the bullshit, I have an appointment at the hairdresser in 10 minutes and you need to explain it in 3 sentences. 😄
I suppose for tasks like coding or factual knowledge, it’s way easier to come up with fully automated benchmarks.
I guess that’s been my general experience for a while. I’d download some new model with promising benchmarks, and once I try it, the results are kinda underwhelming. A few weeks ago, for example, I tried Qwen 3.5, which had “outstanding results across a full range of benchmark evaluations”. And I deleted it after it kept wasting thousands of tokens reasoning about how to respond to a “Hello” by the user. And sometimes I just don’t see any real performance improvement with new models. If I had to guess, I’d say they mainly trained (and improved) for/on the benchmarks, not my use-case.


Sounds reasonable. Yeah, good luck. I’m sure you’ll figure it out. Unfortunately it’s always a bit difficult to diagnose problems over the internet without typing in the commands and seeing the exact output myself. But there should be a way to make it work; F2FS is designed for something like this.


Did you read the wiki? You need to either pass the compress_extension option when mounting it (the Arch Wiki lists how to enable compression for all text files; I gave you the version with a ‘*’, which enables compression for all files), or do a chattr -R +c ... on specific files or directories to compress them. Maybe you missed that, and that’s why it doesn’t compress?!
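To spell out the mount variant, this is roughly what the fstab line could look like (device, mountpoint and the zstd algorithm choice are placeholders; iirc the filesystem also needs the compression feature enabled at mkfs time, e.g. mkfs.f2fs -O extra_attr,compression):

```
# /etc/fstab sketch: f2fs with compression for every file (the '*' extension)
/dev/sdX1  /data  f2fs  defaults,compress_algorithm=zstd,compress_extension=*  0 0

# alternative: flag specific directories instead of a mount-wide default
# chattr -R +c /data/some-directory
```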
There’s probably also a way to debug it and somehow figure out what it does and how many files/sectors got compressed on the filesystem. Linux usually buries that kind of information somewhere in /sys or /proc, or there’s special commands to figure it out. But I’m not really an expert on it.
And there are also files which just cannot be compressed any further, because they’re already compressed. Most images, for example. Or music or ZIP archives. If you try to compress those, they’ll usually stay the same size.


That’s been pretty much clear since last year. At the latest in December, when they switched to “maintenance mode”. And now they’ve archived it.
https://blog.vonng.com/en/db/minio-is-dead/
Alternatives include Garage, SeaweedFS (and RustFS).
Edit: RustFS looks very sketchy. Read object Object’s comment below before using it.


I didn’t have any luck with some uncensored Qwen 3.5 either. It always reasons about the guardrails. And it leans towards weaseling itself out of the situation. And the 3.5 version goes on for 1500 tokens anyway, just to think about how to respond to a “Hello”.
I didn’t do a lot of LLM stuff lately. I’m also looking for a new local model which isn’t censored nor a sycophant, nor overly verbose and repetitive. But I guess I see that with a lot of models. And lots of the supposedly uncensored ones will give you the kids’ version of a murder mystery story, because they’re still averse to violence, conflict, taboo and all kinds of things.
And a lot of internet recommendations are for older models from at least a year ago?! At least I didn’t find any perfect fit (yet).