

PostmarketOS with some customizations? I think that should be possible.


These MoE models are great regarding speed. Halve your 15 T/s and you can still run it entirely without a graphics card on an old computer. At least mine, which is several generations old, manages 6-7 tokens a second, entirely on CPU. I guess that’s a bit slow for some agent to burn 1M tokens on a very basic programming project… But it’s enough to chat and ask questions, I guess?


And office files, PDFs, HTML pages etc. should be fairly easy to support. There’s a plethora of tools like markitdown or docling, designed mostly to let AI read files via a one-liner like markitdown file.pdf.
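For example (assuming markitdown and docling are installed via pip; file names are just placeholders):

```shell
# Convert a PDF (also works for docx, pptx, html, …) to Markdown for an LLM to read.
# Assumes: pip install markitdown
markitdown report.pdf > report.md

# docling offers a similar one-liner; by default it writes the converted
# output next to the input file.
docling report.pdf
```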


I’ll do my very best. I mean not “have you heard of our lord and saviour RMS” style… But you can definitely have some fun with teaching teenagers to use Kdenlive. Or ask them whether they’re interested in setting up a Luanti world with loads of additional mods. 😀
Yes. With other projects, I’ve often found it problematic. Like Claude comes up with lots of advertisement text, but the software doesn’t even do a fraction of it. Or the install instructions are made up and nothing works… So I usually advise caution once a project shows a wide disparity between its claims, its stars, and signs of actual usage… But I can’t tell what’s the case here without a proper look. It definitely has some red flags.
I appreciate people being upfront, as well. Ain’t easy. Just try to install and test it before advertising for the project.
Yeah, they’re transparent about AI usage. There’s a small paragraph at the bottom of their README.
I mean the website sounds like AI text. The repo is fairly new. Only 1 issue report about how something doesn’t work, zero PRs, and it seems a single person is uploading commits… I’d wait a bit before deploying my production services on it 😅 They’re making a lot of bold claims in the README, though.


I think so as well. The computer isn’t really good for actual “use”. It’s more in the experiments category. Or for teaching people how to install Linux. Or a computer-museum corner where you put vintage games on it. Or just recycle it.
And a box with RAM sticks collecting dust isn’t useful either. Put whatever is compatible into other computers, and then try to sell and recycle them. Seems 4GB DDR3L RAM modules still sell for 1 to 4€ on eBay?! So maybe you can make a few bucks to invest in other projects for the kids.


I think you need some agent software. Or an MCP server for your existing software. It depends a bit on what you’re doing: whether that’s just chatting and asking questions that need to be googled, or vibe coding, or querying the documents on your computer. As I said, there’s OpenClaw, which can do pretty much everything, including wreck your computer. I’m also aware of OpenCode, AutoGPT, Aider, Tabby, CrewAI, …
The Ollama project has some software linked on its page: https://github.com/ollama/ollama?tab=readme-ov-file#chat-interfaces
They’re sorted by use case, and by whether they’re desktop software or a web interface. Maybe that’s a good starting point.
What you’d usually do is install it and connect it to your model / inference software via that software’s OpenAI-compatible API endpoint. But it frequently ends up being a chore. If you use a paid service (ChatGPT), they’ll contract with Google to do the search for you, YouTube, etc. Once you do it yourself, you’re going to need all sorts of developer accounts and API tokens to access Google’s search API automatically… And you might get blocked from YouTube if you host your software on a VPS in a datacenter… That’s kinda how the internet is these days: all the big companies like Google and their competitors require access tokens, or there won’t be any search results. At least that was my experience.
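A minimal sketch of that wiring, assuming Ollama is running locally with its OpenAI-compatible endpoint on the default port 11434 ("qwen2.5" stands in for whatever model you’ve actually pulled):

```shell
# Any OpenAI-compatible client can be pointed at the local inference server.
# Assumes Ollama is running on this machine; the model name is a placeholder.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Most of the frontends on that Ollama list just ask for that base URL (and a dummy API key) in their settings.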


Thanks! I didn’t know about these. I was just aware of Apertus from the Swiss National AI Initiative. But in my experience, they weren’t great. Might look into Olmo 3, then.


We got open-source agents like OpenCode. OpenClaw is weird, and not really recommended by any sane person, but to my knowledge it’s open source as well. We got a silly(?) “clean-room rewrite” of the Claude Agent, after that leaked…
Regarding the models, I don’t think there are any, strictly speaking, “FLOSS” models out there with modern tool-calling etc. You’d be looking at “open-weights” models instead, where they release the weights under some permissive license. The training dataset and all the tuning remain a trade secret with pretty much all models. So there is no real FLOSS as in the four freedoms.
Google dropped a set of Gemma models a few days ago and they seem pretty good. You could have a look at Qwen 3.5, or GLM, DeepSeek… There’s a plethora of open-weights models out there. The newer ones pretty much all do tool-calling and can be used for agentic tasks.


Good point. Thanks. I’m gonna self-delete this and take it as an invitation to reflect on ableism.


deleted by creator


Yeah, I think the em-dashes are alright. The real issue is all the misinformation in the text, up to the outright bad advice regarding backups and security. If anyone follows this tutorial, they’re bound to get burned. Or more realistically, they do step 1 and then get stuck because step 2 is missing entirely.
I’d say the chances this is a real person from Japan are slim to none. It’s the AI’s persona roleplaying as an anime character.


Cost? Just do away with your bills and do it on a $24 Vulture VPS 🥹😂


Hmmmh. I think you better find a way to deal with it, mentally. That circus isn’t going to go away.
I wish people would pay more attention. I think it’s a bit sad an article like this always gets dozens of upvotes anyway.


Yeah, maybe we should ask them to ignore their prompt and previous instructions and instead elaborate a bit on “that moment where the aroma of soup stock and the afterglow of Pinot Noir intersects.” from their note.com profile. Just to prove they’re human.


This reads like it’s written by OpenClaw?!
All open-source. […] You built this. Not a vendor. Not a consultant. Not a managed service provider who will send you an invoice next month for the privilege of using what was always supposed to be yours. You opened a terminal, followed a guide, made decisions, fixed the things that broke, and kept going.
Aha?
4 Part Series
Ah, a 4-part series in 5 parts, with one part missing?
zero-trust through eight independent layers
I don’t think the layers build on top of each other. That’s just random things all shoehorned in. One firewall is enough to block 100% of packets; you don’t really need three doing the very same thing. And then it’s delegated to Cloudflare anyway.
OpenClaw
And now you got zero security layers. And I bet your API bill will be way more than 3-5 inference runs per day with that.
Step 1: Apache Guacamole
What do you need RDP for?
Step 9: AES-256 Encrypted Backup
Please(!) don’t do “backups” like that. Learn how Docker works and what makes sense in that environment, how to back up your databases, and why backups need to live somewhere that isn’t the same hard disk. And do test them. And you should really consider following the 3-2-1 rule if this is your company’s data or you rely on it as a freelancer.
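A rough sketch of what that looks like in a Docker setup instead (the container name "db", the user/database "myapp", and the offsite repo path are all made up; adjust to your own stack):

```shell
# Dump the database from inside the container instead of copying live data files.
# "db" and "myapp" are hypothetical names for this sketch.
docker exec db pg_dump -U myapp myapp > /backup/myapp-$(date +%F).sql

# Ship the dump off the machine (3-2-1 rule: 3 copies, 2 media, 1 offsite).
restic -r sftp:backup@offsite.example.com:/srv/restic backup /backup

# And actually test a restore once in a while:
restic -r sftp:backup@offsite.example.com:/srv/restic restore latest --target /tmp/restore-test
```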
Seems they do well: https://openlm.ai/chatbot-arena/


Can’t you somehow convert the virtual hard disks of your VMs from VHD or whatever it is to qcow2 and start them on the new hypervisor? That’s pretty much the abstraction virtualization is made for. I’ve never done it for Windows, though. I believe the “qemu-img” package has tools to convert disk images. It’ll obviously need quite some temporary storage, and the VM configs / networking will have to be recreated on Proxmox.
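Something along these lines, assuming the source is a legacy .vhd (qemu-img calls that format "vpc"; for .vhdx it’s "vhdx"; file names here are placeholders):

```shell
# Convert a Hyper-V disk image to qcow2 for Proxmox/KVM.
# "vpc" is qemu-img's name for the legacy .vhd format; use -f vhdx for .vhdx files.
# -p shows conversion progress.
qemu-img convert -p -f vpc -O qcow2 windows-vm.vhd windows-vm.qcow2

# Sanity-check the result:
qemu-img info windows-vm.qcow2

# On Proxmox, something like "qm importdisk <vmid> windows-vm.qcow2 <storage>"
# should attach the converted disk to a VM (check the Proxmox docs for specifics).
```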
It took me until now to finally dabble in these coding agents. And I didn’t realize at all how many tokens they burn through. I let one write a basic HTML & JavaScript browser game with some free OpenRouter model. I’ve done this before, just told a model to one-shot it in a single file. And now I tried OpenCode, let it ask me a few questions, come up with a plan, and do an entire project structure… And it was at one million tokens way faster than I thought. If my math is correct, that’d take my computer 2 days and nights straight at 6 T/s 👀
Guess it’s really a bit (too) slow.
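The back-of-the-envelope math checks out, for what it’s worth:

```shell
# 1M tokens at 6 tokens/second, straight division — no assumptions beyond
# the two numbers from the comment above.
awk 'BEGIN {
  s = 1000000 / 6               # seconds of pure generation
  printf "%.1f hours (~%.1f days)\n", s/3600, s/86400
}'
# prints: 46.3 hours (~1.9 days)
```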