

I just did the same f’ing thing and came here to write your comment!
well done.
WYGIWYG


Dev: My app’s getting a 400 hitting the server. Your firewall changes broke it.
Me: You’re getting to the server; it’s giving you back a malformed-request error. Most likely it’s a problem in your client.
Dev: it worked fine until you made that change in QA.
Me: Your server is in production.
After that, I just get too busy to look at it for a while… They figure it out eventually.


Everything you expose is fine until somebody finds a zero day.
Everything these days is being built from a ton of publicly maintained packages. All it takes is for one of those packages to fall into the wrong hands and push a malicious update, which happens all the time.
If you’re going to expose a web service yourself, use Anubis and fail2ban.
Put everything that doesn’t absolutely need to be publicly open behind a VPN.
Keep all of your software updated, constant vigilance.


When I started with it, I looked through references all over and just felt f’ing lost, and I do this kind of stuff all the time. I am intimately familiar with AWS and Azure, but setting K8S up is just very different than the normal stuff we’re used to. I’m big on installing a package and screwing with it until it works, but this doesn’t work like that.
At the risk of being criticized here, and I’m very sorry if you’re strongly opposed to AI, consider asking ChatGPT or Copilot to guide you through setting up Kubernetes step by step. Out of desperation, I figured I’d give AI a shot, and for the most part, it was really great at teaching it to me.
Ask it to give you the different options for setting up Kubernetes on your home lab (there are numerous ways to do this). You can save a lot of steps by using something like Rancher (k3s), which is a simplified version, but I prefer starting with the official kubeadm first. It’s harder, but it gives you a better feel for what’s happening, and it’s more capable and closer to what you’d experience when crafting a production deployment.
Indicate your level of experience in the next prompt and specify which systems you’re familiar with so it can tailor training to your existing knowledge and play to your strengths. Ask it to make a lesson plan first, and then pick what items you want it to walk you through. If anything feels weird or you have questions, stop it and ask away. You’re working on something from scratch, so there’s little to lose if it gets something wrong, but honestly, teaching technical things with tons of documentation available is probably the best use of LLMs that has ever existed.
If you decide against AI, focus your research on the Docker CLI, kubeadm installation (the control plane/controller) and creating/joining nodes, persistent storage and networking, Kubernetes namespaces, then pod deployment. Complicated parts that might hang you up: getting logs from pods that die on startup, and getting an interactive prompt in a cluster is a little different than Docker (you have to specify the namespace). There’s a quick sketch below.
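For the crash-loop log problem specifically, here’s a minimal sketch with the official `kubernetes` Python client; the pod and namespace names are made up, and the kubectl equivalent is in the comments:

```python
# Getting logs from a pod that died on startup, via the official
# `kubernetes` Python client (pip install kubernetes). The pod and
# namespace names are placeholders, use your own.
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config, same as kubectl
v1 = client.CoreV1Api()

# Equivalent of: kubectl logs -n my-namespace my-pod --previous
# previous=True pulls logs from the last *terminated* container,
# which is what you need when a pod crash-loops on startup.
log = v1.read_namespaced_pod_log(
    name="my-pod",
    namespace="my-namespace",
    previous=True,
)
print(log)
```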
For persistent storage, you then have numerous options. For a homelab, I like Longhorn; it’s a RAID-like system that stores data blocks across the nodes, and it easily backs up to S3 if you want it to.
For homelab learning and testing, I just slapped together a Proxmox box and started 3 VMs, set up kubeadm on the control plane, and then joined two nodes. I then spent an hour getting ntfy to run in it for the first time; I really should have done a Python hello world, ntfy is fiddly. But it’s super fun to stop a VM and watch the app come back up like nothing happened.
Once you get a base system up, whatever you choose, do check out https://www.ansibleforkubernetes.com/
Jeff Geerling did a bang-up job on the book, and it supports his cause. It just doesn’t go into the detail you need to get started with k8s.


The nodes go on x86. You use the Pis for control planes. They sit around doing pretty much nothing until a pod gets wrecked or upgraded, then they spin up a new one. You use 3s or 4s clocked down to save power.
You really only need one, but for $50, two gives your fault tolerance fault tolerance.


K8s is pretty cheap for fault tolerance
Two VMs and two Pis
If my wife decides she wants to watch the wedding video or the kids’ first TKD break and it’s down, she’ll clamor to move back to Google/Apple. I can also move my Pi-holes over there and some of my arr stack.
Resilient hosting for zero cost is pretty hot.


I didn’t say copy facebook
I’m not saying don’t decentralize at all
Forcing people to decentralize isn’t* going to work in most places.
I’m not spending any more time on the subject; I think we’re at an impasse and neither of us is going to change our minds.
Honestly, it’s a great project though,
best of luck
edit: brain said isn’t, fingers wrote is’t, autocorrect did me dirty


the explicit design goal
IMO, it’s a bad goal. Not that decentralization is a bad goal, but dictating the amount of decentralization will kill wide adoption.
A server for every community is also a Mastodon goal that never really happened. Sure, there are some out there, but the general public doesn’t want that. It’s a waste of compute resources to run a 24x7 server for every community. It’s a problem of scale. I get the decentralization point, but I think it’s going to utterly fail at widespread adoption if it needs a technical caretaker and a $20-a-month bill every time a zip code wants to sell things. It might work well in Germany; it’s not going to work well in most places.


I’m just going by what’s said here, because I’m not about to go through installing it to find out.
So every town that wants to sell things needs to host their own instance? And make sure that their instance doesn’t federate with other towns that are ‘too far away’?
edit:
OK, I read the readme.
Why not just set up communities on the server as locations? Why is there a need to install another server for every location that wants to sell things? Certainly one server could handle thousands of locations; a sketch of what I mean is below.
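To illustrate (the schema and names here are entirely made up), a location can just be data on one instance, and “local” search becomes a filter rather than a separate server:

```python
# Illustrative sketch only: one instance serving many locations.
# The schema and names are invented; the point is that a location
# is a column in a table, not a server you have to stand up.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE listings (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        zip_code TEXT NOT NULL,   -- the "community" is just data
        price_cents INTEGER NOT NULL
    )
""")
db.executemany(
    "INSERT INTO listings (title, zip_code, price_cents) VALUES (?, ?, ?)",
    [("couch", "97201", 5000), ("bike", "10001", 12000)],
)

# Local search for one town: a WHERE clause, not a federated lookup.
for title, price in db.execute(
    "SELECT title, price_cents FROM listings WHERE zip_code = ?", ("97201",)
):
    print(title, price)
```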


Wait, you mean it actually won’t let you set your location and search for local ads?
If someone is going to build a site for selling things, that’s ‘kind of’ the most important part of the site. Having it be federated makes that a thousand times worse. Now I’m supposed to find other local federated services in my area?
That is so against how any of this works.


God damnit, now I need to set up k8s and install it.
I’ve been putting off moving out of Google photos for years. No, no, I shouldn’t spend the time to host it. It has that scary banner.
Way to ruin my weekend! /s
Congrats Immich Team! /and if you’re listening, thanks!


If everything is working correctly, whether or not you transcode is basically whether or not the end device can play the file without changes.
For example, my old Roku can play a raw 4K file in H.264 with no problem. But if I throw H.265 at it, it requires the server to transcode. It also has problems with AAC audio. And my server is so old that just ripping the 4K apart to transcode the audio is often too much for it.
So to start, make sure your client device can play the files directly. If it can’t, you’re going to need to HandBrake it before you put it on your server.
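A quick way to check what you’ve got, sketched in Python (assumes ffprobe is installed; the “supported” set below is just an example for an old Roku):

```python
# Rough sketch: ask ffprobe what the video codec is, then decide
# whether the client can direct-play it. Assumes ffprobe is on PATH;
# the SUPPORTED_VIDEO set is a made-up example for an older Roku.
import subprocess

SUPPORTED_VIDEO = {"h264"}  # e.g. an old Roku: H.264 yes, H.265 no

def video_codec(path: str) -> str:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

codec = video_codec("movie.mkv")
if codec in SUPPORTED_VIDEO:
    print(f"{codec}: client can direct-play, no transcode needed")
else:
    print(f"{codec}: server will have to transcode (or HandBrake it first)")
```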
I’ve been using Unraid for years.
I am fully capable of running a Docker solution and setting up drives in a RAID configuration. It’s more or less one of my job duties, so when I get home I’m not in a hurry to do a lot more of that.
But Unraid is not zero maintenance, and when something goes wrong, it’s a bit of a pain in the ass to fix even with significant institutional knowledge.
Running disks in JBOD with parity is wonderful for fault tolerance. But throughput for copying files is very slow.
You could run it with ZFS and get much more performance, but then all your disks need to be the same size, and there’s regular disk maintenance that needs to happen.
They have this weird dedication to running everything as root. They’re not inherently insecure, but it’s one of those obvious no-nos that they keep holding on to.
If you want to make it a jellyfin/arr server and just store some docs on the side, it’s reasonable and fairly low maintenance.
I’m happy enough with them not to change away. And if you wait till Black Friday, they usually have a pretty good sale.
I’ll probably eventually move to Proxmox and a Kubernetes cluster, as I’ve picked up those skills at work. I kind of want to throw together a 10-inch rack with a cluster of Raspberry Pis. But that’s pretty much against the direction you’re looking to head :)


Good thing Linux distros can be forked,
Distro isn’t the problem; the OS is already on a far too tight update schedule. Fork away, but we need the package repository to remain sound, and I don’t know what this would do to that.


Privoxy on your always-on VPN.
Tailscale home, proxy out through the VPN.
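In practice that’s just pointing clients at Privoxy’s listen port. A minimal sketch, assuming the default port 8118 and a made-up hostname:

```python
# Minimal sketch: send an HTTP request out through Privoxy, which
# forwards over the always-on VPN. The hostname is made up; 8118 is
# Privoxy's default listen port.
import requests

PROXIES = {
    "http": "http://privoxy.lan:8118",
    "https": "http://privoxy.lan:8118",
}

r = requests.get("https://example.com", proxies=PROXIES, timeout=10)
print(r.status_code)
```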


That’s fine for browser-based watching, but literally no one in my group watches via the browser. Even on Android it’d be a fight. Grandma’s not going to go into a browser to auth her session.
The clients need to support it. If it were just backend, I’d fork it myself.


The authentication is lacking 2FA and has a half-hearted attempt at fail2ban.
If you try to properly implement either of those, the standard device clients won’t work anymore.
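The server half of 2FA is the easy part; a minimal TOTP sketch with pyotp (pip install pyotp) looks like this, and the blocker is that the stock clients have no way to prompt for the code:

```python
# Sketch of server-side TOTP with pyotp. Enrollment stores a secret
# per user; login verifies the 6-digit code. None of the stock device
# clients have a UI to ask for (or submit) that second factor.
import pyotp

secret = pyotp.random_base32()       # generated once, at enrollment
totp = pyotp.TOTP(secret)

code = totp.now()                    # what the user's authenticator app shows
print("valid:", totp.verify(code))   # the check a login endpoint would do
```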
Plex provides default SSL.
The relay is actually a bit more useful.
You can be on a carrier grade NAT with no real external IP.
It’s more akin to running a VPS somewhere and SSH tunneling your home server through it.
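i.e. roughly this, sketched with made-up hostnames (8096 is Jellyfin’s default HTTP port, and the public bind needs GatewayPorts enabled on the VPS):

```python
# Roughly what the relay does, DIY style: a reverse SSH tunnel from
# the home server out to a VPS you control, so CGNAT doesn't matter.
# Hostnames are made up; 8096 is Jellyfin's default HTTP port, and the
# public bind requires "GatewayPorts yes" in the VPS's sshd_config.
import subprocess

subprocess.run([
    "ssh", "-N",                          # no remote command, tunnel only
    "-R", "0.0.0.0:8096:localhost:8096",  # VPS port 8096 -> home server
    "user@my-vps.example.com",
])
```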
They also cache the entirety of TVDB and the EPG services.
I’m not sore about most of this with Jellyfin, and I am trying to use it as my primary, but I really miss some of the features. Realistically, adding 2FA to the clients would be a huge benefit; trying to replace 2FA with wish.com fail2ban feels particularly dirty.


I run Deluge with OpenVPN and Privoxy in a container.
The arr stack and Deluge use the proxy, so if the VPN drops, all communication halts by default.
The arr stack has its default download location set to the SSD and copies files to rotational disk storage on completion; it continues to host the seed from the SSD (rough sketch below).
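The on-completion step is nothing fancy; here’s a rough sketch of the kind of script Deluge’s Execute plugin can call (paths are examples; the plugin passes torrent id, name, and save path):

```python
#!/usr/bin/env python3
# Rough sketch of an on-completion hook (e.g. for Deluge's Execute
# plugin, which passes torrent id, name, and save path). Paths are
# examples. Copy, not move: the SSD copy keeps seeding while the
# HDD copy is what the library serves.
import shutil
import sys
from pathlib import Path

HDD_LIBRARY = Path("/mnt/storage/downloads")   # rotational storage

def on_complete(torrent_id: str, name: str, save_path: str) -> None:
    src = Path(save_path) / name
    dst = HDD_LIBRARY / name
    if src.is_dir():
        shutil.copytree(src, dst, dirs_exist_ok=True)
    else:
        shutil.copy2(src, dst)

if __name__ == "__main__":
    on_complete(*sys.argv[1:4])
```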
Once in a blue moon, I’ll go back through Deluge and right-click Delete Torrent and Data for stuff that’s well seeded and readily available.
Once a year or so, I’ll go back and make sure that there’s no extra stuff in the download folders.
You’re fine without hard links as long as you have enough space and don’t mind a little maintenance now and then.


A worthy cause, but there’s no end of other things to host in LXC. It’s possible, but unpleasant, and it can be brittle across updates.


There have been 209 versions of that site:
https://web.archive.org/web/20250331043558/https://www.isitdns.com/
It predates AI, but it seems to have had some AI cleanup.
If it were truly just vibe-coded, the comments would usually be on every element.