Apologies for the poor grammar; English IS my first language, so I’m rather flagrant with run-ons.
I’m really not half as tech literate as half the people on the fediverse, but my paranoia about the state of online cloud hosting and the lack of control over my data has led me far out of my depth. I want to set up a LibreCMC router and connect it to some type of home server (made of local office e-waste) for media storage, email hosting, and fucking Minecraft servers or something. I promise I’ve tried my best in searching for the problem, but I often find myself floundering in 3-letter acronyms and relations between systems I don’t understand (like Docker, or the Jellyfin vs Plex argument). I don’t need an explanation, but maybe some orientation on where to look for resources on these topics that assume I’m the 6-celled neurobase I am.
Thank you for your help, or your chastising.
I highly recommend you use Proxmox as the base OS. Proxmox makes it easy to spin up virtual machines, and easy to back up and revert to backups. So you’re free to play around and try stupid stuff. If you break something in your VM, just restore a backup.
In addition to virtual machines, Proxmox also does “LXC containers”, which are system-level containers. They are basically a very lightweight virtual machine, with some caveats, like running the same kernel as the host.
Most self-hosting software is released as a Docker image. Docker provides application-level containers, meaning only the bare minimum to run the application is included. You don’t enter a Docker container to update packages; instead you pull down a new version of the image from the author.
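The update workflow looks roughly like this (a sketch using the official `nginx` image as a stand-in; the container name and port mapping are just example assumptions):

```shell
# Pull the author's newer image...
docker pull nginx:latest

# ...then throw away the old container and recreate it from the new image.
# Any state you care about should live in volumes outside the container.
docker stop web && docker rm web
docker run -d --name web -p 8080:80 nginx:latest
```

Tools like docker compose (or auto-updaters such as Watchtower) automate this pull-and-recreate step for you.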
There are 3 ways to run Docker on Proxmox:
- Install Docker inside a virtual machine (recommended).
- Install Docker inside an LXC container (not recommended because of various edge cases).
- Install Docker directly on the Proxmox host (not recommended for various reasons).
- (There is ongoing work on running Docker images directly in Proxmox; this has been in beta/preview since Proxmox 9.1.)
The “overhead” of running Docker inside a VM on the host is so negligible that you don’t need to worry about it.
I hope someone else can pitch in with more in-depth instructions, but there are two things I wanted to mention:
First, forget about hosting your own email from home. Seriously. Even those who do it professionally don’t want to deal with that at home. You’ll find people on the fediverse who do it, but I’m sure plenty will give you this same recommendation/warning. It’s a huge hassle, and it’s so easy to get your domain blocked or end up on a blacklist, and way harder to get off one.
Second, I can personally recommend https://linuxupskillchallenge.org/ if you are really starting from scratch (there’s a community here: !linuxupskillchallenge@programming.dev). This is how I started: I set up my own Linux server and started self-hosting stuff on it. It’s really basic and won’t teach you everything you need, but it’s a great start for setting up your own server. You can follow the whole thing with a local server you set up at home.
I run my own email server, but not at home. Running it at home is not at all more difficult, but it will only work for internal traffic and inbound mail from the internet. Residential IPs are simply blacklisted by ISPs, so nothing will reach external recipients. Still useful, but limited.
To have your SMTP server reach everyone globally you need to run it on a business IP. I use Linode; it has worked very well since I set it up in 2019, although they did get acquired by Akamai, which might become an issue at some point.
Yeah, email at home sucks. Even if you wanna self-host, you want a static IP and an rDNS pointer to the email server, and good luck getting that at home.
Maybe I can shed some light on Docker.
When you start setting up a server, you end up having to set up many things. You install various programs and their dependencies. Sometimes those dependencies conflict with each other, or you mess up your system by manually pasting some command you found on Stack Exchange. Then you need to manually keep all the software you use up to date and pray it doesn’t brick your server and force you to start over. And when you need to update your OS or move to a new machine, you have to repeat this whole dance again.
Docker is like Legos. You want to install Jellyfin? There’s already a Docker image for that. You just spin it up with a little config file and you’re done. You want to set up a firewall? You want to set up HTTPS access? Automatic updates? There are Docker images already made for it.
So you keep setting up those Docker containers; they all run in isolation but can communicate with each other. If you break something, you just restart one or all of the containers and you always start fresh. Docker keeps no state, unless you explicitly want it to (e.g. your Jellyfin config will persist in external config files).
Want to move to a new machine? You just copy over the scripts that run the Docker containers and those config files. Software updates? Just update the Docker container and it handles all the dependencies.
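That “little config file” is typically a docker-compose.yaml. Here is a minimal sketch for Jellyfin (the `jellyfin/jellyfin` image name is the official one; the paths and port mapping are example assumptions you’d adapt):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"        # web UI
    volumes:
      - ./config:/config   # settings persist here, outside the container
      - ./media:/media:ro  # your media library, mounted read-only
```

Start it with `docker compose up -d`; moving to a new machine is basically copying this file and the `./config` folder over.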
Also, Jellyfin all the way. It’s fully open source and free.
Honestly, I’d say Docker is pre-built Legos. Instead of putting it all together yourself, you get it all built and ready to go.

Haha, OK. A DIY server is like Legos; Docker is Playmobil.
If you go for OpenWrt instead of LibreCMC, the number of guides and docs will skyrocket.
Compatible hardware for OpenWrt is listed here:
https://toh.openwrt.org/?view=normal
A tip is to sort by the 5.0 GHz column so all the devices that support ac and ax (the newer Wi-Fi standards) are shown first.
They have a lot of good guides here:
https://openwrt.org/docs/guide-quick-start/start

Regarding the home server, you’ll want to decide on the host operating system first. Examples are Proxmox (a hypervisor, controlled mainly through a web UI), a standard Linux server with KVM/QEMU and Docker, OpenMediaVault (a NAS operating system), or Windows 11 with Hyper-V (please don’t).
The first thing after that is to figure out how to make and restore backups of the system. Knowing that you can restore everything to how it was last night makes tinkering a lot less frustrating. Proxmox has built-in backup systems; on plain Linux I like Borg Backup.
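For plain-Linux backups with Borg, the core loop is roughly this (a sketch; the repository path, backed-up directories, and archive name are assumptions you’d adapt to your setup):

```shell
# One-time: create an encrypted repository (ideally on a second disk
# or a remote host, not the disk you're backing up).
borg init --encryption=repokey /mnt/backup/borg-repo

# Nightly: create a deduplicated, dated archive of the important paths.
borg create /mnt/backup/borg-repo::'{hostname}-{now}' /etc /home /srv

# Disaster day: restore an archive's contents into the current directory.
borg extract /mnt/backup/borg-repo::myhost-2025-01-01T02:00:00
```

Because archives are deduplicated, nightly runs only store what changed, so keeping weeks of history is cheap.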
Regarding services, you will want to read up on Docker and find a Docker management system you like. I run Portainer, others swear by Dockge, and some prefer the command line.
Regarding video streaming: if you don’t have a lifetime license for Plex, I would go for Jellyfin. Plex’s free tier is losing, not gaining, features as of now.
Immich is popular for photo storage.
Regarding game servers, I think https://pterodactyl.io/ is popular for making things simpler, but you can probably find a plain Docker image to host Minecraft. If you want to mod Minecraft, I know Pterodactyl makes it simpler to add mods to the server.
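If you’d rather skip a panel entirely, there are standalone Minecraft server images; `itzg/minecraft-server` is a widely used community one. A minimal compose sketch (memory size and paths are example values, so check the image’s docs for the current variables):

```yaml
services:
  minecraft:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"      # the image requires you to accept Mojang's EULA
      MEMORY: "2G"      # JVM heap size
    ports:
      - "25565:25565"   # default Minecraft port
    volumes:
      - ./data:/data    # world and server config persist here
```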
I had never heard of dockge before, but this sounds like the killer feature for me:
File based structure - Dockge won’t kidnap your compose files, they are stored on your drive as usual. You can interact with them using normal docker compose commands
Does that mean I can just point it at my existing docker compose files?
My current layout is a folder for each service/stack, which contains the docker-compose.yaml plus data folders etc. for the service. docker-compose and related config files are versioned in git.
I have Portainer, but rarely use it, and I won’t let it manage the configuration, because that interfered with versioning the config in git.
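For reference, a folder-per-stack layout like that might look something like this (service names are just examples):

```
stacks/
├── jellyfin/
│   ├── docker-compose.yaml   # versioned in git
│   └── config/               # runtime data, usually .gitignored
└── minecraft/
    ├── docker-compose.yaml
    └── data/
```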
You add the compose file via the Dockge UI; it then creates the necessary files and folders in /opt/stacks/. Not sure whether it works the other way around: create the folder, copy the compose file in there, and see if it is recognized.

I’ve been using it for over a year; it works very smoothly.
Do you version your compose files in git? If so, how does that work with the Dockge workflow?
What’s the question?



