Anyone else just sick of trying to follow guides that cover 95% of the process, or that miss one small step, and then spending hours troubleshooting the setup just to get it to work?
I think I just have too much going on in my “lab”, to the point that when something breaks (and my wife and/or kids complain) it’s more of a hassle to try and remember how to fix or troubleshoot stuff. I document lightly cuz I feel like I can remember well enough. But then it’s a struggle to find the time to fix things, or stuff gets tested and 80% completed but never fully used, because life is busy and I don’t have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers or VMs, or other services. Some stuff is fine/easy or requires little effort, but others just don’t seem worth it.
I miss GUIs, where I could fumble through settings to fix things; it’s easier for me to look through all that than to read a bunch of commands.
Idk, do you get lab burnout? Maybe cuz I do IT for work too, it just feels like it’s never ending…
I reject a lot of apps whose docker compose file bundles a database, caching infrastructure, etc. All I need is the process itself, and they ought to use SQLite by default, because my needs are not going to exceed its capabilities. A lot of these self hosted apps are overbuilt and ship with missing or poor defaults, causing a lot of extra work to deploy them.
I find the overhead of docker crazy, especially for simpler apps. Like, do I really need 150GB of hard drive space, an extensive, poorly documented config, and a whole nested computer running just because some project refuses to fix their dependency hell?
Yet it’s so common. It does feel like usability has gone on the back burner, at least in some sectors of software. And it’s such a relief when I read that some project consolidated dependencies down to C++ or Rust, and it will just run and give me feedback without shipping a whole subcomputer.
As someone used to the bad old days: gimme containers. Yes, it kinda sucks, but it sucks less than the alternative. Can you imagine trying to get multiple versions of postgres working for different applications you want to host on the same server? I also love being able to just use the host OS’s stock packages without needing to constantly compile and install custom things to make x or y work.
You should take notes about how you set up each app. I have a directory for each self hosted app, and I include a README.md that includes stuff like links to repos and tutorials, lists of nuances of the setup, itemized lists of things that I’d like to do with it in the future, and any shortcomings it has for my purposes. Of course I also include build scripts so I can just “make bounce” and the software starts up without me having to remember all the app-specific commands and configs.
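For flavor, a trimmed-down sketch of what one of those Makefiles might look like (the target names and the docker compose layout are just my own convention; adjust to whatever your app actually needs):

```
# Hypothetical Makefile for one app's directory. Recipes must be
# indented with real tabs. Assumes the app is deployed via docker compose.

bounce:  ## stop the stack, pull newer images, start it again
	docker compose down
	docker compose pull
	docker compose up -d

logs:    ## follow the running containers' logs
	docker compose logs -f

backup:  ## tar up the app's bind-mounted data with today's date
	tar czf backup-$$(date +%F).tar.gz ./data
```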
If a tutorial gets you 95% of the way, and you manage to get the other 5% on your own, write down that info. Future you will be thankful. If not, write a section called “up next” that details where you’re running into challenges and need to make improvements.
I started a blog specifically to make me document these things in a digestible manner. I doubt anyone will ever see it, but it’s for me. It’s a historical record of my projects and the steps and problems experienced when setting them up.
I’m using 11ty so I can just write markdown notes and publish static HTML using a very simple 11ty template. That takes all the hassle out of wrangling a website and all I have to do is markdown.
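If anyone wants to copy the approach, the whole pipeline is roughly this (a sketch; the posts/ and _site/ directory names are just my picks, not anything 11ty requires):

```
# one-time setup in an empty directory
npm init -y
npm install @11ty/eleventy

# write markdown into ./posts, then build static HTML into ./_site
npx @11ty/eleventy --input=posts --output=_site
```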
If someone stumbles across it in the slop-ridden searchscape, I hope it helps them, but I know it will help me, and that’s the goal.
Would love to see the blog
🤮 I hate GUI config! Way too much hassle. Give me a CLI and a config file any day! I love being able to just ssh into my server anytime from anywhere and fix, modify, or install and set up something.
The key to not being overwhelmed is manageable deployment. Only set up one service at a time; get it working, safe, and reliable before switching to actually using it full time. Then, once you’re certain it’s solid, implement the next tool or deployment.
My servers have almost no breakages or issues. They run 24/7/365 and are solid and reliable. The only time anything breaks is either an update or a new service deployment, but those are just user error by me and not the servers’ fault.
Although I don’t work in IT, so maybe the small bits of maintenance I actually do feel like less to me?
I have 26 containers running, plus a fair few bare metal services. Plus I do a bit of software dev as a hobby.
I love cli and config files, so I can write some scripts to automate it all.
It documents itself.
Whenever I have to do GUI stuff I always forget a step or do things out of order or something.

Story of my life (minus the dev part). I self host everything out of a Proxmox server, and CasaOS for sandboxing and trying new FOSS stuff out. Unless the internet goes out, everything is up 24/7, and rarely do I need to go in there and fix something.
You’re not alone.
The industry itself has become pointlessly layered like some origami hell. As a former OS security guy I can say it’s not in a good state with all the supply-chain risks.
At the same time, many ‘help’ articles are karma-farming ‘splogs’ of such low quality, and/or just slop, that they’re not really useful. When something’s missing, our imposter syndrome tells us it’s a skills issue.
Simplify your life. Ditch and avoid anything with containers or bizarre architectures that feels too intricate. Decide what you need and run those on really reliable options. Auto patching is your friend (but choose a distro and package format where it’s atomic and rolls back easily).
You don’t need to come home only to work. This is supposed to be FUN for some of us. Don’t chase the Joneses, but just do what you want.
Once you’ve simplified, get in the habit of going outside. You’ll feel a lot better about it.
That’s true. I’ve set up a lot of stuff as tests that I thought would be useful services, but then it never really got used by me, so I didn’t maintain it.
I didn’t take the time to really dive in and learn Docker outside of a few guides, which is probably why it’s a struggle…
As an example, I was setting up SnapCast on a Debian LXC. It is supposed to stream whatever goes into a named pipe in the /tmp directory. However, recent versions of Debian do NOT allow other processes to write to named pipes in /tmp.
It took just a little searching to find this out, after quite a bit of fussing about, changing permissions and sudoing to try to funnel random noise into this named pipe. After that, a bit of time to find the config files and point the pipe somewhere that would work.
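For anyone who hits the same wall: the fix ends up being a one-line change in /etc/snapserver.conf, something like the below (the exact path outside /tmp is your choice; just make sure the directory exists and snapserver can write to it):

```
[stream]
# named pipe moved out of /tmp so other processes can write to it;
# mode=create tells snapserver to create the fifo itself
source = pipe:///var/lib/snapserver/snapfifo?name=default&mode=create
```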
Setting up the RPi clients with a PirateAudio DAC and the SnapCast client also took some fiddling. Once I had it figured out on the first one, I could use my shell history to follow the same steps on the second and third clients. None of this stuff was documented anywhere, even though I would think that a top use of an RPi Zero with that DAC would be for SnapCast.
The point is that it seems like every single service has these little undocumented quirks that you just have to figure out for yourself. I have 35 years of experience as an “IT Guy”, although mostly as a programmer. But I remember working HP-UX 9.0 systems, so I’ve been doing this for a while.
I really don’t know how people without a similar level of experience can even begin to cope.
Sounds like you haven’t taken the time to properly design your environment.
Lots of home gamers just throw stuff together and “hack things till they work”.
You need to step back and organize your shit. Develop a pattern, automate things, use source control, etc. Don’t just blindly follow the weirdly-opinionated setup instructions. Make it fit your standard.
This. I definitely need to take the time to organize. A few months ago, I set up a new 4U Rosewill case w/ 24 hot-swap bays. Expanded my storage quite a bit, but I need to finish moving some services too. I went from a big outdated SMC server to reusing an old gaming mobo, since it’s an i7 at 95W vs 125W ×2 lol.
It took a week just to move all my Plex data cuz that Supermicro was only 1GbE.
only 1GbE
What needs more than 1GbE? Are you streaming 8K?
Sounds like you are your own worst enemy. Take a step back and think about which of these projects are worth completing and which are just for fun, and draw a line.
And automate. There are tools to help with this.
What needs more than 1GbE? Are you streaming 8K?
I think they meant it was a bottleneck while moving to the new hardware.
Also, on top of that, find time to keep it up to date. If you leave it to rot, things will get harder to maintain.
I sit down once a week and go over all the updates needed, both for the docker hosts and for all the images they run.
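For the compose side, the per-stack routine is basically this (the directory layout is just my own):

```
cd /srv/stacks/someapp       # one directory per stack, hypothetical path
docker compose pull          # fetch any newer images
docker compose up -d         # recreate only the containers whose image changed
docker image prune -f        # clean up the superseded images afterwards
```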
If a project doesn’t make it dead simple to manage via docker compose and environment variables, just don’t use it.
I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.
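To be concrete, the pattern I mean is a compose file about this small; the image, port, and variables below are placeholders rather than any specific project:

```
services:
  app:
    image: example/some-app:latest       # placeholder image
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - TZ=Etc/UTC
      - APP_BASE_URL=https://app.example.com   # hypothetical variable
    volumes:
      - ./data:/data                     # all state lives next to the compose file
```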
Sometimes you see a program and it starts with “Clone this repo”, and it has a docker compose file, six env files, some extra config files, and consists of a front-end container, back-end container, database container, message-queueing container, etc… just close that web page and don’t bother with that project lol.
That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui”, I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole.
That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui” I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole.
Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.
I work in IT, and like most we’re also a Windows shop. I have zero professional experience with Linux, but I’m learning through my home lab while simultaneously trying to extract myself from the privacy clusterfuck that is the current consumer tech industry. It’s a transition, and the documentation I find more or less matches the OP’s experience.
I research, pick what seems to be the best for my situation (often the most popular), and get it working with sustainable, minimal complexity. In short time I find that some small, vital aspect of its setup (like the reverse proxy) has literally zero documentation for getting it to work with some other vital part of my setup. I guess I should have made a better choice 18 months ago, when I didn’t expect to need this new service to be accessible. I find some two-year-old GitHub issue comment that allegedly solves my exact problem, but I can’t translate it to the version I’m running because mine is two revisions newer. Most other responses are incomplete, RTFM, or “git gud n00b”, like your response here.
Wherever you work, whatever the industry, you can get burnt out. It’s got nothing to do with whether you’ve “got what it takes” or whatever bullshit you think “you’re in the wrong field of work and you’re trying to jam a square peg in a round hole” equates to.
I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.
If it’s that easy, then point me to where you’ve written about it. I’d love to learn what 100 services you’ve cloned the repos for, tweaked a few files in a few minutes, and run with minimal maintenance all working together harmoniously.
Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.
why? it was not telling them that they should quit self hosting. it was not condescending either, I think. it was about work.
but truth be told, IT is a very wide field, and maybe that generalization is actually not good. still, 15 containers is not much, and as I see it they help keep all your hosted software from making a total mess of your system.
working in the terminal sometimes feels like working with long tools in a narrow space, not being able to fully use my hands. but UX design is hard, and so making useful GUIs is hard and also takes much more time than making a well organized CLI tool.
in my experience the most important thing here is to get used to common operations in a terminal text editor, and to find an organized directory structure for your services that works for you. also, use man pages and --help outputs. but when you can afford doing it, you could scp files or complete directories to your desktop for editing with a proper text editor.

You’ve completely misread everything I’ve said.
Let’s make a few things clear here.
My response is not “git gud”. My response is that sometimes there are selfhosted projects that are really cool and that many people recommend, but the setup for them is genuinely more complex than it should be, and you’re better off avoiding them instead of banging your head against a wall and stressing yourself out. Selfhosting should work for you, not against you. You can always take another crack at a project later when you’ve got more hands-on experience.
Secondly, it’s not a matter of whether OP “has what it takes” in his career. I simply pointed out that everything he seems to hate about selfhosting is a fundamental core principle of working in IT. My response to him isn’t that he can’t hack it; it seems more like he just genuinely doesn’t like it. I’m suggesting that it won’t get better, because this is what IT is. What that means to OP is up to him. Maybe he doesn’t care because the money is good, which is valid. But maybe he considers eventually moving into a career he doesn’t hate, and then the selfhosting stuff won’t bother him so much. As a matter of fact, OP himself didn’t take offense to that suggestion the way you did. He agreed with my assessment.
As you learn more about self hosting, you’ll find that certain things like reverse proxy setup aren’t always included in the documentation, because they’re not really part of the project. How reverse proxies (and by extension HTTP as a whole) work is a technology to learn on its own. I rarely have to read a project’s documentation on reverse proxying because I just know how it works. It’s not really the responsibility of a given project to tell you how to do it, unless the project has a unique gotcha involved. I do, however, love when they include it, as I think selfhosting should be more accessible to people who don’t work in IT.
If it’s that easy, then point me to where you’ve written about it. I’d love to learn what 100 services you’ve cloned the repos for, tweaked a few files in a few minutes, and run with minimal maintenance all working together harmoniously.
Most of them, TBH. I often don’t engage with a project that involves me cloning a repo, because I know it means it’s going to be a finicky pain in the ass. But most things I set up were done in less than 20 minutes, including secure access from the internet using a VPS proxy with a WAF and CrowdSec, and integration with my SSO. If you want to share your common pain points, or want an example of what my workflow looks like, let me know.
I agree with that 3rd paragraph lol. That’s probably some of my issue at times. As far as IT goes, does it not get overwhelming if you’ve had a 9-hour workday, just to hear someone at home complain that this other thing you run doesn’t work, and now you have to troubleshoot that too?
Without going into too much detail, I’m a solo-operation guy for about 200 end users. We’re a Win11 and Office shop like most, and I’ve upgraded pretty much every system since starting. I’ve utilized some self-host options too, to help in the day-to-day, which is nice as it offloads some work.
It’s just that, especially after a long day, playing IT at home can be a bit much. I don’t normally mind, but I think I just know the Windows stuff well enough through and through, so taking on new Docker or self-host tools is apples and oranges sometimes. Maybe I’m getting spoiled with all the turnkey stuff at work, too.
I’m an infrastructure guy, I manage a few datacenters that host some backends for ~100,000 IoT devices and some web apps that serve a few million requests a day each. It sounds like a lot, but the only real difference between my work and yours is that at the scale I’m working with, things have to be built in a way that they run uninterrupted with as little interaction from me as possible. You see fewer GUIs, and things stop being super quick and easy to initially get up and running, but the extra effort spent architecting things right rewards you with a much lighter troubleshooting and firefighting workload.
You sorta stop being a mechanic that maintains and fixes problem cars, and start being an engineer that builds cars to have as few problems as possible. You lose the luxury of being able to fumble around under a car and visually find an oil filter to change, and start having to decide where to put the oil filter from scratch, but to me it is far more rewarding and satisfying. And ultimately, the way self hosting works these days, it has embraced the latter over the former. It’s just a different mindset from the legacy click-ops sysadmin days of IT.
What this looks like to me in your example is, when I have users of my selfhosted stuff complain about something not working, I’m not envisioning yet another car rolling into the shop for me to fix. I envision a puzzle that must be solved. Something that needs optimization or rearchitecting that will make the problem that user had go away, or at the very least fix itself, or alert me so I can fix it before the user complains.
This paradigm I work under is more work, but the work is rewarding, and it’s “fun” when I identify a problem that needs solving and solve it. If that isn’t “fun” to you, then all you’re left with is the “more work” part.
So ultimately what you need to figure out is what your goal is. If you’re not interested in this new paradigm and you just want turnkey solutions, there are ways of self hosting that are more suited to that mindset. You get less flexibility, but there’s less work involved. And to be clear, there’s absolutely nothing wrong with that. At the end of the day you have to do what works for you.
My recommendations to you, assuming you just want to self host with as little work and maintenance as possible:
- Stick with projects that are simple to set up and are low maintenance. If a project seems like a ton of work to get going, just don’t use it. Take the time to shop around for something simpler. Even I do this a lot.
- Try some more turnkey self hosting solutions. Anything with an app store for applications: UnRAID, CasaOS, things of that nature that either have one-click deploy apps, or at least have pre-filled templates where all you need to do is provide a couple of variable values. You won’t learn as much career-wise this way, but it’ll take a huge mental load off.
- When it comes to tools your family is likely to depend on (and thus complain about), instead of selfhosting those things, perhaps look for a non-big-tech alternative. For example, self hosting email can be a lot of work. But you don’t have to use Gmail either. Move your family to ProtonMail or Tutanota, or other similar privacy-friendly alternatives. Save your self hosting for less critical apps that nobody will really care about if they go down and that you can fix at your leisure.
It’s a mess. I’m even moving to a different field in IT due to this.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- Git: popular version control system, primarily for code
- IoT: Internet of Things for device controllers
- LAMP: Linux-Apache-MySQL-PHP stack for webhosting
- LXC: Linux Containers
- Plex: brand of media server package
- RPi: Raspberry Pi brand of SBC
- SBC: Single-Board Computer
- SSO: Single Sign-On
- VPS: Virtual Private Server (opposed to shared hosting)
honestly, i 100% do not miss GUIs that only hopefully do what you want them to do, or have options grayed out, or don’t include all the available options, etc etc
i do get burnout, and i suffer many of the same symptoms. but i have a solution that works for me: NixOS
ok it does sound like i gave you more homework, but hear me out:
- with NixOS and flakes you have a commit history for your lab services, all centralized in one place.
- this can include as much documentation as you want: inline comments, commit messages, living documents in your repository, whatever
- even services that only provide a Docker based solution can be encapsulated and run by Nix, including using an alternate runtime like podman or containerd
- (this one will hammer me with downvotes but i genuinely do think that:) you can use an LLM agent like GitHub Copilot to get you started, learn the Nix language and ecosystem, and create Nix modules for things that need to be wrapped. i’ve been a software engineer for 15 years; i’ve got nothing to prove when it comes to making a working system. what i want is a working system.
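to make it concrete, a typical service in such a setup is just a short module that the flake imports. a sketch (the file name and paths are made up for illustration; navidrome is one of the services packaged as a nixpkgs module):

```
# modules/music.nix, a hypothetical module imported from the flake
{ config, ... }:
{
  services.navidrome = {
    enable = true;
    settings = {
      Port = 4533;
      MusicFolder = "/srv/music";   # path is just an example
    };
  };
  # open the port on this host only; everything else stays firewalled
  networking.firewall.allowedTCPPorts = [ 4533 ];
}
```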
Selfhoster on NixOS here too.
Nix (and operating services on a NixOS machine) is a learning curve, and even though the project is over 10 years old now, the semantic differences from the conventional approach to distro design/software development/ops are still a source of friction. But the project has come a long way, and lots of popular software is packaged and hostable and just works (when you are aware of said semantic differences).
But when it works, and it often does, it’s phenomenal and a very well integrated experience.
The problem in my experience with using LLMs to assist is that the declarative nature of Nix makes them prone to hallucination: “Certainly, just put `services.fooService.enable = true;` in your `configuration.nix` and you’re off to the races.” OTOH, because Nix builds are hermetic and functional, they’re pretty safe to include as a verification tool that something like Claude Code can use to iterate on a solution.

There are some pretty good examples of selfhosting system configurations one can use as inspiration. I just discovered github.com/firecat53/nixos, which is an excellent example of a modular system configuration that manages multiple machines, secrets, and self hosted services.
I will check that out, even though, yes, it is homework lol.
And +1 for the contribution to help a stranger out!
Lost me at LLMs. My Nix config is over 20k lines long at this point, neatly split into more than a hundred modules and managing 8 physical machines and 30+ VMs. I love it.
But every time I’ve tried to use an LLM for nix, it has failed spectacularly.
Use portainer for managing docker containers. I prefer a GUI as well and portainer makes the whole process much more comfortable for me.
just know that sometimes their buggy frontend loads the analytics code even if you have opted out. there’s an ages-old issue about this on their github repo, closed because they don’t care.
It’s matomo analytics, so not as bad as some big tech, but still.
+1 for Portainer. There are other such options, maybe even better, but I can drive the Portainer bus.
Why did I never think of that?! That would make sense lol. Thank you!
No problem. I have been using it for a while and I really like it. There’s nothing stopping you from doing it the old fashioned way if you find you don’t like portainer but once you familiarize yourself with it I think you’ll be hooked on the concept.
I deliberately have not used docker at home to avoid complications. Almost every program is in a debian/apt repo, and I only install frontends that run on LAMP. I think I only have 2 or 3 apps that require manual maintenance (apart from running “apt upgrade”). NextCloud is 90% of the butthurt.
I’m starting to turn off services on IPv4 to reduce the network maintenance overhead.
I’m sick of everything moving to a docker image myself. I understand on a standard setup the isolation is nice, but I use Proxmox and would love to be able to actually use its isolation capabilities. The environment is already suited for the program. Just give me a standard installer for the love of tech.
unless you have a zillion gigabytes of RAM, you really don’t want to spin up a VM for each thing you host. the separate OSes have a huge memory overhead, with all the running services, cache memory, etc. the memory usage of most services can vary a lot, so if you could just assign 200 MB RAM to each VM that would be moderate, but you can’t, because when it needs more RAM than that it will crash, possibly leaving operations half-done and leading to corruption. and assigning 2 GB RAM to every VM is a waste.
I use proxmox too, but I only have a few VMs, mostly based on how critical a service is.
For VMs, I fully agree with you, but the best part about Proxmox is the ability to use containers, or CTs, which share system resources. Unlike a VM, if you specify that a container has two gigs of RAM, that just means it has two gigs of RAM it can use, whereas the VM is going to hold that amount (and will crash if it can’t get it).
These CTs do the equivalent of what Docker does, which is share the system space with other services, with isolation, while giving you an easy-to-administer and easy-to-back-up system that stays separate per service.
For example, with a Proxmox CT, I can take snapshots of the container itself before I do any type of work, whereas if I was using Docker on a primary machine, I would need to back up the Docker container completely. Additionally, having them as CTs means I can work straight on the container itself instead of having to edit a Dockerfile, which by design is meant to be ephemeral. If I had to choose between troubleshooting bare bones and troubleshooting a Docker container, I’m going to choose bare bones every step of the way. (You can even run an Alpine CT if you would rather keep the average Docker container setup.)
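That snapshot workflow is literally two commands on the Proxmox host (the CT ID and snapshot name are examples):

```
pct snapshot 105 pre-upgrade    # snapshot CT 105 before touching anything
# ...do the risky work inside the container...
pct rollback 105 pre-upgrade    # roll back if it went sideways
```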
Also, on top of that, be aware that the overcommitting issue you’ve described will happen with a Docker setup as well. Docker doesn’t care about the amount of RAM the system is allotted, and when you over-allocate the system, RAM-wise, the kernel will start killing containers, potentially leaving them in the same state.
Anyway, long story short, Docker containers do basically the same thing that a Proxmox CT does; they’re just ephemeral instead of persistent, and designed to be plug-and-go, which I’ve found isn’t super handy in a Proxmox-style setup, because a lot of times I want to share resources, such as having a dedicated database or caching system, which is generally a pain in the butt to implement on Docker setups.
You can still use VMs and do containers in there. That’s what I do, makes separating different services very easy.
This is what I currently do with non-specialized services that require Docker. I have one CT which runs Docker Engine, and I throw everything on there; if I have a specialized container that needs Docker, I will still run its own CT. But then I use Docker Agent, so I can use one administration panel.
It’s just annoying, because I would rather remove Docker from the situation entirely: when you’re running Proxmox, you’re essentially running a virtualized system inside a virtualized system, because Proxmox, on the bare metal, runs a virtualized environment for the CT, which then runs a virtualized environment for the Docker container.
NixOS for the win! Define your system and services, run a single command, get a reproducible, Proxmox-compatible VM out of it. Nixpkgs has basically every service you’d ever want to selfhost.
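A sketch of what that single command can look like, using the nix-community nixos-generators tool (the flake attribute myhost is a placeholder):

```
# build a Proxmox-importable VMA image from a NixOS flake
nix run github:nix-community/nixos-generators -- --format proxmox --flake .#myhost
```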
I thought that was the point of supporting OCI in the latest version, so you can pull Docker images and run them like an LXC container.
If there’s a way of pulling a Docker container and running it directly as a CT on Proxmox, please fill me in. I’ve been using it for a year and a half to two years now, but I haven’t seen any ability to directly use a Docker container as an LXC.
I wouldn’t say I’m sick of it, but it can be a lot of work. It can be frustrating at times, but also rewarding. Sometimes I have to stop working on it for a while when I get stuck.
In any case, I like it a lot better than being Google’s bitch.
I have to stop working on it for a while when I get stuck.
I feel you there, bro. Sometimes when I’m creating a piece of music, I get to a point where I’m just not making any progress, so I’ll step away for a bit and let it simmer. Same with servers in general for me. It’s the reason I have a test server and have, in the past, leaned a bit heavily on a few backups. LOL! I can screw something up quick when I’m frustrated. The reward for me is learning something new. It’s a rewarding and useful hobby for me, among others.
Good point. I think I’ve got so caught up between projects at home and work I need a break from both.