

I’ve run a publicly accessible low-legitimate-traffic website that has been indexed off my home network for >20 years without anything buckling so far. I don’t even have a great connection (30 Mbps upstream).
Maybe I’m just lucky?


I ran a fairly popular RTCW server back in the day… Insta-gib and sniper rifles only. Good times.


What? I’m looking for a minimal setup, but for desktop use (by this I mean a machine I’ll be using as my workstation), not for a server.
Which isn’t a “minimal” Linux installation. Which is fine - you don’t actually need a “minimal” system. I’m not sure you even know why you want that. Just install any Linux distro and go to town. You can remove things later if you like.


Sounds like “minimal” just went out the door?


They’re good at different things.
Terraform is better at “here is a configuration file - make my infrastructure look like it” and Ansible is better at “do these things on these servers”.
In my case I use Terraform to create Proxmox VMs and then Ansible provisions and configures software on those VMs.
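A minimal sketch of that workflow, assuming a hypothetical project layout (the directory names, inventory file, and playbook name below are illustrative, not from the comment above):

# Step 1: Terraform makes the infrastructure match the config files.
cd infra/                  # hypothetical directory holding the *.tf files
terraform init             # fetch providers (e.g. a Proxmox provider)
terraform apply            # create/update VMs until they match the config

# Step 2: Ansible does things on the servers Terraform created.
cd ../ansible/             # hypothetical directory holding playbooks
ansible-playbook -i inventory.ini site.yml   # hypothetical inventory/playbook names

The division of labor is the point: Terraform owns which machines exist, Ansible owns what’s installed and configured on them.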


Terraform and Ansible. Script service configuration and use source control. Containerize services where possible to make them system-agnostic.


Flatpaks are similar, but more aimed at desktop applications. Docker containers are made for services and give more isolation on the network.
Docker containers get their own IP addresses, they can discover each other internally, you get port forwarding, etc. Additionally you get volume mounts for persistent storage and other features.
Docker Compose allows you to bring up multiple dependent containers as a group and manage connections between them, with persistent volumes. It’ll handle lifecycle issues (restarting crashed containers) and health checks.
An example - say you want a Nextcloud service and an Immich service running on the same host. You can create two docker-compose files, one per service, each launching the application together with its own supporting database, and give each DB and application persistent volumes for storage. Your applications can be exposed to the network and the databases only internally to other containers. You don’t need to worry about port conflicts internally since each container gets its own IP address, so those two MySQL DBs won’t conflict with each other. All you need to do is ensure that publicly available services each have a unique port forwarded to them. So there’s less to keep track of.
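A minimal sketch of that isolation using plain docker commands (nextcloud and mariadb are real images; the container and network names are made up, and Immich’s actual setup has more moving parts than shown here):

# One private network per stack - roughly what docker-compose does for you.
docker network create nextcloud-net
docker network create immich-net

# Both databases listen on 3306 inside their own network - no conflict on the host.
docker run -d --name nextcloud-db --network nextcloud-net \
  -e MYSQL_ROOT_PASSWORD=example -v nextcloud-db-data:/var/lib/mysql mariadb
docker run -d --name immich-db --network immich-net \
  -e MYSQL_ROOT_PASSWORD=example -v immich-db-data:/var/lib/mysql mariadb

# Only the application gets a host port, and that is the one thing that must be unique.
docker run -d --name nextcloud --network nextcloud-net \
  -p 8080:80 -v nextcloud-data:/var/www/html nextcloud
# (the Immich app would follow the same pattern on its own unique host port)

The databases are reachable by name (nextcloud-db, immich-db) from containers on the same network, but never from outside the host.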


Cloud backups.


How isolated could it really be as a docker container vs a separate machine or proxmox?
You can get much better isolation with separate machines but that gets very expensive very fast.
It’s not that it provides complete isolation - but it provides enough isolation very cheaply. You still compete with other applications for compute resources but you run in your own little filesystem jail and can run that janky Python version that your application needs and not worry about breaking yum. Or you can bundle that old out-of-support version of libaio that your application requires. All of your dependencies are bundled with your application so you don’t affect the rest of the system.
And since containers are standardized, they let you move between physical computers without any modification or server setup other than installing docker or podman. You can run on Amazon Linux, Red Hat, Ubuntu, etc. If it can run containers it can run your application. Containers can also be multi-platform so you can run on both ARM64 and AMD64 seamlessly.
And given that isolation you can run on a Kubernetes cluster, or Amazon ECS with Fargate, etc.
But that starts to get very enterprisey. For the home-gamer there is still a ton of benefit to just having file-system isolation and an easy way to run an application regardless of the local system version and installed packages. It’s a bit of an “experience” thing to truly appreciate it I suppose. Like I said - if you’ve tried running a lot of services on a system in the past without containers it gets kinda complicated rather fast. Especially if they all need databases (with containers you can spin up one db for each application easily).
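A sketch of the multi-platform point above, using docker buildx (the image name myuser/myapp is a placeholder):

# Build and push one image tag that carries both ARM64 and AMD64 variants.
docker buildx create --use    # one-time setup of a multi-arch builder
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myuser/myapp:latest \
  --push .

# Any ARM64 or AMD64 host then pulls the right variant automatically:
docker run myuser/myapp:latest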


But what do you actually gain from this?
Isolation. The number one reason to use docker is isolation. If you’ve not tried to run half a dozen services on a single server then this may not mean much to you but it’s a “pretty big deal.”
I have no idea how the Synology app store works from this POV - maybe it’s Docker under the covers. But in general I despise the idea of a NAS being anything other than a storage server. So running Nextcloud, Immich, etc. on a NAS is pretty anathema to me either way.
This is… pretty stupid. There are things to be careful about but it’s pretty straightforward to use iptables.
But absolutely none of the issues you listed are issues with iptables.
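For what it’s worth, a minimal sketch of a straightforward iptables ruleset (the SSH-only policy is just an example, not from the comment above):

# Accept loopback and already-established traffic first...
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# ...and whatever you actually serve (SSH here as an example)...
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# ...before flipping the default policy to drop, so you don't cut off your own session.
iptables -P INPUT DROP

The ordering is one of the “things to be careful about”: add the ACCEPT rules before the default-deny.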


apt install nfs-utils
Point is, firewalld and iptables are for amateur hour and hobbyists.
Which is weird for you to say since practically all of the issues you list are mistakes that amateurs and hobbyists make.
Containers run “on bare metal” just as much as non-containerized applications.
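That’s easy to verify yourself - a containerized process is an ordinary host process, just namespaced (the container name below is made up):

# Start a container, then look for its process in the HOST's process table.
docker run -d --name bare-metal-demo alpine sleep 300
ps -ef | grep '[s]leep 300'    # shows up like any other process; no VM in between
docker rm -f bare-metal-demo   # clean up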
Let’s encourage human interaction rather than sending people away to an LLM.
I’m totally in favor of people asking other people for help with these things. But here’s what Claude gave:
The problem is a mismatch between your find output and what read expects:
- find with a regular pipe outputs newline-separated filenames
- read -r -d '' expects null-terminated input (the -d '' means “use null byte as delimiter”)

Option 1 (recommended): use -print0 with find. Change your find command to use -print0:
find ./ -type f \( -iname "*.jpg" -o -iname "*.png" \) -print0 | while IFS= read -r -d '' file; do
Option 2: remove the -d '' from read and keep the newline-delimited pipe:

find ./ -type f \( -iname "*.jpg" -o -iname "*.png" \) | while IFS= read -r file; do
Two other notes: -iname \*.jpg should be -iname "*.jpg" to prevent shell expansion, and the script keeps the file extensions (.jpg, .png) - you probably want to keep those!

Here’s the full script:

#!/bin/bash
echo "This script will rename all files in this directory with unique names. Continue? (Y/N)"
read proceed
if [[ "$proceed" == "Y" ]]; then
echo "Proceed"
find ./ -type f \( -iname "*.jpg" -o -iname "*.png" \) -print0 | while IFS= read -r -d '' file; do
echo "in loop"
echo "$file"
# Extract the directory and extension
dir=$(dirname "$file")
ext="${file##*.}" # Get file extension
# Generate new name with UUID but keep extension
new_name="$dir/$(uuidgen -r).$ext"
echo "Renaming ${file} to ${new_name}"
# mv "$file" "$new_name" # Uncomment to actually perform the rename
done
echo "After loop"
else
echo "Cancelling"
fi
The key changes:
- -print0 added to find (paired with read -r -d '')
- ${file##*.} to keep the extension
Try this and let me know if it works!
Links to LMS, Navidrome, Gonic, Ampache, Nextcloud, Airsonic, the previous post… but none to the thing you posted about?