Do you host all services just from your root account with docker, or do you separate the services between user accounts with rootless docker?
Do you use podman or docker?
It’s easier to just host everything from root with normal docker, but separating services into dedicated user accounts is probably way safer, at least as far as I know. Do you think it’s worth going the extra step, or do you just trust docker and your containers not to get exploited?
Last but not least do you use an automatic update service for your host system and your containers?
I keep all my services in one docker-compose.yml and run it from a normal user account added to the docker group.
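Roughly this shape, if it helps (service names here are just examples, and note that docker group membership is effectively root-equivalent on the host):

```sh
# one-time: let my user talk to the docker daemon
# (caveat: the docker group is effectively root on the host)
sudo usermod -aG docker "$USER"

# everything lives in a single compose file, e.g.:
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  plex:
    image: linuxserver/plex
    restart: unless-stopped
  tautulli:
    image: linuxserver/tautulli
    restart: unless-stopped
EOF

docker-compose up -d
```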
I am really conscious of what I expose to the internet though, since I already almost had a security incident.
I used to run SSH on a non-standard port with password authentication enabled.
Turns out I didn’t know the sonarr/radarr containers came with default users, and a brute-force attack managed to log in as one of them (or something like that anyway, it’s been a while). Fortunately their default shell is /sbin/nologin, so crisis averted there, but it definitely was a big lesson for me.
Years later, the current setup is only plex, tautulli, and ombi open to the internet, and to reach everything else I use tailscale. And of course, only key-based authentication.
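If anyone wants to do the same, turning off password auth is quick (assumes Ubuntu's ssh service name):

```sh
# /etc/ssh/sshd_config: keys only, no passwords
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload ssh
```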
Oh and for updates, I run apt upgrade once in a while on the box (Ubuntu server 18.04 LTS) and for the containers, I use watchtower.
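Watchtower itself is just one more container pointed at the Docker socket, roughly (interval is in seconds, 86400 = daily):

```sh
docker run -d --name watchtower \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 86400
```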
Currently, I’m just using my root account with Docker and update everything manually. I have dockcheck-web installed to check whether any updates are available (https://github.com/Palleri/DCW). From the outside everything is only accessible over WireGuard, and connections have to go through a Caddy proxy to reach a container. Curious what other people’s setups are.
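The Caddy bit is tiny; one site block per container, roughly like this (hostname and upstream are placeholders):

```sh
# Caddyfile: plain-HTTP internal name, proxied to a container
# reachable from Caddy (names here are made up)
cat > Caddyfile <<'EOF'
http://service.internal {
    reverse_proxy app:8080
}
EOF
```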
I use rootless Podman, because security. A container breakout exploit will only impact that one Unix user. Plus no Docker daemon to worry about.
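Day to day it feels almost identical to docker; e.g. as a plain user:

```sh
# no daemon, no root; the container runs entirely under this user
podman run -d --name web -p 8080:80 docker.io/library/nginx

# keep the user's containers running after logout
sudo loginctl enable-linger "$USER"
```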
I don’t separate services into separate users, although maybe I should. The main impediment with separation is that you give up the conveniences of container networking / container DNS and have to connect everything over the host instead. I don’t know if that’s even possible (conveniently) with a service like Traefik that’s supposed to introspect running containers. Also, with separation by Unix user, there’s no one convenient place to SSH in and run podman ps or docker ps to see all containers. Maybe not a big deal?
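Something like this would work around the "no single podman ps" gripe, though it's clunky (user names are hypothetical):

```sh
# one rootless user per service; survey them all from one SSH session
for u in svc-plex svc-sonarr svc-traefik; do
  echo "== $u =="
  # rootless podman needs the user's runtime dir when invoked via sudo
  sudo -u "$u" XDG_RUNTIME_DIR="/run/user/$(id -u "$u")" podman ps
done
```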
Auto-update of containers: no, I don’t, because updates sometimes break things and I want to be there in case something goes wrong. The one exception is that I auto-update the containers I develop myself, as the implicit last deployment step of a CI pipeline.
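That last step is nothing fancy; it amounts to something like this run on the host over SSH (image and unit names are placeholders, and it assumes the container runs under a systemd user unit):

```sh
# pull the image the pipeline just pushed, then restart the unit
podman pull registry.example.com/myapp:latest
systemctl --user restart myapp.service
```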
I’m using overlay networks to give individual containers their own networks and keep them separated from each other.
Secondly, fail2ban is installed on the host to secure the docker services. Ban in the docker-specific FORWARD chains (DOCKER-USER) instead of the INPUT chain, since published container ports are NAT’d through FORWARD and never hit INPUT (see "Configure Fail2Ban for a Docker Container" on seifer.guru). Use 2FA for services where available.

Rootless docker has limitations when it comes to port exposing, storage drivers, overlay networks, etc.
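For the fail2ban part specifically, the jail just needs its chain pointed at DOCKER-USER; a sketch (jail name, filter, and log path are placeholders):

```sh
# ban in DOCKER-USER, the hook Docker gives you into FORWARD,
# instead of INPUT, which dockerized traffic never passes through
sudo tee /etc/fail2ban/jail.d/myservice.local >/dev/null <<'EOF'
[myservice]
enabled = true
chain   = DOCKER-USER
port    = http,https
filter  = myservice
logpath = /var/log/myservice/access.log
EOF
sudo fail2ban-client reload
```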
The host auto-installs security patches, but reboots are manual only.
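If the host is Debian/Ubuntu-based, that combination is just unattended-upgrades with the auto-reboot knob left off:

```sh
sudo apt install unattended-upgrades
# apply security updates automatically, but never reboot on its own
echo 'Unattended-Upgrade::Automatic-Reboot "false";' | \
  sudo tee /etc/apt/apt.conf.d/99-no-auto-reboot
```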
Docker containers are updated manually too. I build all containers from Dockerfiles and don’t pull prebuilt images, because most are modified (plugins, minimized sizes, dedicated user rights, etc.).

Rootless docker via Terraform: I can create all my containers with traefik and dashboard configs at the click of a button.
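The Terraform side is the standard kreuzwerker/docker provider; a stripped-down sketch of one container wired to traefik via labels (names and label values are placeholders):

```sh
cat > main.tf <<'EOF'
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

resource "docker_image" "whoami" {
  name = "traefik/whoami"
}

# one container, exposed to traefik via labels
resource "docker_container" "whoami" {
  name  = "whoami"
  image = docker_image.whoami.image_id
  labels {
    label = "traefik.enable"
    value = "true"
  }
}
EOF
terraform init && terraform apply
```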
Docker and a Synology NAS. Everything is accessed through a WireGuard VPN.
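For reference, the client side of that is only a few lines of WireGuard config (keys, addresses, and endpoint are placeholders):

```sh
cat > wg0.conf <<'EOF'
[Interface]
PrivateKey = <client-private-key>
Address    = 10.0.0.2/32

[Peer]
PublicKey  = <server-public-key>
Endpoint   = nas.example.com:51820
# route only the home subnet through the tunnel
AllowedIPs = 10.0.0.0/24
EOF
```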
Nomad, consul, and gluster. Not as easy as a simple docker compose, but definitely not as annoying as kubernetes.
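For a sense of scale, a minimal Nomad job that runs a Docker task looks roughly like this (datacenter and image are placeholders):

```sh
cat > whoami.nomad <<'EOF'
job "whoami" {
  datacenters = ["dc1"]

  group "web" {
    task "whoami" {
      driver = "docker"
      config {
        image = "traefik/whoami"
      }
    }
  }
}
EOF
nomad job run whoami.nomad
```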
I run docker on almalinux on Proxmox. Nothing is exposed to the Internet. Yes, I do automatic updates for everything, but reboots are manual.