You’re probably already aware of this, but if you run Docker on Linux and use ufw or firewalld, it will bypass all your firewall rules. It doesn’t matter what your defaults are or how strict you are about opening ports; Docker inserts its own iptables rules ahead of yours, so it has free rein to send and receive from the host as it pleases.

If you are good at manipulating iptables there is a way around this, but it also affects outgoing traffic and can interfere with the bridge. Unless you’re a pointy-head with a fetish for iptables this will be a world of pain, so it isn’t really a solution.
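
For reference, the supported hook is the DOCKER-USER chain, which Docker creates for user rules and evaluates before its own. A rough, untested sketch of the usual recipe, where eth0 and the subnet are placeholders for your WAN interface and LAN:

# Drop NEW forwarded connections to containers arriving on the WAN interface
# unless they come from the LAN; replies to container-initiated traffic are
# ESTABLISHED, so they still pass
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -m conntrack --ctstate NEW -j DROP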

There is a tool called ufw-docker that mitigates this by manipulating iptables for you. I was happy with this as a solution and it used to work well on my rig, but for some unknown reason it’s no longer working and Docker is back to doing its own thing.

Am I missing an obvious solution here?

It seems odd for a popular tool like Docker, one that is also used in the enterprise, not to have a pain-free way around this.

  • MangoPenguin@lemmy.blahaj.zone · 4 hours ago

    It doesn’t actually bypass the firewall.

    When you tell docker to expose a port on 0.0.0.0 it’s just doing what you ask of it.

    • jobbies@lemmy.zip (OP) · 17 minutes ago

      Wow, that’s so helpful!! Not low-effort at all! You’re so clever!!

  • irmadlad@lemmy.world · 5 hours ago

    So, this discussion has intrigued me, and some good points have been brought up by seemingly knowledgeable network engineers, of which I am not one. If I may, let me introduce you to my network to see if there are points I can improve on.

    For simplicity, the network diagram would be: modem ----> standalone pfSense firewall with a Tailscale overlay, running Suricata, pfBlockerNG, VLANs to segment server traffic from normal traffic, a very robust rule set, and ntopng for traffic analysis ----> server & devices. The server is piped through Cloudflare Tunnel/Zero Trust. On the server I run UFW, fail2ban with a hair trigger, and CrowdSec. Also, since I am the only user, I lock everything down in hosts.allow/hosts.deny and use SSH keys. Users cause complexities, and complexities turn into issues. All devices are running a VPN. I do run Docker in lieu of Podman. The server has been hardened through various means, to an extent in line with Lynis.

    I’ve been told that this is over-engineered, but it seems to work just dandy. Knock on wood, I’ve never had a breach on my local network, though there is always the possibility. A long time ago, when I stood my first server up on a VPS, it got hacked almost immediately. So I dropped back and did some studying, but I am no network engineer.

    Anyways, for the experts here, my question is: What would you do to improve, harden, rip out, redo, add etc?

    ETA: Server also has a tailscale overlay.

  • mlg@lemmy.world · 11 hours ago

    How I sleep knowing Fedora + podman actually uses safe firewalld zones out of the box instead of expecting the user to hack around with the clown show that is ufw.

    I could be wrong here, but I feel like the answer is in the docs themselves:

    If you are running Docker with the iptables or ip6tables options set to true, and firewalld is enabled on your system, in addition to its usual iptables or nftables rules, Docker creates a firewalld zone called docker, with target ACCEPT.

    All bridge network interfaces created by Docker (for example, docker0) are inserted into the docker zone.

    Docker also creates a forwarding policy called docker-forwarding that allows forwarding from ANY zone to the docker zone.

    Modify the zone to your security needs? Or does Docker reset the zone rules every startup? If this is the same as podman, the docker zone should only accept traffic that your public zone (the one with your physical NIC) passes along, which would mean you don’t have to do anything, since the public default is to DROP.
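
    If it does persist like a normal zone, the tightening might look something like this (untested, and whether Docker resets it on restart is exactly the open question):

    # Stop blanket-accepting everything forwarded into the zone
    firewall-cmd --permanent --zone=docker --set-target=default
    # ...then open only what you actually need, e.g. DNS
    firewall-cmd --permanent --zone=docker --add-service=dns
    firewall-cmd --reload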

  • fizzle@quokk.au · 12 hours ago

    I basically just avoid exposing ports from containers unless I really do want them exposed on the host?

    Most services go through my reverse proxy, traefik.

    Things like databases don’t publish ports on the host because they’re only accessed internally, using their container name.
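
    Stripped right down, the shape is something like this (images and names purely illustrative):

    services:
      proxy:
        image: traefik:v3
        ports:
          - "443:443"              # the only port published on the host

      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: changeme   # placeholder
        # no ports: section at all -- other containers on the default compose
        # network reach it as db:5432, but nothing is bound on the host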

  • Melmi@lemmy.blahaj.zone · 14 hours ago

    If there’s a port you want accessible from the host/other containers but not beyond the host, consider using the expose directive instead of ports. As an added bonus, you don’t need to come up with arbitrary ports to assign on the host for every container with a shared port.

    IMO it’s more intuitive to connect to a service via container_name:443 than localhost:8443.
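
    A minimal sketch (worth noting that expose is mostly self-documentation; containers on a shared network can reach each other’s ports either way):

    services:
      api:
        image: nginx:latest
        expose:
          - "443"   # other containers reach it as api:443; nothing is published on the host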

    • dan@upvote.au · 23 hours ago

      You can override this by setting an IP on the published port, so that a local-only service is only accessible on 127.0.0.1.

      Also, if the Docker container only has to be accessed from another Docker container, you don’t need to expose a port at all. Docker containers can reach other Docker containers in the same compose stack by hostname.

      • Matt The Horwood@lemmy.horwood.cloud · 15 hours ago

        Sure, you can see below that port 53 is bound only to a secondary IP I have on my Docker host.

        ---
        services:
          pihole01:
            image: pihole/pihole:latest
            container_name: pihole01
            ports:
              - "8180:80/tcp"
              - "9443:443/tcp"
              - "192.168.1.156:53:53/tcp" # this will only bind to that IP
              - "192.168.1.156:53:53/udp" # this will only bind to that IP
              - "192.168.1.156:67:67/udp" # this will only bind to that IP
            environment:
              TZ: 'Europe/London'
              FTLCONF_webserver_api_password: 'mysecurepassword'
              FTLCONF_dns_listeningMode: 'all'
            dns:
              - '127.0.0.1'
              - '192.168.1.1'
            restart: unless-stopped
            labels:
                - "traefik.http.routers.pihole_primary.rule=Host(`dns01.example.com`)"
                - "traefik.http.routers.pihole_primary.service=pihole_primary"
                - "traefik.http.services.pihole_primary.loadbalancer.server.port=80"
        
      • tux7350@lemmy.world · 23 hours ago

        Something like this. This compose.yml only allows connections from the local host, via host port 8080, through to container port 80.

        services:
          webapp:
            image: nginx:latest
            container_name: local_nginx
            ports:
              - "127.0.0.1:8080:80"
        
          • tux7350@lemmy.world · 22 hours ago

            Well, if your reverse proxy is also inside a container, you don’t need to expose the port at all. As long as the containers are in the same Docker network, they can communicate.

            If your reverse proxy is not inside a docker container, then yes this method would work to prevent clients from connecting to a docker container.

              • tux7350@lemmy.world · 21 hours ago

                Course, feel free to DM if you have questions.

                This is a common setup. Have a firewall block all traffic. Use docker to punch a hole through the firewall and expose only 443 to the reverse proxy. Now any container can be routed through the reverse proxy as long as the container is on the same docker network.

                If you define no network, the containers are put into a default bridge network; use docker inspect to see the container IPs.

                Here is an example of how to define a custom docker network called “proxy_net” and statically set each container ip.

                networks:
                  proxy_net:
                    driver: bridge
                    ipam:
                      config:
                        - subnet: 172.28.0.0/16
                
                services:
                  app1:
                    image: nginx:latest
                    container_name: app1
                    networks:
                      proxy_net:
                        ipv4_address: 172.28.0.10
                    ports:
                      - "8080:80"
                
                  whoami:
                    image: containous/whoami:latest
                    container_name: whoami
                    networks:
                      proxy_net:
                        ipv4_address: 172.28.0.11
                

                Notice how whoami is not exposed at all. The nginx container can now serve the whoami container with the proper config, pointing at 172.28.0.11.
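
                A hypothetical snippet of that “proper config”, dropped into app1’s nginx conf:

                server {
                    listen 80;
                    location / {
                        # forward everything to whoami's static address from the compose file above
                        proxy_pass http://172.28.0.11:80;
                    }
                }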

    • Björn@swg-empire.de · 24 hours ago

      Yeah, leaving unwanted ports open is a configuration problem. A firewall just gives you the opportunity to fuck up twice.

  • bizdelnick@lemmy.ml · 23 hours ago

    I’ve read the article you pointed to. What is written there and what you wrote here are absolutely different things. Docker does integrate with firewalld and creates a zone. Have you tried configuring filters for that zone? Ufw is just too dumb because it is suited for workstations that do not forward packets at all, so it cannot be integrated with docker by design.

    • BlueBockser@programming.dev · 23 hours ago

      +1 for Podman. I’ve found rootful Podman Quadlets to be a very nice alternative to Docker Compose, especially if you’re using systemd anyway for timers, services, etc.
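
      For anyone who hasn’t seen one, a Quadlet is just a unit-style file; a minimal sketch (image and names illustrative), saved as /etc/containers/systemd/whoami.container:

      [Container]
      Image=docker.io/traefik/whoami:latest
      # publish on loopback only, same trick as elsewhere in this thread
      PublishPort=127.0.0.1:8080:80

      [Install]
      WantedBy=multi-user.target

      After a systemctl daemon-reload it shows up as a plain whoami.service.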

  • davad@lemmy.world · 1 day ago

    In an enterprise setting, you shouldn’t trust the server firewall. You lock that down with your network equipment.

    Edit: sorry, I failed to read the whole post 🤦‍♂️. I don’t have a good answer for you. When I used docker in my homelab, I exposed services using labels and a traefik container similar to this: https://docs.docker.com/guides/traefik/#using-traefik-with-docker

    That doesn’t protect you from accidentally exposing ports, but it helps make it more obvious when it happens.

    • jobbies@lemmy.zip (OP) · 1 day ago

      In an enterprise setting, you shouldn’t trust the server firewall. You lock that down with your network equipment.

      I thought someone might say this, but it doesn’t seem very zero-trust?

      Ideally you’d still want the host to be as secure as humanly possible?

  • gerowen@piefed.social · 22 hours ago

    I just host everything on bare metal and use systemd to lock down/containerize things as necessary, even adding my own custom drop-ins for software that ships its own systemd service file. systemd is way more powerful than people often realize.

    • prettybunnys@piefed.social · 20 hours ago

      When you say you’re using systemd to lock down/containerize things as necessary, can you explain what you mean?

      • moonpiedumplings@programming.dev · 19 hours ago

        I don’t know what the commenter you replied to is talking about, but systemd has its own firewalling and sandboxing capabilities. They probably mean that they don’t use Docker for deployment of services at all.

        Here is a blogpost about systemd’s firewall capabilities: https://www.ctrl.blog/entry/systemd-application-firewall.html

        Here is a blogpost about systemd’s sandboxing: https://www.redhat.com/en/blog/mastering-systemd

        Here is the archwiki’s docs about drop in units: https://wiki.archlinux.org/title/Systemd#Drop-in_files

        I can understand why someone would like this, but this seems like a lot to learn and configure, whereas podman/docker deny most capabilities and network permissions by default.
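
        For a taste of the firewalling link, it boils down to two directives, which you can even try ad hoc (the demo server is just an example):

        # only loopback clients can reach this process; filtering happens at the
        # cgroup level via eBPF, no iptables involved
        systemd-run -p IPAddressDeny=any -p IPAddressAllow=localhost /usr/bin/python3 -m http.server 8000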

      • gerowen@piefed.social · 16 hours ago

        Systemd has all sorts of options. If a service has certain sandbox settings applied, such as a private /tmp, a private /proc, restricted access to certain folders or devices, or a restricted set of system calls, then systemd gives the process its own namespaces with all your settings applied, and the process runs inside that isolated view (visible under /proc/PID/root).

        I’ve found it a little easier than managing a full blown container or VM, at least for the things I host for myself.

        If a piece of software provides its own service file that isn’t as restricted as you’d like, you can use systemctl edit to add options of your choosing to a “drop-in” file that gets loaded and applied at runtime, so you don’t have to worry about a package update overwriting your changes.
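
        For example, a drop-in added with systemctl edit foo.service (foo and the directives are just illustrative) might be:

        [Service]
        PrivateTmp=yes
        ProtectSystem=strict
        ProtectHome=yes
        NoNewPrivileges=yes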

        And you can even get ideas for settings to apply to a service to increase security with:

        systemd-analyze security SERVICENAME

  • ryokimball@infosec.pub · 1 day ago

    I use podman instead, though I’m honestly not certain this “fixes” the problem you described; I assume it does, purely because of the rootless point.

    Agreeing with the other poster: network tooling, rather than relying on the server itself, is the professional fix.

    • Overspark@piefed.social · 24 hours ago

      Podman explicitly supports firewalls and does not bypass them like docker does, no matter whether you’re using root mode or not. So IMHO that is the more professional solution.

  • Phoenixz@lemmy.ca · 1 day ago

    I’ve had similar issues using the CSF firewall. They just pushed out updates that apparently support Docker a little better, but I still have to fight with it to get it working. I don’t know if that will fix it, but give it a try.

  • GreenKnight23@lemmy.world · 21 hours ago

    this is the second time I’ve seen a post like this.

    docker has always been like this. if it’s news to you then you must be new to docker.

    if you’re using the built in firewall to secure your system on your wan, you’re doing it wrong. get a physical firewall. if you’re doing it to secure your lan then you just need to put in some proper routes and let your hardware firewall sort it out with some vlans.

    don’t rely on firewalld or iptables for anything.

    • lukecyca@lemmy.ca · 20 hours ago

      What if you rent a bare metal server in a data center? Or rent a VPS from a basic provider that expects you to do your own firewalling? Or run your home lab docker host on the same vlan as other less trusted hosts?

      It would be nice if there was a reliable way to run a firewall on the same host that’s running docker.

      You may say these are obscure use cases and that they are Wrong and Bad. Maybe you’re right, but personally I think it’s an unfortunate gap in expected functionality, if for no other reason than defense-in-depth.

      • GreenKnight23@lemmy.world · 19 hours ago

        What if you rent a bare metal server in a data center?

        any msp will work with your security requirements for a cost. if you can’t afford it, then you shouldn’t be using an msp.

        Or rent a VPS from a basic provider that expects you to do your own firewalling?

        find a better msp. if a vendor you’re paying tells you to fuck off with your requirements for a secure system, they are telling you that you don’t matter to them and their only goal is to take your money.

        Or run your home lab docker host on the same vlan as other less trusted hosts?

        don’t? IDK what to tell you if you understand what a vlan is and still refuse to set one up properly to segment your network securely.

        It would be nice if there was a reliable way to run a firewall on the same host that’s running docker.

        don’t confuse reliable with convenient. iptables and firewalld are not reliable, but they are certainly convenient.

        You may say these are obscure use cases and that they are Wrong and Bad. Maybe you’re right, but personally I think it’s an unfortunate gap in expected functionality, if for no other reason than defense-in-depth.

        poor network architecture is no excuse. do it the proper way or you’re going to get your shit exposed one day.

          • GreenKnight23@lemmy.world · 13 hours ago

            • anyone gaining physical or remote access to the device can set rules. by protecting the entire network with a hardware firewall you mitigate attack vectors from other hardware on your network that become compromised.
            • iptables and firewalld are notorious for locking users out of the system by overzealous or green system admins. in the msp world this happens practically by the hour.
            • iptables and firewalld can be used against you in the event of a breach. one of the first things an attacker may attempt is to forward ports and lock system admins out as they take over the system.
            • make sure you save your rules properly or they’ll be gone after a reboot or botched upgrade
            • migrating your rules from one system to another when you’re changing hardware or restoring a system is a huge pain in the ass.
            • got a network change that’s going to modify the subnet your systems are on? get ready to migrate all 15 of your devices one by one for the next 8-15 hours (depending on the complexity of your rules)

            it’s far easier, and safer to have all your network config done in the network. from system migrations to securing/hardening. it’s far more efficient and effective to have a single source of truth that manages network routing and firewall rules. hell, you can even have a redundant or load balanced firewall configuration if you’re afraid of a single point of failure.

            point is, firewalld and iptables are for amateur hour and hobbyists.

            if you want to complain that “docker doesn’t respect system firewalls” then at least have the chutzpah to do it the right way from the beginning.

            • atzanteol@sh.itjust.works · 11 hours ago

              point is, firewalld and iptables is for amateur hour and hobbyists.

              Which is weird for you to say since practically all of the issues you list are mistakes that amateurs and hobbyists make.

            • slazer2au@lemmy.world · 13 hours ago

              None of those speak to the reliability of iptables. They all sound like skill issues.

              In 15 years of network engineering iptables has been the simplest part.

              A layered approach with hardware firewalls is valid, but when those firewalls get popped (looking at you, Cisco, Fortinet, and PA) you still want host-level restrictions.
              Your firewall or switch should never be used as a jump host to servers.

  • dan@upvote.au · 23 hours ago

    If you are good at manipulating iptables there is a way around this

    Modern systems shouldn’t be using iptables any more.