I’m sure I’m massively overthinking this, but any help would be greatly appreciated.

I have a domain name that I bought through NameCheap and I’ve pointed it to Cloudflare (i.e. updated the name servers). I have a Synology NAS on which I run Docker and a few containers. Up until now I’ve done this using IP addresses and ports to access everything (I have a Homepage container running and just link to everything from there).

But I want to set up SSL and start running Vaultwarden, hence purchasing a domain name to make it all easier.

I tried creating an A record in Cloudflare to point to the internal IP of my NAS (and obviously, this couldn’t be orange-clouded through CF because it’s internal to my LAN). I’m very reluctant to point the A record to the external IP of my NAS (which, for added headache is dynamic, so I’d need to get some kind of DDNS) because I don’t want to expose everything on my NAS to the Internet. In actual fact, I’m not precious about accessing any of this stuff over the internet - if I need remote access I have a Tailscale container running that I can connect to (more on that later in the post). The domain name was purely for ease of setting up SSL and Vaultwarden.

So I guess my questions are:

  • What is the best way to go about this - do I set up DDNS on the NAS, point my domain in Cloudflare at that external IP, then use Traefik to expose only the containers I want to access via subdomains (roughly as in the sketch after this list)?
  • If so, how do I know that all other ports aren’t accessible? (I assume it’s because I’m only going to expose ports 80 and 443 via Traefik?)
  • What do other people (i.e. outside my network) see if they go to my domain? How do I ensure they can’t access my NAS, and instead just see some kind of placeholder page?
  • Is there a benefit to using Cloudflare?
  • How would Pi-hole and local DNS fit into this? I guess I could point my router at Pi-hole for DNS and create A records on Pi-hole for all my subdomains - but what do I need to set up initially in Cloudflare?
  • I also have an RPi that has a (very basic) website on it - how do I set up an A record to have Cloudflare point a subdomain to the Pi’s IP address?
  • Going back to the Tailscale thing - is it possible to point the domain to the IP address of the Tailscale container, so that the domain is only accessible when I switch on the Tailscale VPN? Is this a good idea/bad idea? Is there a better way to do it?
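To make the Traefik idea in the first question concrete, here is a minimal docker-compose sketch, not a definitive setup: it assumes Traefik v2 and a made-up hostname (vaultwarden.mydomain.com), publishes only ports 80 and 443, and only routes containers that carry labels.

```yaml
# Hypothetical sketch: Traefik is the only service publishing ports, and only
# labelled containers are routed; hostnames are placeholders.
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false  # nothing is routed unless labelled
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
    ports:
      - "80:80"    # the only ports forwarded on the router
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  vaultwarden:
    image: vaultwarden/server:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.vw.rule=Host(`vaultwarden.mydomain.com`)
      - traefik.http.routers.vw.entrypoints=websecure
      - traefik.http.routers.vw.tls=true
    # no "ports:" entry, so this container is only reachable through Traefik
```

In a setup like this, anything without labels (including DSM itself) simply isn’t routed, so as long as only 80 and 443 are forwarded on the router, the rest of the NAS stays unreachable from outside.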

I’m sure these are all noob-type questions, but for the past 6-7 years I’ve only ever used all of this internally via IP:port combinations, so I’ve never had to worry about domain names, external exposure, etc.

Many thanks in advance!

  • DRx@lemmy.world

    I do this for some Docker containers on my Unraid box, except I use the Zero Trust tunnels. MUCH easier, you can use SSL, and you can set up a login page for users. Also, you don’t have to open any ports on your router!

    I’m not sure about Synology, but I would assume you can find a “cloudflared” Docker image in the app store.

    Check out this YouTube video for a good explanation: https://www.youtube.com/watch?v=ZvIdFs3M5ic
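    For reference, the tunnel side is usually just one extra container. A minimal sketch, assuming you create the tunnel in the Zero Trust dashboard and paste in its token (the public-hostname-to-service mappings are configured in that dashboard, not in the compose file):

    ```yaml
    # Hypothetical cloudflared sketch; the token is a placeholder from the
    # Zero Trust dashboard, and no router ports need to be opened.
    services:
      cloudflared:
        image: cloudflare/cloudflared:latest
        command: tunnel --no-autoupdate run
        environment:
          - TUNNEL_TOKEN=<token-from-zero-trust-dashboard>
        restart: unless-stopped
    ```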

      • DRx@lemmy.world

        Christian brings up some great points worthy of consideration; however, if you’re going to use traditional routing through their network (A/CNAME records) you’re still doing the same thing. CF will still see your traffic.

        The second thing I should say is that I only use Zero Trust for websites I share with family. So, I have SearXNG and wef/voyager containers running through Zero Trust.

        For admin, Home Assistant/IoT/IP cams, I use an always-on IPsec VPN on my iPhone, iPad, and Steam Deck (I take it to work and plug into a 3rd monitor) … this is cool because I get 24/7 ad blocking no matter where I am, because it routes all my traffic through my Pi-hole at home. This is a great solution for a single person, but I do not want to manage VPN access for multiple people. So, I agree with Christian in NOT putting admin stuff/sensitive info behind CF at all (Zero Trust OR traditional web routing) unless you fully trust them. Otherwise do a 24/7 VPN like I do.

        • schmurnan@lemmy.world (OP)

          I don’t plan on exposing any of this stuff to anybody other than me. I do plan on spinning up SearX but it’ll only be me using it. I’ve given up trying to convince my family to move away from Google to even DuckDuckGo or Startpage, so there’s no way I’ll convince them to use SearX!

          I think, therefore, for access away from home I’ll perhaps set up a subdomain that points to the IP of my Tailscale container — that means it’ll be accessible externally, but only when I turn on the VPN.

          When I’m on my home network I have a VPN on my Mac anyway.

    • schmurnan@lemmy.world (OP)

      Before I was using Traefik I used to use plain NGINX and was pretty happy with it. I made the switch to Traefik after reading some good things about it on Reddit.

      More than happy to switch to NPM and give it a try. At this point I have no reverse proxy running at all, so it’s not even like I have to swap out Traefik — there’s nothing there to begin with.

  • dm_me_your_feet@lemmy.world

    Easiest Solution imo:

    • Get a wildcard DNS record and point it at the public IP of your NAS
    • Deploy the SSL cert (covering your main domain and the subdomains for your Docker containers)
    • Configure the reverse proxy in Synology to proxy requests for the subdomains to your Docker containers (you can enforce local-only access to certain services too)
    • Add a static route or local DNS (Pi-hole) to redirect local requests for your public IP to the private IP of your NAS (a rough Pi-hole sketch follows this list)
    • done!
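    A rough sketch of that last split-DNS step with Pi-hole running in Docker; the 192.168.1.10 address and the hostnames are placeholders, and the custom.list entries are what make LAN clients resolve the public names straight to the NAS:

    ```yaml
    # Hypothetical Pi-hole sketch for split DNS: point the router (or clients)
    # at this container for DNS, then map public hostnames to the NAS's LAN IP.
    services:
      pihole:
        image: pihole/pihole:latest
        ports:
          - "53:53/tcp"
          - "53:53/udp"
          - "8080:80/tcp"   # admin UI on a spare port
        environment:
          - TZ=Europe/London
        volumes:
          - ./etc-pihole:/etc/pihole
          - ./etc-dnsmasq.d:/etc/dnsmasq.d
        restart: unless-stopped

    # ./etc-pihole/custom.list (hosts-file format behind Pi-hole's "Local DNS Records"):
    #   192.168.1.10  mydomain.com
    #   192.168.1.10  vaultwarden.mydomain.com
    ```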
    • schmurnan@lemmy.world (OP)

      Thanks, I’d like to know more about how to go about this approach.

      I guess in my head, I want to achieve the following (however I go about it):

      • Access https://mydomain.com from outside my network and hit some kind of blank page that wouldn’t necessarily suggest to the public that anything exists here
      • Access https://mydomain.com from inside my network and hit a login page of some kind (Authelia or otherwise), to then gain access to the Homepage container running in Docker (essentially a dashboard to all my services)
      • Access https://secure.mydomain.com from outside my network and route through to the same as above, only this would be via the Tailscale IP address/container running on my stack to allow for remote access
      • Route all HTTP requests to HTTPS
      • Use the added protection that Cloudflare brings (orange clouds where possible)
      • SSL certificates for all services
      • Ability to spin up extra Docker containers and have them auto-obtain SSL certs (see the Traefik sketch further down)
      • Ensure that everything else on my NAS and network is secure/inaccessible other than the services I expose through Traefik

      I have no idea where Cloudflare factors in (if at all), nor how Pi-hole factors in (if at all).
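      One common way Cloudflare factors in even when nothing is proxied through it is the DNS-01 certificate challenge. A rough sketch of the relevant Traefik v2 options, assuming a scoped Cloudflare API token (the email and token are placeholders); this covers the HTTP-to-HTTPS redirect and lets new containers obtain certs automatically:

      ```yaml
      # Hypothetical Traefik cert/redirect sketch; all values are placeholders.
      services:
        traefik:
          image: traefik:v2.10
          command:
            - --providers.docker=true
            - --providers.docker.exposedbydefault=false
            - --entrypoints.web.address=:80
            - --entrypoints.web.http.redirections.entrypoint.to=websecure
            - --entrypoints.web.http.redirections.entrypoint.scheme=https
            - --entrypoints.websecure.address=:443
            - --certificatesresolvers.cf.acme.dnschallenge=true
            - --certificatesresolvers.cf.acme.dnschallenge.provider=cloudflare
            - --certificatesresolvers.cf.acme.email=you@example.com
            - --certificatesresolvers.cf.acme.storage=/letsencrypt/acme.json
          environment:
            - CF_DNS_API_TOKEN=<cloudflare-dns-edit-token>
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - ./letsencrypt:/letsencrypt
            - /var/run/docker.sock:/var/run/docker.sock:ro
      ```

      A new container would then only need a label along the lines of traefik.http.routers.<name>.tls.certresolver=cf to get its own certificate.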

      Internal stuff I’ve been absolutely fine with. Stick a domain name, a reverse proxy and DNS in front of me and it’s like I’m learning how to code a Hello World app all over again.