I’m going round in circles on this one.

What I want to do is:

  • serve up my self-hosted apps with https (to local clients only - nothing over the open web)
  • address them as ‘app.server.lan’ or ‘server.lan/app’
  • preferably host whatever is needed in docker

I think this is achievable with a reverse proxy, some kind of DNS server and self-signed certs. I’m not a complete noob but my knowledge in this area is lacking. I’ve done a fair bit of research but I’m probably not using the right terminology or whatever.
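
Roughly, the Docker side I have in mind looks like the sketch below. The image names are the official ones, but everything else is a placeholder and I know the Caddyfile and the DNS settings still need filling in, so treat it as a rough shape rather than a working stack:

    # hypothetical docker-compose.yml sketch: one reverse proxy + one local DNS server
    services:
      caddy:                        # reverse proxy terminating https for *.server.lan
        image: caddy:latest
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./Caddyfile:/etc/caddy/Caddyfile   # site definitions live here
          - caddy_data:/data                   # certs and state
        restart: unless-stopped

      pihole:                       # local DNS resolving *.server.lan to this host
        image: pihole/pihole:latest
        ports:
          - "53:53/tcp"
          - "53:53/udp"
          - "8081:80/tcp"           # admin UI, moved off 80 so it doesn't clash with caddy
        environment:
          - TZ=Etc/UTC              # admin password and other settings omitted
        restart: unless-stopped

    volumes:
      caddy_data: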

Would anyone have a link to a good guide that covers this?

  • Willdrick@lemmy.world · 22 hours ago

    I recently finished something like this at home with NPMplus + Pi-hole. I’ll never do it again, and the moment it breaks I’ll go back to just using Tailscale’s MagicDNS.

  • coffeeboba@lemmy.world · 1 day ago

    Pasting a comment I made in another similar thread:

    I use a reverse proxy (Caddy) and point a domain at my machine.ts-domain.ts.net, which hosts Caddy.

    This way I can go to service.my.domain instead of machine:port as long as I’m connected to Tailscale. Any devices not on my Tailscale network just get bounced if they hit the domain.
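
    For a rough idea, the Caddyfile ends up looking something like this. The hostname and port are made up, and the tls block is an assumption about how certs get issued for a domain that never faces the internet (it needs a Caddy build with your DNS provider’s plugin):

        # hypothetical Caddyfile: hostname and upstream port are placeholders
        jellyfin.my.domain {
            # assumption: certs via the ACME DNS challenge, since nothing here is
            # reachable from the open internet
            tls {
                dns cloudflare {env.CF_API_TOKEN}
            }
            reverse_proxy 127.0.0.1:8096
        }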

    • jonno@discuss.tchncs.de · 1 day ago

      This is the way! And without having to open a port, you can use DNS over TLS. You can also use DuckDNS or any other free dynamic DNS provider.

  • Funky_Beak@lemmy.sdf.org · 1 day ago

    Personal solution:

    • OpenSSL certs (lots of YouTube videos on best practice there; example below).
    • Nginx Proxy Manager as the reverse proxy.
    • AdGuard Home with a DNS rewrite pointing the wildcard domain at the proxy.

    This is enough, I find, for intranet use. You can get fancy and put it over a WireGuard or Tailscale network too.
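
    For the OpenSSL part, a single self-signed wildcard cert is usually enough for this kind of setup; something along these lines (the domain is a placeholder, this skips a proper CA entirely, and -addext needs OpenSSL 1.1.1+):

        # hypothetical one-liner: 10-year self-signed wildcard cert for *.server.lan
        openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
          -keyout server.lan.key -out server.lan.crt \
          -subj "/CN=*.server.lan" \
          -addext "subjectAltName=DNS:*.server.lan,DNS:server.lan"

    Upload the key and cert as a custom certificate in Nginx Proxy Manager, and point the AdGuard Home rewrite for *.server.lan at the proxy’s IP.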

  • borax7385@lemmy.world · 2 days ago

    1. Point the hostname of your service to the IP of the proxy in the DNS.

    2. For the certs you need an internal CA. I use Step CA which has ACME support so the proxy can get certificates easily.

    3. Add the root CA certificate to your computer’s certificate trust store.

    4. Profit!!
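
    In case it’s useful, the Step CA side of steps 2 and 3 looks roughly like this (paths are the step CLI defaults; adjust to your install):

        # sketch using smallstep's step / step-ca
        step ca init                                 # create the internal CA (interactive)
        step ca provisioner add acme --type ACME     # let the proxy request certs over ACME
        step-ca $(step path)/config/ca.json          # run the CA
        step certificate install $(step path)/certs/root_ca.crt   # trust the root locally

    The proxy’s ACME client then gets pointed at the internal CA’s directory URL instead of Let’s Encrypt.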

  • Scrath@lemmy.dbzer0.com · 2 days ago

    I have a pretty similar setup currently running but I bought a public domain that I use for my certificates.

    I used to have a Pi-hole as my DNS server, where I entered all the subdomains and pointed them at the right address, namely my reverse proxy.

    My reverse proxy, Nginx Proxy Manager, got the certificates from my domain registrar and forwarded requests to the correct services based on the subdomain.
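
    If it helps, the Pi-hole part can be done per-subdomain in the Local DNS records UI, or with one dnsmasq-style wildcard rule in a drop-in file (the domain and IP below are placeholders):

        # hypothetical dnsmasq drop-in, e.g. /etc/dnsmasq.d/02-lan.conf
        # "address=/domain/ip" matches the domain and every subdomain under it,
        # so one line sends everything to the reverse proxy
        address=/home.example.com/192.168.1.10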

  • glitching@lemmy.ml · 1 day ago

    Imma be the XY-problem guy here: ditch the https part. Without it, you don’t gotta deal with certs, signing, shit that’s outside your LAN, etc. It’s your LAN, do you really need that level of security? Who’s gonna sniff packets and shit on your LAN?

    Now all you need is Pi-hole, where you set up your hostnames (jellyfin.lan, nextcloud.lan, etc.), and an nginx proxy that maps e.g. jellyfin.lan to 192.168.0.123:8096. Both of them run plenty fine in Docker.
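
    the nginx side of that is a handful of lines per app (hostname and upstream below are just the example values from above):

        # plain-http server block, one per app
        server {
            listen 80;
            server_name jellyfin.lan;

            location / {
                proxy_pass http://192.168.0.123:8096;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }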

    • Willdrick@lemmy.world · 22 hours ago

      You say that, but I’ve seen so many dodgy IoT devices… Especially after deploying Pi-hole, you start to see so much random traffic from stupid stuff like a smart plug or a TV box.

      • non_burglar@lemmy.world · 21 hours ago

        If you’re on the same subnet, no amount of reverse proxying will help with dodgy apps. It’s more appropriate to put the dodgy IoT devices in a DMZ to control what they can do.

        Putting HTTPS on these is fine, but it’s not a solution for isolating bad clients.

  • solrize@lemmy.ml · 2 days ago

    I don’t know of an all-in-one-place guide, but there’s not a whole lot to it. Just look up how to do each of the parts you mentioned. I’d say that buying a domain and using Let’s Encrypt is not really in the self-hosting spirit (i.e. you should run your own DNS and CA), but it’s up to you. Running a serious CA with real security is quite hard, but for your purposes you can just do whatever. There are various programs and scripts for it. I still use CA.pl from the openssl distribution, but that’s very old school and people here hate it. Anyway, you’ll do a little head-scratching to get everything working right, but it will be educational, so you’ll get something out of it in its own right.
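
    If you skip CA.pl, the bare openssl equivalent is roughly the following (names, paths and lifetimes are just examples):

        # 1. root CA key + self-signed root certificate
        openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
          -keyout ca.key -out ca.crt -subj "/CN=My Home CA"

        # 2. key + CSR for a service behind the proxy
        openssl req -newkey rsa:2048 -nodes -keyout app.key -out app.csr \
          -subj "/CN=app.server.lan"

        # 3. sign the CSR with the CA, adding the SAN that browsers require
        openssl x509 -req -in app.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
          -days 825 -sha256 -out app.crt \
          -extfile <(printf "subjectAltName=DNS:app.server.lan")

    Import ca.crt into whatever trust stores your client devices use and the browser warnings go away.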

  • exu@feditown.com · 2 days ago

    It’s probably just easier to use public certs with DNS verification than to build and distribute your own certs.
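
    For example, with certbot and a DNS plugin (Cloudflare purely as an example; acme.sh has hooks for most providers too):

        # wildcard cert via the DNS-01 challenge; nothing has to be exposed publicly
        certbot certonly --dns-cloudflare \
          --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
          -d "example.com" -d "*.example.com"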

  • timuchan@lemmy.wtf · 2 days ago

    I think you could achieve this with largely the same method you’d typically use with Nginx, Caddy, etc.

    The main difference is that where you’d usually use ACME/Let’s Encrypt, you’ll likely need to generate your own certs with a tool like mkcert. You’ll also need to take the CA cert used to generate the SSL certs and install it on any other systems/browsers that will be accessing the apps over https (mkcert installs it for the system you generate from).
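
    Roughly, the mkcert workflow looks like this (the hostnames are placeholders):

        # create a local CA and add it to this machine's trust stores
        mkcert -install

        # issue a cert + key covering whatever the proxy will serve
        mkcert "*.server.lan" server.lan

        # print where the root CA lives, so it can be copied to other devices/browsers
        mkcert -CAROOT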