Last I checked they haven’t added user-facing controls to configure this yet. I don’t know where it is on the priority list.
https://tailscale.com/kb/1218/nextdns/
Easy to set up, mine is working great.
It’s accessing literally anything you self host from home, with minimal latency and without any port forwarding on your router or exposing your services to the Internet.
Its primary benefits are how fast it is, how easy it is to set up for even the most novice of users, and how ubiquitous all the clients are.
Plus it’s free for 100 endpoints, which is far more than most individuals will need for a home lab. And even that limit you can work around with subnet routing.
If you’ve ever wanted to run your own sort of Dropbox or Google Docs (Syncthing/Nextcloud) but didn’t want to deal with the security hassle of exposing it to the Internet, this removes that hassle completely. No more struggling with open ports, fail2ban, or messing with reverse proxies.
This is not remotely ghetto, this is really well done. Sure the fans are a bit wonky but that is one hell of a machine for the money.
Well done!
If you want to make money and kill clones make your distro free but charge for official support.
That model just does not work. For the engineering that goes into RedHat (and all the contributions they send back to the community), they just don’t make enough for that to happen. Everyone wants to shrug this off as “oh, IBM has lots of money, so that’s not a problem.”

This “make it free and charge for support” model almost never works for FOSS, yet so many people want to believe it does. At the enterprise level, it just doesn’t. People who want to use an enterprise Linux distro for free likely don’t want to pay for support either; they’d rather support it themselves. Which is all well and good, but that doesn’t account for the fact that RedHat does all the engineering, all the building, all the testing, everything, and then puts that release up for use. All of that has to be covered somehow.
There was never any promise that you’d always be able to create a “bug compatible distro”. Ever. The GPL does not cover future releases or updates and never has, and even implying that it should sets a dangerous precedent of people being entitled to what you haven’t even created yet.
Rather than hearing the emotional takes from people that want to turn this into “RedHat vs the Linux Community”, I strongly suggest you listen to LinuxUnplugged: https://linuxunplugged.com/517?t=506.
RedHat is still contributing everything upstream, and CentOS Stream is not going anywhere. You have full access to the source of whatever you buy.
The only thing that has changed here is that the loophole Alma and Rocky were using to create a RHEL clone and then offer support for it (which is literally RedHat’s own business model) is gone. Those two are throwing a tantrum because they had a nice, easy business model where they did nothing more than clone RHEL and offer support for it, and that free lunch is over. That’s it. They don’t contribute back to RHEL, they don’t do anything to help development. They sold themselves as the “free” or “cheaper” alternative, and now they’re getting burned for building their entire business off the work done by RedHat.
Everything else in this story is noise, drama, and unnecessary emotion.
Not remotely.
Maybe certain people should think twice about building an entire support business on having another company do all the engineering work, then cloning it and taking the support contracts for it.
Both Fedora and CentOS Stream are still very much upstream. Certain CentOS alternatives are just throwing a hissy-fit/tantrum because their nice, neat little “cloned distro + support” business model fell apart overnight, since they built their entire business off what’s basically (though not entirely) a loophole.
I stopped messing with port forwarding and reverse proxies and fail2ban and all the other stuff a long time ago.
Everything is accessible for login only locally, and then I add Tailscale (alternative would be ZeroTier) on top of it. Boom, done. Everything is seamless, I don’t have any random connection attempts clogging up my logging, and I’ve massively reduced my risk surface. Sure I’m not immune; if the app communicates on the internet, it must be regularly patched, and that I do my best to keep up with.
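For anyone curious what that setup actually involves, on a typical Linux host it’s roughly this (a sketch; you run the same on each machine you want in your tailnet):

```shell
# Install Tailscale via the official install script
# (review it first if piping to sh makes you nervous)
curl -fsSL https://tailscale.com/install.sh | sh

# Bring the node up and authenticate it against your tailnet
sudo tailscale up

# Verify: every device in the tailnet can now reach this host by its
# 100.x.y.z address or MagicDNS name -- no port forwarding required
tailscale status
```

After that, services that only listen locally are reachable from any of your other devices over the tailnet, while staying invisible to the open Internet.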
Just so I understand, you’re using your compose file to handle updating images? How does that work? I’m using some hacked-together recursive shell function I found to update all my images at once.
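For anyone else wondering, the compose-based update flow is usually just this, assuming a standard docker-compose.yml in the current directory (a sketch, not the commenter’s exact setup):

```shell
# Pull newer images for every service defined in the compose file
docker compose pull

# Recreate only the containers whose image actually changed;
# unchanged services are left running as-is
docker compose up -d

# Optionally reclaim disk space from the old, now-dangling images
docker image prune -f
```

No recursion needed: compose already knows every image the stack uses, so one pull covers them all.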
Side note, I really feel for you with the duplicate comments, it happens to me constantly and I know it’s not our fault :(
Tailscale completely negated any desire I’ve ever had to run any kind of proxy or VPN. The setup took all of 30 seconds to make an account, and then like 15-20 seconds per client. I set it up once several months ago and completely forgot about it…it’s just quietly working in the background, completely transparent to me.
Strong suggestion for Tailscale here. It is incredibly easy to use and very easy to set up with multiple users. Opening ports directly to the internet is a thing of the past for me now, ever since I started.
And not even a remotely creative statement. 🙄