

Destiny 2's gunplay is goddamn good, but the new player experience and the monetization are downright pathetic.
Thanks for the info, I appreciate it.
I am a newbie, so I am not sure I understand correctly. Tell me if my understanding is right.
Your Pi-hole acts as your DNS server, so the VPS uses the Pi-hole through the tunnel for name resolution, as set through the DNS directive in the wg file. For example, my Pi-hole is at 10.0.20.5, so the DNS directive will be that address.
On the local side, the Pi-hole is the DNS server for all the services on that subnet, and each service automatically registers its hostname with the Pi-hole. I can configure the DNS server in my router/firewall (OPNsense in my case).
So when I ping service.example.com, the request goes through the VPS, which queries the Pi-hole through the tunnel, and the name resolves to the local subnet IP if applicable.
So when I have the wg connection active and my Pi-hole is the DNS server, every web request will be resolved through the Pi-hole. If the resolved IP address is inside the range of AllowedIPs, the connection will go through the tunnel to the service; otherwise, the connection will go outside the wg tunnel.
Does that make sense?
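To make it concrete, here is roughly what I think my client wg file would look like, if I understand right (the keys, endpoint, and exact addresses are placeholders for my setup; correct me if I have this wrong):

```
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.20.100/32
# Pi-hole on the home subnet answers all DNS queries through the tunnel
DNS = 10.0.20.5

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
# Only traffic destined for the home subnet is routed through the tunnel
AllowedIPs = 10.0.20.0/24
```

Since 10.0.20.5 falls inside AllowedIPs, the DNS queries themselves would also go through the tunnel.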
How does WG work on the local side of the network? Do you need to connect each VM/CT to the wireguard instance?
I am currently setting up my home network again; my VPS will tunnel into my home network, and NPM will run locally on the services VLAN and redirect from there.
I wonder if there is any advantage to running NPM on the VPS instead of locally?
It is a lot simpler nowadays. Download Caddy, write a two-line config, and you are good to go.
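For example, a minimal Caddyfile for reverse proxying a single service can literally be two lines (the domain and port here are placeholders):

```
example.com

reverse_proxy localhost:8080
```

Caddy fetches and renews the TLS certificate for the domain automatically.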
Yes, but since he is working on the product itself, it’s heavily biased.
He can use the app without leaving a review.
You talk about vibecoding buddy, you think they SSH into anything?
The tech itself is great.
But:
It’s akin to when everything is urgent, nothing is.
At some point, you gotta accept that you can't do everything and move on. You can always find the information again if it comes down to it in the future. Or you can use bookmark folders so you can eventually go back to what you think is important.
If I have more than 6-7 tabs open, I check what I absolutely need to save and add that to a bookmark folder, then I close my browser and start fresh.
You gotta be nimble to navigate through 50+ tabs to find what you are looking for.
Laziness. I used Ubuntu, then tried a few distros based on it, and Linux Mint worked well enough out of the box.
I have a few issues with it, but I have easy workarounds, so that's good enough for me.
Dropshippers don’t advertise themselves as such.
I split my Docker containers so that I can easily and selectively back up what I want on Proxmox.
For example, I am currently running an Abiotic Factor server that I don't care to back up. So I just don't add the container to the backups and I am done.
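If you prefer the CLI over the backup-job GUI, something like this should work (the VMID 105 is just an example for the game-server container; double-check the flags against the vzdump man page):

```
# Back up all guests except the Abiotic Factor container (VMID 105 here)
vzdump --all --exclude 105 --storage local --mode snapshot
```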
Proxmox is a great starting point for self-hosting. You don't need advanced features to start, and you can easily create VMs and containers.
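For example, spinning up an LXC container from a downloaded template is basically a one-liner with pct (the template name, VMID, and bridge are assumptions for a typical setup; check what your storage actually has):

```
# Create a Debian container from a local template with DHCP networking
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname test-ct --memory 1024 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
```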
Here are a bunch of random tips to become more comfortable with the terminal.
Do absolutely everything that you can in the terminal.
When you install something, enable verbose output if possible and snoop around the logs to see what is happening.
If an app or an install fails, look at the logs to see what the issue is, and try to fix it by actually resolving the error itself first instead of just copying commands from the internet.
Instead of googling for command options, use the application's built-in help and try to figure out how to use the command from there. For example:
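(rsync here is just an illustration; any tool works)

```
# Skim the tool's own documentation first
rsync --help | less
man rsync
# Many tools document their subcommands too
git help commit
```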
Yeah, that was the issue. I thought I had switched to my LTE network connection from my phone, but my phone was still on my local network.
Thanks for the answer
You are right and I should have been more precise.
I understand why Docker was created and became popular: it abstracts a lot of the setup and makes deployment a lot easier.
I hate how Docker made it so that a lot of projects only offer Docker as the official way to install the software.
This is my tinfoil-hat opinion, but to me, Docker seems to enable the "phone-ification" (for lack of a better term) of software. The upside is that it is more accessible to spin up services on a home server. The downside is that we are losing the knowledge of how the different parts of the software work together.
I really like the TurnKey Linux projects. It's the best of both worlds: you deploy a container and a script sets up the container for you, but after that, you have full control over the software, like when you install from binaries.
I edited the post. Since it's all local, it's fine to show the IP; it's just a reflex to hide my IPs.
I use IPs directly, as I don't have a local domain configured properly.
The outpost IP in my configuration file is the same as the one provided in the outpost on Authentik.
I am still trying to get it to work, but I am pretty sure the issue is between Authentik and Firefly.
I don't see any of the headers specified in the Caddyfile (x-authentik-email specifically) when Authentik sends the request to Firefly. The only header I see is x-authentik-auth-callback.
I am not sure how I can specify which headers Authentik sends.
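For reference, my Caddy block is based on the forward_auth pattern from the Authentik docs and looks roughly like this (the addresses and ports are placeholders since I use raw IPs; the copy_headers list is where I would expect x-authentik-email to be set):

```
:8443 {
    forward_auth http://10.0.20.10:9000 {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers X-Authentik-Username X-Authentik-Email
    }
    reverse_proxy 10.0.20.11:8080
}
```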
My uneducated kernel take: flexibility is acceptable and desirable in small or low-impact projects.
When the majority of the internet and a good chunk of PCs depend on your project, predictability and stability are much more important than flexibility.