• 0 Posts
  • 118 Comments
Joined 3 years ago
Cake day: July 2nd, 2023


  • Fair, though I personally don’t let my ISP indirectly dictate what I do with my LAN. If I didn’t already have a v6-enabled WAN, I would still manage my LAN using IPv6 private range addresses. There are too many benefits to me, like having VMs and containers be first-class citizens on my LAN, rather than sitting behind yet another layer of NAT. That lets me avoid port forwarding at the border of my home Kubernetes cluster (or formerly, my Docker Swarm), and it means my DNS names correctly resolve to a valid IP address that’s usable anywhere on my network (because no NAT when inside the LAN).

    I will admit that NAT64 is kind of a drag for accessing v4-only resources like GitHub, but that’s only necessary because they’ve not lit up v6 support (despite other parts of their site supporting v6).

    This is my idea of being future-ready: when the future comes, I’m already there.


  • The approach isn’t invalid, but seeing as you already have the framework set up to deny all and log for IPv4, the same could be done with IPv6.

    That is to say, your router advertises an IPv6 gateway to the global internet, but you then reject it because your VPN doesn’t support v6 (sadly). I specifically say reject, rather than drop, because you want that ICMPv6 Destination Unreachable (administratively prohibited) message to get returned to any app trying to use v6. That way, Happy Eyeballs will gracefully and quickly fall back to v4. Unless your containers have some exceptionally weird routing rules, v6 connections will only be attempted once, and will always use the route advertised. So if your router denies this attempt, your containers won’t try again in a way that could leak. v6 leaks are more likely when there isn’t even a route advertised.

    This leaves your apps ready for v6, for that day when your VPN supports it, so it’s just a question of when the network itself can be upgraded. IMO, apps should always try v6 first; the network (if it can’t support it) will affirmatively reply that it can’t, and apps will gracefully fall back.

    This also benefits you by logging all attempted v6 traffic, to know how much of your stuff is actually v6-capable. And more data is always nice to have.
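
    The reject-then-fall-back behaviour can be sketched in a few lines of Python. This is a crude, sequential stand-in for real Happy Eyeballs (RFC 8305 races the address families in parallel), just to show why a fast rejection matters more than a silent drop:

```python
import socket

def connect_with_fallback(host, port, timeout=2.0):
    """Try IPv6 first, then IPv4. A router that *rejects* v6 makes the
    first attempt fail immediately; one that *drops* leaves us stuck
    waiting out the timeout before we ever get to v4."""
    for family in (socket.AF_INET6, socket.AF_INET):
        try:
            infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        except socket.gaierror:
            continue  # no addresses at all in this family
        for af, socktype, proto, _name, addr in infos:
            sock = socket.socket(af, socktype, proto)
            sock.settimeout(timeout)
            try:
                sock.connect(addr)
                return sock  # first family that answers wins
            except OSError:
                sock.close()  # rejected -> move on to the next candidate
    raise OSError(f"no usable route to {host}:{port} over v6 or v4")
```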


  • Firstly, I wish you the best of luck in your community’s journey away from Discord. This may be a good time to assess what your community needs from a new platform, since Discord targeted various use-cases that no single replacement can hope to cover in full. Identifying exactly what your group needs, and doesn’t need, will steer you in the right direction.

    As for Element, bear in mind that their community and paid versions do not exactly target a hobbyist self-hosting clientele. Instead, Element is apparently geared more for enterprise on-premises deployment (like Slack, Atlassian JIRA, Asterisk PBX) and that’s probably why the community version is also based on Kubernetes. This doesn’t mean you can’t use it, but their assumptions about deployments are that you have an on-premises cloud.

    Fortunately, there are other Matrix homeservers available, including one written in Rust that has both bare metal and Docker deployment instructions. Note that I’m not endorsing this implementation, but only know of it through this FOSDEM talk describing how they dealt with malicious actors.

    As an aside, I have briefly considered Matrix before as a group communications platform, but was put off by their poor E2EE decisions, for both the main client implementation and in the protocol itself. Odd as it sounds, poor encryption is worse than no encryption, because of the false assurance it gives. If I did use Matrix, I would not enable E2EE because it doesn’t offer me many privacy guarantees, compared to say, Signal.


  • My Ecobee thermostat – which is reasonably usable without an Internet connection – has one horrific flaw: the built-in clock seems to drift by a minute per month, leading to my programmed schedules shifting ever so slightly.

    I could connect it to a dedicated IoT SSID and jail it in a VLAN so that it only has access to my NTP server… or I can just change the time manually every six months as part of the DST switch.





  • Admittedly, I haven’t finished reflashing my formerly-Meshtastic LoRa radios with MeshCore, so I haven’t been able to play around with it yet. Although both meshes have a decent presence near me, I was swayed to MeshCore after looking into how the mesh algorithm works for each. No extra license needed, since MeshCore supports roughly the same hardware as Meshtastic.

    And what I learned – especially from following the #meshtastic and #meshcore hashtags on Mastodon – is that Meshtastic has some awful flooding behavior when sending messages. Having worked in computer networks, I can say that’s a recipe for limiting the maximum size and performance of the mesh, whereas MeshCore has a more sensible routing protocol for passing messages along.

    My opinion is that mesh networking’s most important use-case should be reliability: when everything else (eg fibre, cellular, landlines) stops working, people should be able to self-organize and build a working communications system. This includes scenarios where people are sparsely spaced (eg a hurricane disaster with people on rooftops awaiting rescue) but also extremely dense scenarios (eg a protest where the authorities intentionally shut off phone towers, or a Taylor Swift concert where data networks are completely congested). Meshtastic’s flooding would struggle in the latter scenario to push a distress message out of the immediate vicinity, whereas MeshCore would at least try to intelligently route through nodes that haven’t already received the initial message.
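
    The difference between the two strategies can be seen with a toy graph model (this illustrates flooding versus path routing in general, not either project’s actual algorithm):

```python
from collections import deque

def flood_cost(adj, src):
    """Managed flooding: every node that hears a new message rebroadcasts it
    once, so the airtime cost grows with the *size* of the mesh."""
    seen, q = {src}, deque([src])
    while q:
        n = q.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                q.append(m)
    return len(seen)  # roughly one transmission per reachable node

def routed_cost(adj, src, dst):
    """Path routing: forward only along one discovered path, so the airtime
    cost grows with the *distance* between the endpoints (BFS hop count)."""
    prev, q = {src: None}, deque([src])
    while q:
        n = q.popleft()
        for m in adj[n]:
            if m not in prev:
                prev[m] = n
                q.append(m)
    hops, n = 0, dst
    while prev[n] is not None:
        hops, n = hops + 1, prev[n]
    return hops
```

    In a dense mesh – the concert scenario – flooding costs roughly one transmission per node for every message sent, while a routed path may only be a few hops long.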


  • Very interesting! I’m no longer pursuing Meshtastic – I’m changing my hardware over to run MeshCore now – but this is quite a neat thing you’ve done here.

    As an aside, if you later want full network connectivity (a Layer 2 link) using the same style of encoding data as messages, PPP could do that. If transported over Meshtastic, PPP could give you a standard IP network, and on top of that, you could use SSH to securely access your remote machine.

    It would probably be very slow, but PPP was also used for dial-up so it’s very accommodating. The limiting factor would be whether the Meshtastic local mesh would be jammed up from so many messages.



  • I’ll take a stab at the question. But I’ll need to lay some foundational background information.

    When an adversarial network is blocking connections to the Signal servers, the Signal app will not function. Outbound messages will still be encrypted, but they can’t be delivered to their intended destination. The remedy is to use a proxy, which is a server that isn’t blocked by the adversarial network and which will act as a relay, forwarding all packets to the Signal servers. The proxy cannot decrypt any of the messages, and a malicious proxy is no worse than blocking access to the Signal servers directly. A Signal proxy specifically forwards only to/from the Signal servers; this is not an open proxy.

    The Signal TLS Proxy repo contains a Docker Compose file, which will launch Nginx as a reverse proxy. When a Signal app connects to the proxy on port 80 or 443, the proxy will – in the background – open a connection to the Signal servers. That’s basically all it does. They presumably shipped the proxy as a Docker Compose file because that’s fairly easy for most people to set up.

    But now, in your situation, you already have a reverse proxy for your selfhosting stack. While you could run Signal’s reverse proxy in the background and then have your main reverse proxy forward to that one, it would make more sense to configure your main reverse proxy to directly do what the Signal reverse proxy would do.

    That is, when your main proxy sees a connection for one of the dozen Signal subdomains, it should reverse proxy it to that same hostname out on the public Internet. Normally, for the rest of your self-hosting arrangement, the reverse proxy targets some container running on your LAN. But in this specific case, the target is out on the public Internet: the original connection comes in from the Internet, and the target is somewhere out there too. Your reverse proxy simply is a relay station.

    There is nothing particularly special about Signal choosing to use Nginx in reverse proxy mode, in that repo. But it happens to be that you are already using Nginx Proxy Manager. So it’s reasonable to try porting Signal’s configuration file so that it runs natively with your Nginx Proxy Manager.

    What happens if Signal updates that repo to include a new subdomain? Well, you wouldn’t receive that update unless you specifically check for it and then update your proxy configuration yourself. So that’s one downside.

    But seeing as the Signal app demands port 80 and 443, and you already use those ports for your reverse proxy, there is no way to avoid programming your reverse proxy to know the dozen subdomains. Your main reverse proxy cannot send the packets to the Signal reverse proxy if your main proxy cannot even identify that traffic.
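
    To make “your reverse proxy simply is a relay station” concrete, here’s a bare-bones TCP relay sketch in Python. It blindly splices one listener onto one upstream; a real deployment (Nginx’s stream module, or Nginx Proxy Manager’s equivalent) would instead pick among the dozen Signal hostnames based on the TLS SNI, and in either case nothing is ever decrypted in transit:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until EOF, then tear both sockets down."""
    try:
        while (chunk := src.recv(4096)):
            dst.sendall(chunk)
    except OSError:
        pass  # the peer went away; fall through to cleanup
    finally:
        src.close()
        dst.close()

def relay(listen_port, upstream):
    """Accept clients forever and splice each one to `upstream`, a
    (host, port) tuple such as ("chat.signal.org", 443). The relay
    only shuffles ciphertext; TLS stays end-to-end."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen()
    while True:
        client, _addr = srv.accept()
        up = socket.create_connection(upstream)
        threading.Thread(target=pipe, args=(client, up), daemon=True).start()
        threading.Thread(target=pipe, args=(up, client), daemon=True).start()
```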


  • I have a Ubiquiti EdgeRouter (old, and I’m looking into replacing it with a FreeBSD box) and I have a similar issue where the router – or maybe the ISP? – misses a DHCP renewal, resulting in the wholesale loss of connectivity. It’s even more annoying because the ISP simultaneously rejects follow-up DHCP requests, on the theory that if the renewal was missed, the device cannot possibly exist anymore, at least for a few minutes.

    Since this router takes 12 minutes to manually reboot, that’s usually enough time for the ISP to clear their cache and everything comes back up properly. But it’s terribly annoying, hence why I’m looking to finally replace this router.


  • I’m going off what I remember from a decade ago when working on embedded CPUs that have an Ethernet interface. IIRC, the activity LED – whether a separate LED than the link LED, or combined as a single LED – is typically wired to the PHY (the chip which converts analog signals on the wire/fibre into logical bits), as part of its transceiver functions. But some transceivers use a mechanism separate from the typical interface (eg SGMII) to the MAC (the chip which understands Ethernet frames; may be integrated into the PHY, or integrated into the CPU SoC). That auxiliary interface would allow the MAC to dictate what the LED should indicate.

    In either case, there isn’t really a prescribed algorithm for what level of activity should warrant faster blinking, and certainly no de facto standard between switch and NIC manufacturers. But generally, there will be something like 4 different “speeds” of blinking, based on whatever criteria the designers chose to use.


  • The full-blown solution would be to run your own recursive DNS server on your local network, to block or redirect traffic for any other DNS server to your own, and possibly to block all known DoH servers.

    This would solve the DNS leakage issue, since your recursive server would learn the authoritative NS for your domain, and so would contact that NS directly when processing queries for any of your subdomains. This cuts out the possibility of any espionage by your ISP’s/Google’s/Quad9’s DNS servers, because they’re now uninvolved. That said, your ISP could still spy on the raw traffic to the authoritative NS, but from your experiment, they don’t seem to be doing that.

    Is a recursive DNS server at home a tad extreme? I used to think so, but we now have people running Pi-hole and similar software, which can run in recursive mode when paired with Unbound (the recursive DNS server software).
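
    For a sense of what the recursive server actually sends to an authoritative NS, here’s a hand-built minimal DNS query per RFC 1035 (RD=0, since we’re not asking anyone to recurse on our behalf; the name and query ID here are arbitrary):

```python
import struct

def build_query(qname, qtype=1, qid=0x1234):
    """Build a minimal DNS query packet by hand (RFC 1035).
    qtype=1 asks for an A record; qid would normally be randomized."""
    # Header: id, flags (RD=0 when talking to an authoritative NS),
    # 1 question, 0 answer/authority/additional records.
    header = struct.pack(">HHHHHH", qid, 0x0000, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    labels = b"".join(
        bytes([len(part)]) + part.encode("ascii")
        for part in qname.rstrip(".").split(".")
    )
    question = labels + b"\x00" + struct.pack(">HH", qtype, 1)  # class IN
    return header + question
```

    You’d then sendto() those bytes over UDP port 53 to the NS address learned from the parent zone, and parse the answer out of the reply.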

    <minor nitpick>

    “It was DNS” typically means that name resolution failed or did not propagate per its specification. Whereas I’m of the opinion that if DNS is working as expected, then it’s hard to pin the blame on DNS. For example, forgetting to renew a domain is not a DNS problem. And setting a bad TTL or a bad record is not a DNS problem (but may be a problem with your DNS software). And so too do I think that DNS leakage is not a DNS problem, because the protocol itself is functioning as documented.

    It’s just that the operators of the upstream servers see dollar signs when selling their users’ data. Not a DNS problem, but rather a capitalism problem, IMO.

    </minor nitpick>


  • I loaded True Nas onto the internal SSD and swapped out the HDD drive that came with it for a 10tb drive.

    Do I understand that you currently have a SATA SSD and a 10TB SATA HDD plugged into this machine?

    If so, it seems like a SATA power splitter on the SSD’s power lead would suffice, in spite of the computer store’s admonition. The reason to split power off the SSD’s connector is that an SSD draws much less power than spinning rust.

    Can it still go wrong? Yes, but that’s the inherent risk when pushing beyond the design criteria of what this machine was originally built for. That said, “going wrong” typically means “won’t turn on”, not “halt and catch fire”.


  • In my mind, I figured that an attacker would sidestep L3 entirely and just craft raw L2 frames containing TCP headers with src_addr set to every possible address in the subnet. But that too would require elevated privileges, so point taken.

    That said, using most of the same general scenario where S is blitheringly unsecured against internal threats – under the false pretense that NAT somehow provides security – a DNS rebinding attack that uses an unwitting user’s web browser to proxy Mallory’s traffic to S could succeed. Maybe not SSH per se, but any internal service that S is hosting would be vulnerable.

    This isn’t an attack that’s per se exacerbated by NAT, but a good-and-proper firewall config on the network and on S would easily protect against it, which is why I mention it. If NAT is believed to be “security”, then almost certainly the firewall configuration will be overlooked and attack vectors will be left open.


  • The difference between NAT and firewalls is a common point of confusion, and it particularly irks me because I am very pro-IPv6 and very anti-NAT. Nevertheless, we should start with the raison d’être of NAT and of firewalls.

    Network Address Translation (NAT) is a variety of technologies used to map one (or more) external-facing IP addresses to one (or more) internal IP addresses. The various flavors of NAT are described in RFC2663, but suffice it to say that the most common situation is to share a single public IP with many internal machines on a LAN, by rewriting port numbers to distinguish each internal machine’s packets from each other. This is usually called Network Address/Port Translation (NAPT) and necessarily only works for protocols that the translator understands (usually just TCP, UDP, and ICMP).

    A firewall is a security feature that inspects and rejects unwanted traffic from entering or exiting a network. A simple configuration would be to reject all ingress traffic, except when that traffic is expected as a reply to an earlier packet sent from the internal network. This sort of simple configuration is sufficient for client-server applications, such as web browsing, but will not support peer-to-peer software nor would it support hosting services at home. Note that rejecting traffic at the firewall still means that the packet had to be sent through the ISP all the way to the firewall, and then blocked there.

    With these definitions in mind: all commercially available home routers of the past two decades have included both NAT and a default firewall, even if people did not necessarily notice the presence of the latter. This has led to the perception that the NAT alone is what keeps the internal network safe, but it really doesn’t.

    Perhaps the best way I can explain why NAT != security is by pointing to the Wikipedia page on NAT traversal. This is when various applications (legitimate or not) have a valid need to overcome the port mapping that NAPT does, or whatever other mangling the translator is doing. STUN and TURN are two technologies explicitly designed to help break through a NAT, and they’re in common use for enabling voice chat in video games and video/screen sharing in telepresence applications. Those two examples have a need to send traffic directly inbound to a user’s machine, and the fact that this is possible means that NAT can be (and has been) abused to break into networks.

    Here is the general structure of a malicious break-in to a network, in spite of NAT, requiring only an unwitting machine on the inside to help jump-start the attack. Consider that we have a secure machine (eg accounting database server) on the network, labeled S. And we have a number of user machines, with one of them being labeled U. Somewhere faraway, Mallory has set up a scam website on HTTP port 80, and convinces the user on machine U to visit it. Through any available mechanism that Mallory can conceive, Mallory gets U to send an outbound packet with a fake source IP and with src port 22. Actually, Mallory gets U to send loads of packets, each from a different source IP. By sheer brute force, one of these packets will have the forged source IP of S and port 22.

    From the NAPT translator’s perspective, it just sees a bunch of outbound connection attempts to a faraway server. So it happily does translation and passes the connection through. It also marks the port as “established”, meaning a reply will be allowed back into the network. It turns out, all of these connections are to Mallory’s machine. Uh oh.

    Mallory can now send packets back towards the internal network, and the NAT lets those packets through. Mallory specifically crafts an SSH connection attempt, and lo and behold, the accounting machine is running a seriously out-of-date, unpatched version of SSH server (why? idk). Pwned. You can replace any part of this theoretical attack to make it more plausible, but that’s the rub: it’s entirely possible. NAT does not provide security. Mallory can trick the translator.
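
    The translator’s blind spot in that story can be shown with a toy NAPT state table (addresses below are from documentation ranges, and a real translator tracks much more per flow):

```python
class ToyNAPT:
    """Toy NAPT state table: maps outbound flows to public ports and lets
    'replies' back in. It trusts the claimed source address -- that's the
    blind spot the forged-source trick exploits."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        # (remote_ip, remote_port, public_port) -> (internal_ip, internal_port)
        self.established = {}
        self.next_port = 40000

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        """An internal host (or U forging S's address) opens a connection out."""
        pub_port = self.next_port
        self.next_port += 1
        self.established[(dst_ip, dst_port, pub_port)] = (src_ip, src_port)
        return self.public_ip, pub_port

    def inbound(self, src_ip, src_port, pub_port):
        """A packet from outside is allowed only if it matches an established
        mapping -- which Mallory's 'reply' now does."""
        return self.established.get((src_ip, src_port, pub_port))
```

    One forged outbound packet claiming to come from S:22 toward Mallory creates an “established” mapping, and Mallory’s “reply” is then translated straight to S’s port 22.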

    Why didn’t the firewall do anything? As explained before, a default firewall allows any machine on the inside to talk to the internet. But that was not a wise choice, because the S machine really shouldn’t ever be talking to the internet. Good network design would have blocked outbound connections from the S machine’s IP, because S has no legitimate reason for doing so. This is the principle of least-privilege.

    Even better design would be defense in depth: even with the network’s firewall configured to reject any attempt by S to reach the internet, S should also be running its own firewall. Two > One. And that firewall can be a lot less complicated, because it only needs to deal with a single machine’s traffic. Did S need to have an SSH server? Turn that off, and then configure S’s firewall to drop port 22 inbound traffic.
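
    A host firewall at that level of simplicity is basically a first-match rule walk with default-deny. A sketch, using the hypothetical 10.0.0.5 as S:

```python
import ipaddress

# Hypothetical policy for S (10.0.0.5): LAN-only egress, no inbound SSH.
RULES = [
    # (direction, src, dst, dst_port, action); None matches any port
    ("out", "10.0.0.5/32", "10.0.0.0/24", None, "accept"),  # S may talk to the LAN
    ("out", "10.0.0.5/32", "0.0.0.0/0",   None, "reject"),  # ...and nowhere else
    ("in",  "0.0.0.0/0",   "10.0.0.5/32", 22,   "drop"),    # nobody SSHes to S
]

def evaluate(direction, src, dst, dst_port):
    """First matching rule wins; anything unmatched is dropped (default-deny)."""
    for d, r_src, r_dst, r_port, action in RULES:
        if d != direction:
            continue
        if ipaddress.ip_address(src) not in ipaddress.ip_network(r_src):
            continue
        if ipaddress.ip_address(dst) not in ipaddress.ip_network(r_dst):
            continue
        if r_port is not None and r_port != dst_port:
            continue
        return action
    return "drop"
```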

    TL;DR: check your firewall settings. Imagine ways to break into your own network. Fix what you find.


  • If I understand the Encryption Markdown page, it appears the public/private keypair primarily protects the data at rest? But then both keys are stored on the server, albeit protected by the keys’ passphrase.

    So if the protection boils down to the passphrase, what is the point of having the user upload their own keypair? Are the notes ever exported from the instance while still being encrypted by the user’s keypair?

    Also, why PGP? PGP may be readily available, but it’s definitely not an example of user-friendliness, as evidenced by its failure to gain broad adoption outside of tech and government circles.

    And then, why RSA? Or are other key algorithms supported as well, like ed25519?