Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 491 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • They’re just examples of things you could pipe curl into, but no, not really. If the download fails you end up with an incomplete file in your tmpfs anyway and have to retry. Another use I have is curl | mysql to restore a database backup.
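
    Something like this, with a made-up URL and database name (assumes credentials are in ~/.my.cnf):

      # stream a compressed dump straight into the database, nothing staged on disk
      curl -sSf https://backups.example.com/db.sql.gz | gunzip | mysql mydb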

    If the server supports resuming, I guess that can be better than the pipe, but that still needs temporary disk space, and downloads rarely fail. You can’t corrupt downloads over HTTPS either as the encryption layer would notice it and kill the connection, so it’s safe to assume if it downloaded in full, it’s correct.

    With downloads being IO bound these days, it’s nice not to have to write the archive to disk, read it all back, and then write the extracted files out afterwards. Piping it writes the final files only once.

    That’s far from the weirdest thing I’ve done with pipes though: I’ve installed Windows 11 on a friend’s PC across the ocean with a curl | zstd | pv | dd, and it worked. We tried like 5 different USBs and different ISOs and I gave up, so I just installed it in a VM and shipped the image.
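
    Roughly this, run from a live environment on the target machine (URL and target disk are made up):

      # -s keeps curl quiet, zstd -dc decompresses to stdout, pv shows progress, dd writes the raw image
      curl -s https://files.example.com/win11.img.zst | zstd -dc | pv | sudo dd of=/dev/nvme0n1 bs=4M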


  • I’ve had to use that flag.

    --silent is useful when you don’t want the progress bar or you’re piping curl into something else. I like to do curl | tar -zxv to download and decompress at the same time, and I’ve even done tar -zc | curl to upload a backup without using any disk space to do it.
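
    For example (URLs are placeholders, and the upload assumes the server accepts PUT):

      # download and extract in one go, nothing staged on disk
      curl -sS https://example.com/release.tar.gz | tar -zxv
      # and the reverse: stream a backup up to a server without a local archive
      tar -zc ~/stuff | curl -sS -T - https://example.com/backups/stuff.tar.gz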

    The problem, however, is that it’s really silent: if it fails, it exits with a non-zero code and that’s it.    Great when you don’t want debug info to interfere, annoying when you need to debug it.

    So you can opt in to printing some errors while in silent mode, but otherwise stay silent.
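
    With curl that’s the -S/--show-error flag, e.g. (placeholder URL):

      # --silent hides the progress meter, --show-error still prints a message if the transfer fails,
      # and --fail makes curl error out on HTTP errors instead of piping the error page into tar
      curl --silent --show-error --fail https://example.com/app.tar.gz | tar -zx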






  • For all its flaws and mess, NFS is still pretty good and used in production.

    I still use NFS to file share to my VMs because it still significantly outperforms virtiofs, and since the network is just a local bridge, latency is practically non-existent.
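
    The setup is basically just this (paths and the 192.168.122.0/24 bridge subnet are examples):

      # host: export the directory to the VM bridge network
      sudo exportfs -o rw,sync,no_subtree_check 192.168.122.0/24:/srv/share
      # guest: mount it over the bridge
      sudo mount -t nfs 192.168.122.1:/srv/share /mnt/share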

    The thing with rsync is that it’s designed to compute the minimal amount of data to transfer to sync files over a remote (possibly high-latency) link. So when it comes to backups, it’s literally designed for exactly that.
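
    A typical invocation, with made-up paths and hostname:

      # -a preserves permissions/times/symlinks, -H hardlinks, -A ACLs, -X xattrs;
      # --delete mirrors deletions so the backup matches the source
      rsync -aHAX --delete /home/ backup:/backups/laptop/home/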

    The only cool newer alternative I can think of is to use btrfs or ZFS and do btrfs/zfs send | ssh backup btrfs/zfs recv, which is the most efficient and reliable way to back up, because the filesystem knows exactly what changed and can send exactly that set of changes. And all the special attributes carry over: hardlinks, ACLs, SELinux contexts, etc.
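
    An incremental run looks roughly like this (dataset and snapshot names are made up, and it assumes the previous snapshot already exists on both ends):

      # take a new snapshot, then send only the delta since the last one
      zfs snapshot tank/home@2024-06-01
      zfs send -i tank/home@2024-05-01 tank/home@2024-06-01 | ssh backup zfs recv tank-backup/home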

    The problem with backups over any kind of network share is that if you’re going to run rsync against it anyway, every file it compares has to be read over the share, so the latency is horrible and it takes forever.

    Of course you can also mix multiple things: rsync laptop to server periodically, then mount the server’s backup directory locally so you can easily browse and access older stuff.
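
    For example with sshfs (host and paths made up):

      # browse old backups from the laptop without copying anything back
      mkdir -p ~/mnt/backups
      sshfs backup:/backups/laptop ~/mnt/backups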



  • Aside from the other answers: no, you can’t offload computations to memory. Memory stores data, it doesn’t compute.

    The only way having more memory can improve performance is through caching: keeping a copy of files in RAM so they don’t have to be fetched from disk, and letting applications cache the results of heavy but reusable computations. (Unless you run out of memory and start spilling over to disk; then more memory will make things fast again by avoiding swapping.)

    I mean, I guess technically yes, you could transcode to H.264 into a tmpfs mount and then play the H.264, but you’re still not transcoding any faster, and certainly not fast enough to watch in real time; you’re just decoding the AV1 well in advance of actually watching it.
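
    If you really wanted to, it would look something like this (file names and tmpfs size are made up):

      # pre-transcode into RAM, then watch the H.264 copy afterwards
      mkdir -p /tmp/ram && sudo mount -t tmpfs -o size=8G tmpfs /tmp/ram
      ffmpeg -i movie-av1.mkv -c:v libx264 -preset fast -c:a copy /tmp/ram/movie-h264.mkv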



  • The main issue you’ll run into is niche proprietary software being hard to install, but that’s what containers are for. The main one I see is needing to install some proprietary VPN client, which gets annoying, but since you’ll be running a VM anyway you can do some network trickery. My work’s antivirus only works on Ubuntu and RHEL; it’s proprietary kernel modules, so it has to be one of those kernels.
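
    Something like distrobox tends to cover that kind of software (box name and image are just examples):

      # run the proprietary tool inside a regular Ubuntu container instead of layering it on the host
      distrobox create --name work --image ubuntu:22.04
      distrobox enter work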

    Linux is Linux, nothing’s impossible to solve even with Bazzite’s immutability. Worst comes to worst, you make your own images, and it’s not that hard: you basically just fork it on GitHub and let the CI do its thing.

    But do you have time to fiddle to make it work and take the risk, or do you want to play it safe? How confident are you with Bazzite’s more advanced topics?






  • What do you want the UI for? For configuration it’s usually meh, because it’s the kind of thing you configure by config file, often even generated config files. Stats are where it gets interesting; usually third-party options like Grafana are used, along with something like Prometheus to collect the metrics.

    When it comes to easy configuration, newer options go for the zero-configuration angle rather than a nice UI to configure it. Just add some Docker labels and Traefik automagically configures itself, so the UI is just for viewing information.
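
    Something like this is all Traefik needs to start routing to a container (hostname and image are placeholders):

      # the labels on the container are the whole configuration
      docker run -d --label traefik.enable=true \
        --label 'traefik.http.routers.myapp.rule=Host(`app.example.com`)' \
        traefik/whoami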



  • Few of them for most use cases, especially a VPS. My server has a couple of IPs, each mapping to a different VM; they can all claim 22/80/443 as you’d expect, but that’s basically just the same as having a bunch of VPSes anyway.

    It’s useful for some other things though: for example, I might want to dedicate an IP to a VPN exit that doesn’t expose any services.

    Another use is when you just want two things to stay entirely separate, even if on a technical level it could work behind one reverse proxy. It can eliminate some classes of exploits, like request smuggling.

    One use case I’ve had for a customer is a system that can only do TLSv1.0, which is wildly obsolete and exploitable. So that particular API endpoint is served from a secondary IP, which lets me keep enforcing TLSv1.2+ on the primary IP. It would have been possible with some reverse proxy magic in HAProxy, but I could also just add a new server block in the existing NGINX bound to that IP and call it a day.
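
    The extra server block is basically just this sketch, with placeholder IPs, hostname and paths:

      server {
          listen 203.0.113.7:443 ssl;              # secondary IP, just for the legacy client
          server_name legacy-api.example.com;
          ssl_certificate     /etc/nginx/ssl/legacy.crt;
          ssl_certificate_key /etc/nginx/ssl/legacy.key;
          ssl_protocols TLSv1;                     # only here; the primary IP stays TLSv1.2+
          location / {
              proxy_pass http://127.0.0.1:8080;    # hypothetical backend for that API
          }
      }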


  • The performance is a good point. You can do the striped mirror with ZFS too and still get the advantages of ZFS.

    I think you can do all of that through the Proxmox UI, but it shouldn’t be too hard to do on the CLI either: you just make two mirror vdevs in the same pool and you’re good to go. ZFS automatically distributes the load across the two mirrors.
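
    On the CLI it’s a single command (disk names are examples):

      # two mirror vdevs in one pool; ZFS stripes writes across them
      zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd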


  • Max-P@lemmy.max-p.me to Selfhosted@lemmy.world · First time software set up help · 6 months ago

    I’d probably do RAID-Z with ZFS rather than RAID10: better space utilization and better error correction. You should be able to easily set that up in the Proxmox web UI.
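
    The CLI equivalent, if you’d rather not use the web UI (pool and disk names are examples):

      # single-parity RAID-Z across four disks
      zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd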

    Everything else sounds good. Don’t worry too much about it, you will find things you wish you did differently regardless, that’s part of the learning experience.