Infrastructure nerd, gamer, and Lemmy.ca maintainer
Trailer for the Netflix movie they’re talking about - https://www.youtube.com/watch?v=lM_hkJ0Rl-c
Picking obscure ports doesn’t really add security. Are you using authentication?
That’s correct. You’re telling Docker to bind to that specific network interface. The default is 0.0.0.0, which listens on all interfaces.
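For example, to bind to loopback only (image and port numbers here are just to show the syntax):

```
# Only reachable from the host itself, not from other machines
docker run -p 127.0.0.1:8080:80 nginx

# Equivalent docker-compose.yml entry:
# ports:
#   - "127.0.0.1:8080:80"
```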
You could just swap the two disks and see if the errors follow the drive or the link.
If they follow the drive, RMA it. I don’t put a lot of faith in SMART data.
Usually means a failing drive in my experience.
Look at workstation cards, such as the T1000.
NTFS isn’t going to care about, or even be aware of, the hypervisor’s filesystem; ZFS or Btrfs would both work fine.
Making sure you don’t have misaligned sectors is pretty much the only major pitfall. Make sure you use paravirtualized storage and network drivers.
Edit: I just realized you’re asking for the opposite direction, but ultimately the same guidelines apply. It doesn’t matter what filesystems are on what, with the above caveats.
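As a rough sketch of what that looks like with QEMU/KVM (file path and memory size are placeholders; a Windows guest also needs the virtio drivers installed before it can see virtio devices):

```
# Guest disk on the virtio bus, plus a virtio NIC
qemu-system-x86_64 \
  -enable-kvm -m 4096 \
  -drive file=/tank/vms/windows.qcow2,if=virtio,cache=none \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0
```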
There’s nothing stopping a browser from salting a hash. Salts don’t need to be kept secret, but you should use a new random salt per user.
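A minimal sketch with the browser’s Web Crypto API (function name and salt size are my own choices; for actual password storage you’d want a real KDF like PBKDF2 or Argon2 rather than a bare SHA-256):

```
// Hash the salt concatenated with the password; store salt + hash together.
async function saltedHash(password: string, salt: Uint8Array): Promise<string> {
  const data = new Uint8Array([...salt, ...new TextEncoder().encode(password)]);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return [...new Uint8Array(digest)]
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}

// Fresh random 16-byte salt for each new user
const salt = crypto.getRandomValues(new Uint8Array(16));
```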
Someone hasn’t learned to block out a lunch hour for themselves.
Never attribute to malice that which is adequately explained by stupidity
Had a ZFS array on an Adaptec RAID card. On reboot the partition table would get trashed and block the ZFS pool from coming up, but running fdisk against the disk would recover it from the backup table.
Had a script run on reboot that just did “fdisk -l” on every disk, then brought up the ZFS pool. Worked great for years until I finally did a kernel upgrade that resolved it.
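Something along these lines (pool name is a placeholder):

```
#!/bin/sh
# Poke every disk with fdisk so the backup partition table gets read back,
# then import the pool.
for disk in /dev/sd?; do
  fdisk -l "$disk" > /dev/null
done
zpool import -a   # or: zpool import tank
```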
I’d believe it. I’ve deployed hundreds of Linux servers that don’t have any desktop GUI at all.
Linux desktop users make up an absolutely tiny fraction of Linux installs.
Yes. I’ve always splurged on nice cards for my personal stuff. I think it’s more about the write behavior of Linux than anything else, since I’ve never had a card die in my camera.
I refuse to use a Pi with an SD card at this point. Saving $50 isn’t worth my time to reinstall things.
I couldn’t count the number of failed SD cards I’ve seen on all my fingers and toes.
I’ve seen like 4 SSDs fail in my entire life. Plus you could just do an mdraid 1 mirror or Btrfs RAID1 across two of them if you want.
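For example (device names are placeholders):

```
# Btrfs RAID1 across two SSDs:
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb

# Or a classic mdraid mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
```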
Why not just connect an SSD via USB and save yourself the hassle and torment?
Yes, this is commonly done. If you google it, you’ll find a lot more info.
Pull one drive at a time and replace it with the new one, let ZFS resilver, and then do it again.
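Roughly this (pool and device names are placeholders):

```
zpool set autoexpand=on tank          # pool grows once every disk is swapped
zpool replace tank /dev/sdb /dev/sdd  # old disk, new disk
zpool status tank                     # wait for the resilver to finish, then repeat
```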
“it takes two”