

yeah, I adopted it last year and I probably wouldn’t pick it today. Glad that, despite that, in the end it’s just S3-compatible storage, so it’s not too difficult to replace.


We do it for an immediate benefit, not for some hypothetical apocalyptic scenario born of a half-baked conspiracy theory.
It’s a bit like calling people who camp in the woods, fish, or rock climb “preppers” because those would be useful skills after the collapse of modern civilization.


this is one of the most misused templates
meh, use whatever the fuck you want
there, I can swear too
That usually works IME; you just need to drag the sliders to pick a more general selector than an ID or a specific class.


RDR2


Yeah, if they fail twice in a row when I know I completed them correctly, I don’t bother a third time.
fwiw, I used Kopia for around a year, but eventually the backup got corrupted with a BLOB not found error and there was no way to fix it.
similar to this issue, except that nothing would fix or improve the situation https://github.com/kopia/kopia/issues/1087
and because it seemed to be an issue with the repo (not just with a snapshot), the remote copy was also borked. I couldn’t even list the snapshots.
I’ve since migrated to Rustic (though Restic might be more reliable today).
This seems to be a similar issue too, but I was nowhere near the scale of this user. There are other similar reports that may or may not be linked to the same root cause, so it’s hard to say how rare this problem is.


Isn’t that creating hard links between source and dest? Hard links only work within the same filesystem. And I’m not sure how that gives you “time travel”, as in, browsing snapshots or file states at the different points in time you ran rsync.
Edit: ah, the hard links are between dest and the --link-dest directory, that makes more sense (see the sketch below).
I wouldn’t put filesystem compression and backup compression in the same bucket, because they have vastly different requirements. Backup compression doesn’t need to be optimized for fast decompression.
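For anyone curious, here’s roughly how the --link-dest “time travel” works; this is just a sketch with made-up paths, not a recommendation:

```sh
#!/bin/sh
# Sketch: incremental "snapshots" with rsync --link-dest (example paths).
# Files unchanged since the last run are hard-linked into the new snapshot,
# so they take no extra space; changed files are copied in full.
SRC="$HOME/data/"                      # what to back up (placeholder)
DEST="/mnt/backup"                     # backup drive (placeholder)
NEW="$DEST/$(date +%Y-%m-%d_%H%M%S)"   # one browsable directory per run

rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$NEW"

# Point "latest" at the newest snapshot for the next run.
ln -sfn "$NEW" "$DEST/latest"
```

Each dated directory looks like a full copy you can browse, but unchanged files are shared through hard links, which only works while everything stays on the same filesystem.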


exes


yeah, more often than not the bottleneck I notice is the storage drive itself, not rsync.


yeah, it doesn’t; it’s just for file transfer. It’s only useful if transferring files somewhere else counts as a backup for you.
To me, the file transfer is just a small component of a backup tool.


It works fine if all you need is transfer; my issue is that it’s just not efficient. If you want a “time travel” feature, your only option is to duplicate data. Differential backups, compression, and encryption for off-site copies are where other tools shine.
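To illustrate with restic (just an example; the paths are placeholders and the flags should be double-checked against the docs), a repository gives you encrypted, deduplicated, incremental snapshots without storing unchanged data twice:

```sh
# Example restic workflow (assumed paths; see the restic docs for details).
restic init --repo /mnt/backup/repo               # create an encrypted repository
restic -r /mnt/backup/repo backup "$HOME/data"    # only new/changed chunks get stored
restic -r /mnt/backup/repo snapshots              # list the points in time you can restore
restic -r /mnt/backup/repo restore latest --target /tmp/restore
```

The same repository can also live on an SFTP or S3-compatible remote for off-site copies, which is exactly where rsync alone falls short.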


I tried portainer for a while, but it was almost useless to me, as I’d always end up in the command line anyway. So I dropped that and any other dashboard idea.


Yeah, I have everything as compose.yaml stacks and those stacks + their config files are in a git repo.
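For example (purely illustrative, the stack names and layout are made up), each stack is just a directory holding its compose.yaml and config, and deploying is a pull plus an up:

```sh
# Hypothetical git repo layout, one directory per stack:
#   stacks/
#   ├── nextcloud/
#   │   ├── compose.yaml
#   │   └── config/
#   └── vaultwarden/
#       └── compose.yaml
cd stacks/nextcloud
git pull                  # config changes arrive through git
docker compose up -d      # (re)create the stack from compose.yaml
```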


yeah, not just video; embedding crap from social media is the best way to make your article irrelevant in a few years, when that crap breaks or gets put behind a login so they can stalk users.
There’s no reason to embed a tweet in your article. If it’s really relevant, just include a screenshot instead.


it might be worth watching this PR for memos, which adds encryption at rest. I can’t vouch for it, as I haven’t read the code, but I do use memos and might consider this if it’s merged.
https://github.com/usememos/memos/pull/5130
This is not E2EE, but I don’t think E2EE is that important if you’re hosting your own data. And clients can use TLS for encryption in transit.


cleaning is included in the price (which is also charged daily, btw), so fuck tipping. That’s some real audacity from management to leave a passive-aggressive note about tipping their staff for doing their job.


lol the readme reads “a not so terrible” but the repo description reads like