Lots of people have mentioned rsync, restic, borgbackup, and others, but which would be best for backing up Nextcloud, Immich, and Radicale? Do all of them have a method of automatically backing up every X days/weeks? Why use one over the other, and what are the differences?

  • un_ax@lemmy.sdf.org · 3 hours ago

    Here are some features off the top of my head that some backup software might have that others don’t, or that you’ll want to consider when choosing or making a system:

    • Application-aware backups: e.g. DB backups without shutting down the database. Could also be hypervisor/container awareness.
    • Restore: The ability to automatically restore files or whole systems, possibly to a new location.
    • Application-aware restore/browsing: Being able to pull individual files from a backup, or accounts from a directory system.
    • Backup copy: Automatically copying the backup to multiple destinations, disk or S3.
    • Retention: Automatically keeping a set number of backups, often including a number of weekly or monthly historical long-term copies.
    • Backup diffs: Keeping backups in a way that only stores the changed data rather than a full copy.
    • Compression: Compressing the stored data.
    • Immutability: Keeping backups in a way that allows a (usually cloud) storage provider to lock files for X amount of days to prevent malicious deletions.
    • Encryption: Encrypting your backups if they’re kept on someone else’s infrastructure or in a non-secured area.
    • Verification: Checking that the backups are intact and not corrupt.
    • Control panel: A single place to view the progress and completeness of all backups.
    • Alerts: Notifications for failed backups, or hooks on successful backups for healthchecks.
    • Virus scans: Making sure a backup is malware-free before restore.
    • Retries: Ability to retry backups or copies in case of temporary outages before sending alerts.

    I’m not sure which of these exist in free software; my experience is mostly with enterprise software. A backup system can be as simple as an rsync/zip cron job or a fully integrated system, depending on what you need.
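
    For the simple end of that spectrum, a hypothetical nightly cron script might look like this. It is only a sketch: the container name, database name, and paths are placeholders, and it assumes a Postgres container.

    ```bash
    #!/usr/bin/env bash
    # Hypothetical nightly backup: dump the DB, archive the data dir, keep 14 days.
    set -euo pipefail

    DEST=/mnt/backup/nextcloud          # placeholder destination
    STAMP=$(date +%F)

    # "Application aware": dump the database instead of copying live DB files.
    docker exec nextcloud-db pg_dump -U nextcloud nextcloud > "$DEST/db-$STAMP.sql"

    # Archive and compress the data directory.
    tar -czf "$DEST/data-$STAMP.tar.gz" /srv/nextcloud/data

    # Retention: delete copies older than 14 days.
    find "$DEST" -type f -mtime +14 -delete
    ```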

  • Vanilla_PuddinFudge@infosec.pub · 3 hours ago

    I use restic in combination with rsync.

    Two days ago, I tried to set up unbound and fucked up my Nextcloud instance on the same host.

    Restic restored /opt, /etc, /home and /var, and then I used rsync to divvy them all out. For some reason, restic didn’t recognize the --delete flag, so rsync it was.
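
    If anyone wants to copy that approach, a rough sketch (the repo path and directories are just examples):

    ```bash
    # Restore the latest snapshot into a staging directory.
    restic -r /mnt/nas/restic-repo restore latest --target /mnt/restore

    # Divvy the restored trees back out; --delete removes files that
    # should no longer exist on the live system.
    for dir in opt etc home var; do
        rsync -aAX --delete "/mnt/restore/$dir/" "/$dir/"
    done
    ```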

    Rebooted, waited 3 minutes, reloaded, and there was my Nextcloud login screen. Cleaned up the database using occ commands and I’m back.

    My restic repo sits on my main NAS, and a copy of it on another system. It holds all of my host’s aforementioned directories for easy setup and restore from either bad luck or dumb ideas.

    …usually dumb ideas.

  • fmstrat@lemmy.nowsci.com · 5 hours ago

    ZFS. Since it backs up at the block level, moving giant files means you only send a few K for the backup, unlike other solutions like rsync, etc.
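
    A minimal sketch of that kind of incremental snapshot replication (pool and dataset names are made up):

    ```bash
    # Take a new snapshot, then send only the blocks changed since the previous one.
    zfs snapshot tank/data@2024-06-02
    zfs send -i tank/data@2024-06-01 tank/data@2024-06-02 | \
        ssh backup-host zfs receive -F backup/data
    ```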

  • starshipwinepineapple@programming.dev · 7 hours ago

    Haven’t used all of those, but my recommendation would be to just start trying them. Start small, get a feel for it, and expand usage or try a different backup solution. You should be able to do automatic backups with any of them, either directly or by setting up your own timer/cron jobs (which is how I do it with rsync).
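
    If you go the cron route, the schedule can be a single crontab line, something like this (paths and host are placeholders):

    ```bash
    # crontab -e
    # Every Sunday at 02:00, mirror /srv to the backup host over SSH.
    0 2 * * 0  rsync -aAX --delete /srv/ backupuser@backup-host:/backups/srv/
    ```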

  • drkt@scribe.disroot.org · 10 hours ago

    There’s a balance to pick between ease of use, ease of recovery, and security. You have to define exactly what you want, and then look at what solutions are available to do that.

    I wrote my own bash script for rsync that simply pulls copies of the vital folders and files over SSH from the machines I want backups of. Then it pushes a copy of all of that to an offsite location (an in-city friend’s house). There is no encryption at rest, because I chose ease of recovery over security. I also trust my friend, and there really isn’t anything that would totally compromise me if that hard drive became available on the internet. There also aren’t multiple versions of old files, and if a file is deleted, it is gone, because I don’t need that feature from my backup system.
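
    A stripped-down sketch of that kind of pull-then-push script (hosts and paths are examples, not the actual setup):

    ```bash
    #!/usr/bin/env bash
    set -euo pipefail

    HOSTS=(web.lan db.lan)
    BACKUP_ROOT=/srv/backups

    # Pull the vital directories from each machine over SSH.
    for host in "${HOSTS[@]}"; do
        rsync -aAX "root@$host:/etc/" "$BACKUP_ROOT/$host/etc/"
        rsync -aAX "root@$host:/srv/" "$BACKUP_ROOT/$host/srv/"
    done

    # Push a copy of everything to the offsite machine.
    rsync -aAX "$BACKUP_ROOT/" offsite.example.org:/srv/backups-mirror/
    ```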

    Define your needs, then shop around. No one solution does everything easily.

    • PlutoniumAcid@lemmy.world · 10 hours ago

      Simple file copying is easy and smart.

      What do you do about databases? I’m guessing you are running some containers that have a database, like paperless and many others.

      • drkt@scribe.disroot.org · 9 hours ago

        I’m not backing up any databases that are so intensively used that I can’t live-copy them. Most of my databases (SQLite) sit idle until I explicitly do something to them, and SQLite doesn’t really care unless something is actively writing to the database.
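
        For anyone nervous about a write landing mid-copy, SQLite’s online backup command gives a consistent copy even while the file is in use (paths are examples):

        ```bash
        # Uses SQLite's online backup API, so the copy is consistent even during writes.
        sqlite3 /srv/app/data.db ".backup '/srv/backups/data.db'"
        ```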

        • WhatAmLemmy@lemmy.world · 7 hours ago

          This is why I switched everything to single-disk ZFS. The ability to snapshot everything with zero downtime (including data, in case I ever misconfigure something), and to replicate all of it to other ZFS drives offsite in the most efficient way possible, including encrypted data without transferring the keys, was a no-brainer.

          It isn’t a full backup strategy, but it has features that no other backup software can do anywhere near as easily or efficiently.
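
          The “without transferring the keys” part is a raw send; a minimal sketch with made-up dataset names:

          ```bash
          # Replicate an encrypted dataset in raw form: blocks stay encrypted
          # in transit and at rest, and the remote never needs the key.
          zfs snapshot tank/secrets@daily-2024-06-02
          zfs send -w tank/secrets@daily-2024-06-02 | \
              ssh offsite-host zfs receive backup/secrets
          ```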

      • confusedpuppy@lemmy.dbzer0.com · 9 hours ago

        Container databases seem as simple as shutting down the container, running a backup and then starting the container again. My experience is only from hosting a Lemmy/PieFed instance, but I did make many backups and restores with no issues to the database. It all worked as I intended it to work.

        I would imagine a similar process for non-container databases: stop, back up, restart. Although someone with more experience would be better placed to answer that.
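
        As a sketch of the stop/backup/start pattern with Compose (paths and names are placeholders):

        ```bash
        # Stop the stack, copy the database's data directory, start everything again.
        docker compose -f /srv/app/docker-compose.yml stop
        tar -czf /srv/backups/db-$(date +%F).tar.gz /srv/app/postgres-data
        docker compose -f /srv/app/docker-compose.yml start
        ```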

  • nfreak@lemmy.ml · 10 hours ago

    Not sure about other options, but Backrest has worked wonderfully for me since day 1. It’s basically just a GUI for restic. My only complaints are that jobs can’t be assigned to multiple repos and you can’t edit a job’s name or repo once created. Aside from those quirks, it works fine - I have daily, weekly, monthly, and manual jobs set up across both servers and my desktop; basically just set it and forget it.

    • VeryFrugal@sh.itjust.works · 9 hours ago

      It’s a GUI and a bit more than that: restic’s biggest pain point is that periodic runs have to be set up manually, and Backrest does that for you.

      Personally I’m using resticprofile, just because it also works (and has nice webhook support), but I also use Backrest and highly recommend it.

  • FlexibleToast@lemmy.world · 10 hours ago

    I use borg with borgmatic. Heck, if you’re using Nextcloud AIO, borg is built in. It takes incremental, deduplicated backups over SSH. I like it because it’s mostly just setting up SSH and a config file.

    More specifically, I use Nextcloud’s built-in borg and these two containers:

    Borgmatic

    Borgserver

    Edit: I forgot that for my personal devices I use PikaBackup, which also uses Borg.
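
    For a rough idea of what borgmatic automates under the hood, the plain borg workflow looks something like this (the SSH repo path is an example):

    ```bash
    # One-time: create an encrypted repository on the backup host.
    borg init --encryption=repokey ssh://backup-host/./borg-repo

    # Each run: an incremental, deduplicated archive of the chosen paths.
    borg create --stats ssh://backup-host/./borg-repo::'{hostname}-{now}' /etc /srv

    # Retention: thin out old archives.
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://backup-host/./borg-repo
    ```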

  • The Zen Cow Says Mu@infosec.pub · 9 hours ago

    I use borg (via borgmatic) to back up my home server to my home NAS. The only major disadvantage of borg is that it requires running borg on the receiving end as well, so it doesn’t work with a lot of cloud storage providers like S3.

    Restic can work with most everything as a backup target.
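
    For example, restic can talk to an S3 bucket directly, with nothing running on the storage side (bucket name and credentials are placeholders):

    ```bash
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...

    # Initialize once, then back up straight to the bucket.
    restic -r s3:s3.amazonaws.com/my-backup-bucket init
    restic -r s3:s3.amazonaws.com/my-backup-bucket backup /srv /etc
    ```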

  • doeknius_gloek@discuss.tchncs.de · 11 hours ago

    The question you’re asking is too broad. Every tool somehow differs from the others, but listing all differences requires in-depth knowledge of each tool and a lot of time.

    At the end of the day, every tool somehow backs up your data. CLI interfaces, encryption algorithms, deduplication logic, supported backends, underlying programming languages and a lot more may differ. Identify what’s most important to you, test different solutions and then use the tool that works best for your use-case.

  • uranibaba@lemmy.world · 11 hours ago

    I am currently looking into borg because it can take incremental backups. I just need to figure out how I should handle a running system: whether I need to shut down all my Docker containers, or whether there is some kind of snapshot function I can use.

    From what I read in their FAQ, Borg cannot verify integrity, so I would need to turn everything off during the backup process. A filesystem like ZFS could have solved that problem (can’t find the link, something about shadow copies I think?), but since I don’t have a backup yet, nor physical access, I need to work with what I have.

    I think I will set it to take a backup every night.

    EDIT: Maybe it can verify integrity? Still trying to find information on my use case. https://borgbackup.readthedocs.io/en/stable/usage/check.html
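
    For what it’s worth, a rough sketch of how a nightly run could combine both concerns, stopping containers for consistency and using borg check for integrity (paths are placeholders):

    ```bash
    #!/usr/bin/env bash
    # Hypothetical nightly job: quiesce containers, back up, restart, verify.
    set -euo pipefail

    docker compose -f /srv/stack/docker-compose.yml stop
    borg create --stats /mnt/backup/borg-repo::'{hostname}-{now}' /srv/stack
    docker compose -f /srv/stack/docker-compose.yml start

    # Check repository consistency, and optionally the data itself.
    borg check --verify-data /mnt/backup/borg-repo
    ```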

  • confusedpuppy@lemmy.dbzer0.com · 10 hours ago

    I personally use rsync since I do most of my work from the command line these days. It’s taken nearly half a year to really understand it, but it offers the flexibility I desire.

    I have a small network with only a handful of devices. I keep all my incremental backups on encrypted partitions and encrypted detachable SSDs which I manually decrypt. rsync is set up to use SSH, so there’s some form of encrypted transfer, but that’s not actually a priority for me, just an added benefit.

    I also use rsync to sync files and directories across multiple systems while maintaining their system attributes. That is to say, what’s root- or user-accessible stays root- or user-accessible after the transfer is complete.
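
    For reference, the flags that preserve ownership, permissions, hard links, ACLs and extended attributes, plus hard-linked incremental snapshots, look roughly like this (paths and dates are examples):

    ```bash
    # -a: owners, groups, permissions, times; -H: hard links; -A/-X: ACLs and xattrs.
    # --link-dest hard-links unchanged files against the previous snapshot, so each
    # dated directory is a full tree but only changed files consume new space.
    rsync -aHAX -e ssh \
        --link-dest=/mnt/backup/2024-06-01 \
        root@server:/srv/ /mnt/backup/2024-06-02/
    ```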

    If I desired more protection, I’d probably look into Borg backup. Currently I just use encryption as an annoyance/deterrence method. I also stick to the base rsync command, because every other option I tried brought with it complexities that have all failed me. I at least have high confidence in my backup/restore process now.

  • Fermiverse@gehirneimer.de · 11 hours ago

    Syncthing from mobile syncs directly to a ZFS mirror on the home server as soon as the phone is on the home WLAN. Same with PC files.

    rclone syncs to a cloud server via a weekly cron job, encrypted.
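
    That rclone leg could be as simple as one cron line, assuming a crypt remote is already configured (remote and paths are placeholders):

    ```bash
    # crontab: every Sunday at 04:00, push the backups to the encrypted cloud remote.
    0 4 * * 0  rclone sync /tank/backups cloud-crypt:backups --transfers 4
    ```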

    Security cams sync directly, encrypted, to ZFS and the cloud as soon as something is recorded.

  • JoeKrogan@lemmy.world · 9 hours ago

    I do monthly backups of my containers with cron, tar and Syncthing.

    I do quarterly backups of my server (14 TB) to external USB HDDs. This is done via a script that mounts the drives, runs rsync to copy, then unmounts the drives again and emails me when it is done. I don’t bother encrypting them as it is mainly just media.
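
    A bare-bones sketch of that mount/copy/unmount/notify pattern, assuming the drive has an fstab entry and local mail is configured (all names illustrative):

    ```bash
    #!/usr/bin/env bash
    set -euo pipefail

    mount /mnt/usb-backup
    rsync -aAX --delete /srv/media/ /mnt/usb-backup/media/
    umount /mnt/usb-backup

    echo "Quarterly media backup finished $(date)" | mail -s "Backup done" me@example.com
    ```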