I just got my home server up and running and was wondering what you guys recommend for backups. I figure it's probably worth having backups on external cloud servers; are there any good services y'all use for that?
Regardless of service, if you don’t test your backups, you have none.
Ehhh I would say then you have probabilistic backups. There’s some percent chance they’re okay, and some percent chance they’re useless. (And maybe some percent chance they’re in between those extremes.) With the odds probably not in your favor. 😄
Schrödinger’s backups.
Exactly.
Not so much about testing, but one time when I really needed to get to my backups, I’d lost the password to the repository (I’m using restic). Luckily a copy of it was stored in Bitwarden, but until I remembered that, it was one of the worst moments.
Needless to say, please test your backups and store secrets in more than one place.
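On that note, restic repositories can hold more than one key, so a second password stored somewhere else will also unlock the repo. A minimal sketch (the repository path is a placeholder):

```sh
# Add a second password (key) to an existing restic repository.
# You'll be prompted for a current password first, then the new one.
restic -r /srv/restic-repo key add

# List the keys registered on the repository.
restic -r /srv/restic-repo key list
```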
Restic or Kopia, both to Backblaze.
I second restic. I’ve been using it for a year now and have been generally very happy. I’ve actually had to use it on a couple of occasions to restore directory contents, and even to recover a complete workstation drive. I had relatively easy success in both scenarios.
I’ve always found them pretty similar. How’d you choose one over the other?
I knew Restic before Kopia and made a set of systemd units to run Restic backups on my home server and office workstation (both online 24/7).
Kopia seems much nicer for a regular user, so I use it on my own and my family’s laptops. I used to use Duplicati there, but that project seems dead.
Thank you :)
Restic and then rclone to Backblaze? Or is there a way to restic directly to Backblaze?
I use Restic + Resticprofile to back up everything and store it on my local HDD.
Then, I use Rclone to sync the local repository to Backblaze B2.
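Side note: restic can also write to B2 directly through its built-in backend, no rclone needed; I just like having the local repository as an intermediate copy. A minimal sketch, with placeholder bucket name and credentials:

```sh
# restic's native Backblaze B2 backend (bucket and keys are placeholders).
export B2_ACCOUNT_ID="<application-key-id>"
export B2_ACCOUNT_KEY="<application-key>"
restic -r b2:my-bucket:restic-repo init    # one-time repository setup
restic -r b2:my-bucket:restic-repo backup /home/deck/Desktop
```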
Here’s my general setup:
```
/.config/restic/
├── logs
│   ├── statuses
│   │   ├── restic-status-20230202T020202.json
│   │   └── restic-status-20230101T010101.json
│   ├── restic-check-20230202T020202.log
│   └── restic-backup-20230101T010101.log
├── config
│   ├── profiles.yaml
│   ├── excludes.txt
│   ├── rclone.conf
│   └── password.txt
└── bin
    ├── restic_0.15.2_linux_arm64
    ├── rclone_1.63.1_linux_arm64
    └── resticprofile_0.22.0_linux_arm64
```
version: "1" # Schedules (https://www.freedesktop.org/software/systemd/man/systemd.time.html#Calendar%20Events) {{ $SCHEDULE_RESTIC_BACKUP := "*-*-* 22:00:00" }} # Daily at 10PM {{ $SCHEDULE_RESTIC_CHECK := "Sat *-*-* 04:00:00" }} # Weekly at 4AM on Saturday {{ $SCHEDULE_SYNC_BACKUP := "Sun *-*-* 21:30:00" }} # Weekly at 11.30PM on Sunday {{ $SCHEDULE_POSTGRES_BACKUP := "Fri *-*-* 20:00:00" }} # Weekly at 8PM on Friday # Directories {{ $LOCATION_RESTIC_BINARY := "/home/deck/Desktop/.config/restic/bin/restic_0.15.2_linux_arm64" }} {{ $LOCATION_RESTIC_REPO := "/home/deck/Desktop/restic-repo" }} {{ $LOCATION_RESTIC_LOG := "/home/deck/Desktop/.config/restic/logs" }} {{ $LOCATION_RESTIC_STATUS := "/home/deck/Desktop/.config/restic/logs/statuses" }} {{ $LOCATION_RESTIC_BLOCKED_FILE := "/home/deck/Desktop/.config/restic/BLOCKED" }} {{ $LOCATION_RCLONE_BINARY := "/home/deck/Desktop/.config/restic/bin/rclone_1.63.1_linux_arm64" }} {{ $LOCATION_RCLONE_REPO := "bucket:restic-backup-12345" }} {{ $LOCATION_RCLONE_CONFIG := "/home/deck/Desktop/.config/restic/config/rclone.conf" }} {{ $LOCATION_RESTICPROFILE_LOCK := "/tmp/resticprofile-default.lock" }} {{ $LOCATION_POSTGRES_DUMP := "/home/deck/Desktop/dumps" }} {{ $LOCATION_PRIMARY_BACKUP_SOURCE := "/home/deck/Desktop/" }} # Configs {{ $CONFIG_CURRENT_TIME := .Now.Format "20060102T150405" }} {{ $CONFIG_RESTIC_PASSWORD := "/home/deck/Desktop/.config/restic/config/password.txt" }} {{ $CONFIG_RESTIC_EXCLUDE := "/home/deck/Desktop/.config/restic/excludes.txt" }} global: default-command: snapshots # Run 'snapshots' when no command is specified initialize: false # Do not initialize a repository if none exists priority: low # Use priority class on Windows and "nice" on Unixes min-memory: 100 # Minimum required RAM for Resticprofile to start restic-lock-retry-after: 5m # Retry failed restic command acquisition every 5 minutes restic-stale-lock-age: 10h # Unlock stale lock if age exceeds 10 hours restic-binary: '{{ $LOCATION_RESTIC_BINARY }}' # Location of the Restic binary default: lock: '{{ $LOCATION_RESTICPROFILE_LOCK }}' # Local lockfile to prevent concurrent profile runs force-inactive-lock: true # Detect and remove stale locks initialize: true # Initialize repository if it doesn't exist repository: '{{ $LOCATION_RESTIC_REPO }}' # Path to Restic repository password-file: '{{ $CONFIG_RESTIC_PASSWORD }}' # File containing repository password status-file: '{{ $LOCATION_RESTIC_STATUS }}/{{ $CONFIG_CURRENT_TIME }}-restic-status.json' # Output status file compression: 'max' # Maximum compression level run-after-fail: # Block syncing if there was a failure. TODO: Add an email - 'echo "The command ${PROFILE_COMMAND} has failed in ${PROFILE_NAME}. Please check the logs." 
> {{ $LOCATION_RESTIC_BLOCKED_FILE }}' backup: run-before: # Bring down Docker before backup - 'systemctl stop docker.socket' - 'systemctl stop docker' run-finally: - 'grep --invert-match -E "^unchanged|\(0 B added, 0 B stored\)|\(0 B added\)" {{ tempFile "backup.log" }} > {{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-backup.log' # Copy log file, stripping out any unchanced files - 'systemctl start docker' # Bring Docker back online after backup one-file-system: false # Exclude other file systems no-error-on-warning: true # Don't consider warnings as backup failures source: # Directories to back up - '{{ $LOCATION_PRIMARY_BACKUP_SOURCE }}' exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}' # File containing exclude patterns exclude-caches: true # Exclude cache files schedule: '{{ $SCHEDULE_RESTIC_BACKUP }}' # Backup schedule schedule-permission: system # Schedule permission schedule-lock-wait: 10m # Wait time for the lock during schedule schedule-log: '{{ tempFile "backup.log" }}' # Log file to /tmp. This contains all information, including unchanged files which we do not care about verbose: 2 # Log details about processed files check: schedule: '{{ $SCHEDULE_RESTIC_CHECK }}' # Verification schedule schedule-permission: system # Schedule permission schedule-lock-wait: 10m # Wait time for the lock during schedule schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-restic-check.log' # Log file read-data: true # Verify data during check prune: dry-run: true # Only prune if safe to do so, change manually repack-uncompressed: true # Repack all uncompressed data forget: dry-run: true # Only forget if safe to do so, change manually rewrite: dry-run: true # Only rewrite if safe to do so, change manually forget: true # Remove original snapshots after creating new ones exclude-file: '{{ $CONFIG_RESTIC_EXCLUDE }}' # File containing exclude patterns mount: allow-other: true # Allow other users to access the mount point rebuild-index: read-all-packs: true # Read all pack files to generate new index from scratch # The following shell profiles are simply to run other shell scripts at a scheduled time # We do not actually run the primary Restic commands listed, as we exit the process early shell-postgres: # Profile to run shell scripts only. We exit the current process before Restic can run. 
backup: schedule: '{{ $SCHEDULE_POSTGRES_BACKUP }}' # Postgres backup schedule schedule-permission: system # Schedule permission schedule-lock-mode: ignore # Ignore locks, if any schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-postgres-backup.log' # Log file dry-run: true # Don't write data run-before: # Dump postgres databases - 'chmod 777 /var/run/docker.sock' - 'docker exec -t immich-postgres pg_dumpall -c -U postgres | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Immich database: {{ $LOCATION_POSTGRES_DUMP }}/immich-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"' - 'docker exec -t joplin-postgres pg_dumpall -c -U joplin | gzip > "{{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz" && echo "Dumped Joplin database: {{ $LOCATION_POSTGRES_DUMP }}/joplin-dump-{{ $CONFIG_CURRENT_TIME }}.sql.gz"' - 'kill $$' shell-sync: backup: schedule: '{{ $SCHEDULE_SYNC_BACKUP }}' # Sync backup schedule schedule-permission: system # Schedule permission schedule-lock-mode: ignore # Ignore locks, if any schedule-log: '{{ $LOCATION_RESTIC_LOG }}/{{ $CONFIG_CURRENT_TIME }}-rsync-backup.log' # Log file dry-run: true # Don't write data run-before: # Sync the Restic repo, after checking if the repository is in good health - 'if [ -f "{{ $LOCATION_RESTIC_BLOCKED_FILE }}" ]; then echo "There has been a problem with the Restic repository, please check the logs. If everything is okay, delete the BLOCKED file." && kill $$; fi' - '{{ $LOCATION_RCLONE_BINARY }} -v sync {{ $LOCATION_RESTIC_REPO }} {{ $LOCATION_RCLONE_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }} --b2-hard-delete' - '{{ $LOCATION_RCLONE_BINARY }} cleanup {{ $LOCATION_RESTIC_REPO }} --config={{ $LOCATION_RCLONE_CONFIG }}' - 'kill $$'
Resticprofile doesn’t let me run arbitrary shell commands on a schedule, and because I wanted everything in a single configuration, I just created two extra profiles that call the backup command. The shell commands run before Restic does, and then I kill the instance before Restic actually gets to run, which effectively does what I needed.
This is the first time I’ve heard of resticprofile, and it looks nice. So far I’ve been using crestic for configuration files. Do you know how they compare?
They seem to have the same objective: making Restic easier to configure. I’d never heard of Crestic until now. I’d say stick with whatever you’re comfortable with.
I use restic to back up my Raspberry Pis to my Synology NAS, and back up my NAS to Backblaze.
Backblaze B2, borgbase.com. There are also programs like Déjà Dup that will let you back up to popular cloud drives. The alternatives are limitless.
What’s the 2-2-1 rule?
3 different copies of the data in 2 different locations is 1 actual backup (and it’s actually 3-2-1, not 2-2-1…): e.g. the live data on your server, a copy on a local USB drive, and a copy off-site in the cloud.
Backblaze.
I use Wasabi S3 and back up to it using restic.
Removed by mod
That’s great for hardware failures, but what about disasters? I’d hate to lose my house to a fire and, at the same time, all the data on my server (including irreplaceable things like family photos) because my primary copy and my backup were both destroyed.
Removed by mod
While I agree with you, hard drives do have a shelf life. Exactly how many years is up for debate, but it exists. If you don’t have multiple drives of different ages, you may be in a world of hurt one day.
I have a hot storage NAS that backs up to a warm storage NAS.
I back up every week and scrub every month.
I have 2 x RAIDZ1 pools with 3 x 20TB disks each.
With ECC RAM, scrubbing, and independent pools, it’ll take a house fire to kill my local storage.
I also back up continuously to Backblaze, and make a yearly encrypted backup that I ship to a friend across the world.
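For the monthly scrub, a cron entry is all it takes; a minimal sketch, assuming a pool named tank:

```sh
# /etc/cron.d/zfs-scrub: start a scrub at 03:00 on the 1st of each month.
# "tank" is a placeholder pool name; repeat the line for the second pool.
0 3 1 * * root /usr/sbin/zpool scrub tank
```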
Backblaze B2 for automatic syncing of all the little files
Glacier for long-term archiving of big old files that never change
Duplicati, to the home server of a friend who lives in another town.
I hate to ask the scary question, but have you tried to restore your backups before? I used Duplicati and discovered that none of my backups were usable and ended up switching to Duplicacy.
It works just fine for me, but I’ve heard scary stories, so now I’m using:
- Kopia to backblaze b2 (all data)
- Kopia to local disk (all data)
- Duplicati to google drive (only 1 folder)
How would one realistically go about testing their backup? Do you need a bunch of empty drives?
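You usually don’t: restoring a sample to a scratch directory and letting the tool verify its own repository gets you most of the way. A rough sketch with restic (repo and paths are placeholders):

```sh
# Verify repository structure and actually read back a sample of the data.
restic -r /srv/restic-repo check --read-data-subset=10%

# Restore one directory to a scratch location and compare it to the source.
restic -r /srv/restic-repo restore latest --target /tmp/restore-test \
  --include /home/user/documents
diff -r /home/user/documents /tmp/restore-test/home/user/documents
```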
An important question though.
I have, when I first set it up, and again once when I needed to.
I used to have everything backed up to a 2TB USB drive, which I accidentally dropped down the stairs. I lost thousands of family photos and documents. That changed my backup perspective.
I now have a Synology NAS with 12TB in a RAID5 array (for a bit of disk redundancy). All my home devices, Proxmox servers, etc. back up there. The NAS also holds a few TB of media. Attached to it I have a USB hard drive (also 12TB), and the NAS gets fully backed up to that drive nightly.
I also have a remote Raspberry Pi with a smaller USB drive (4TB) attached at my brother’s house (in another country), where I back up most of the contents of my home NAS. I don’t back up the media, just the important stuff. I might have to upgrade to a larger drive…
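If anyone wants to script a similar off-site push, restic’s SFTP backend works well for this kind of remote-Pi target; a rough sketch, with placeholder host and paths (not necessarily the exact tooling used here):

```sh
# Push a NAS share to a restic repository on a remote Raspberry Pi over SFTP.
# pi@remote-host and all paths are placeholders.
restic -r sftp:pi@remote-host:/mnt/usb/restic-repo init    # one-time setup
restic -r sftp:pi@remote-host:/mnt/usb/restic-repo backup /volume1/important
```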
Tears… Natural, salty, wet tears…
- restic > backblaze b2, nightly & automatic
- restic > normally unplugged drive, every couple weeks (manual, recurring reminder)