I recently put together a backup workflow for myself. I rely heavily on restic for desktop backups and for a full system backup of my local server. It works amazingly well: I always have a versioned backup without a lot of redundant data, and it is fast, encrypted, and compressed.

But I wondered: how do you all handle your backups? What software do you use? How often do you run them, and what does your workflow look like?

  • poinck@lemm.ee · 14 days ago

    This looks a bit like borgbackup. It is also versioned, stores everything deduplicated, supports encryption, and can be mounted using FUSE.

    • Zenlix@lemm.ee (OP) · 14 days ago

      Thanks for your hint towards borgbackup.

      After reading the Borg Backup quick start, they look very similar. But as far as I can tell, encryption and compression are optional in borg, while restic always encrypts and compresses. You can mount your backups in restic too. It also seems that restic supports more repository locations, such as several cloud storage providers and a special HTTP server.
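
      For reference, a minimal restic session looks roughly like this (the repository path is just a placeholder):

        restic -r /srv/restic-repo init                   # creates an always-encrypted repository
        restic -r /srv/restic-repo backup /home/user      # encrypted, deduplicated snapshot
        restic -r /srv/restic-repo mount /mnt/restic      # browse snapshots via FUSE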

      I also noticed that borg is mainly written in Python while restic is written in Go. Based on that, I assume restic is a bit faster purely because of the language (I have not tested that).

      • drspod@lemmy.ml · 14 days ago

        It was a while ago that I compared them, so this may have changed, but one of the main differences I saw was that borg had to back up over SSH, while restic had storage backends for many different storage methods and APIs.

      • ferric_carcinization@lemmy.ml · 14 days ago

        I haven’t used either, much less benchmarked them, but the performance difference should be negligible given the I/O-bound nature of the work. Even with compression and encryption, it’s likely that either the language is fast enough or that those hot paths are implemented in a fast native library.

        Still, I wouldn’t call the choice of language insignificant. Go is statically typed while Python isn’t. Even if type errors are rare, I would rather trust software written in a language that catches them at compile time. (Same with memory safety, but both languages are garbage collected, so it’s not really relevant in this case.)

        Of course, I could be wrong. Maybe one of the tools cannot fully utilize the network or disk. Perhaps one of them uses multithreaded compression while the other doesn’t. Architectural decisions made early on could also cause performance problems. I’d just rather not assume any noticeable performance difference caused by the programming language in this case.

        Sorry for the rant, this ended up being a little longer than I expected.

        Also, Rust rewrite when? :P

  • blade_barrier@lemmy.ml · 14 days ago

    Since most of the machines I need to back up are VMs, I do it by means of the hypervisor. For physical ones I’d use borg scheduled in crontab.
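
    A minimal sketch of what that crontab could look like (repository path, schedule, and retention are made up; passphrase handling omitted):

      0 2 * * *  borg create --compression zstd /mnt/backup/borg::'{hostname}-{now}' /home /etc
      30 2 * * * borg prune --keep-daily 7 --keep-weekly 4 /mnt/backup/borg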

  • Vintor@lemm.ee · 14 days ago (edited)

    I’ve found that the easiest and most effective way to back up is with an rsync cron job. It’s super easy to set up and configure (I had no prior experience with either rsync or cron, and it took me 10 minutes). The only drawback is that it doesn’t create differential backups, but the full task takes less than a minute every day, so I don’t consider that a problem. Do note that I only back up my home folder, not the full system.

    For reference, this is the full line I use: rsync -rau --delete --exclude-from='/home/<myusername>/.rsync-exclude' /home/<myusername> /mnt/Data/Safety/rsync-myhome

    “.rsync-exclude” is a file that lists all the files and directories I don’t want to back up, such as temp or cache folders.
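
    A hypothetical .rsync-exclude could be as simple as one pattern per line, for example:

      .cache/
      .local/share/Trash/
      Downloads/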

    (Edit: two stupid errors.)

    • dihutenosa@lemm.ee · 14 days ago

      Rsync can do incremental backups with a command-line switch and some symlink jugglery. I’m using it to back up my self-hosted stuff.
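
      For the curious, the switch in question is --link-dest; pointed at the previous snapshot, it hard-links unchanged files instead of copying them (a rough sketch, paths made up):

        rsync -a --delete --link-dest=/backups/latest /home/user/ /backups/$(date +%F)/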

    • everett@lemmy.ml · 14 days ago

      > only drawback is that it doesn’t create differential backups

      This is a big drawback because even if you don’t need to keep old versions of files, you could be replicating silent disk corruption to your backup.

      • suicidaleggroll@lemm.ee · 14 days ago (edited)

        It’s not a drawback of rsync itself: rsync has supported incremental versioned backups for over a decade; you just have to use the --link-dest flag and add a couple of lines of code around it for management.
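
        A rough sketch of those couple of management lines (all paths hypothetical):

          #!/bin/sh
          # dated snapshot directories plus a "latest" symlink that --link-dest resolves against
          dest=/backups/$(date +%F)
          rsync -a --delete --link-dest=/backups/latest /home/user/ "$dest"
          ln -sfn "$dest" /backups/latest   # advance the pointer for the next run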

          • suicidaleggroll@lemm.ee · 13 days ago

            They didn’t provide an rsync example until later in the post; the comment about not supporting differential backups reads as a statement about rsync itself, which is incorrect, because rsync does support differential backups.

            I agree with you that not doing differential backups is a problem; I’m simply pointing out that this is not a drawback of rsync, it’s an implementation problem on the user’s part. It would be like somebody saying “I like my RAV4, it’s just problematic because I don’t go to the grocery store with it” and someone else replying “that’s a big drawback, the grocery store has a lot of important items and you need to be able to go to it.” While true, it’s based on a faulty premise, because of course a RAV4 can go to the grocery store like any other car; it’s a non-issue to begin with. OP just needs to fix their backup script to start doing differential backups.

            • everett@lemmy.ml · 13 days ago

              My one and only purpose was to warn them that their “drawback” is more of a gator pit. It’s noble that you’re here defending rsync’s honor, but maybe let them know instead? My preferred backup tool has “don’t eat my data” mode on by default.

  • suicidaleggroll@lemm.ee · 13 days ago (edited)

    My KVM hosts use “virsh backup-begin” to make full backups nightly.
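
    Roughly (guest name and schedule are placeholders), the nightly job on each host boils down to a cron entry like:

      0 1 * * * virsh backup-begin guest1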

    All machines, including the KVM hosts and laptops, use rsync with --link-dest to create daily incremental versioned backups on my main backup server.

    The main backup server pushes client-side encrypted backups, which include the latest daily snapshot for every system, to rsync.net via Borg.

    I also have two DASs, each with two 22 TB encrypted drives. One of them is plugged into the backup server while the other sits powered off in a drawer in my desk at work. The main backup server pushes all backups to the attached DAS weekly, and I swap the two DASs roughly monthly, so the one in my desk at work is never more than a month or so out of date.

  • ColdWater@lemmy.ca · 13 days ago

    I use an external drive for my important data, and if my system is borked (which has never happened to me) I just reinstall the OS.

  • privateX@lemmy.world · 14 days ago

    I keep all of my documents on a local server, so all that’s on any of my computers is software. If I need to reinstall Linux I can just do it without worrying about losing anything.

  • heythatsprettygood@feddit.uk · 12 days ago

    I use Pika Backup (a GUI that uses Borg Backup on the backend) to back up my desktop to my home server daily, then overnight that server does its own daily Borg backup to a Hetzner Storage Box. It’s easy to set and forget (other than maybe verifying the backups every once in a while), and having that off-site backup gives me peace of mind.

  • rutrum@programming.dev · 14 days ago

    I use borg the same way you describe. Part of my NixOS config builds a systemd unit that backs up various directories on my machine at midnight every day. I have two repos: one stored both locally and on a cloud backup provider (BorgBase), and another that’s only stored locally, i.e. on another computer in my house. The local-only one holds all my home media; I haven’t yet put that large collection of photos and videos in the cloud or off-site.

  • Pika@sh.itjust.works · 14 days ago

    For my server, I use Proxmox Backup Server to an external HDD for my containers, and I back up media monthly to an encrypted cold-storage drive.

    For my desktop? I use a mix of Syncthing (which goes to the server) and Windows File History (when I’ve logged into the Windows partition). I want to get Timeshift working, but I have so much data that it’s hard to manage, so currently I’ll just shed some tears if my Linux system fails.

  • melfie@lemmings.world · 14 days ago

    I currently use rclone with encryption to iDrive e2. I’m considering switching to Backrest, though.

    I originally tried Backblaze B2 but exceeded the API quotas in their free tier; iDrive has “free” API calls, so I recently bought a year’s worth. I still have a two-year Proton subscription and tried rclone with Proton Drive, but it was too slow.
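
    The shape of that setup, for anyone curious (remote names are made up): an rclone “crypt” remote layered over the e2 bucket, then a plain sync to the encrypted remote:

      rclone sync /home/user idrive-crypt:backups/desktop --progress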

  • Random Dent@lemmy.ml · 14 days ago

    I use BorgBackup with Vorta as a GUI, and I follow the 3-2-1 backup rule for important stuff (i.e. 3 copies, 2 on different media, 1 off-site).