I’m trying to plan a better backup solution for my home server. Right now I’m using Duplicati to back up my 3 external drives, but the backups stay on-site and on the same kind of media as the originals. So, what does your backup setup and workflow look like? Discs at a friend’s house? Cloud backup at a commercial provider? Magnetic tape in an underground bunker?

  • Object@sh.itjust.works · 24 days ago

    I dump my encrypted data to someone who probably practices the 3-2-1 rule (which is Backblaze for me). I mean, these guys back up data for a living.

  • emerald@lemmy.blahaj.zone · 23 days ago

    “3! 2! 1!” is just what I say when doing some potentially deleterious action, after rsyncing a few key directories to a separate volume.

  • Avid Amoeba@lemmy.ca · 24 days ago

    • Primary ZFS pool with automatic snapshots
      • Provides 3+ copies of the files via snapshots (3)
    • Secondary ZFS pool at a different location replicates the primary
      • Provides more copies of the files (3)
      • Provides second media (2)
      • Is off-site (1)

    Does this make sense?
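
    Not the commenter’s actual tooling, but a minimal Python sketch of what the snapshot-plus-replication step can look like, assuming a hypothetical primary dataset tank/data, a remote host backup-host, and a replica dataset backuptank/data (all names made up for illustration):

    ```python
    #!/usr/bin/env python3
    """Rough sketch: snapshot a ZFS dataset and replicate it to a second pool over SSH.
    Dataset, host, and pool names are placeholders."""
    import subprocess
    from datetime import datetime, timezone

    DATASET = "tank/data"               # hypothetical primary dataset
    REMOTE = "backup-host"              # hypothetical off-site machine
    REMOTE_DATASET = "backuptank/data"  # hypothetical replica dataset

    # 1. Take a timestamped snapshot on the primary pool.
    snap = f"{DATASET}@auto-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # 2. Stream the snapshot to the secondary pool over SSH. This is a full send;
    #    a real setup would normally do incremental sends with `zfs send -i`.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", REMOTE_DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()
    ```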

    • CrazyLikeGollum@lemmy.world · 24 days ago

      I don’t think this meets the definition of 3-2-1, which isn’t a problem if it meets your requirements. Hell, I do something similar for my stuff. I have my primary NAS backed up to a secondary NAS. Both have BTRFS snapshots enabled, but the secondary has a longer retention period for snapshots (one month vs. one week). Then I have my secondary NAS mirrored to a NAS at my friend’s house for an offsite backup.

      This is more of a 4-1-1 format.

      But 3-2-1 is supposed to be:

      • Three total copies of the data. Snapshots don’t count here, but the live data does.

      • On two different types of media, e.g. one backup on HDD and another on optical media or tape.

      • With at least one backup stored off site.

      • tburkhol@lemmy.world · 23 days ago

        I’ve always understood 2 as two physically different media - i.e., copies in different folders or partitions of the same disk are not enough to protect against failure of that disk, but a copy on a different disk is. Ideally two physically different systems, so a failure/fire in the primary system won’t corrupt/damage the backup.

        It used to be that HDDs were expensive and using them as backup media would have been economically crazy, so most backup systems evolved around slower, cheaper media. The main thing is that having /home/user/critical, /home/user/critical-backup, and /home/user/critical-backup2 satisfies 3 copies, but not 2 media.

      • Avid Amoeba@lemmy.ca · 23 days ago

        Hm, I wonder why snapshots wouldn’t satisfy 3. Copies on the same disk like /file, /backup1/file, /backup2/file should satisfy 3, so why wouldn’t snapshots be equivalent, given that 3 doesn’t guard against filesystem or hardware failure? Just thinking out loud and curious to hear opinions.

        • CrazyLikeGollum@lemmy.world · 23 days ago

          If I’m reading your example right, I don’t think that would satisfy three either. Three copies of the data on the same filesystem, or even the same system, don’t satisfy the “three backups” rule, because the only thing you’re really protecting against is maybe user error, i.e. accidental deletion or modification. You’re not protecting against filesystem corruption or system failure.

          For a (little bit hyperbolic) example, if you put the system that has your live data on it through a wood chipper, could you use one of the other copies to recover your critical data? If yes, it counts. If no, it doesn’t.

          Snapshots have the same issue, because at the root a snapshot is just an additional copy of the data. There’s additional automation, deduplication, and other features baked into the snapshot process but it’s basically just a fancy copy function.

          Edit: all of the above is also why the saying “RAID is not a backup” holds true.

          • Avid Amoeba@lemmy.ca · 23 days ago

            Right, so I guess the question for 3 is whether it means 3 backups or 3 copies. If we take it literally - 3 copies - then it only protects against user error. If it means 3 backups, it protects against hardware failure too.

            E: Seagate calls them copies and explicitly says the implementer can choose how the copies are distributed across the 2 media. The woodchipper scenario would be handled by the 2 media requirement.

  • merthyr1831@lemmy.ml · 23 days ago

    My current plan once the new migration is completed:

    Primary pool - single-drive ZFS (couldn’t afford redundancy, but that’s no different from my RPi server). My goal is to get a few more drives and set up a RAIDZ1/2.

    Weekly backup of critical data (e.g. Nextcloud) from the primary pool to a secondary pool. The goal here is to get a mirror, but it will only be one drive for now.

    Weekly upload of secondary pool to hetzner storage box via rsync.


    Current server:

    • 1x backup to secondary drive (RPi)
    • 1x backup to hetzner storage box via rsync

  • harsh3466@lemmy.ml · 24 days ago

    I’ve got a nightly cronjob that runs a local backup using rsync, plus an external HDD that I stash in my work locker. I bring it home once a week or so, connect it to the server, run a backup script (more rsync), then take it back to work. It’s not super sophisticated, but it works, and I have tested and restored from both the local and offsite backups.
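
    Not their actual script, but a minimal sketch of what such an rsync backup script often looks like, assuming hypothetical source directories and an external drive mounted at /mnt/offsite-hdd:

    ```python
    #!/usr/bin/env python3
    """Rough sketch of a nightly/weekly rsync backup script. Paths are placeholders."""
    import subprocess
    import sys
    from pathlib import Path

    SOURCES = ["/home", "/etc", "/srv"]         # hypothetical directories worth keeping
    DEST = Path("/mnt/offsite-hdd/backup")      # hypothetical mount point of the external HDD

    # Bail out if the destination isn't there, so rsync can't fill the root filesystem
    # when the external drive isn't plugged in and mounted.
    if not DEST.is_dir():
        sys.exit(f"{DEST} not found -- is the external drive mounted?")

    for src in SOURCES:
        # --archive preserves permissions/timestamps, --delete mirrors removals.
        subprocess.run(
            ["rsync", "--archive", "--delete", "--human-readable", src, f"{DEST}/"],
            check=True,
        )
    ```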

  • SirMaple__@lemmy.world · 24 days ago

    I use Proxmox Backup Server for my backups. Everything backs up to one system at home. I then sync the data store to a little NAS I have at a family member’s house across town, and also to a cheap storage VPS on the other side of the country. I also do a manual sync of the data store to a single external drive that I manually connect and disconnect.

    None of my data hoarding files are backed up as that would cost way too much. That could change if I ever find a killer deal on an LTO8 or better drive and tapes.

    I know that Hetzner has some decently priced Storage Boxes that you can mount using rclone and then back up to. Keep in mind that latency will be a factor, so it could be slow.
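
    A minimal sketch of that rclone-mount idea, assuming an rclone remote named storagebox has already been configured to point at the Storage Box (the remote name, path, and mount point are placeholders):

    ```python
    #!/usr/bin/env python3
    """Rough sketch: mount a pre-configured rclone remote and point backups at it."""
    import subprocess

    # Assumes `rclone config` was already used to create a remote called "storagebox"
    # and that the mount point directory exists. Both names are made up.
    subprocess.run(
        ["rclone", "mount", "storagebox:backups", "/mnt/storagebox",
         "--vfs-cache-mode", "writes",   # buffer writes locally to smooth out latency
         "--daemon"],                    # detach and keep the mount running in the background
        check=True,
    )
    ```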

  • tiny_ice_dragon@lemmy.world · 24 days ago

    My NAS is a second copy of all my data; nothing exists only on the NAS. The NAS is also slowly uploading to Backblaze, though data limits are slowing my progress. My photos, which I feel are the least replaceable, are automatically backed up to my NAS, Google Photos, and Amazon Photos, with a manual backup to my desktop and another manual backup to an external hard drive stored in a fire-resistant box.

  • pory@lemmy.world · 24 days ago

    All my video media that’s easier to replace than preserve is on my NAS running openmediavault with mergerfs. If I lose a drive I can always just, you know, torrent the TV show again.

    My main PC (everything except the Steam game install directory) is backed up through KopiaUI to a folder on that mergerfs array that contains media that’s difficult/impossible to replace. Daily incremental backups.

    That folder is mounted on my PC through DOKAN, which tells Windows that it’s a local resource (it does this more thoroughly than just assigning a drive letter to a NAS folder through Windows’ built-in system). The PC, including the “sensitive NAS media” folder, is then backed up to Backblaze’s personal backup service ($99/yr, unlimited size with one-year versioning). The DOKAN step is required for this, since Backblaze doesn’t support mounted NAS drives or non-Windows systems (presumably they don’t want to use space on versioned encrypted backups of hundred-terabyte pirate movie collections).

    Oh, and my phone does one-way Syncthing to my PC, thus putting its files on the PC for Kopia and Backblaze to do their thing.

  • Lem453@lemmy.ca · 24 days ago

    All persistent storage from my Docker containers is in one folder. To back up everything, all I have to do is back up this one folder along with my docker compose files (which are in git).

    Locally there are ZFS snapshots (auto-snapshot), and for remote backups I use borgmatic.

    Borg to:

    1. Local server
    2. Friend’s server
    3. Borgbase

      • Lem453@lemmy.ca · 24 days ago

        It’s automation software for borg backup: it runs borg on a schedule and keeps a certain number of backups while deleting old ones, etc.
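
        A minimal sketch of what such a scheduled run boils down to (independent of borgmatic itself): one borg create plus a prune per repository. The source folder and repository URLs are placeholders, and the repositories are assumed to already exist:

        ```python
        #!/usr/bin/env python3
        """Rough sketch: back up one folder to several borg repositories and prune old archives.
        Assumes the repos were created with `borg init`; passphrase handling is omitted."""
        import subprocess

        SOURCE = "/srv/docker-volumes"  # hypothetical folder holding all persistent docker data
        REPOS = [
            "/mnt/backup/borg-repo",                         # local server
            "ssh://friend@friends-server/./borg-repo",       # friend's server (placeholder)
            "ssh://xxxxxx@xxxxxx.repo.borgbase.com/./repo",  # BorgBase (placeholder)
        ]

        for repo in REPOS:
            # Create a new archive named after the current date.
            subprocess.run(
                ["borg", "create", "--stats", "--compression", "zstd",
                 f"{repo}::backup-{{now:%Y-%m-%d}}", SOURCE],
                check=True,
            )
            # Keep a bounded history; archives outside the policy are deleted.
            subprocess.run(
                ["borg", "prune", "--keep-daily", "7", "--keep-weekly", "4",
                 "--keep-monthly", "6", repo],
                check=True,
            )
        ```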

  • 0x0@programming.dev · 23 days ago

    Atm the main system is a ZFS RAIDZ1 on 3 SSDs.
    Weekly-ish backup onto 1TB external HDD.
    Sync encrypted important stuff to Cloud.
    Syncthing some stuff to smartphone.

  • Dave@lemmy.nz · 24 days ago

    Wow, a lot of variation in this thread!

    I get all my data to my server, then from there I have borgmatic do incremental backups to a backup drive on the same machine (nightly cronjob).

    From there I use Rclone to get the encrypted borg backup to Backblaze B2 for cloud storage.

    So for 3 2 1, my 3 copies are the original, the local backup, and the cloud backup.

    My 2 media are local hard drives and cloud storage (I think it’s fair to consider this a different kind of media).

    And my 1 offsite is the cloud backup.

    Now, I’m dumb and have a fear of screwing something up, so I have also started burning M-Discs of my critical data (everything except TV/movie/music stuff I can redownload). This was a lot more expensive than I was expecting, though: because of the aforementioned me being dumb, I already ruined two discs (they are write-once). I’m also making two copies of each disc.

    I also have photos/home videos additionally stored in ente; they are super important to me and I wanted a separate copy that someone else is looking after.

  • BlueBockser@programming.dev · 24 days ago

    I use Backblaze B2 for one offsite backup in “the cloud” and have two local HDDs. Using restic with rclone as the storage interface, the whole thing is pretty easy.

    A cronjob makes daily backups to B2, and once per month I copy the most current snapshot from B2 to my two local HDDs.
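
    A minimal sketch of that flow, assuming an rclone remote named b2, a bucket, and two local repositories (all names are placeholders, and the repositories are assumed to have been initialised with restic init):

    ```python
    #!/usr/bin/env python3
    """Rough sketch: daily restic backup to B2 via rclone, plus a periodic copy to local repos.
    Remote, bucket, and repository names are placeholders."""
    import os
    import subprocess

    # Passwords would normally come from a password file or secret store, not the script.
    ENV = {**os.environ,
           "RESTIC_PASSWORD": "change-me",        # password of the target repository
           "RESTIC_FROM_PASSWORD": "change-me"}   # source-repo password used by `restic copy`

    CLOUD_REPO = "rclone:b2:my-backup-bucket"     # hypothetical rclone remote + bucket
    LOCAL_REPOS = ["/mnt/hdd1/restic-repo", "/mnt/hdd2/restic-repo"]

    # Daily (via cron): back up to the cloud repository.
    subprocess.run(["restic", "-r", CLOUD_REPO, "backup", "/srv/data"], env=ENV, check=True)

    # Monthly: pull snapshots from B2 down into each local repository.
    # `restic copy --from-repo` needs a reasonably recent restic version.
    for repo in LOCAL_REPOS:
        subprocess.run(
            ["restic", "-r", repo, "copy", "--from-repo", CLOUD_REPO],
            env=ENV, check=True,
        )
    ```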

    I have one planned improvement: since my server needs programmatic access to B2, malware on it could wipe both the server and B2, leaving me with the potentially one-month-old local backups. Therefore I want to run a Raspberry Pi at my parents’ place that mirrors the B2 repository daily but is basically air-gapped from the server. Should the B2 repository be wiped, the Raspberry Pi would still retain its snapshots.