1. 18

I’m asking mostly for personal data. What’s your backup strategy? Which tools and services do you use? How do you keep everything up to date?

  2. 5

    For my personal stuff, I try not to get attached to worldly things, and hope that having a few copies of my music library at a few different places will be sufficient to prevent losing it all.

    1. 4

      I use Arq Backup (https://www.arqbackup.com/) to back up to my S3 account.

      1. 3

        I use a mix of things at the moment:

        • For Macs, backups via TimeMachine to a Debian VM running Netatalk (a rough afp.conf sketch follows this list).
        • For Unix systems, backups via BackupPC to the aforementioned Debian VM.
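
        For reference, the Netatalk side can be tiny. A rough afp.conf fragment for a Netatalk 3.x Time Machine share might look like this (share name, path, and the size cap, which is in MiB, are illustrative, not my actual config):

        [TimeMachine]
        path = /srv/timemachine
        time machine = yes
        vol size limit = 512000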

        I don’t currently do offsite backups (eek).

        I’m in the process of moving to the following:

        • Migrating from the Debian VM to a bare metal backup server running FreeBSD with ZFS. It will also run Netatalk and BackupPC.
        • For Macs, also doing image-based backups using Carbon Copy Cloner. I’d ideally like to be able to boot from the backups, so I’ll probably back up to a set of external 2.5" drives and rotate them.

        Offsite backups are something I still need to look at. I may go the route of Tarsnap or perhaps just use an S3 bucket (after GPG encrypting). I’d also like some kind of snapshot-based backup for my Dovecot server but haven’t had a chance to look at what’s available.
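
        As a sketch of the S3-plus-GPG idea, assuming the aws CLI and an existing GPG key (bucket name, key ID, and paths are placeholders):

        tar -czf - /home/me | gpg --encrypt --recipient me@example.org | aws s3 cp - s3://my-backups/home-$(date +%F).tar.gz.gpg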

        1. 1

          Netatalk

          Why Netatalk, unless you have some old Power Macs hiding somewhere? OS X seems mostly happy with SMB and AFP is a dead end.

          1. 1

            Mostly for the built-in TimeMachine support (I use SMB for other file sharing). I know it’s possible to configure TimeMachine to use SMB, but I’ve not tried it; several years ago there were “issues” with it, though I haven’t checked recently.
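
            For anyone who does want to try it: recent Samba (4.8 and later) can advertise a share as a Time Machine target via vfs_fruit. A rough smb.conf fragment, with an illustrative path:

            [timemachine]
            path = /srv/timemachine
            vfs objects = catia fruit streams_xattr
            fruit:time machine = yes
            read only = no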

        2. 3

          Tarsnap.

          1. 2

            I’ll second tarsnap, even though I use it rather infrequently. Never scripted it, so my backups are somewhat scattered, incomplete, and old. And I’d probably lose my key before my data, but yolo.

            The best part is for modest needs, you can pay for tarsnap by finding bugs in anything Colin writes. No bug too small. Great way to hone skills.
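
            For what it’s worth, scripting it can be as small as a dated archive name in a cron job (archive name and path are illustrative):

            tarsnap -c -f "home-$(date +%Y%m%d)" /home/me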

          2. 2

            I use CrashPlan for all of my machines. It runs on Windows and Linux (the only two OSs in the house). I have it set to back up most of my files to an external hard drive for quick local restores, but everything gets uploaded to their offsite backup.

            1. 2

              tar, local fileserver, and then gpg and backblaze b2 for offsite.

              Currently I do it by hand because the majority of my personal work is actually in git repositories that get pushed to a private offsite git server. Tar has the advantages of universal support, very mature free implementations, and easy scriptability; I have deep philosophical objections to a backup strategy which requires non-free software or even just more software than I’m likely to find on a bare-bones unix system.

              (As a technical detail, I actually take incremental backups based on file checksums using this tool. The functionality could easily be scripted, though.)
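
              For comparison, GNU tar’s built-in incremental mode (which is timestamp/inode-based rather than checksum-based) looks like this, with illustrative paths:

              # full backup; also creates/updates the snapshot metadata file
              tar --listed-incremental=home.snar -czf home-full.tar.gz /home/me
              # later runs archive only what changed since the snapshot file was written
              tar --listed-incremental=home.snar -czf home-incr.tar.gz /home/me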

              1. 2

                rdumpfs and borg (for encrypted off-site backup).
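
                A minimal borg flow for the off-site part, assuming an SSH-reachable repository (host and paths are placeholders):

                borg init --encryption=repokey ssh://backup@host/./repo
                borg create --stats ssh://backup@host/./repo::'{hostname}-{now}' ~/Documents
                borg prune --keep-daily=7 --keep-weekly=4 ssh://backup@host/./repo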

                1. 2

                  I use tarsnap because it’s so cheap and very redundant. I don’t think it’s a perfect solution though; it nukes my internet connection whenever I’m trying to upload or download anything (I get the impression that it creates many simultaneous connections, but I’ve never investigated it), and of course, private key management is always a concern.
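
                  If the problem is upload saturation rather than connection count, tarsnap’s built-in throttle may help (the rate is in bytes per second; the value and path here are illustrative):

                  tarsnap --maxbw-rate 500000 -c -f "home-$(date +%Y%m%d)" /home/me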

                  1. 1

                    With regards to private key management: you could always put a really hard passphrase on the key and then not worry too much. ;)

                    1. 2

                      The problem is that unlike many systems, the key itself needs to be backed up somewhere. It’s completely impossible, via any means, to bootstrap a new system into restoring without a copy of the key. It could be offsite or etched in stone tablets or whatever, but there’s zero chance you’re going to memorize everything you need.

                      1. 2

                        My solution was to create two keys. One can only write new backups and can’t read them. The other can read backups and is passphrase protected using high scrypt parameters, so it takes around a minute to decrypt, and that key is stored on a cloud storage thing that I could still access if my house burned down.

                        1. 2

                          Sounds interesting; have you ever written a blog post about that setup? I’d love to hear more about it!

                          1. 1

                            Is there anything in particular you wanted to know more about? It’s a fairly basic setup once the keys are made. The hardened key is just something like:

                            tarsnap-keymgmt -wrd --outkeyfile master.key --passphrased --passphrase-mem XXX --passphrase-time YYY tarsnap.key

                             The hardest part is figuring out tarsnap/scrypt’s rules for resource usage, as the options specify “up to” what resources to use. The KDF uses the memory you specify but tarsnap allocates only half of that, so "--passphrase-mem 8589934592" uses 4GB, and it seems to be quite bad at estimating the CPU’s scrypt performance; on my machine at least, "--passphrase-time 600" makes it take around 1 minute 10 seconds.

                            One caveat is that tarsnap will refuse to allocate more than half of the available system memory. I have one machine with 16GB RAM and one with 8GB, and the key requires 4GB to decrypt, so the key can’t be decrypted on the machine with 8GB (tarsnap will just complain that the file requires too much memory to decrypt and exit), at least not without modifying the tarsnap source (apparently there’s an option coming in a future tarsnap release to override this).
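
                             Putting those numbers together, the concrete version of the command above would be something like:

                             tarsnap-keymgmt -wrd --outkeyfile master.key --passphrased --passphrase-mem 8589934592 --passphrase-time 600 tarsnap.key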

                            1. 1

                              Awesome, cheers. I might write a blog post about that myself. Would you mind?

                              1. 2

                                Not at all, feel free. I look forward to reading it.

                                1. 1

                                  Here it is.

                                  TL;DR: I don’t actually want to back up very much stuff, git works fine for source code, and tarsnap for everything else.

                  2. 2

                    I have a local server running camlistore. I only put stuff I really care about into it (kids photos/videos, important documents). Less important stuff (eg side project code) goes in github as a cheap & cheerful backup.

                    I’ve also got a publicly accessible (https+password) camlistore instance running in ec2 which pulls down everything I add to the s3 bucket.

                    To test backups, I periodically:

                    • Check that the number of objects matches in s3 / local / ec2
                    • Destroy the ec2 instance, start a new one and check that everything restores correctly

                    This also has the advantage of having a publicly accessible copy of all my photos and stuff.

                    When I first tried restoring, it didn’t work, but since sorting out that issue it’s remained consistent and hasn’t lost data.
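
                    The object-count check above can be approximated with something like this (the bucket name and local blob path are guesses, not necessarily Camlistore’s real layout):

                    aws s3 ls s3://my-camlistore-bucket --recursive | wc -l
                    find ~/var/camlistore/blobs -type f | wc -l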

                    1. 1

                      CrashPlan and SpiderOakONE for “continuous” file backup. Macrium Reflect for weekly images to NAS.

                      1. 1

                        On-site, we have an Apple Time Capsule which we back up to using Time Machine. We also do off-site backups to Backblaze.

                        1. 1

                          Been using http://www.duplicati.com/ on Linux with encrypted cloud sync to gdrive. So far so good (~4GB of compressed data).

                          1. 1

                            Rule: two local copies and one in a different location (two for code).

                            Tools: a local server (openSUSE), CrashPlan, and git.

                            1. Everything backs up to my server using the CrashPlan client (creates the second copy)
                            2. The server then backs up to CrashPlan’s offsite service (creates the away copy)
                            3. 95% of my code has a single user: me. I keep it in private repos on Bitbucket

                            1. 1

                              Crashplan over the LAN to a server that stores the data on ZFS mirrored pairs. Photos & Music live in (Apple’s) cloud as well, so those have a sort of offsite backup. (They have a copy that resides offsite at least. Make of that what you will.)

                              I need to figure out how much of the data I want to back up offsite. Also want to replace crashplan with something else on all the machines, as it’s a java monstrosity and I’d prefer something open source for sure. (That said, Crashplan does just work for me. Can’t fault it for that.) Finding something else that I can run a local server for, that has client apps for OS X & Windows, and encrypts client side has proved difficult though.

                              1. 1

                                I don’t currently keep a snapshot of my computers’ current state, but all my projects, book exercises, and most of my config files are backed up to a central server with git. Works for now.
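
                                Setting that up is just a bare repository per project on the server plus a remote; a sketch with illustrative host and paths:

                                ssh me@server git init --bare /srv/git/project.git
                                git remote add backup me@server:/srv/git/project.git
                                git push backup --all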

                                1. 1

                                  I use rsync to keep snapshots of data on my local system. rsync is also used to push those snapshots to a LAN fileserver. A few Windows machines back up data to the same fileserver.

                                  That fileserver then pushes off-site backups to rsync.net.
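
                                  A common shape for this kind of rsync snapshotting uses --link-dest, so unchanged files become hard links to the previous snapshot (a sketch with illustrative paths; I’m guessing at the exact invocation):

                                  rsync -a --delete --link-dest=/snapshots/daily.1 /home/me/ /snapshots/daily.0/
                                  # then push the snapshot tree off-site
                                  rsync -a /snapshots/ me@rsync.net:snapshots/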

                                  I have a few terabytes of FLAC files that I copied to a few friends for safekeeping.

                                  Backups of (encrypted) encryption keys are kept in multiple secret locations.

                                  1. 1

                                    rsnapshot into a LUKS-encrypted filesystem image stored at rsync.net.
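
                                    A sketch of how such an image can be created and mounted before pointing rsnapshot at it (sizes and paths are illustrative; older cryptsetup versions need an explicit losetup first):

                                    dd if=/dev/zero of=backup.img bs=1M count=10240
                                    cryptsetup luksFormat backup.img
                                    cryptsetup open backup.img backupvol
                                    mkfs.ext4 /dev/mapper/backupvol
                                    mount /dev/mapper/backupvol /mnt/backup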

                                    1. 1

                                      I back up my Macs via Time Machine to a networked ZFS-backed AFP share for those “oh shit” moments; via CCC to a Thunderbolt-attached spinning HDD; and via Backblaze for off-premises. In addition, down-resolution copies of my pictures and music collection are stored in Apple’s cloud.
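
                                      For reference, pointing Time Machine at such a share can be done from the command line (the URL is illustrative):

                                      sudo tmutil setdestination "afp://me:password@server/TimeMachine"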