Quick Start

Here's a quick "Getting Started" guide for HashBackup.  Run these commands
on your own system to get a feel for using HashBackup, then read the online
documentation for the details.

For this example, hb-1538-mac-64bit.tar.gz was downloaded from the
HashBackup Download page to /Users/jim.  Here we go!

What's in the tar file?

[jim@mb ~]$ tar -tf hb-1538*
hb-1538/
hb-1538/CHANGELOG
hb-1538/doc/
hb-1538/hb
hb-1538/README
hb-1538/doc/backup-bouncer.out
hb-1538/doc/backup-bouncer.sh
hb-1538/doc/backup-bouncer.txt
hb-1538/doc/CREDIT
hb-1538/doc/dedup.info
hb-1538/doc/dest.conf.examples/
hb-1538/doc/inex.conf.example
hb-1538/doc/mount.info
hb-1538/doc/security
hb-1538/doc/dest.conf.examples/dest.conf.b2
hb-1538/doc/dest.conf.examples/dest.conf.cloudfiles
hb-1538/doc/dest.conf.examples/dest.conf.dav
hb-1538/doc/dest.conf.examples/dest.conf.dir
hb-1538/doc/dest.conf.examples/dest.conf.ftp
hb-1538/doc/dest.conf.examples/dest.conf.glac
hb-1538/doc/dest.conf.examples/dest.conf.google
hb-1538/doc/dest.conf.examples/dest.conf.imap
hb-1538/doc/dest.conf.examples/dest.conf.openstack
hb-1538/doc/dest.conf.examples/dest.conf.rclone
hb-1538/doc/dest.conf.examples/dest.conf.rsync
hb-1538/doc/dest.conf.examples/dest.conf.s3
hb-1538/doc/dest.conf.examples/dest.conf.shell
hb-1538/doc/dest.conf.examples/dest.conf.ssh
hb-1538/doc/dest.conf.examples/exdirshell.py
hb-1538/doc/dest.conf.examples/rclone.py
hb-1538/doc/dest.conf.examples/README

Expand the tar file to create the hb-1538 directory

[jim@mb ~]$ tar -xzf hb-1538-mac-64bit.tar.gz

Change to the HashBackup install directory

[jim@mb ~]$ cd hb-1538

What's in the install directory?
- CHANGELOG is the complete log of changes since June 2009
- README is a quick (short) overview
- doc has detailed examples for setting up destinations
- hb is the executable program file

[jim@mb hb-1538]$ ls
CHANGELOG      README        doc        hb

Become root to install the executable file to /usr/local/bin

[jim@mb hb-1538]$ sudo sh
Password:
sh-3.2# cp hb /usr/local/bin
sh-3.2# exit  (control D)

Create a play backup directory named testbackup in the install directory

[jim@mb hb-1538]$ hb init -c testbackup
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Permissions set for owner access only
Created key file /Users/jim/hb-1538/testbackup/key.conf
Key file set to read-only
Setting include/exclude defaults: /Users/jim/hb-1538/testbackup/inex.conf

VERY IMPORTANT: your backup is encrypted and can only be accessed with
the encryption key, stored in the file:
    /Users/jim/hb-1538/testbackup/key.conf
You MUST make copies of this file and store them in a secure location,
separate from your computer and backup data.  If your hard drive fails,
you will need this key to restore your files.  If you setup any
remote destinations in dest.conf, that file should be copied too.
       
Backup directory initialized

Here's the backup directory testbackup shown inside the install directory

[jim@mb hb-1538]$ ls
CHANGELOG      README        testbackup        doc        hb

What's in a HashBackup backup directory?

[jim@mb hb-1538]$ ls testbackup
cacerts.crt    hash.db        hb.db        hb.lock        inex.conf    key.conf

What does a key file look like?  MAKE A COPY OF IT FOR REAL BACKUPS!

[jim@mb hb-1538]$ cat testbackup/key.conf
# HashBackup Key File - DO NOT EDIT!
Version 1
Build 1538
Created Wed Jul 13 14:34:51 2016 1468434891.27
Host Darwin | mb | 10.8.0 | Darwin Kernel Version 10.8.0: Tue Jun  7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 | i386
Keyfrom random
Key 2f93 2f07 2b38 0cf5 6eff c454 6ea6 572e eadd 452a d784 0766 be85 9893 6800 611e

The inex.conf file is an editable list of files excluded from the backup

[jim@mb hb-1538]$ cat testbackup/inex.conf
ex /.fseventsd
ex /.hotfiles.btree
ex /.Spotlight-V100
ex /.Trashes
ex /Users/*/.bash_history
ex /Users/*/.emacs.d
ex /Users/*/Library/Application Support/MobileSync
ex /Users/*/Library/Application Support/SyncServices
ex /Users/*/Library/Caches/
ex /Users/*/Library/PubSub/Database
ex /Users/*/Library/PubSub/Downloads
ex /Users/*/Library/PubSub/Feeds
ex /Volumes/
ex /cores/
ex *.vmem
ex /private/tmp/
ex /private/var/db/dyld/dyld_*
ex /private/var/db/Spotlight-V100
ex /private/var/vm/
ex /tmp/
ex /var/tmp/

As a test, back up the current directory (the install directory)
NOTE: the backup directory itself is always excluded by HB

[jim@mb hb-1538]$ hb backup -c testbackup .
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Copied HB program to /Users/jim/hb-1538/testbackup/hb#1538
This is backup version: 0
Dedup is not enabled
/Users/jim/hb-1538
/Users/jim/hb-1538/CHANGELOG
/Users/jim/hb-1538/README
/Users/jim/hb-1538/doc
/Users/jim/hb-1538/doc/CREDIT
/Users/jim/hb-1538/doc/backup-bouncer.out
/Users/jim/hb-1538/doc/backup-bouncer.sh
/Users/jim/hb-1538/doc/backup-bouncer.txt
/Users/jim/hb-1538/doc/dedup.info
/Users/jim/hb-1538/doc/dest.conf.examples
/Users/jim/hb-1538/doc/dest.conf.examples/README
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.b2
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.cloudfiles
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.dav
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.dir
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.ftp
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.glac
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.google
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.imap
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.openstack
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.rclone
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.rsync
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.s3
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.shell
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.ssh
/Users/jim/hb-1538/doc/dest.conf.examples/exdirshell.py
/Users/jim/hb-1538/doc/dest.conf.examples/rclone.py
/Users/jim/hb-1538/doc/inex.conf.example
/Users/jim/hb-1538/doc/mount.info
/Users/jim/hb-1538/doc/security
/Users/jim/hb-1538/hb

Time: 0.7s
Checked: 34 paths, 13601007 bytes, 13 MB
Saved: 34 paths, 13601007 bytes, 13 MB
Excluded: 1
Dupbytes: 0
Compression: 49%, 2.0:1
Space: 6.9 MB, 7.0 MB total
No errors

Now back it up again - backups are always incremental

[jim@mb hb-1538]$ hb backup -c testbackup .
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
This is backup version: 1
Dedup is not enabled

Time: 0.1s
Checked: 34 paths, 13601007 bytes, 13 MB
Saved: 3 paths, 0 bytes, 0
Excluded: 1
No errors

Create newfile with a line of test data

[jim@mb hb-1538]$ echo some test data>newfile

Do another backup of the whole install directory.
HashBackup only saves the changes.

[jim@mb hb-1538]$ hb backup -c testbackup .
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
This is backup version: 2
Dedup is not enabled
/Users/jim/hb-1538
/Users/jim/hb-1538/newfile

Time: 0.1s
Checked: 35 paths, 13601022 bytes, 13 MB
Saved: 5 paths, 15 bytes, 15 B
Excluded: 1
Dupbytes: 0
Space: 64 B, 7.0 MB total
No errors

Show the latest version of files in the backup

[jim@mb hb-1538]$ hb ls -c testbackup
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 2
Showing most recent version
/  (parent, partial)
/Users  (parent, partial)
/Users/jim  (parent, partial)
/Users/jim/hb-1538
/Users/jim/hb-1538/CHANGELOG
/Users/jim/hb-1538/README
/Users/jim/hb-1538/doc
/Users/jim/hb-1538/doc/CREDIT
/Users/jim/hb-1538/doc/backup-bouncer.out
/Users/jim/hb-1538/doc/backup-bouncer.sh
/Users/jim/hb-1538/doc/backup-bouncer.txt
/Users/jim/hb-1538/doc/dedup.info
/Users/jim/hb-1538/doc/dest.conf.examples
/Users/jim/hb-1538/doc/dest.conf.examples/README
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.b2
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.cloudfiles
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.dav
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.dir
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.ftp
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.glac
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.google
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.imap
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.openstack
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.rclone
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.rsync
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.s3
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.shell
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.ssh
/Users/jim/hb-1538/doc/dest.conf.examples/exdirshell.py
/Users/jim/hb-1538/doc/dest.conf.examples/rclone.py
/Users/jim/hb-1538/doc/inex.conf.example
/Users/jim/hb-1538/doc/mount.info
/Users/jim/hb-1538/doc/security
/Users/jim/hb-1538/hb
/Users/jim/hb-1538/newfile

Remove a file (in this case a directory) from the backup.  The complete path is required.

[jim@mb hb-1538]$ hb rm -c testbackup /Users/jim/hb-1538
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 2
Dedup enabled, 0% of current
Removing all versions of requested files
Removing path /Users/jim/hb-1538
Removed: 6.9 MB
Space: -6.9 MB, 139 KB total

Now what's in the backup?  We deleted almost everything!
NOTE: you can disable the rm command with a config option.

[jim@mb hb-1538]$ hb ls -c testbackup
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 2
Showing most recent version
/  (parent, partial)
/Users  (parent, partial)
/Users/jim  (parent, partial)
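
As noted above, the rm command can be disabled with a config option.  A sketch,
assuming the disable-commands setting (shown later in the config listing) takes
a list of command names - check the Config page for the exact value syntax:

```
hb config -c testbackup disable-commands rm
```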

Back up a single file

[jim@mb hb-1538]$ hb backup -c testbackup newfile
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
This is backup version: 3
Dedup is not enabled
/Users/jim/hb-1538/newfile

Time: 0.1s
Checked: 5 paths, 15 bytes, 15 B
Saved: 5 paths, 15 bytes, 15 B
Excluded: 0
Dupbytes: 0
Space: 64 B, 139 KB total
No errors

Back it up again; nothing is saved since it wasn't changed

[jim@mb hb-1538]$ hb backup -c testbackup newfile
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
This is backup version: 4
Dedup is not enabled

Time: 0.1s
Checked: 5 paths, 15 bytes, 15 B
Saved: 4 paths, 0 bytes, 0
Excluded: 0
No errors

Add a line of data to the test file

[jim@mb hb-1538]$ echo more test data>>newfile

Back it up again; this time it is saved because it changed

[jim@mb hb-1538]$ hb backup -c testbackup newfile
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
This is backup version: 5
Dedup is not enabled
/Users/jim/hb-1538/newfile

Time: 0.1s
Checked: 5 paths, 30 bytes, 30 B
Saved: 5 paths, 30 bytes, 30 B
Excluded: 0
Dupbytes: 0
Space: 80 B, 139 KB total
No errors

Do another backup listing with -a to show all versions.
The version number of each item is on the left.

[jim@mb hb-1538]$ hb ls -c testbackup -a
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 5
Showing all versions
  1 /  (parent, partial)
  1 /Users  (parent, partial)
  1 /Users/jim  (parent, partial)
  3 /Users/jim/hb-1538  (parent, partial)
  3 /Users/jim/hb-1538/newfile
  5 /Users/jim/hb-1538/newfile

Set aside the original test file for comparison

[jim@mb hb-1538]$ mv newfile newfile.bak

Restore a copy of the test file

[jim@mb hb-1538]$ hb get -c testbackup /Users/jim/hb-1538/newfile
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 5
Restoring most recent version

Restoring newfile to /Users/jim/hb-1538
/Users/jim/hb-1538/newfile
Restored /Users/jim/hb-1538/newfile to /Users/jim/hb-1538/newfile
No errors

Does it match the copy we set aside?  Yep

[jim@mb hb-1538]$ cmp newfile.bak newfile

Show contents of the test file again

[jim@mb hb-1538]$ cat newfile
some test data
more test data

Restore the first version of the test file using the -r option

[jim@mb hb-1538]$ hb get -r3 -c testbackup /Users/jim/hb-1538/newfile
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 5
Restoring from version: 3

Restoring newfile to /Users/jim/hb-1538
Path already exists and is newer than backup file: /Users/jim/hb-1538/newfile
  Existing file last modified on: 2016-07-13 14:39:48
  Backup file last modified on:   2016-07-13 14:38:18
Warning: existing file will be deleted after restore!
Restore? yes
/Users/jim/hb-1538/newfile
Restored /Users/jim/hb-1538/newfile to /Users/jim/hb-1538/newfile
No errors

Now what did we get?  The original version of the test file

[jim@mb hb-1538]$ cat newfile
some test data

Set the original version aside

[jim@mb hb-1538]$ mv newfile newfile.bak

Restore from version 4.  Since there is no version 4 of this file,
HashBackup restores from the next version down - version 3.

[jim@mb hb-1538]$ hb get -r4 -c testbackup /Users/jim/hb-1538/newfile
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 5
Restoring from version: 4

Restoring newfile to /Users/jim/hb-1538
/Users/jim/hb-1538/newfile
Restored /Users/jim/hb-1538/newfile to /Users/jim/hb-1538/newfile
No errors

With -r4 (or -r3) we get the original file, not the current version

[jim@mb hb-1538]$ cmp newfile newfile.bak

[jim@mb hb-1538]$ cat newfile
some test data

Show backup contents, again with -a to show all versions of files

[jim@mb hb-1538]$ hb ls -c testbackup -a
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 5
Showing all versions
  1 /  (parent, partial)
  1 /Users  (parent, partial)
  1 /Users/jim  (parent, partial)
  3 /Users/jim/hb-1538  (parent, partial)
  3 /Users/jim/hb-1538/newfile
  5 /Users/jim/hb-1538/newfile

Run retain -m1 (max 1 copy) to keep only 1 version of every file

[jim@mb hb-1538]$ hb retain -c testbackup -m1
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Default retention time: all
Option -x (deleted file retention) is required when -t and -s are omitted

Uh, -x sets how long to keep files that the backup command has noticed were deleted

[jim@mb hb-1538]$ hb retain -c testbackup -m1 -x1
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Default retention time: all
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 5
Dedup enabled, 0% of current
Backup finished at: 2016-07-13 14:39:52
Unrecognized time.  Use one of: _y _q _m _w _d _h _n _s, where _ is a number > 0 and
y = years  q = quarters  m = months  w = weeks  d = days  h = hours  n = minutes  s = seconds
Check -x option

Okay, okay - keep deleted files in the backup for 1 day.  Sheesh...

[jim@mb hb-1538]$ hb retain -c testbackup -m1 -x1d
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Default retention time: all
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 5
Dedup enabled, 0% of current
Backup finished at: 2016-07-13 14:39:52
Deleted file retention time: 1d (keep files since 2016-07-12 14:39:52)
Maximum copies retained: 1
Removed: 64 B
Space: -64 B, 139 KB total
12 files deleted, 5 files retained

12 files deleted -- huh?  They were actually just directory path stubs.
Now what's in the backup?  Only 1 version of the test file

[jim@mb hb-1538]$ hb ls -c testbackup -a
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 5
Showing all versions
  5 /  (parent, partial)
  5 /Users  (parent, partial)
  5 /Users/jim  (parent, partial)
  5 /Users/jim/hb-1538  (parent, partial)
  5 /Users/jim/hb-1538/newfile

HashBackup has more config settings; let's display them.
Read the Config page online for details

[jim@mb hb-1538]$ hb config -c testbackup
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Current config version: 6

admin-passphrase
arc-size-limit 100mb
audit-commands
backup-linux-attrs False
cache-size-limit -1
copy-executable False
dbrev 20
dedup-mem 0
disable-commands
enable-commands
hfs-compress False
no-backup-ext
no-backup-tag
no-compress-ext
no-dedup-ext
pack-age-days 30
pack-bytes-free 1MB
pack-percent-free 50
pack-remote-archives False
remote-update normal
simulated-backup False

Activate dedup for this backup with a config setting.
You can also use the -D<mem> backup command line option.
HashBackup will only use what it needs, not all of it at once.
IMPORTANT: don't use more than half your free memory!
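
As a sketch of the command-line route (the 1gb value and its exact suffix
form are an assumption here, mirroring the dedup-mem setting):

```
hb backup -c testbackup -D1gb .
```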

[jim@mb hb-1538]$ hb config -c testbackup dedup-mem 1gb
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Current config version: 6

Set dedup-mem to 1gb (was 0) for next backup

See that dedup-mem has been changed

[jim@mb hb-1538]$ hb config -c testbackup
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Current config version: 6

admin-passphrase
arc-size-limit 100mb
audit-commands
backup-linux-attrs False
cache-size-limit -1
copy-executable False
dbrev 20
dedup-mem 1gb
disable-commands
enable-commands
hfs-compress False
no-backup-ext
no-backup-tag
no-compress-ext
no-dedup-ext
pack-age-days 30
pack-bytes-free 1MB
pack-percent-free 50
pack-remote-archives False
remote-update normal
simulated-backup False

Before doing production backups, please read all of the documentation on the HashBackup web site.

For an automated production backup of a whole system:

- su to root
- as root, create a new backup directory with hb init, probably at the root level (this example uses /hbdata)
- set the dedup-mem config option
- edit inex.conf and add files and directories you don't want to save
- the backup command would be: # hb backup -c /hbdata /
- to automate nightly backups, use this root crontab entry (all on 1 line):
  00 02 * * * root /usr/local/bin/hb log backup -c /hbdata /; /usr/local/bin/hb log retain -c /hbdata -s30d12m; /usr/local/bin/hb log selftest -c /hbdata -v4 --inc 1d/30d
- this will:
  * run a backup at 2am every day
  * retain the last 30 days of backups + one every month for the last 12 months
  * do a selftest to check the backup
  * download (if necessary) and verify all backup data, spread over 30 days (--inc 1d/30d)
  * WARNING: this selftest -v4 can be expensive for large backups because of download fees!  Adjust accordingly.
  * the dest verify command does faster & cheaper remote verification, but is less thorough
  * log all output to /hbdata/logs with timestamps for every line
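
The inex.conf edits use the same ex syntax shown earlier in this guide.  As a
sketch, extra exclusions on a Linux server might look like this (these paths
are hypothetical examples, not defaults):

```
ex /proc/
ex /sys/
ex /home/*/.cache/
ex *.tmp
```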


SENDING BACKUP DATA OFFSITE

One of HashBackup's strengths is sending backups offsite to protect against disaster.  Since disk space is cheap these days, it is recommended to keep a local backup on site as well as a remote offsite backup.  Keeping a local copy makes HashBackup operations more efficient, especially if you lose a disk and have to restore a lot of data.  If you don't want to keep a complete local copy, use the cache-size-limit config option.
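
For example, to keep only a bounded local cache of backup data (the 10gb value
is an arbitrary illustration, not a recommendation):

```
hb config -c /hbdata cache-size-limit 10gb
```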

To set up remote backups, create a dest.conf text file in the backup directory.  The doc/dest.conf.examples directory has templates for many different storage systems such as Amazon S3, Google Storage, Backblaze B2, and others.  Be sure to read doc/dest.conf.examples/README first, then the example for your specific destination type.

For this quick start, we will continue to use the testbackup backup directory and newfile data file we created earlier.  Here we go:

What's in the backup directory now?

[jim@mb hb-1538]$ ls testbackup
HBID        arc.5.0        cacerts.crt    dest.db        hash.db        hb#1538        hb.db        hb.lock        inex.conf    key.conf

That file arc.5.0 is the backup data file created by the backup command.  Arc files contain compressed, encrypted user data.

Let's set up an Amazon S3 destination first.  To do this, go to http://aws.amazon.com and create a free trial account.  At some point you will get an access key/id and a secret key.  These are your S3 access credentials.  Only the secret key needs to be protected; the access id is like a user id.  Now we use those to create a HashBackup destination.  Your S3 "bucket" name must be unique worldwide.  It's a good idea to use the dir option to separate your different backups within 1 bucket.

[jim@mb hb-1538]$ cat - >testbackup/dest.conf
destname s3
type s3
accesskey xxxaccesskeyxxx
secretkey xxxsecretkeyxxx
bucket somerandomuniquename

dir mb
^d (use control d to exit)
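
If you'd rather not type the file interactively, a shell heredoc produces the
same dest.conf (keys and bucket name are the same placeholders as above):

```shell
# Write dest.conf non-interactively with a heredoc instead of `cat -`.
# accesskey/secretkey/bucket are placeholders - substitute your own.
mkdir -p testbackup
cat > testbackup/dest.conf <<'EOF'
destname s3
type s3
accesskey xxxaccesskeyxxx
secretkey xxxsecretkeyxxx
bucket somerandomuniquename
dir mb
EOF
```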

Back up the doc directory again (we deleted it earlier with the rm command).  Notice the lines about copying to S3.  HashBackup creates a new arc file for this backup and copies all backup data to S3, including arc.5.0 from an earlier backup.  Now there is a local copy of the backup (in testbackup) and a remote copy on S3.

[jim@mb hb-1538]$ hb backup -c testbackup doc
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Using destinations in dest.conf
This is backup version: 6
Dedup enabled, 0% of current, 0% of max
/Users/jim/hb-1538/doc
/Users/jim/hb-1538/doc/CREDIT
/Users/jim/hb-1538/doc/backup-bouncer.out
/Users/jim/hb-1538/doc/backup-bouncer.sh
/Users/jim/hb-1538/doc/backup-bouncer.txt
/Users/jim/hb-1538/doc/dedup.info
/Users/jim/hb-1538/doc/dest.conf.examples
/Users/jim/hb-1538/doc/dest.conf.examples/README
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.b2
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.cloudfiles
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.dav
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.dir
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.ftp
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.glac
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.google
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.imap
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.openstack
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.rclone
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.rsync
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.s3
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.shell
/Users/jim/hb-1538/doc/dest.conf.examples/dest.conf.ssh
/Users/jim/hb-1538/doc/dest.conf.examples/exdirshell.py
/Users/jim/hb-1538/doc/dest.conf.examples/rclone.py
/Users/jim/hb-1538/doc/inex.conf.example
/Users/jim/hb-1538/doc/mount.info
/Users/jim/hb-1538/doc/security
Copied arc.5.0 to s3 (80 B 0s 481 B/s)
Copied arc.6.0 to s3 (58 KB 1s 31 KB/s)
Copied hb.db.0 to s3 (8.4 KB 0s 34 KB/s)
Copied dest.db to s3 (4.1 KB 0s 41 KB/s)

Time: 1.3s
Checked: 31 paths, 166774 bytes, 166 KB
Saved: 31 paths, 166774 bytes, 166 KB
Excluded: 0
Dupbytes: 0
Compression: 59%, 2.5:1
Space: 66 KB, 66 KB total
No errors


Make copies of key.conf and dest.conf and put them in the install directory.

[jim@mb hb-1538]$ cp testbackup/key.conf testbackup/dest.conf .

[jim@mb hb-1538]$ ls
CHANGELOG    README        dest.conf    doc        hb        key.conf    newfile        testbackup

Now remove the local copy of the backup data, as if we lost the whole disk containing the backup

[jim@mb hb-1538]$ rm -rf testbackup
[jim@mb hb-1538]$ ls
CHANGELOG    README        dest.conf    doc        hb        key.conf    newfile

Yikes, it's gone - no backup!  Wait, we have a remote copy on S3.  Here's how to get it back:

[jim@mb hb-1538]$ mkdir testbackup
[jim@mb hb-1538]$ cp key.conf dest.conf testbackup
[jim@mb hb-1538]$ hb recover -c testbackup
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Using destinations in dest.conf

Recovering backup files from destination: s3
Files will be copied to: /Users/jim/hb-1538/testbackup

Proceed with recovery? yes

Removed /Users/jim/hb-1538/testbackup/dest.db
Getting dest.db from s3
Getting hb.db from s3
  Queueing /Users/jim/hb-1538/testbackup/hb.db.0
  Waiting /Users/jim/hb-1538/testbackup/hb.db.0
  Loading /Users/jim/hb-1538/testbackup/hb.db.0
  Verified signature
Verified hb.db signature
Download size: 58432 Files: 2
Queueing arc.6.0 58 KB from s3 @ 10:54:36
Queueing arc.5.0 80 B from s3 @ 10:54:36

Backup files recovered to: /Users/jim/hb-1538/testbackup
Verify your backup is intact with the selftest command:
hb selftest -c testbackup

Did we get our backup data back?  Yep

[jim@mb hb-1538]$ ls testbackup
arc.5.0        arc.6.0        cacerts.crt    dest.conf    dest.db        hb.db        hb.db.0        hb.lock        hb.sig        key.conf

It says to run selftest...

[jim@mb hb-1538]$ hb selftest -c testbackup
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 6
Using destinations in dest.conf
Level -v2 check; higher -v levels check more backup data
Checking all versions
Checking database readable
Checked  database readable
Checking database integrity
Checked  database integrity
Checking paths I
Checked  paths I
Checking keys
Checked  keys
Checking arcs I
Checked  arcs I
Checking blocks I
Checked  26 blocks I    
Checking refs I
Checked  26 refs I    
Checking arcs II
Checked  arcs II
Checking dedup table
Checked  dedup table
Checking files
Checked  36 files
Checking paths II
Checked  paths II
Checking blocks II
Checked  blocks II
No errors

An S3 offsite backup came to the rescue.  Here's how to add a Google offsite backup:

1. Go to the Google Storage web page, https://cloud.google.com/storage
2. Create an account
3. Go to your Google Console
4. Go to the Google API section
5. Look around until you find Simple API key or Developer Keys (Google keeps changing the site)
6. You need 2 things: an access key and a secret key.  HB does not use OAuth; it uses the S3-compatible interface.
7. Add the Google destination to your dest.conf

[jim@mb hb-1538]$ cat - >>testbackup/dest.conf
 

destname google
type gs
accesskey mygoogleaccesskey
secretkey mygooglesecretkey
bucket uniquegooglebucket
dir mb
^d

Now do another doc backup.  It shouldn't save anything, but other things happen:

[jim@mb hb-1538]$ hb backup -c testbackup doc
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Using destinations in dest.conf
Copied HB program to /Users/jim/hb-1538/testbackup/hb#1538
Setting include/exclude defaults: /Users/jim/hb-1538/testbackup/inex.conf
This is backup version: 7
Sizing backup for dedup
Updating dedup information
Copied arc.5.0 to google (80 B 2s 37 B/s)
Copied arc.6.0 to google (58 KB 2s 26 KB/s)
Copied hb.db.0 to google (8.4 KB 0s 21 KB/s)
Writing hb.db.1
Copied hb.db.1 to google (8.6 KB 0s 16 KB/s)
Copied hb.db.1 to s3 (8.6 KB 0s 11 KB/s)
Copied dest.db to s3 (4.1 KB 0s 16 KB/s)
Copied dest.db to google (4.1 KB 0s 16 KB/s)
Removed hb.db.0 from s3
Removed hb.db.0 from google

Time: 2.7s
Checked: 31 paths, 166774 bytes, 166 KB
Saved: 4 paths, 0 bytes, 0
Excluded: 0
No errors

HashBackup copied all old backup data to the new Google destination, including arc.5.0 and arc.6.0.  This is an automatic sync that occurs at the beginning of every backup.  A similar thing happens if a destination is down during a backup: it will be "caught up" in the next backup.

Save a new file and see how that goes:

[jim@mb hb-1538]$ hb backup -c testbackup CHANGELOG
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Using destinations in dest.conf
This is backup version: 8
Dedup enabled, 0% of current, 0% of max
/Users/jim/hb-1538/CHANGELOG
Copied arc.8.0 to s3 (105 KB 2s 45 KB/s)
Copied arc.8.0 to google (105 KB 2s 43 KB/s)
Writing hb.db.2
Copied hb.db.2 to google (8.4 KB 0s 21 KB/s)
Copied hb.db.2 to s3 (8.4 KB 0s 11 KB/s)
Copied dest.db to s3 (4.1 KB 0s 42 KB/s)
Copied dest.db to google (4.1 KB 0s 13 KB/s)

Time: 1.2s
Checked: 5 paths, 291725 bytes, 291 KB
Saved: 5 paths, 291725 bytes, 291 KB
Excluded: 0
Dupbytes: 0
Compression: 60%, 2.6:1
Space: 114 KB, 181 KB total
No errors

This time a new arc file was created, arc.8.0, containing the backup data for the CHANGELOG file.  The new data was sent to both destinations.  No old data was sent since both destinations already had everything.

HashBackup keeps destinations synchronized when removing data too.  Here we remove the original test file:

[jim@mb hb-1538]$ hb rm -c testbackup `pwd`/newfile
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 8
Using destinations in dest.conf
Dedup enabled, 0% of current
Removing all versions of requested files
Removing path /Users/jim/hb-1538/newfile
Writing hb.db.3
Removed: 80 B
Space: 7.7 KB, 189 KB total
Copied hb.db.3 to google (7.8 KB 0s 11 KB/s)
Copied hb.db.3 to s3 (7.8 KB 0s 10 KB/s)
Copied dest.db to google (6.1 KB 0s 10 KB/s)
Copied dest.db to s3 (6.1 KB 0s 10 KB/s)
Removed arc.5.0 from s3
Removed arc.5.0 from google

It's easy to migrate to a new storage account: just add it to the dest.conf file, do a backup, and HashBackup will copy all backup data to the new destination.  We're going to move everything to Backblaze B2, then delete everything from S3 and Google.  First add the B2 destination:

[jim@mb hb-1538]$ cat ->>testbackup/dest.conf

destname bb
type b2
bucket b2globaluniquebucketname
dir quickstart
accountid 012345789
appkey 0123456789ABCDEF

^d

We'll use the dest command to sync everything, though any backup will do the same thing.

[jim@mb hb-1538]$ hb dest -c testbackup sync
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Using destinations in dest.conf
Writing hb.db.4
Copied hb.db.4 to s3 (2.6 KB 0s 8.3 KB/s)
Copied hb.db.4 to google (2.6 KB 0s 5.9 KB/s)
Copied arc.8.0 to bb (105 KB 4s 25 KB/s)
Copied hb.db.1 to bb (8.6 KB 1s 4.5 KB/s)
Copied hb.db.2 to bb (8.4 KB 1s 5.6 KB/s)
Copied hb.db.3 to bb (7.8 KB 1s 6.6 KB/s)
Waiting for destinations: bb
Copied arc.6.0 to bb (58 KB 10s 5.6 KB/s)
Copied hb.db.4 to bb (2.6 KB 1s 1.6 KB/s)
Copied dest.db to s3 (7.1 KB 1s 6.7 KB/s)
Copied dest.db to google (7.1 KB 1s 6.0 KB/s)
Copied dest.db to bb (7.1 KB 3s 1.8 KB/s)

Remove the backup from S3 and Google:

[jim@mb hb-1538]$ hb dest -c testbackup clear s3 google
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Using destinations in dest.conf

WARNING: this will delete all files from destinations: s3 google
Proceed? yes
Removed arc.6.0 from s3
Removed arc.8.0 from s3
Removed hb.db.1 from s3
Removed hb.db.3 from s3
Removed hb.db.2 from s3
Removed hb.db.4 from s3
Removed dest.db from s3
Removed DESTID from s3
Removed arc.8.0 from google
Removed hb.db.1 from google
Removed hb.db.2 from google
Removed arc.6.0 from google
Removed hb.db.3 from google
Removed hb.db.4 from google
Removed dest.db from google
Removed DESTID from google
Copied dest.db to bb (7.1 KB 3s 2.1 KB/s)

You now need to edit the dest.conf file to:
- add the off keyword to S3 and Google destinations
- or remove them completely from dest.conf
If you don't, the next backup will sync everything back to both of them.
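
A disabled stanza might look like this (the placement of the off keyword is an
assumption; check doc/dest.conf.examples/README for the exact form):

```
destname s3
off
type s3
accesskey xxxaccesskeyxxx
secretkey xxxsecretkeyxxx
bucket somerandomuniquename
dir mb
```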

After disabling the S3 and Google destinations, make sure we can still recover our backup directory from B2.
Don't forget to first make a copy of the new dest.conf with S3 and Google disabled and B2 added.

[jim@mb hb-1538]$ cp testbackup/dest.conf .
[jim@mb hb-1538]$ rm -rf testbackup
[jim@mb hb-1538]$ mkdir testbackup
[jim@mb hb-1538]$ cp dest.conf key.conf testbackup
[jim@mb hb-1538]$ hb recover -c testbackup
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Using destinations in dest.conf
Warning: destination is disabled: s3
Warning: destination is disabled: google

Recovering backup files from destination: bb
Files will be copied to: /Users/jim/hb-1538/testbackup

Proceed with recovery? yes

Removed /Users/jim/hb-1538/testbackup/dest.db
Getting dest.db from bb
Getting hb.db from bb
  Queueing /Users/jim/hb-1538/testbackup/hb.db.1
  Queueing /Users/jim/hb-1538/testbackup/hb.db.2
  Queueing /Users/jim/hb-1538/testbackup/hb.db.3
  Queueing /Users/jim/hb-1538/testbackup/hb.db.4
  Waiting /Users/jim/hb-1538/testbackup/hb.db.1
  Loading /Users/jim/hb-1538/testbackup/hb.db.1
  Verified signature
  Loading /Users/jim/hb-1538/testbackup/hb.db.2
  Verified signature
  Waiting /Users/jim/hb-1538/testbackup/hb.db.3
  Loading /Users/jim/hb-1538/testbackup/hb.db.3
  Verified signature
  Loading /Users/jim/hb-1538/testbackup/hb.db.4
  Verified signature
Verified hb.db signature
Download size: 164224 Files: 2
Queueing arc.8.0 105 KB from bb @ 15:07:29
Queueing arc.6.0 58 KB from bb @ 15:07:29

Backup files recovered to: /Users/jim/hb-1538/testbackup
Verify your backup is intact with the selftest command:
hb selftest -c testbackup

[jim@mb hb-1538]$ hb selftest -c testbackup
HashBackup build #1538 Copyright 2009-2016 HashBackup, LLC
Backup directory: /Users/jim/hb-1538/testbackup
Most recent backup version: 8
Using destinations in dest.conf
Warning: destination is disabled: s3
Warning: destination is disabled: google
Level -v2 check; higher -v levels check more backup data
Checking all versions
Checking database readable
Checked  database readable
Checking database integrity
Checked  database integrity
Checking paths I
Checked  paths I
Checking keys
Checked  keys
Checking arcs I
Checked  arcs I
Checking blocks I
Checked  32 blocks I    
Checking refs I
Checked  32 refs I    
Checking arcs II
Checked  arcs II
Checking dedup table
Checked  dedup table
Checking files
Checked  40 files
Checking paths II
Checked  paths II
Checking blocks II
Checked  blocks II
No errors

Looks good - we've migrated all our backup data to a new storage provider using a few HashBackup commands.