
Quick Start

Here's a quick "Getting Started" guide for HashBackup.  Run these commands
on your own system to get a feel for using HashBackup, then read the online
documentation for the details.

For this example, hb-mac-64bit.tar.gz was downloaded from the
HashBackup Download page to /Users/jim.  Here we go!

What's in the tar file?

[jim@mb ~]$ tar -tf hb-mac*
hb

This is the HashBackup installer.  Extract it from the tar file:

[jim@mb ~]$ tar -xzf hb-mac-64bit.tar.gz

Run the installer to download the real HashBackup program

[jim@mb ~]$ ./hb
HashBackup installer #6 Copyright 2009-2019 HashBackup, LLC
Downloading http://upgrade.hashbackup.com/2428/hb.r2428.Darwin.i386.bz2
Verified file signature
Installed #2428 as /Users/jim/hb

Become root to install the executable file to /usr/local/bin (Linux or macOS)

[jim@mb]$ sudo sh
Password:
sh-3.2# cp hb /usr/local/bin
sh-3.2# exit  (control D)
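
If you prefer a single command instead of a root shell, this should work just as well:

$ sudo cp hb /usr/local/bin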

Create a play backup directory named testbackup in the home directory

[jim@mb]$ hb init -c testbackup
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Permissions set for owner access only
Created key file /Users/jim/testbackup/key.conf
Key file set to read-only
Setting include/exclude defaults: /Users/jim/testbackup/inex.conf

VERY IMPORTANT: your backup is encrypted and can only be accessed with
the encryption key, stored in the file:

    /Users/jim/testbackup/key.conf

You MUST make copies of this file and store them in secure locations,
separate from your computer and backup data.  If your hard drive fails, 
you will need this key to restore your files.  If you have setup remote
destinations in dest.conf, that file should be copied too.
        
Backup directory initialized

What's in a HashBackup backup directory?

[jim@mb]$ ls testbackup
cacerts.crt    hash.db        hb.db        hb.lock        inex.conf    key.conf

What does a key file look like?  MAKE A COPY OF IT FOR REAL BACKUPS!

[jim@mb]$ cat testbackup/key.conf
# HashBackup Key File - DO NOT EDIT!
Version 1
Build 2428
Created Wed Aug 21 16:34:42 2019 1566419682.16
Host Darwin | mb | 10.8.0 | Darwin Kernel Version 10.8.0: Tue Jun  7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 | i386
Keyfrom random
Key 0403 56b8 9f93 dc37 50e6 f7d5 40ce e71e 26dc a602 2e21 507d 314f eabc b9ac 3c78
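
As the note above says, make a copy of key.conf and keep it somewhere safe, separate from the machine being backed up.  A minimal sketch, assuming a USB drive mounted at /Volumes/usbkey (the path is hypothetical):

$ cp testbackup/key.conf /Volumes/usbkey/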

The inex.conf file is an editable list of files excluded from the backup

[jim@mb]$ cat testbackup/inex.conf
ex /.fseventsd
ex /.hotfiles.btree
ex /.Spotlight-V100
ex /.Trashes
ex /Users/*/.bash_history
ex /Users/*/.emacs.d
ex /Users/*/Library/Application Support/MobileSync
ex /Users/*/Library/Application Support/SyncServices
ex /Users/*/Library/Caches/
ex /Users/*/Library/PubSub/Database
ex /Users/*/Library/PubSub/Downloads
ex /Users/*/Library/PubSub/Feeds
ex /Volumes/
ex /cores/
ex *.vmem
ex /private/tmp/
ex /private/var/db/dyld/dyld_*
ex /private/var/db/Spotlight-V100
ex /private/var/vm/
ex /tmp/
ex /var/tmp/

Create and back up a data directory

[jim@mb ~]$ mkdir data
[jim@mb ~]$ echo Hello There >data/myfile
[jim@mb ~]$ hb backup -c testbackup data
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Backup start: 2019-08-21 16:39:30
Copied HB program to /Users/jim/testbackup/hb#2428
This is backup version: 0
Dedup not enabled; use -Dmemsize to enable
/
/Users
/Users/jim
/Users/jim/data
/Users/jim/data/myfile
/Users/jim/testbackup
/Users/jim/testbackup/inex.conf

Time: 0.2s
CPU:  0.1s, 56%
Mem:  56 MB
Checked: 7 paths, 534 bytes, 534 bytes
Saved: 7 paths, 534 bytes, 534 bytes
Excluded: 0
Dupbytes: 0
Compression: 49%, 2.0:1
Efficiency: 0.00 MB reduced/cpusec
Space: +272 bytes, 147 KB total
No errors

Now back it up again; backups are always incremental

[jim@mb]$ hb backup -c testbackup data
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Backup start: 2019-08-21 16:40:45
This is backup version: 1
Dedup not enabled; use -Dmemsize to enable
/
/Users
/Users/jim
/Users/jim/testbackup

Time: 0.1s
CPU:  0.0s, 89%
Mem:  55 MB
Checked: 7 paths, 534 bytes, 534 bytes
Saved: 4 paths, 0 bytes, 0 bytes
Excluded: 0
No errors

myfile wasn't saved since it didn't change.  Create newfile with some test data

[jim@mb ~]$ echo more testing >data/newfile

Do another backup of the data directory.
HashBackup only saves the changes.

[jim@mb ~]$ hb backup -c testbackup data
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Backup start: 2019-08-21 16:47:02
This is backup version: 2
Dedup not enabled; use -Dmemsize to enable
/
/Users
/Users/jim
/Users/jim/data
/Users/jim/data/newfile
/Users/jim/testbackup

Time: 0.1s
CPU:  0.0s, 88%
Mem:  55 MB
Checked: 8 paths, 547 bytes, 547 bytes
Saved: 6 paths, 13 bytes, 13 bytes
Excluded: 0
Dupbytes: 0
Space: +16 bytes, 147 KB total
No errors

Show the latest version of files in the backup

[jim@mb ~]$ hb ls -c testbackup
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 2
Showing most recent version, use -ad for all
/  (parent, partial)
/Users  (parent, partial)
/Users/jim  (parent, partial)
/Users/jim/data
/Users/jim/data/myfile
/Users/jim/data/newfile
/Users/jim/testbackup  (parent, partial)
/Users/jim/testbackup/inex.conf

Remove a file from the backup.  The complete path is required.
NOTE: you can disable the rm command with a config option.
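
For example, the disable-commands config option (it appears in the config listing later in this guide) can block rm.  A sketch, assuming the option takes a command name as its value:

$ hb config -c testbackup disable-commands rm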

[jim@mb]$ hb rm -c testbackup /Users/jim/data/myfile
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 2
Dedup loaded, 0% of current size
Removing all versions of requested files
Removing path /Users/jim/data/myfile
Packing deferred until: 2019-08-28 16:39:30 (see pack-age-days config option)
Mem: 37 MB
Removed: 0 bytes, 1 files, 0 arc files
Space: +0 bytes, 147 KB total

Now what's in the backup?

[jim@mb ~]$ hb ls -c testbackup
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 2
Showing most recent version, use -ad for all
/  (parent, partial)
/Users  (parent, partial)
/Users/jim  (parent, partial)
/Users/jim/data
/Users/jim/data/newfile
/Users/jim/testbackup  (parent, partial)
/Users/jim/testbackup/inex.conf

Add a line of data to the test file

[jim@mb]$ echo more test data >>data/newfile

Back up the data directory again

[jim@mb ~]$ hb backup -c testbackup data
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Backup start: 2019-08-21 16:54:36
This is backup version: 3
Dedup not enabled; use -Dmemsize to enable
/
/Users
/Users/jim
/Users/jim/data/myfile
/Users/jim/data/newfile
/Users/jim/data
/Users/jim/testbackup

Time: 0.1s
CPU:  0.0s, 86%
Mem:  55 MB
Checked: 8 paths, 562 bytes, 562 bytes
Saved: 7 paths, 40 bytes, 40 bytes
Excluded: 0
Dupbytes: 0
Space: +48 bytes, 147 KB total
No errors

Do another backup listing with -a to show all versions.
The backup version number of each item is on the left.

[jim@mb ~]$ hb ls -c testbackup -a
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 3
Showing all versions
   0 /  (parent, partial)
   0 /Users  (parent, partial)
   0 /Users/jim  (parent, partial)
   0 /Users/jim/data
   3 /Users/jim/data/myfile
   2 /Users/jim/data/newfile
   3 /Users/jim/data/newfile
   0 /Users/jim/testbackup  (parent, partial)
   0 /Users/jim/testbackup/inex.conf

Restore a copy of the test file to the current directory /Users/jim
To put files back in their original location, use --orig
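
For example, an in-place restore (not run in this walkthrough) would look something like:

$ hb get -c testbackup --orig /Users/jim/data/newfile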

[jim@mb ~]$ hb get -c testbackup /Users/jim/data/newfile
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 3
Restoring most recent version

Restoring newfile to /Users/jim
/Users/jim/newfile
Restored /Users/jim/data/newfile to /Users/jim/newfile
No errors

Does it match the original?  Yep

[jim@mb]$ cmp newfile data/newfile

Show the contents of the restored test file

[jim@mb ~]$ cat newfile
more testing
more test data

Restore the original version of the test file from backup version 2 using the -r option

[jim@mb ~]$ hb get -c testbackup /Users/jim/data/newfile -r2
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 3
Restoring from version: 2

Restoring newfile to /Users/jim
Path already exists and is newer than backup file: /Users/jim/newfile
  Existing file last modified on: 2019-08-21 16:54:22
  Backup file last modified on:   2019-08-21 16:46:48
Warning: existing file will be overwritten!
Restore? yes
/Users/jim/newfile
Restored /Users/jim/data/newfile to /Users/jim/newfile
No errors

Now what did we get?  The original version of newfile

[jim@mb]$ cat newfile
more testing

Show backup contents, again with -a to show all versions of files

[jim@mb]$ hb ls -c testbackup -a
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 3
Showing all versions
   0 /  (parent, partial)
   0 /Users  (parent, partial)
   0 /Users/jim  (parent, partial)
   0 /Users/jim/data
   3 /Users/jim/data/myfile
   2 /Users/jim/data/newfile
   3 /Users/jim/data/newfile
   0 /Users/jim/testbackup  (parent, partial)
   0 /Users/jim/testbackup/inex.conf

Run retain -m1 (max 1 copy) to keep only 1 version of every file

[jim@mb ~]$ hb retain -c testbackup -m1
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 3
Dedup loaded, 0% of current size
Backup finished at: 2019-08-21 16:54:36
Retention time: keep all
Keep all deleted files (use -x to limit)
Max copies: 1
Checking files
Checked 23 files
Checking 14 directories
Packing deferred until: 2019-08-28 16:39:30 (see pack-age-days config option)
Mem: 37 MB
Removed: 64 bytes, 11 files, 1 arc files
Space: -64 bytes, 147 KB total
23 files, 12 52% kept, 11 47% deleted

11 files deleted?  Most were just directory path stubs; the old version of newfile was removed too.
Now what's in the backup?  Only 1 version of newfile

[jim@mb ~]$ hb ls -c testbackup -a
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 3
Showing all versions
   0 /  (parent, partial)
   0 /Users  (parent, partial)
   0 /Users/jim  (parent, partial)
   3 /Users/jim/data
   3 /Users/jim/data/myfile
   3 /Users/jim/data/newfile
   0 /Users/jim/testbackup  (parent, partial)
   0 /Users/jim/testbackup/inex.conf
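
The -m1 option above keeps just one copy of everything, which is handy for a demo.  Real backups usually use time-based retention instead; a sketch using the -s option that also appears in the automation example later:

$ hb retain -c testbackup -s30d12m

This keeps the last 30 days of backups plus one backup per month for the previous 12 months.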

HashBackup has more config settings; let's display them.
Read the Config page for details.

[jim@mb]$ hb config -c testbackup
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Current config version: 4

arc-size-limit 100mb
backup-linux-attrs False
block-size 32K
block-size-ext 
cache-size-limit -1
copy-executable False
db-check-integrity selftest
db-history-days 3
dbid 858c-4799 (read-only)
dbrev 32 (read-only)
dedup-mem 0
disable-commands 
enable-commands 
no-backup-ext 
no-backup-tag 
no-compress-ext 
no-dedup-ext 
pack-age-days 30
pack-bytes-free 1MB
pack-combine-min 1MB
pack-download-limit 950MB
pack-percent-free 50
remote-update normal
retain-extra-versions True
shard-id  (read-only)
shard-output-days 30
simulated-backup False

Activate dedup for this backup with a config setting.
You can also use the -D<mem> backup command line option.
HashBackup will only use what it needs, not all of it at once.
IMPORTANT: don't use more than half your free memory!
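
A sketch of the command-line form, assuming -D accepts the same size syntax as the dedup-mem config value:

$ hb backup -c testbackup -D1gb data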

[jim@mb]$ hb config -c testbackup dedup-mem 1gb
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Current config version: 4

Set dedup-mem to 1gb (was 0) for next backup

See that dedup-mem has been changed

[jim@mb ~]$ hb config -c testbackup
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Current config version: 4

arc-size-limit 100mb
backup-linux-attrs False
block-size 32K
block-size-ext 
cache-size-limit -1
copy-executable False
db-check-integrity selftest
db-history-days 3
dbid 858c-4799 (read-only)
dbrev 32 (read-only)
dedup-mem 1gb             <== changed it
disable-commands 
enable-commands 
no-backup-ext 
no-backup-tag 
no-compress-ext 
no-dedup-ext 
pack-age-days 30
pack-bytes-free 1MB
pack-combine-min 1MB
pack-download-limit 950MB
pack-percent-free 50
remote-update normal
retain-extra-versions True
shard-id  (read-only)
shard-output-days 30
simulated-backup False

Before doing production backups, please read all of the documentation on the HashBackup web site.

For an automated production backup of a whole system (a setup sketch follows this list):

- su to root
- as root, create a new backup directory with hb init, probably at the root level (this example uses /hbdata)
- set the dedup-mem config option
- edit inex.conf and add files and directories you don't want to save
- the backup command would be: # hb backup -c /hbdata /
- to automate nightly backups, use this root crontab entry (all on 1 line):
  00 03 * * * root /usr/local/bin/hb log backup -c /hbdata /; /usr/local/bin/hb log retain -c /hbdata -s30d12m; /usr/local/bin/hb log selftest -c /hbdata -v4 --inc 1d/30d
- this will:
  * run a backup at 3am every day
  * retain the last 30 days of backups + one every month for the last 12 months. Adjust as needed
  * do a selftest to check the backup
  * download (if necessary) and verify all backup data incrementally over a 30-day cycle
  * WARNING: selftest -v4 can be expensive for large backups because of download fees!  Adjust accordingly.
  * the dest verify command does faster & cheaper remote verification, but is less thorough
  * log all output to /hbdata/logs with timestamps for every line
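
Put together, the one-time setup might look roughly like this (paths and sizes are examples, not requirements):

# hb init -c /hbdata
# hb config -c /hbdata dedup-mem 1gb
# vi /hbdata/inex.conf        (add your own excludes)
(then install the crontab entry shown above)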


SENDING BACKUP DATA OFFSITE

One of HashBackup's strengths is sending backups offsite to protect against disaster.  Since disk space is cheap these days, it is recommended that you keep a local backup on site as well as a remote offsite backup.  Keeping a local copy makes HashBackup operations more efficient, especially if you lose a disk and have to restore a lot of data.  If you don't want to keep a complete local copy, use the cache-size-limit config option.
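
For example, to cap the local cache at roughly 10 GB (a sketch; the value is just an illustration, using the same size syntax as the other config options):

$ hb config -c testbackup cache-size-limit 10gb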

To set up remote backups, create a dest.conf text file in the backup directory.  The Destinations page has more information and examples for many different storage systems such as Amazon S3, Google Storage, Backblaze B2, and others.

For this quick start, we will continue to use the testbackup backup directory and newfile data file we created earlier.  Here we go:

What's in the backup directory now?

[jim@mb ~]$ ls testbackup
arc.0.0 cacerts.crt hash.db hb.db inex.conf
arc.3.0 dest.db hb#2428 hb.lock key.conf

The arc.V.N files contain the deduplicated, compressed, encrypted user data created by the backup command.

Let's set up an Amazon S3 destination first.  To do this, go to http://aws.amazon.com and create a free trial account.  At some point you will get an access key/id and a secret key.  These are your S3 access credentials.  Only the secret key needs to be protected; the access id is like a user id.  Now we use those to create a HashBackup destination.  Your S3 "bucket" name must be unique worldwide.  It's a good idea to use the dir option to separate your different backups within one bucket.

[jim@mb]$ cat - >testbackup/dest.conf
destname s3      
type s3
accesskey xxxaccesskeyxxx
secretkey xxxsecretkeyxxx
bucket somerandomuniquename
dir mb
^d (use control d to exit)

Create a new file and back up the data directory again.  Notice the lines about copying to S3.  HashBackup creates a new arc file for this backup and copies all backup data to S3, including arc files from an earlier backup.  Now there is a local copy of the backup (in testbackup) and a remote copy on S3.

[jim@mb ~]$ echo another test >data/file3
[jim@mb ~]$ hb backup -c testbackup data
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Backup start: 2019-08-21 18:40:36
Using destinations in dest.conf
This is backup version: 4
Dedup enabled, 0% of current size, 0% of max size
Updating dedup information
/
/Users
/Users/jim
/Users/jim/data
/Users/jim/data/file3
/Users/jim/testbackup
Copied arc.0.0 to s3 (320 bytes 0s 1.9 KB/s)
Copied arc.3.0 to s3 (96 bytes 0s 503 bytes/s)
Copied arc.4.0 to s3 (64 bytes 0s 577 bytes/s)
Writing hb.db.0
Copied hb.db.0 to s3 (6.3 KB 0s 27 KB/s)
Copied dest.db to s3 (36 KB 1s 21 KB/s)

Time: 0.6s
CPU:  0.1s, 18%
Mem:  62 MB
Checked: 9 paths, 575 bytes, 575 bytes
Saved: 6 paths, 13 bytes, 13 bytes
Excluded: 0
Dupbytes: 0
Space: +16 bytes, 37 KB total
No errors

Make a copy of our key.conf file and a copy of dest.conf

[jim@mb]$ cp testbackup/key.conf testbackup/dest.conf .

Now remove the local copy of the backup data, as if we lost the whole disk containing the backup

[jim@mb]$ rm -rf testbackup

Yikes, it's gone - no backup at all!  Wait, we have a remote copy on S3.  Here's how to get it back:

[jim@mb]$ mkdir testbackup
[jim@mb]$ cp key.conf dest.conf testbackup
[jim@mb ~]$ hb recover -c testbackup
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Using destinations in dest.conf

Recovering backup files from destination: s3
Files will be copied to: /Users/jim/testbackup

Proceed with recovery? yes

Removed /Users/jim/testbackup/dest.db
Getting dest.db from s3
Getting hb.db from s3
Queueing hb.db files

Waiting for /Users/jim/testbackup/hb.db.0
Loading hb.db.0
Verified hb.db.0 signature

Verified hb.db signature
Checking db integrity
Removing hb.db.N files
Queueing arc files from s3
Waiting for 1 arc files...       

Backup files recovered to: /Users/jim/testbackup
Verify the backup with the selftest command:
  $ hb selftest -c testbackup
If inex.conf was customized, restore it with the hb get command.
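
The recover output notes that a customized inex.conf is not copied back automatically, but it is saved in the backup like any other file, so hb get can restore it.  A sketch using the --orig option mentioned earlier:

$ hb get -c testbackup --orig /Users/jim/testbackup/inex.conf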

Did we get our backup data back?  Yep

[jim@mb ~]$ ls testbackup
arc.0.0 arc.3.0 arc.4.0 cacerts.crt dest.conf dest.db hb.db hb.lock key.conf

It says to run selftest...

[jim@mb]$ hb selftest -c testbackup
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 4
Using destinations in dest.conf
Level -v2 check; higher -v levels check more backup data
Checking all versions
Checking database readable
Checked  database readable
Checking database integrity
Checked  database integrity
Checking dedup table
Checked  dedup table
Checking paths I
Checked  paths I
Checking keys
Checked  keys
Checking arcs I
Checked  arcs I
Checking blocks I
Checked  7 blocks I     
Checking refs I
Checked  4 refs I     
Checking arcs II
Checked  arcs II
Checking files
Checked  18 files
Checking paths II
Checked  paths II
Checking blocks II
Checked  blocks II
No errors

An S3 offsite backup came to the rescue.  Here's how to add a Google offsite backup:

1. Go to the Google Storage web page, https://cloud.google.com/storage
2. Create an account
3. Go to your Google Console
4. Go to the Google API section
5. Look around until you find Simple API key or Developer Keys
6. You need 2 things: an access key and a secret key.  HB does not use OAuth; it uses the S3-compatible interface.
7. Add the Google destination to your dest.conf

[jim@mb]$ cat - >>testbackup/dest.conf
 
destname google
type gs
accesskey mygoogleaccesskey
secretkey mygooglesecretkey
bucket uniquegooglebucket
dir mb
^d

Now do another data backup.  It shouldn't save any files since they didn't change, but other things happen:

[jim@mb ~]$ hb backup -c testbackup data
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Backup start: 2019-08-21 18:53:54
Using destinations in dest.conf
This is backup version: 5
Dedup enabled, 0% of current size, 0% of max size
/
/Users
/Users/jim
/Users/jim/testbackup
/Users/jim/testbackup/inex.conf
Copied arc.5.0 to s3 (320 bytes 0s 2.1 KB/s)
Copied arc.3.0 to google (96 bytes 0s 157 bytes/s)
Copied arc.0.0 to google (320 bytes 0s 523 bytes/s)
Copied arc.5.0 to google (320 bytes 0s 973 bytes/s)
Copied arc.4.0 to google (64 bytes 0s 159 bytes/s)
Writing hb.db.3
Copied hb.db.3 to s3 (6.6 KB 0s 11 KB/s)
Copied hb.db.3 to google (6.6 KB 0s 10 KB/s)
Copied dest.db to s3 (36 KB 1s 27 KB/s)
Copied dest.db to google (36 KB 1s 23 KB/s)

Time: 1.6s
CPU:  0.1s, 8%
Mem:  62 MB
Checked: 9 paths, 575 bytes, 575 bytes
Saved: 5 paths, 522 bytes, 522 bytes
Excluded: 0
Dupbytes: 0
Compression: 47%, 1.9:1
Efficiency: 0.00 MB reduced/cpusec
Space: +272 bytes, 54 KB total
No errors

HashBackup copied all old backup data to the new Google destination.  This is an automatic sync that occurs at the beginning of every backup.  A similar thing happens if a destination is down during a backup: it will be "caught up" in the next backup.

Save a new file and see how that goes:

[jim@mb ~]$ echo testing is fun >data/file4
[jim@mb ~]$ hb backup -c testbackup data
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Backup start: 2019-08-21 18:56:14
Using destinations in dest.conf
This is backup version: 6
Dedup enabled, 0% of current size, 0% of max size
/
/Users
/Users/jim
/Users/jim/data
/Users/jim/data/file4
/Users/jim/testbackup
Copied arc.6.0 to s3 (64 bytes 0s 379 bytes/s)
Copied arc.6.0 to google (64 bytes 0s 98 bytes/s)
Writing hb.db.4
Copied hb.db.4 to s3 (6.6 KB 0s 10 KB/s)
Copied hb.db.4 to google (6.6 KB 0s 7.2 KB/s)
Copied dest.db to s3 (36 KB 1s 25 KB/s)
Copied dest.db to google (36 KB 1s 23 KB/s)

Time: 1.0s
CPU:  0.1s, 10%
Mem:  62 MB
Checked: 10 paths, 590 bytes, 590 bytes
Saved: 6 paths, 15 bytes, 15 bytes
Excluded: 0
Dupbytes: 0
Space: +16 bytes, 61 KB total
No errors

This time a new arc file was created, arc.6.0, containing the backup data for file4.  The new data was sent to both destinations.  No old data was sent since both destinations already had everything.

It's easy to migrate to a new storage account: just add it to the dest.conf file, do a backup, and HashBackup will copy all backup data to the new destination.  We're going to copy everything to Backblaze B2, then delete everything from S3 and Google.  First add the B2 destination:

[jim@mb]$ cat ->>testbackup/dest.conf

destname bb
type b2
bucket b2globaluniquebucketname
dir quickstart
accountid 012345789
appkey 0123456789ABCDEF

^d

We'll use the dest command to sync everything, though any backup will do the same thing.

[jim@mb ~]$ hb dest -c testbackup sync
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Using destinations in dest.conf
Writing hb.db.7
Copied hb.db.7 to s3 (7.0 KB 0s 19 KB/s)
Copied hb.db.7 to google (7.0 KB 1s 5.2 KB/s)
Copied hb.db.7 to b2 (7.0 KB 2s 2.6 KB/s)
Copied arc.0.0 to b2 (320 bytes 1s 254 bytes/s)
Copied arc.4.0 to b2 (64 bytes 0s 124 bytes/s)
Copied arc.5.0 to b2 (320 bytes 0s 1.1 KB/s)
Copied arc.6.0 to b2 (64 bytes 0s 254 bytes/s)
Copied arc.3.0 to b2 (96 bytes 4s 23 bytes/s)
Copied dest.db to google (45 KB 2s 17 KB/s)
Copied dest.db to s3 (45 KB 2s 17 KB/s)
Copied dest.db to b2 (45 KB 3s 11 KB/s)

Remove the backup from S3 and Google:

[jim@mb]$ hb dest -c testbackup clear s3 google
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Using destinations in dest.conf

WARNING: this will delete all files from destinations: s3 google
Proceed? yes
Removed arc.4.0 from s3
Removed arc.0.0 from s3
Removed arc.3.0 from s3
Removed arc.6.0 from s3
Removed hb.db.5 from s3
Removed hb.db.3 from s3
Removed hb.db.2 from s3
Removed hb.db.1 from s3
Removed hb.db.0 from s3
Removed hb.db.7 from s3
Removed hb.db.6 from s3
Removed arc.5.0 from s3
Removed hb.db.4 from s3
Removed DESTID from s3
Removed dest.db from s3
Removed arc.0.0 from google
Removed arc.3.0 from google
Removed arc.6.0 from google
Removed hb.db.5 from google
Removed arc.4.0 from google
Removed hb.db.3 from google
Removed hb.db.6 from google
Removed hb.db.7 from google
Removed hb.db.4 from google
Removed arc.5.0 from google
Removed dest.db from google
Removed DESTID from google
Waiting for destinations: b2
Copied dest.db to b2 (45 KB 5s 8.5 KB/s)

You now need to edit the dest.conf file to:
- add the off keyword to the S3 and Google destinations (a sketch follows below)
- or remove them completely from dest.conf
If you don't, the next backup will sync everything back to both of them.
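
For example, the S3 entry might be disabled like this (a sketch; see the Destinations page for the exact syntax and placement of the off keyword):

destname s3
type s3
off
accesskey xxxaccesskeyxxx
secretkey xxxsecretkeyxxx
bucket somerandomuniquename
dir mb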

After disabling the S3 and Google destinations, make sure we can still recover our backup directory from B2.  Don't forget to first make a copy of the new dest.conf with S3 and Google disabled and B2 added.

[jim@mb]$ cp testbackup/dest.conf .
[jim@mb]$ rm -rf testbackup
[jim@mb]$ mkdir testbackup
[jim@mb]$ cp dest.conf key.conf testbackup
[jim@mb]$ hb recover -c testbackup
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Using destinations in dest.conf
Destinations you have setup are: s3 google b2
Specify a destination to use for recovering backup files

This error means we forgot to disable the s3 and google destinations.  Do that, then try again

[jim@mb]$ hb recover -c testbackup
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Using destinations in dest.conf

Recovering backup files from destination: b2
Files will be copied to: /Users/jim/testbackup

Proceed with recovery? yes

Removed /Users/jim/testbackup/dest.db
Getting dest.db from b2
Getting hb.db from b2
Queueing hb.db files

Waiting for /Users/jim/testbackup/hb.db.7
Loading hb.db.7
Verified hb.db.7 signature

Verified hb.db signature
Checking db integrity
Removing hb.db.N files
Queueing arc files from b2
Waiting for 3 arc files...       

Backup files recovered to: /Users/jim/testbackup
Verify the backup with the selftest command:
  $ hb selftest -c testbackup
If inex.conf was customized, restore it with the hb get command.

Try a selftest, this time with -v4 to download and verify all remote arc files.  We have two copies, one on B2 and one locally.  If there are any problems, HashBackup will merge the two copies to try to correct the error.

[jim@mb ~]$ hb selftest -c testbackup -v4
HashBackup #2428 Copyright 2009-2019 HashBackup, LLC
Backup directory: /Users/jim/testbackup
Most recent backup version: 6
Using destinations in dest.conf
Checking all versions
Checking database readable
Checked  database readable
Checking database integrity
Checked  database integrity
Checking dedup table
Checked  dedup table
Checking paths I
Checked  paths I
Checking keys
Checked  keys
Checking arcs I
Checked  arcs I
Checking blocks I
Getting arc.0.0 from b2
Checking arc.0.0
Checked  arc.0.0 from b2
Checked  arc.0.0 from (local)
Getting arc.3.0 from b2
Checking arc.3.0
Checked  arc.3.0 from b2
Checked  arc.3.0 from (local)
Getting arc.4.0 from b2
Checking arc.4.0
Checked  arc.4.0 from b2
Checked  arc.4.0 from (local)
Getting arc.5.0 from b2
Checking arc.5.0
Checked  arc.5.0 from b2
Checked  arc.5.0 from (local)
Getting arc.6.0 from b2
Checking arc.6.0
Checked  arc.6.0 from b2
Checked  arc.6.0 from (local)
Checked  10 blocks I     
Checking refs I
Checked  5 refs I     
Checking arcs II
Checked  arcs II
Checking files
Checked  28 files
Checking paths II
Checked  paths II
Checking blocks II
Checked  blocks II
No errors

Looks good: we've migrated all our backup data to a new storage provider using just a few HashBackup commands.