Benchmarks

These benchmarks give a feel for how HashBackup performs in different situations.  Many of them are not "real world" tests; they are designed to be easy to compare with other backup software and to probe limits, memory usage, disk usage, deduplication effectiveness, and raw performance.  More benchmarks will be added over time.

Backup 16 x 8M files individually & combined to measure deduplication efficiency

Summary: HashBackup deduplicates 96% of the combined file's data

[jim@mb /]$ mkdir /hb


[jim@mb /]$ cd /hb


NOTE: macOS (BSD dd) requires a lowercase size suffix, i.e. bs=1m rather than bs=1M, for the dd command


[jim@mb hb]$ for i in {00..15}; do dd if=/dev/urandom of=$i bs=1M count=8; done

8+0 records in

8+0 records out

8388608 bytes transferred in 0.915088 secs (9166997 bytes/sec)

...

8+0 records in

8+0 records out

8388608 bytes transferred in 0.912198 secs (9196038 bytes/sec)


[jim@mb hb]$ hb init -c backup

HashBackup #2490 Copyright 2009-2020 HashBackup, LLC

Backup directory: /hb/backup

Permissions set for owner access only

Created key file /hb/backup/key.conf

Key file set to read-only

Setting include/exclude defaults: /hb/backup/inex.conf


VERY IMPORTANT: your backup is encrypted and can only be accessed with

the encryption key, stored in the file:


    /hb/backup/key.conf


You MUST make copies of this file and store them in secure locations,

separate from your computer and backup data.  If your hard drive fails, 

you will need this key to restore your files.  If you have setup remote

destinations in dest.conf, that file should be copied too.

        

Backup directory initialized


[jim@mb hb]$ hb backup -c backup -D1g [0-9]*

HashBackup #2490 Copyright 2009-2020 HashBackup, LLC

Backup directory: /hb/backup

Backup start: 2020-05-24 10:46:13

Copied HB program to /hb/backup/hb#2490

This is backup version: 0

Dedup enabled, 0% of current size, 0% of max size

/

/hb

/hb/0

/hb/1

/hb/10

/hb/11

/hb/12

/hb/13

/hb/14

/hb/15

/hb/2

/hb/3

/hb/4

/hb/5

/hb/6

/hb/7

/hb/8

/hb/9

/hb/backup

/hb/backup/inex.conf


Time: 2.8s

CPU:  3.4s, 120%

Mem:  84 MB

Checked: 20 paths, 134218250 bytes, 134 MB

Saved: 20 paths, 134218250 bytes, 134 MB

Excluded: 0

Dupbytes: 0

Space: +134 MB, 134 MB total

No errors


As expected, no dedup occurs above on 16 files of random data.  Now combine the 16 files into one large file and back it up to see how well it is deduplicated.


[jim@mb hb]$ cat [0-9]* > combined


[jim@mb hb]$ hb backup -c backup -D1g combined

HashBackup #2490 Copyright 2009-2020 HashBackup, LLC

Backup directory: /hb/backup

Backup start: 2020-05-24 10:46:40

This is backup version: 1

Dedup enabled, 0% of current size, 0% of max size

/

/hb

/hb/backup

/hb/combined


Time: 1.8s

CPU:  1.5s, 82%

Mem:  78 MB

Checked: 5 paths, 134218250 bytes, 134 MB

Saved: 4 paths, 134217728 bytes, 134 MB

Excluded: 0

Dupbytes: 129038102, 129 MB, 96%   <==== 96% of new data was deduped

Compression: 96%, 25.9:1

Efficiency: 82.48 MB reduced/cpusec

Space: +5.1 MB, 139 MB total

No errors



Backup 100 x 1M files individually & combined to measure deduplication efficiency

This test is a bit harder than the previous 8MB-file test because the files are now only 1MB.  Many backup programs cannot dedup this data because they use a large block size, which reduces deduplication: with a 4MB block size, for example, each 1MB file is hashed as a single short block that cannot match any 4MB block of the combined file.  In this test, the combined file is backed up first, then the individual files.
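
To make the block-size point concrete, here is a small Python illustration.  This is a sketch of generic fixed-block deduplication, not HashBackup's algorithm, and it is scaled down to 8 files to keep it light: none of the fixed 4MB block hashes of a combined file match the hashes of the 1MB pieces it was built from.

    import hashlib, os

    BLOCK = 4 * 1024 * 1024                                  # fixed 4MB dedup block
    files = [os.urandom(1024 * 1024) for _ in range(8)]      # 8 x 1MB random files
    combined = b"".join(files)                               # the "combined" file

    # Hash the combined file in fixed 4MB blocks, and each 1MB file whole.
    combined_hashes = {hashlib.sha256(combined[i:i + BLOCK]).hexdigest()
                       for i in range(0, len(combined), BLOCK)}
    file_hashes = {hashlib.sha256(f).hexdigest() for f in files}

    print("matching blocks:", len(combined_hashes & file_hashes))   # prints 0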

Summary: HashBackup deduplicates 67% of the individual file data

[jim@mb hb]$ rm -rf /hb/*


[jim@mb hb]$ for i in {00..100}; do dd if=/dev/urandom of=$i bs=1M count=1; done

1+0 records in

1+0 records out

1048576 bytes transferred in 0.115422 secs (9084714 bytes/sec)

...

1+0 records in

1+0 records out

1048576 bytes transferred in 0.116055 secs (9035163 bytes/sec)


[jim@mb hb]$ cat [0-9]* > combined


[jim@mb hb]$ hb init -c backup

HashBackup #2490 Copyright 2009-2020 HashBackup, LLC

Backup directory: /hb/backup

Permissions set for owner access only

Created key file /hb/backup/key.conf

Key file set to read-only

Setting include/exclude defaults: /hb/backup/inex.conf


VERY IMPORTANT: your backup is encrypted and can only be accessed with

the encryption key, stored in the file:


    /hb/backup/key.conf


You MUST make copies of this file and store them in secure locations,

separate from your computer and backup data.  If your hard drive fails, 

you will need this key to restore your files.  If you have setup remote

destinations in dest.conf, that file should be copied too.

        

Backup directory initialized


[jim@mb hb]$ hb backup -c backup -D1g combined

HashBackup #2490 Copyright 2009-2020 HashBackup, LLC

Backup directory: /hb/backup

Backup start: 2020-05-24 10:55:47

Copied HB program to /hb/backup/hb#2490

This is backup version: 0

Dedup enabled, 0% of current size, 0% of max size

/

/hb

/hb/backup

/hb/backup/inex.conf

/hb/combined


Time: 2.2s

CPU:  2.7s, 123%

Mem:  83 MB

Checked: 5 paths, 105906698 bytes, 105 MB

Saved: 5 paths, 105906698 bytes, 105 MB

Excluded: 0

Dupbytes: 0

Space: +105 MB, 106 MB total

No errors


[jim@mb hb]$ hb backup -c backup -D1g [0-9]*

HashBackup #2490 Copyright 2009-2020 HashBackup, LLC

Backup directory: /hb/backup

Backup start: 2020-05-24 10:56:09

This is backup version: 1

Dedup enabled, 0% of current size, 0% of max size

/

/hb

/hb/0

/hb/1

...

/hb/98

/hb/99

/hb/backup


Time: 1.6s

CPU:  1.7s, 104%

Mem:  77 MB

Checked: 105 paths, 105906698 bytes, 105 MB

Saved: 104 paths, 105906176 bytes, 105 MB

Excluded: 0

Dupbytes: 71054149, 71 MB, 67%

Compression: 67%, 3.0:1        <===== 67% is deduplicated

Efficiency: 39.86 MB reduced/cpusec

Space: +34 MB, 140 MB total

No errors



Backup 1M zero-length files in a single directory

Summary: 3m 14s to back up 1M zero-length files, using 113MB of RAM and creating a backup database of around 118MB.
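
The transcript below does not show how bigdir was populated.  A minimal sketch of one way to create 1M zero-length files in a single directory (the file names here are an assumption for illustration only):

    import os

    os.makedirs("bigdir", exist_ok=True)
    for i in range(1_000_000):
        # open for write and close immediately: leaves a zero-length file
        open(os.path.join("bigdir", f"f{i:07d}"), "w").close()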

[jim@ms ~]$ /usr/bin/time -l hb log backup -c hb bigdir -v0

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb

Backup start: 2019-12-25 13:51:01

Copied HB program to /Users/jim/hb/hb#2461

This is backup version: 0

Dedup not enabled; use -Dmemsize to enable

Backup started


Time: 194.3s, 3m 14s

CPU:  183.4s, 3m 3s, 94%

Mem:  113 MB

Checked: 1000006 paths, 522 bytes, 522 bytes

Saved: 1000006 paths, 522 bytes, 522 bytes

Excluded: 0

Dupbytes: 0

Compression: 47%, 1.9:1

Efficiency: 0.00 MB reduced/cpusec

Space: +272 bytes, 147 KB total

No errors

      194.60 real       117.94 user        65.97 sys

 113348608  maximum resident set size

     67448  page reclaims

     69712  block input operations

        53  block output operations

     69823  voluntary context switches

       540  involuntary context switches

[jim@ms ~]$ ls -l hb

total 270584

-rw-r--r--  1 jim  staff        320 Dec 25 13:54 arc.0.0

-rw-r--r--  1 jim  staff     282097 Dec 25 13:51 cacerts.crt

-rw-r--r--  1 jim  staff      36864 Dec 25 13:51 dest.db

-rw-r--r--  1 jim  staff    3146512 Dec 25 13:54 hash.db

-rwxr-xr-x  1 jim  staff   17168072 Dec 25 13:50 hb#2461

-rw-r--r--  1 jim  staff  117882880 Dec 25 13:54 hb.db

-rw-r--r--  1 jim  staff          6 Dec 25 13:51 hb.lock

-rw-r--r--  1 jim  staff        522 Dec 25 13:49 inex.conf

-r--------  1 jim  staff        338 Dec 25 13:49 key.conf

drwxr-xr-x  3 jim  staff        102 Dec 25 13:54 logs


Backup 1M zero-length files in one directory using 8 shards

This is similar to the previous test but uses shards to run 8 parallel backups.  The system is a 4-core (8-thread) Intel i7 Mac Mini.  The -p0 backup option disables threading in each backup process, since parallelism is already provided by sharding.  Sharding is transparent in HashBackup once it is enabled with the init command.

HashBackup evenly divides the 1M files in the large directory among the 8 backup processes.  This division of work is stable even when files are added or deleted, so incremental backups work as expected.
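
HashBackup's exact assignment rule isn't documented here, but the general idea of a stable division can be sketched with a hash of each file name taken modulo the shard count.  This is an illustration only, not HashBackup's actual code: a given name always maps to the same shard, so adding or deleting other files doesn't move it.

    import zlib

    NUM_SHARDS = 8

    def shard_for(name: str) -> int:
        # Stable: depends only on the file name, not on what else is in the directory.
        return zlib.crc32(name.encode()) % NUM_SHARDS + 1

    for name in ("file0000001", "file0000002", "file0999999"):
        print(name, "-> shard", shard_for(name))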
 
Summary: 56 seconds to save 1M empty files using 8 parallel backup processes.  Each process used around 80MB of RAM, or 640MB altogether.  Each backup database is around 15MB, or 120MB altogether - similar to the non-sharded backup.

A test with 4 shards took 69 seconds, with each process using 104MB of RAM, or 416MB altogether.

[jim@ms ~]$ hb init -c hb --shards 8

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb

Permissions set for owner access only

Created key file /Users/jim/hb/key.conf

Key file set to read-only

Setting include/exclude defaults: /Users/jim/hb/inex.conf

Initializing shards

Shards 1-8 initialized


VERY IMPORTANT: your backup is encrypted and can only be accessed with

the encryption key, stored in the file:


    /Users/jim/hb/key.conf


You MUST make copies of this file and store them in secure locations,

separate from your computer and backup data.  If your hard drive fails, 

you will need this key to restore your files.  If you have setup remote

destinations in dest.conf, that file should be copied too.

        

Backup directory initialized

[jim@ms ~]$ /usr/bin/time -l hb log backup -c hb bigdir -v0 -p0

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC


Shard output: /Users/jim/hb/sout/backup.1

Start shard #1: hb backup bigdir -v0 -p0 -c /Users/jim/hb/s1

Start shard #2: hb backup bigdir -v0 -p0 -c /Users/jim/hb/s2

Start shard #3: hb backup bigdir -v0 -p0 -c /Users/jim/hb/s3

Start shard #4: hb backup bigdir -v0 -p0 -c /Users/jim/hb/s4

Start shard #5: hb backup bigdir -v0 -p0 -c /Users/jim/hb/s5

Start shard #6: hb backup bigdir -v0 -p0 -c /Users/jim/hb/s6

Start shard #7: hb backup bigdir -v0 -p0 -c /Users/jim/hb/s7

Start shard #8: hb backup bigdir -v0 -p0 -c /Users/jim/hb/s8


--------

Shard #1

--------

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb/s1

Shard #1, 12% of files

Backup start: 2019-12-25 14:19:56

Using destinations in dest.conf

Copied HB program to /Users/jim/hb/s1/hb#2461

This is backup version: 0

Dedup not enabled; use -Dmemsize to enable

Backup started


Time: 55.1s

CPU:  45.8s, 83%

Mem:  80 MB

Checked: 124816 paths, 0 bytes, 0 bytes

Saved: 124816 paths, 0 bytes, 0 bytes

Excluded: 0

No errors

Shard #1 successful


--------

Shard #2

--------

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb/s2

Shard #2, 12% of files

Backup start: 2019-12-25 14:19:56

Using destinations in dest.conf

Copied HB program to /Users/jim/hb/s2/hb#2461

This is backup version: 0

Dedup not enabled; use -Dmemsize to enable

Backup started


Time: 55.1s

CPU:  45.8s, 83%

Mem:  80 MB

Checked: 124829 paths, 0 bytes, 0 bytes

Saved: 124829 paths, 0 bytes, 0 bytes

Excluded: 0

No errors

Shard #2 successful


--------

Shard #3

--------

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb/s3

Shard #3, 12% of files

Backup start: 2019-12-25 14:19:56

Using destinations in dest.conf

Copied HB program to /Users/jim/hb/s3/hb#2461

This is backup version: 0

Dedup not enabled; use -Dmemsize to enable

Backup started


Time: 54.6s

CPU:  45.8s, 83%

Mem:  80 MB

Checked: 124764 paths, 0 bytes, 0 bytes

Saved: 124764 paths, 0 bytes, 0 bytes

Excluded: 0

No errors

Shard #3 successful


--------

Shard #4

--------

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb/s4

Shard #4, 12% of files

Backup start: 2019-12-25 14:19:56

Using destinations in dest.conf

Copied HB program to /Users/jim/hb/s4/hb#2461

This is backup version: 0

Dedup not enabled; use -Dmemsize to enable

Backup started


Time: 55.1s

CPU:  46.0s, 83%

Mem:  80 MB

Checked: 125700 paths, 0 bytes, 0 bytes

Saved: 125700 paths, 0 bytes, 0 bytes

Excluded: 0

No errors

Shard #4 successful


--------

Shard #5

--------

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb/s5

Shard #5, 12% of files

Backup start: 2019-12-25 14:19:56

Using destinations in dest.conf

Copied HB program to /Users/jim/hb/s5/hb#2461

This is backup version: 0

Dedup not enabled; use -Dmemsize to enable

Backup started


Time: 55.0s

CPU:  45.8s, 83%

Mem:  80 MB

Checked: 125135 paths, 0 bytes, 0 bytes

Saved: 125135 paths, 0 bytes, 0 bytes

Excluded: 0

No errors

Shard #5 successful


--------

Shard #6

--------

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb/s6

Shard #6, 12% of files

Backup start: 2019-12-25 14:19:56

Using destinations in dest.conf

Copied HB program to /Users/jim/hb/s6/hb#2461

This is backup version: 0

Dedup not enabled; use -Dmemsize to enable

Backup started


Time: 54.7s

CPU:  45.8s, 83%

Mem:  80 MB

Checked: 125306 paths, 522 bytes, 522 bytes

Saved: 125306 paths, 522 bytes, 522 bytes

Excluded: 0

Dupbytes: 0

Compression: 47%, 1.9:1

Efficiency: 0.00 MB reduced/cpusec

Space: +272 bytes, 147 KB total

No errors

Shard #6 successful


--------

Shard #7

--------

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb/s7

Shard #7, 12% of files

Backup start: 2019-12-25 14:19:56

Using destinations in dest.conf

Copied HB program to /Users/jim/hb/s7/hb#2461

This is backup version: 0

Dedup not enabled; use -Dmemsize to enable

Backup started


Time: 55.0s

CPU:  45.8s, 83%

Mem:  80 MB

Checked: 124901 paths, 0 bytes, 0 bytes

Saved: 124901 paths, 0 bytes, 0 bytes

Excluded: 0

No errors

Shard #7 successful


--------

Shard #8

--------

HashBackup #2461 Copyright 2009-2019 HashBackup, LLC

Backup directory: /Users/jim/hb/s8

Shard #8, 12% of files

Backup start: 2019-12-25 14:19:56

Using destinations in dest.conf

Copied HB program to /Users/jim/hb/s8/hb#2461

This is backup version: 0

Dedup not enabled; use -Dmemsize to enable

Backup started


Time: 55.0s

CPU:  45.8s, 83%

Mem:  80 MB

Checked: 124583 paths, 0 bytes, 0 bytes

Saved: 124583 paths, 0 bytes, 0 bytes

Excluded: 0

No errors

Shard #8 successful


Shard summary for backup: 8 worked, 0 failed


       56.19 real       262.95 user       107.33 sys

  80199680  maximum resident set size

    313104  page reclaims

     70261  block input operations

        40  block output operations

    245803  voluntary context switches

    922071  involuntary context switches

[jim@ms ~]$ du -ksc hb/*

276 hb/cacerts.crt

4 hb/dest.conf

3076 hb/hash.db

144 hb/hb.db

4 hb/hb.lock

4 hb/inex.conf

4 hb/key.conf

12 hb/logs

34236 hb/s1

34236 hb/s2

34228 hb/s3

34332 hb/s4

34268 hb/s5

34304 hb/s6

34260 hb/s7

34212 hb/s8

32 hb/sout

277632 total

[jim@ms ~]$ ls -l hb/s1

total 68472

lrwxr-xr-x  1 jim  staff        14 Dec 25 14:19 cacerts.crt -> ../cacerts.crt

lrwxr-xr-x  1 jim  staff        12 Dec 25 14:19 dest.conf -> ../dest.conf

-rw-r--r--  1 jim  staff     36864 Dec 25 14:19 dest.db

-rw-r--r--  1 jim  staff   3146512 Dec 25 14:20 hash.db

-rwxr-xr-x  1 jim  staff  17168072 Dec 25 13:50 hb#2461

-rw-r--r--  1 jim  staff  14680064 Dec 25 14:20 hb.db

-rw-r--r--  1 jim  staff         6 Dec 25 14:19 hb.lock

lrwxr-xr-x  1 jim  staff        12 Dec 25 14:19 inex.conf -> ../inex.conf

lrwxr-xr-x  1 jim  staff        11 Dec 25 14:19 key.conf -> ../key.conf


Backup 2.5 million 100-byte files then read 30 random files via FUSE (hb mount)

Summary: randomly accessing 30 of the 100-byte files from a 2.5M-file backup through an HB FUSE mount uses less than 100MB of RAM and downloads 30MB of data.  Access takes 0.25 seconds on average per file (max 1.4 seconds) from a remote Linode ssh server, and 0.5 seconds on average per file (max 5 seconds) from Backblaze B2.
 
A Python script creates 5 directories with 500000 100-byte files in each - 2.5M files total.  (Thanks to @bowensong on GitHub for the test scripts!)  This is backed up with HashBackup to an SSH server, with cache-size-limit set to 0 so that all backup data is remote.  Then hb mount is used to mount the backup as a local FUSE directory, and 30 files are randomly accessed through that mount.  The Python scripts are attached at the bottom of the page.  This test was run on a 2GB single-CPU VM with a 50GB SSD at Vultr in Atlanta, with the remote SSH server located at Linode in New Jersey.  Ping time between the two sites was about 19ms.

This test can be somewhat difficult to set up because the Linux ext4 filesystem accommodates only ~60K files per GB of disk space.  Getting enough inodes for 1M small files requires ~17GB of disk space, unless mkfs -i is used to create more inodes.  This test was reduced from the original 2M files per directory (10M total) to 500K per directory (2.5M total) for that reason.  For HashBackup, performance would be similar regardless of the number of files in the backup, since it only downloads the data required for each file from the remote rather than entire arc files.
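
The actual gen.py is attached at the bottom of the page; the sketch below only approximates it, with the option meanings (-d directories, -f files per directory, -s file size, -p parent path) and the d<N>/f<N> file naming inferred from the command line shown next.

    import argparse, os

    p = argparse.ArgumentParser()
    p.add_argument("-d", type=int, help="number of directories (assumed)")
    p.add_argument("-f", type=int, help="files per directory (assumed)")
    p.add_argument("-s", type=int, help="file size in bytes (assumed)")
    p.add_argument("-p", help="parent directory (assumed)")
    args = p.parse_args()

    for d in range(args.d):
        dirpath = os.path.join(args.p, f"d{d}")
        os.makedirs(dirpath, exist_ok=True)
        for f in range(args.f):
            with open(os.path.join(dirpath, f"f{f}"), "wb") as fh:
                fh.write(os.urandom(args.s))   # random content, so nothing dedups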

[root@hbfuse ~]# time python3 gen.py -d 5 -f 500000 -s 100 -p ~/bigdir


real2m44.867s

user0m26.339s

sys1m29.954s


[root@hbfuse ~]# hb init -c hb

HashBackup #2488 Copyright 2009-2020 HashBackup, LLC

Backup directory: /root/hb

Permissions set for owner access only

Created key file /root/hb/key.conf

Key file set to read-only

Setting include/exclude defaults: /root/hb/inex.conf


VERY IMPORTANT: your backup is encrypted and can only be accessed with

the encryption key, stored in the file:


    /root/hb/key.conf


You MUST make copies of this file and store them in secure locations,

separate from your computer and backup data.  If your hard drive fails, 

you will need this key to restore your files.  If you have setup remote

destinations in dest.conf, that file should be copied too.

        

Backup directory initialized


[root@hbfuse ~]# cat dest.conf

destname linode

type ssh

host linode

dir hbfuse


[root@hbfuse ~]# cp dest.conf hb


[root@hbfuse ~]# hb config -c hb cache-size-limit 0

HashBackup #2488 Copyright 2009-2020 HashBackup, LLC

Backup directory: /root/hb

Current config version: 0


Set cache-size-limit to 0 (was -1) for future backups


[root@hbfuse ~]# hb config -c hb dedup-mem 1g

HashBackup #2488 Copyright 2009-2020 HashBackup, LLC

Backup directory: /root/hb

Current config version: 0


Set dedup-mem to 1g (was 0) for future backups


[root@hbfuse ~]# /usr/bin/time -v hb backup -c hb bigdir -v1

HashBackup #2488 Copyright 2009-2020 HashBackup, LLC

Backup directory: /root/hb

Backup start: 2020-02-21 20:40:22

Using destinations in dest.conf

Increased cache to 220 MB

Copied HB program to /root/hb/hb#2488

This is backup version: 0

Dedup enabled, 0% of current size, 0% of max size

Backing up: /root/bigdir

Copied arc.0.0 to linode (100 MB 3s 30 MB/s)

Backing up: /root/hb/inex.conf

Copied arc.0.1 to linode (100 MB 3s 30 MB/s)

Copied arc.0.2 to linode (48 MB 1s 30 MB/s)

Writing hb.db.0

Waiting for destinations: linode

Copied hb.db.0 to linode (207 MB 5s 38 MB/s)

Copied dest.db to linode (3.6 MB 0s 16 MB/s)


Time: 955.9s, 15m 55s

CPU:  632.1s, 10m 32s, 66%

Wait: 18.9s

Mem:  144 MB

Checked: 2500010 paths, 250000104 bytes, 250 MB

Saved: 2500010 paths, 250000104 bytes, 250 MB

Excluded: 0

Dupbytes: 0

Space: +250 MB, 250 MB total

No errors

    Command being timed: "hb backup -c hb bigdir -v1"

    User time (seconds): 483.33

    System time (seconds): 162.44

    Percent of CPU this job got: 66%

    Elapsed (wall clock) time (h:mm:ss or m:ss): 16:14.97

    Maximum resident set size (kbytes): 140752

    Major (requiring I/O) page faults: 21

    Minor (reclaiming a frame) page faults: 224954

    Voluntary context switches: 2555879

    Involuntary context switches: 389926

    File system inputs: 24846808

    File system outputs: 2320032

    Exit status: 0


[root@hbfuse ~]# /usr/bin/time -v hb mount -c hb mnt

HashBackup #2488 Copyright 2009-2020 HashBackup, LLC

Backup directory: /root/hb

Mounting backup at: /root/mnt

Unmount with: fusermount -u /root/mnt

Using destinations in dest.conf

Increased cache to 220 MB

Backup mounted in foreground; use Ctrl-\ to abort, followed by unmount command listed above


[NOTE: see below for random read test results from mount.  The following stats are for the hb mount command]


^\

Command terminated by signal 3

    Command being timed: "hb mount -c hb mnt"

    User time (seconds): 0.75

    System time (seconds): 0.56

    Percent of CPU this job got: 3%

    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:38.27

    Maximum resident set size (kbytes): 72808

    Major (requiring I/O) page faults: 2

    Minor (reclaiming a frame) page faults: 71068

    Voluntary context switches: 1854

    Involuntary context switches: 70

    File system inputs: 1224

    File system outputs: 64928

    Exit status: 0


[root@hbfuse ~]# umount mnt


In another window, run the random read test while the mount is active.  Average file access is 0.25 seconds, max 1.4 seconds.  Total downloaded data is 30MB.
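
As with gen.py, the real read.py is attached at the bottom of the page; this sketch only approximates it (the option meanings and the d<N>/f<N> naming are assumptions carried over from the gen.py sketch above).  It opens -t files picked at random from the -d x -f namespace under -p and reports the fastest and slowest open+read times.

    import argparse, os, random, time

    p = argparse.ArgumentParser()
    p.add_argument("-p", help="path to the mounted backup (assumed)")
    p.add_argument("-d", type=int, help="number of directories (assumed)")
    p.add_argument("-f", type=int, help="files per directory (assumed)")
    p.add_argument("-t", type=int, help="number of random files to read (assumed)")
    args = p.parse_args()

    times = []
    for _ in range(args.t):
        name = os.path.join(args.p, f"d{random.randrange(args.d)}",
                            f"f{random.randrange(args.f)}")
        start = time.time()
        with open(name, "rb") as fh:
            fh.read()
        times.append(time.time() - start)
        print(".", end="", flush=True)

    print(f"\nTest accessing {args.t} files took {sum(times):.6f} seconds")
    print(f"Fastest file took {min(times):.6f} seconds")
    print(f"Slowest file took {max(times):.6f} seconds")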


[root@hbfuse ~]# /usr/bin/time -v python3 read.py -p ./mnt/latest/root/bigdir -d 5 -f 500000 -t30

Test started at 2020-02-21T21:02:50.821143

..............................

Test ended at 2020-02-21T21:02:58.320248

Test accessing 30 files took 7.499105 seconds

Fastest file took 0.001474 seconds

Slowest file took 1.382141 seconds

    Command being timed: "python3 read.py -p ./mnt/latest/root/bigdir -d 5 -f 500000 -t30"

    User time (seconds): 0.05

    System time (seconds): 0.02

    Percent of CPU this job got: 1%

    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:07.58

    Maximum resident set size (kbytes): 13264

    Major (requiring I/O) page faults: 11

    Minor (reclaiming a frame) page faults: 1591

    Voluntary context switches: 209

    Involuntary context switches: 74

    File system inputs: 10808

    Exit status: 0


Add a B2 destination to dest.conf, then do an incremental backup.  This downloads the previous backup files from the ssh server, uploads them to B2, then uploads any new backup files to both destinations:


[root@hbfuse ~]# cat hb/dest.conf

destname b2

type b2

accountid xxx

appkey xxx

bucket hashbackup

dir hbfuse


destname linode

type ssh

host linode

dir hbfuse


[root@hbfuse ~]# /usr/bin/time -v hb backup -c hb bigdir -v1

HashBackup #2488 Copyright 2009-2020 HashBackup, LLC

Backup directory: /root/hb

Backup start: 2020-02-22 03:04:59

Using destinations in dest.conf

Increased cache to 220 MB

Getting arc.0.0

Wait for arc.0.0 - 100 MB 3s

Getting arc.0.1

Wait for arc.0.1 - 100 MB 4s

Getting arc.0.2

Copied arc.0.1 to b2 (100 MB 11s 8.9 MB/s)

This is backup version: 1

Dedup enabled, 0% of current size, 0% of max size

Backing up: /root/bigdir

Backing up: /root/hb/inex.conf

Copied arc.0.2 to b2 (48 MB 2s 18 MB/s)

Copied arc.0.0 to b2 (100 MB 31s 3.2 MB/s)

Writing hb.db.1

Waiting for destinations: b2, linode

Copied hb.db.1 to linode (207 MB 6s 33 MB/s)

Waiting for destinations: b2

Copied hb.db.1 to b2 (207 MB 27s 7.5 MB/s)

Copied dest.db to linode (3.6 MB 0s 11 MB/s)

Waiting for destinations: b2

Copied dest.db to b2 (3.6 MB 7s 497 KB/s)


Time: 127.2s, 2m 7s

CPU:  81.6s, 1m 21s, 64%

Wait: 46.3s

Mem:  330 MB

Checked: 2500010 paths, 250000104 bytes, 250 MB

Saved: 3 paths, 0 bytes, 0 bytes

Excluded: 0

No errors

    Command being timed: "hb backup -c hb bigdir -v1"

    User time (seconds): 60.61

    System time (seconds): 36.71

    Percent of CPU this job got: 55%

    Elapsed (wall clock) time (h:mm:ss or m:ss): 2:54.30

    Maximum resident set size (kbytes): 322400

    Major (requiring I/O) page faults: 31

    Minor (reclaiming a frame) page faults: 295410

    Voluntary context switches: 104518

    Involuntary context switches: 327343

    File system inputs: 5034984

    File system outputs: 1079040

    Exit status: 0


Run hb mount again.  Now files are fetched from B2 because it is listed first in dest.conf.  Accessing the 30 random files uses 83MB of RAM in the mount process.  Total downloaded data is 30MB (measured with du -ksc hb/spans.tmp):


[root@hbfuse ~]# /usr/bin/time -v hb mount -c hb mnt

HashBackup #2488 Copyright 2009-2020 HashBackup, LLC

Backup directory: /root/hb

Mounting backup at: /root/mnt

Unmount with: fusermount -u /root/mnt

Using destinations in dest.conf

Increased cache to 220 MB

Backup mounted in foreground; use Ctrl-\ to abort, followed by unmount command listed above

^\

Command terminated by signal 3

    Command being timed: "hb mount -c hb mnt"

    User time (seconds): 1.04

    System time (seconds): 0.47

    Percent of CPU this job got: 1%

    Elapsed (wall clock) time (h:mm:ss or m:ss): 1:30.80

    Maximum resident set size (kbytes): 83668

    Major (requiring I/O) page faults: 17

    Minor (reclaiming a frame) page faults: 41452

    Voluntary context switches: 3279

    Involuntary context switches: 249

    File system inputs: 24568

    File system outputs: 60312

    Exit status: 0

[root@hbfuse ~]# umount mnt


Here's the random read test, run from another window and using 13MB of RAM.  Average file access is 0.5 seconds, max 5 seconds.  Latency is expected to be higher because HashBackup is running in Atlanta and Backblaze is in California:


[root@hbfuse ~]# /usr/bin/time -v python3 read.py -p ./mnt/latest/root/bigdir -d 5 -f 500000 -t30

Test started at 2020-02-22T03:27:45.188460

..............................

Test ended at 2020-02-22T03:28:00.021474

Test accessing 30 files took 14.833014 seconds

Fastest file took 0.001294 seconds

Slowest file took 4.997817 seconds

    Command being timed: "python3 read.py -p ./mnt/latest/root/bigdir -d 5 -f 500000 -t30"

    User time (seconds): 0.05

    System time (seconds): 0.02

    Percent of CPU this job got: 0%

    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:14.92

    Maximum resident set size (kbytes): 13340

    Major (requiring I/O) page faults: 11

    Minor (reclaiming a frame) page faults: 1592

    Voluntary context switches: 209

    Involuntary context switches: 75

    File system inputs: 10704

    Exit status: 0

