HashBackup is designed to be easy to use, because if backup isn’t easy, it won’t happen at all!


  • Installs as a single static executable without dependencies or system config issues

  • Backup data is deduped, compressed, then encrypted locally with the AES encryption standard
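The dedup-then-compress pipeline can be sketched in a few lines. This is an illustration of the general technique only, not HashBackup's actual block format: the `store_blocks` helper, the SHA-256 block keys, and the fixed block size are all assumptions for the example, and the AES encryption step is omitted because it needs a third-party library.

```python
import hashlib
import zlib

def store_blocks(data, block_size, store):
    """Split data into blocks, dedup by a hash of the plaintext,
    and compress each unique block before storing it.
    (Sketch only: HashBackup uses variable-size blocks and then
    encrypts each compressed block with AES, omitted here.)"""
    refs = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:          # dedup: store each unique block once
            store[digest] = zlib.compress(block)
        refs.append(digest)
    return refs

store = {}
# two identical 4 KiB blocks -> one stored block, two references
refs = store_blocks(b"abcd" * 1024 + b"abcd" * 1024, 4096, store)
```

Restoring is the reverse walk: look up each reference, decompress, and concatenate.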

  • Each backup is a complete snapshot for "point in time" restores

  • Full filesystem attribute support for accurate restores

  • Backs up to local and/or remote storage. Automatically synchronizes destinations & migrates backup data to new destinations

  • Easy, flexible file retention policies like last 7 days + 1 per month
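A "last 7 days + 1 per month" policy is easy to model. The sketch below shows the idea only; the function name and the keep-latest-per-month rule are assumptions for illustration, not HashBackup's actual retain syntax or semantics.

```python
from datetime import date, timedelta

def retained(snapshots, today, days=7):
    """Return the snapshot dates kept by a 'last N days + 1 per month'
    policy (illustrative sketch, not HashBackup's retain engine)."""
    keep = {d for d in snapshots if today - d <= timedelta(days=days)}
    monthly = {}
    for d in sorted(snapshots):
        monthly[(d.year, d.month)] = d   # latest snapshot per month wins
    return keep | set(monthly.values())

today = date(2024, 6, 30)
snaps = [date(2024, m, day) for m in (4, 5, 6) for day in (10, 25)]
snaps += [today - timedelta(days=i) for i in range(5)]
kept = retained(snaps, today)
# keeps everything from the past week plus one snapshot per older month
```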

  • Mount backup as a FUSE filesystem for easy, direct access to all versions. A mountable backup means no proprietary file-format lock-in

  • Correctly handles deleted files

  • Backup admin can remove files & directories from the backup

  • Runs as root or a regular userid


  • Backup key is created and stored locally only

  • Secure remove option removes confidential data

  • Backup admin can restrict commands with a password

  • Unalterable audit logs

  • Public key encryption option for write-only backups (readkey)


  • Incremental selftest verifies all backup data over time

  • Sampled selftest verifies random blocks now or over time

  • Layers of checksums ensure data integrity:

    • SHA1 hash on each block

    • SHA1 hash on each file

    • Service-specific hash on file uploads & downloads
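The first two checksum layers above can be sketched as a per-block SHA1 plus a whole-file SHA1 computed in one pass. The helper name and block size are assumptions for the example; this shows the layering idea, not HashBackup's on-disk layout.

```python
import hashlib

def checksum_file_blocks(data, block_size=4096):
    """Compute a SHA1 per block plus a SHA1 over the whole file in
    one pass, mirroring the layered checksums described above
    (illustrative sketch only)."""
    block_hashes = []
    whole = hashlib.sha1()
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        block_hashes.append(hashlib.sha1(block).hexdigest())
        whole.update(block)
    return block_hashes, whole.hexdigest()

blocks, file_hash = checksum_file_blocks(b"x" * 10000)
```

A block-level mismatch pinpoints exactly which block was corrupted; the file-level hash catches errors such as blocks reassembled in the wrong order.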


  • Scales from backing up /etc to multi-TB backups of millions of files. The largest reported backup is 53M files and 121TB, requiring 8GB of RAM. Sharding enables parallel backups of huge filesystems. Dedup is memory-efficient and usable even with multi-TB backups

  • Supports: sftp, scp, ftp & ftps, rsync, WebDAV, Amazon S3 and compatibles, Backblaze B2, USB thumb drives & removable hard drives, mounted remote storage (NFS, Samba/CIFS, sshfs, WebDAV, "cloud" drives)

  • Scriptable shell destination for storage not supported natively

  • Rclone destination for access to anything Rclone supports: Google Drive, Microsoft OneDrive, Dropbox, etc.


  • Backups are incremental for fast + small backups. Periodic full backups are an option
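The core of an incremental backup is deciding which files changed since the last run. A common approach, sketched below, compares each file's current size and mtime against what the previous backup recorded; the function name and state format are assumptions for the example, and HashBackup's actual change detection is its own.

```python
import os
import tempfile

def changed_files(paths, last_backup):
    """Pick files for an incremental backup: anything whose size or
    mtime differs from the previous run's record (sketch only)."""
    changed = []
    for path in paths:
        st = os.stat(path)
        sig = (st.st_size, st.st_mtime_ns)
        if last_backup.get(path) != sig:
            changed.append(path)
            last_backup[path] = sig
    return changed

with tempfile.TemporaryDirectory() as d:
    f = os.path.join(d, "a.txt")
    with open(f, "w") as fh:
        fh.write("hello")
    state = {}
    first = changed_files([f], state)   # everything is new on the first run
    second = changed_files([f], state)  # unchanged -> nothing to back up
```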

  • Incremental restore uses local file data to reduce downloads
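The idea behind an incremental restore can be sketched as: hash the blocks of the local file, compare against the backup's block-hash manifest, and download only the blocks that differ. The function and manifest shape below are assumptions for illustration, not HashBackup's actual restore protocol.

```python
import hashlib

def blocks_to_download(manifest, local_data, block_size=4096):
    """Given a backup manifest (ordered block hashes) and current
    local file contents, return indexes of blocks that must be
    downloaded; blocks whose local hash still matches are reused.
    (Sketch of the idea, not HashBackup's on-disk format.)"""
    need = []
    for i, want in enumerate(manifest):
        local = local_data[i * block_size:(i + 1) * block_size]
        if hashlib.sha1(local).hexdigest() != want:
            need.append(i)
    return need

original = b"A" * 4096 + b"B" * 4096 + b"C" * 4096
manifest = [hashlib.sha1(original[i:i + 4096]).hexdigest()
            for i in range(0, len(original), 4096)]
damaged = b"A" * 4096 + b"X" * 4096 + b"C" * 4096
# only the middle block changed, so only index 1 needs a download
```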

  • Quick verify of remote data without downloading (dest verify)

  • Utilizes multiple CPU cores for fast backups and restores. All destination uploads and downloads are multithreaded

  • Backup to multiple destinations with one filesystem scan

  • Selective download retrieves only the blocks needed (partial backup file download)

  • Extremely space-efficient incremental backups of virtual machine disk images: .vmdk, .hdd, .qcow, etc.

  • Backup and restore devices and LVM snapshots directly

  • Fifo backup directly saves program output and database dumps

  • Smart cache enables restoring large backups with minimal disk space

  • Efficiently handles "sparse" (thin-provisioned) files with OS support

  • Checkpoints allow restarting interrupted backups & restores


  • Dedups redundant data within and across files with variable-block source dedup

  • Backup time limits and fast restarts handle huge multi-day initial backups

  • Upload bandwidth limiting

  • Simulated backups to model backup options over time