Would it be a problem if some files were falsely assumed to be unchanged in one scan?
I am currently pondering how to do /fast/ backups ( http://
One possibility that comes to my mind is using filesystem snapshot diffs, such as those btrfs provides, to obtain a list of changed files before I even start the backup.
The general idea goes like this:
- Take an initial file system snapshot S1.
- Every time I want to do an incremental backup:
- create a new snapshot S2
- use some file system voodoo to get a list of changes or at least of the changed files between S1 and S2
- use my backup script/application to back up only those files, since no other files changed between the snapshots
- reassign S2 to be the new 'current state', i.e. S1 <-- S2
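The loop above can be sketched as follows. This is only a toy model under my own assumptions: `diff`, `take_snapshot`, and `backup_files` are hypothetical stand-ins, and a snapshot is modelled as a plain dict; a real implementation would use something like `btrfs subvolume snapshot` plus `btrfs send -p` (or `btrfs subvolume find-new`) to obtain the change list.

```python
# Toy model of the snapshot-diff backup loop. A "snapshot" is a dict
# mapping path -> content hash; real snapshots would be btrfs subvolumes.

def diff(s1, s2):
    """Return paths that are new or changed between snapshots s1 and s2."""
    return {p for p, h in s2.items() if s1.get(p) != h}

def incremental_backup(current, take_snapshot, backup_files):
    """One round of the loop: snapshot, diff, back up, rotate."""
    s2 = take_snapshot()          # create a new snapshot S2
    changed = diff(current, s2)   # changes between S1 and S2
    backup_files(changed)         # back up only those files
    return s2                     # S1 <- S2 for the next round

# Usage with fake data:
s1 = {"/etc/fstab": "a1", "/home/me/notes.txt": "b2"}
live = {"/etc/fstab": "a1", "/home/me/notes.txt": "b3", "/home/me/new.txt": "c4"}

backed_up = []
s1 = incremental_backup(s1, lambda: dict(live), backed_up.extend)
# Only the changed file (notes.txt) and the new file (new.txt) are backed up,
# and s1 now refers to the new snapshot.
```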
As I gather, it would not be a problem to pass such a list to rsync or the like.
But duplicity doesn't support this so far, right?
Now I am wondering if passing such a list and making a mistake in the process would be a problem for duplicity.
I.e., if I hypothetically implemented a feature that scans only the files listed in a given change list, and that list were incomplete, would duplicity pick up the remaining changed files on the next full scan without problems?
I am wondering because obviously speed should not come at the expense of completeness...
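My (possibly wrong) understanding is that duplicity compares live files against the signatures stored with the previous backup, so a file missed by one fast scan would still show as changed on a later full scan. A toy illustration of that reasoning, with all names hypothetical and the "signature archive" reduced to a dict:

```python
# Toy model: a file omitted from one fast scan's change list is missed in
# that run, but a later full scan still sees its stored signature differ
# from the live file and therefore picks it up.

def scan_changes(signatures, live, candidates=None):
    """Compare live files against stored signatures.

    candidates=None means a full scan over all live files; otherwise only
    the listed paths are checked (the hypothetical 'fast' mode)."""
    paths = live if candidates is None else candidates
    return {p for p in paths if signatures.get(p) != live.get(p)}

signatures = {"a.txt": "v1", "b.txt": "v1"}   # state as of the last backup
live = {"a.txt": "v2", "b.txt": "v2"}         # both files changed on disk

# Fast scan with an incomplete list: b.txt is falsely assumed unchanged.
fast = scan_changes(signatures, live, candidates={"a.txt"})
signatures.update({p: live[p] for p in fast})  # backup records a.txt only

# Next full scan: b.txt's stored signature still differs, so it is found.
full = scan_changes(signatures, live)
```

So in this model, speed only delays the backup of a missed file until the next complete scan; it is not lost for good, provided full scans still happen.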
Question information
- Language: English
- Status: Solved
- For: Duplicity
- Assignee: No assignee
- Solved by: edso