backintime to an encfs fs

Asked by Christophe

I create an ext filesystem inside a (large) file. It will hold the data saved by backintime. (This is to keep the backup from growing endlessly, and also to avoid creating millions of files/links, which makes the disk very slow to scan if I run gparted on that machine at a later time.)
This FS is loop-mounted, then inside it I create an encfs setup where (a rough sketch of the commands follows):
- BIT is the unencrypted view of .encBIT, which is the encrypted fs (with encfs)
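For reference, a rough sketch of that setup, with placeholder sizes and paths rather than the exact commands used:

    # Create a large file and put an ext4 filesystem in it
    truncate -s 200G /var/backup/bit-container.img
    mkfs.ext4 -F /var/backup/bit-container.img

    # Loop-mount the container
    mkdir -p /mnt/bitfs
    mount -o loop /var/backup/bit-container.img /mnt/bitfs

    # Inside it, create the encfs pair: BIT is the decrypted view of .encBIT
    mkdir -p /mnt/bitfs/.encBIT /mnt/bitfs/BIT
    encfs /mnt/bitfs/.encBIT /mnt/bitfs/BIT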

I would expect this to work, but it does not; more precisely, backintime runs, but all files get copied at each backup (not good).
I have seen similar issues reported about the interaction between encfs and rsync, but they are rather old, so I would have expected them to have been solved by now.

Now my questions:
- Is this supposed to work? What am I missing? If not, is a LUKS container the way to do what I am trying to do?
- Also, I have seen there is an encfs option for backintime when it is invoked from the command line; I did not find a way to activate this from the GUI. Am I missing anything here?

Thanks for any hints.

Question information

Language:
English
Status:
Solved
For:
Back In Time
Assignee:
No assignee
Solved by:
Christophe

Germar (germar) said :
#1

1) encfs should work, but you need to use --standard when creating the encfs folder. With the paranoid settings, hard links will not work (see the example after this list).
2) Starting with version 1.0.26 you will find the mode 'Local encrypted' in the General tab (currently it is 'Local'). This will create and mount the encfs folder automatically.
3) How did you check that BIT didn't use hard links? Please take a look at the FAQ: https://answers.launchpad.net/backintime/+faq/2403
4) You don't need to create a virtual fs to prevent snapshots from filling up your drive. You can configure 'Auto remove' to delete the oldest snapshots if you have e.g. less than 200 GB free space. 'Smart Remove' is also quite good for this.
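A minimal sketch of point 1, assuming the container is loop-mounted at /mnt/bitfs (a placeholder path): create the encfs with the standard configuration and then confirm that hard links work in the decrypted view.

    # Create (or re-create) the encfs with the standard configuration
    # (answer the setup prompt with the standard option, not paranoid).
    encfs --standard /mnt/bitfs/.encBIT /mnt/bitfs/BIT

    # Quick hard-link check inside the decrypted view:
    cd /mnt/bitfs/BIT
    echo test > a
    ln a b
    ls -li a b   # same inode number and a link count of 2 means hard links work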

Regards,
Germar

Christophe (chpnp) said :
#2

Thanks for your email.

1- I used paranoid; that explains it.
2- Ubuntu ships an older version... Shame on me.
3- du -ks gives the same size; then, in the backup directory, ls -l shows the number of hard links per file, and I can see it stays at "1" where it should be higher.
The method you propose shows different inode numbers (see the check below).
4- OK, I have been using that; but it still creates hundreds of thousands of files, if not millions. I suspect this explains why gparted against that disk needs > 10 minutes (!) to start... :(
I am hoping to solve this performance issue by having only a single large file.
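A quick way to make that hard-link check explicit, with placeholder snapshot paths (the real paths depend on the profile and snapshot IDs): compare the same unchanged file in two consecutive snapshots.

    # Identical inode numbers and a link count > 1 mean rsync hard-linked the
    # file across snapshots; distinct inodes with link count 1 mean it was copied again.
    stat -c '%i %h %n' \
        /mnt/bitfs/BIT/<snapshot-A>/backup/home/user/somefile \
        /mnt/bitfs/BIT/<snapshot-B>/backup/home/user/somefile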

I will try the standard option, though I cannot really gauge the security consequences.

Christophe (chpnp) said :
#3

I did not come back at the time, but I happily created a loop-mounted file in which I store all the files in an encfs file system, which keeps the overall size bounded.
It has worked this way for 2 years, and I have it run every 4 hours, since there is not much data to save in that time frame anyway.
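For completeness, a crontab sketch for the every-4-hours schedule; the exact backintime command-line flag varies between versions (and Back In Time can also install such a job itself via its Schedule setting), so treat this as an illustration only:

    # m  h    dom mon dow  command
    0    */4  *   *   *    /usr/bin/nice -n 19 /usr/bin/backintime --backup >/dev/null 2>&1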

Thanks for the hints; problem solved.

Christophe (chpnp) said :
#4

Solved as stated above.