These backup times seem very excessive

Asked by Chris Stankaitis

I am backing up files from one RAID array to a second, local array (i.e. from /dev/sdb to /dev/sdc), and I am seeing very long backup times. This prevents me from running daily incremental backups.

Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Tue Apr 3 13:43:15 2012
--------------[ Backup Statistics ]--------------
StartTime 1334198004.29 (Wed Apr 11 22:33:24 2012)
EndTime 1334504987.98 (Sun Apr 15 11:49:47 2012)
ElapsedTime 306983.69 (85 hours 16 minutes 23.69 seconds)
SourceFiles 203
SourceFileSize 314257699705 (293 GB)
NewFiles 6
NewFileSize 24576 (24.0 KB)
DeletedFiles 0
ChangedFiles 40
ChangedFileSize 239824252920 (223 GB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 46
RawDeltaSize 28361373902 (26.4 GB)
TotalDestinationSizeChange 19308097435 (18.0 GB)
Errors 0
-------------------------------------------------

and

Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Tue Apr 3 19:29:54 2012
--------------[ Backup Statistics ]--------------
StartTime 1334506498.81 (Sun Apr 15 12:14:58 2012)
EndTime 1334548211.79 (Sun Apr 15 23:50:11 2012)
ElapsedTime 41712.98 (11 hours 35 minutes 12.98 seconds)
SourceFiles 1047
SourceFileSize 211338608127 (197 GB)
NewFiles 8
NewFileSize 73728 (72.0 KB)
DeletedFiles 0
ChangedFiles 381
ChangedFileSize 208686327292 (194 GB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 389
RawDeltaSize 17216688797 (16.0 GB)
TotalDestinationSizeChange 3215297671 (2.99 GB)
Errors 0
-------------------------------------------------

What, if anything, can I do to speed this up?

The --asynchronous-upload flag has been marked as experimental for about as long as I have been using duplicity (3 years or so). Is this still very risky? Has the code been fixed and the docs simply not updated to reflect that? What are the risks of using this flag?

Running on

RHEL6 x86_64
duplicity 0.6.17

Question information

Language: English
Status: Answered
For: Duplicity
Assignee: No assignee
edso (ed.so) said :
#1

On 16.04.2012 21:30, Chris Stankaitis wrote:
> ...

Please try 0.6.18 and see if the issue persists.

--asynchronous-upload will probably stay marked experimental as long as nobody comes forward reporting that it works flawlessly.. I, for one, have used it since its inception and never had issues. How about you?

..ede/duply.net

Kenneth Loafman (kenneth-loafman) said :
#2

Yes, please try 0.6.18. It may help. Also, turn on --asynchronous-upload.

A couple of questions...

Is this a software RAID?

Are /dev/sdb and /dev/sdc on the same controller?

Where is /tmp?

...Ken

Chris Stankaitis (cstankaitis) said :
#3

This is HW RAID: 3ware 9750 series.

2 drives - /u0, RAID 1 - OS + temp
12 drives - /u1, RAID 10 - where the db dump lives and where duplicity copies from
12 drives - /u2, RAID 6 - where duplicity copies to

I am going to upgrade from 0.6.17 to 0.6.18. I am sorry, but I am not confident this will do much. I will run a normal incremental on 0.6.18 to show that it didn't help much (if it does, I'll be the first to admit I was wrong), and then I will run another incremental with --asynchronous-upload and we'll see what happens.

Chris Stankaitis (cstankaitis) said :
#4

For reference, this is our command line:

duplicity -v8 --no-encryption --archive-dir "/var/duplicity" --tempdir "/var/tmp" dbsource/current file:///backup2/archive/dbsource/

edso (ed.so) said :
#5

On 17.04.2012 17:00, Chris Stankaitis wrote:
> ...

Please try. A memory-leak issue was fixed that had a huge impact on performance, especially on big backups.

ede/duply.net

Kai-Alexander Ude (0cs935kb517wwmwa7m9428daadkyev88-mail-wz6bkyhu4uqpfausw0ege9b0y33ege6o) said :
#6

Hey guys!

I have had the same upload problem for a long time.
Each incremental backup is between 5 and 10 GB, and each individual file is 1 GB.
Most of the files are created and transferred to the NAS (mount point) within 5 to 7 minutes.
But some files need more than an hour to be created and transferred.

Now an incremental backup has been running for 12 hours.
One of its files has taken more than 10 hours to create, at 16 KB per second.
Server load is at 5.00 ...

Duplicity 0.6.08b-1 was installed on a Debian Squeeze machine with duply 1.5.2.3 as a wrapper.
Two days ago I upgraded duplicity to 0.6.13 from debian-backports.

I would like to upgrade to 0.6.18, but how do I do that via aptitude?
When I try to install from sid there are some dependency problems.

edso (ed.so) said :
#7

On 20.04.2012 11:15, Kai-Alexander Ude wrote:
> ...

Try the mini-howto under TIP here:
http://duply.net/?title=Duply-documentation

Install the needed packages mentioned there via aptitude first.

..ede

Chris Stankaitis (cstankaitis) said :
#8

OK, so the upgrade to 0.6.18 has helped a bit...

--------------[ Backup Statistics ]--------------
StartTime 1334754015.69 (Wed Apr 18 09:00:15 2012)
EndTime 1334870565.07 (Thu Apr 19 17:22:45 2012)
ElapsedTime 116549.39 (32 hours 22 minutes 29.39 seconds)
SourceFiles 203
SourceFileSize 319343112521 (297 GB)
NewFiles 6
NewFileSize 24576 (24.0 KB)
DeletedFiles 0
ChangedFiles 34
ChangedFileSize 244908956420 (228 GB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 40
RawDeltaSize 7674819255 (7.15 GB)
TotalDestinationSizeChange 5589184884 (5.21 GB)
Errors 0
-------------------------------------------------

This still doesn't let me fit in a daily incremental, however, since an incremental is taking more than a day. I am going to see what the --asynchronous-upload flag does to this timing and will report back in a day, or however long the next incremental takes.

edso (ed.so) said :
#9

On 20.04.2012 15:20, Chris Stankaitis wrote:
> ...

Doing backups to a local file:// target and uploading/syncing to the remote via an extra script usually does the trick for people with poor bandwidth.
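
A hedged sketch of that pattern (the paths and the remote host are made up, and rsync is just one possible sync tool):

# 1) back up quickly to a local staging directory
duplicity /path/to/source file:///backup-staging/source
# 2) push the finished volumes to the remote host in a separate step
rsync -av --partial /backup-staging/source/ backuphost:/remote/archive/source/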

ede/duply.net

Kai-Alexander Ude (0cs935kb517wwmwa7m9428daadkyev88-mail-wz6bkyhu4uqpfausw0ege9b0y33ege6o) said :
#10

Ede, thanks for the quick response.
I hope upgrading to the latest version will help.

Bandwidth is not the reason for my problem.
Last night, creating two temp files in /var/tmp/duplicity-xxx/ (local hard drive, 1 GB per file) took nearly 10 hours.
Uploading took about 5 minutes per file.

I guess the option --asynchronous-upload will not do the trick for me.
I'll keep you informed ...

Kai-Alexander Ude (0cs935kb517wwmwa7m9428daadkyev88-mail-wz6bkyhu4uqpfausw0ege9b0y33ege6o) said :
#11

Installed the latest version of duplicity. Still the same problems :-(

Upload from the temp dir to the target is still not the problem; it runs at 100 MB/s.
But creating the temp files in /var/tmp is very slow.

Currently an incremental backup is running.
The temp file size is growing at about 200 KB/s.
However, some temp files are packed in less than 5 minutes.

Maybe the problem is caused by compression.
Is there an alternative compression method?

Greetz!

Log from first full backup:
--- Start running command BKP at 21:13:14.146 ---
Import of duplicity.backends.sshbackend Failed: No module named paramiko
Import of duplicity.backends.giobackend Failed: No module named gio
Reading globbing filelist /etc/duply/server/exclude
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
Reuse configured PASSPHRASE as SIGN_PASSPHRASE
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1334949194.83 (Fri Apr 20 21:13:14 2012)
EndTime 1334967065.29 (Sat Apr 21 02:11:05 2012)
ElapsedTime 17870.47 (4 hours 57 minutes 50.47 seconds)
SourceFiles 1410078
SourceFileSize 99876357390 (93.0 GB)
NewFiles 1410078
NewFileSize 99799028328 (92.9 GB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 1410078
RawDeltaSize 99456103820 (92.6 GB)
TotalDestinationSizeChange 58708663941 (54.7 GB)
Errors 0
-------------------------------------------------

--- Finished state OK at 02:16:18.216 - Runtime 05:03:04.070 ---

Kenneth Loafman (kenneth-loafman) said :
#12

Hmm, the log shows the number of files, 1,410,078, and that's a lot. I'm thinking this may have more to do with it than anything else. They are averaging 70.8k apiece and don't seem to be compressible. I'm thinking that the memory use must be high, possibly swapping. Can you tell if swapping is occurring?

If swapping is occurring, split the backup into multiple parts that make sense and try again.

edso (ed.so) said :
#13

On 22.04.2012 22:30, Kai-Alexander Ude wrote:
> Installed the last version of duplicity. Still the same problems :-(

OK, please check, as Ken proposed, whether your system is swapping because duplicity's process fills up your RAM.

> Upload from temp dir to the target is still not the problem, 100 MB/s.
> But creating temp files in /var/tmp is very slow.

did you make sure that your /tmp filesystem is not the issue here?
>
> Currently incremental backup is running.
> The temp file size is increasing in 200kb/s steps.
> However, some temp files are packed in less than 5 minutes.

Maybe it gets sporadically slow?

> Maybe it could be a problem caused by compression.
> Is there an alternative compression method?

Duplicity leaves compression to gpg. If you add the appropriate --gpg-options you can change the algorithm and level.

Btw, it seems that nearly your whole backup changes on incrementals. In the case of e.g. database dumps, we usually suggest not compressing them beforehand, as that makes it impossible for librsync to detect only the changed portions within a file. Compression is then done during the backup.
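
For illustration, a hedged sketch of passing such options through to gpg (the algorithm/level values are only examples, and the source/target paths are placeholders):

# pick a different compression algorithm and level for gpg
duplicity --gpg-options "--compress-algo=bzip2 --compress-level=1" /path/to/source file:///path/to/target
# or turn gpg compression off entirely for data that is already compressed
duplicity --gpg-options "-z 0" /path/to/source file:///path/to/target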

..ede/duply.net

Kai-Alexander Ude (0cs935kb517wwmwa7m9428daadkyev88-mail-wz6bkyhu4uqpfausw0ege9b0y33ege6o) said :
#14

Swapping is not occurring because the server has 24 GB of RAM. The swap partition is 512 MB.
When a backup is running, RAM usage is not really high; about 4 GB is in use.
Is it possible to tell duplicity to use more RAM?
How can I prevent swapping?

> If swapping is occurring, split the backup into multiple parts that make sense and try again.
Does splitting the backup into multiple parts mean one backup profile for email, one profile for web, one profile for /etc?

> did you make sure that your /tmp filesystem is not the issue here?
I changed the temp dir to /var/tmp because /tmp is part of the system partition (the / mount point), which has just 20 GB.
/var/tmp is mounted on a big partition; the file system is ext3.

> maybe it get's sporadically slow?
Absolutely. Sometimes a duplicity temp file in /var/tmp needs just 5 minutes to be created.
Sometimes another one needs more than a couple of hours.

> btw. it seems that nearly your whole backup changes on incrementals. in case of e.g. database
> dumps we usually suggest not to compress them as this makes the impossible for librsync to
> detect only the changed portions within a file. compression is then done during backup.

Database dumps are not in the backup set of duplicity. Another process handles database backup.
And /var/lib/mysql is an excluded directory.

edso (ed.so) said :
#15

On 23.04.2012 15:50, Kai-Alexander Ude wrote:
> Swapping is not occurring cause the server has 24 GB of ram. Swap partition is 512 MB.
> When backup is running ram load is not really high. About 4 GB is in use.
> Is it possible telling duplicity to use more ram?
> How can I prevent the swapping process?

You can't. Swapping/paging occurs when RAM is full and additional memory is needed; less-used RAM content is then moved from fast RAM to slow swap disks. The alternative is that the kernel kills processes that it thinks are not essential.

>> If swapping is occurring, split the backup into multiple parts that make sense and try again.
> Split the backup into multiple parts means one backup profile for email, one profile for web, one profile for /etc ?

That's what Ken meant, but obviously that's not needed.

>> did you make sure that your /tmp filesystem is not the issue here?
> Changing temp dir to /var/tmp because /tmp is part of the system partition (/ mount point) with just 20 GB.
> /var/tmp is mounted on a big partition, file system is ext3.

Did you benchmark the filesystem over a longer period to make sure it is reliably fast?
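
A crude, hedged way to spot-check sustained write throughput on the temp filesystem (the test file name and size here are arbitrary):

# write 1 GB to the temp filesystem and force it to disk; dd reports the throughput
dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm -f /var/tmp/ddtest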

>
>> maybe it get's sporadically slow?
> Absolutely. Sometimes the duplicity temp file in /var/tmp needs just 5 minutes to be created.
> Sometimes (another duplicity temp file) needs more than a couple of hours.

see above.

>
>> btw. it seems that nearly your whole backup changes on incrementals. in case of e.g. database
>> dumps we usually suggest not to compress them as this makes the impossible for librsync to
>> detect only the changed portions within a file. compression is then done during backup.
>
> Database dumps are not in the backup set of duplicity. Another process handles database backup.
> And /var/lib/mysql is an excluded directory.

That was just an example. What kind of data do you back up? It seems to change to a large degree.

Did you try the compression switches of gpg?

..ede/duply.net

Chris Stankaitis (cstankaitis) said :
#16

Unfortunately a reboot of one of our boxes killed my screen session, so I am restarting the process of gathering stats on the speed difference of using async uploads. Will report on that ASAP.

Kai-Alexander Ude (0cs935kb517wwmwa7m9428daadkyev88-mail-wz6bkyhu4uqpfausw0ege9b0y33ege6o) said :
#17

Swapping does not occur, thanks to the 24 GB of RAM.
It does not occur when the server is really busy, and it does not occur when a backup is in progress.

I did not do a benchmark of the filesystem.
It only feels reliably fast :-) What benchmark logs would you like to see?

I only changed the volume size from 1 GB to 250 MB, but that did not do the trick.
Later I will undo the changes and try another compression.

For now compression is the default method; there are no additional parameters in the config.
I will change to --compress-algo=bzip2 and --bzip2-compress-level=9.

Would you agree with those compression parameters, or do you have another suggestion?

edso (ed.so) said :
#18

On 23.04.2012 17:05, Kai-Alexander Ude wrote:
> Swapping does not occur because of 24 GB ram.
> It does not occur when server is really busy and it does not occur when backup is in progress.
>
> I did not do a benchmark of the filesystem.
> It only feels reliable fast :-) What benchmark logs do you like to see?

Me? None ;) .. but how can you be sure the issue is with duplicity if you didn't even try to rule out your tmp space?

> I only changed the volume size from 1GB to 250MB. But it does not the trick.
> Later I will undo the changes and try another compression.
>
> For now compression is default method. No additional parameter in the config.
> Will change to --compress-algo=bzip2 and --bzip2-compress-level=9
>
> Would you agree with the compress parameter or do you have another
> suggestion?
>

Yes, but also try disabling compression entirely for testing purposes, as it seems plausible that it is the issue here.

Try 0.6.18 first, as already suggested.

..ede/duply.net

Kai-Alexander Ude (0cs935kb517wwmwa7m9428daadkyev88-mail-wz6bkyhu4uqpfausw0ege9b0y33ege6o) said :
#19

I installed version 0.6.18 three days ago.

What I did so far:
- installed 0.6.18 and doing backups (full / incr.) with this version
-> since the upgrade, CPU load is never more than 2.00
-> before the upgrade, CPU load was about 5 or more

- changed config, volsize from 1000 to 250 ... and back.
-> nothing changed

- changed config, temp dir from /var/tmp (my default) to /tmp and a 1 Gb/s network share ... and back to default
-> nothing changed

- changed config, gpg options to '--compress-algo=bzip2 --bzip2-compress-level=9'; this config line had been commented out the whole time
-> waiting for result but ...

... I cautiously think that since I upgraded to the newest version there isn't really an issue any more (except the CPU (over)load). I think there are no problems with duplicity and no problems with the hard drive, RAM, etc.
Kenneth could well be right. 1.5 million files are a lot. Mainly small files: mail and a lot of CMS web pages.

Currently I have configured one duply profile for everything I would like to back up (server config, mail and web).
Anything else is excluded. Maybe optimizing the backup process would help.
Does anyone have experience structuring backup processes with duplicity?
I googled for that, but there were only small examples ...

I will try my best. Trial and error :-)
Thank you for your help and time.
I'll keep you informed ...

edso (ed.so) said :
#20

On 23.04.2012 22:30, Kai-Alexander Ude wrote:
> I installed version 0.6.18 three days ago.
>
> What I did so far:
> - installed 0.6.18 and doing backups (full / incr.) with this version
> -> since upgrade cpu load is never more than 2.00
> -> before upgrade cpu load was about 5 and more

good to hear

...
> - changed config, gpg options to '--compress-algo=bzip2 --bzip2-compress-level=9'; this config line was commented all the time
> -> waiting for result but ...

Remember to try without compression; you could also try leaving out encryption completely (--no-encryption) for testing purposes, to pinpoint whether gpg is the problem here.
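
A hedged sketch of such a test run, using only flags that already appear in this thread (the source and target paths are placeholders):

# one backup without encryption, so gpg is taken out of the picture
duplicity -v8 --no-encryption /path/to/source file:///path/to/test-target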

> ... I carfully think since I upgraded to newest version there isn't really an issue (except cpu (over)load). I think there are no problems with duplicity and no probs with harddrive, ram, etc.
> Kenneth could be very right. 1.5 million files are a lot. Mainly small files, mail and a lot of cms webpages.
>
> Currently I configured one duply profile for everything I would like to backup (server config, mail and web).
> Anything else is excluded. Maybe optimizing the backup process could be helpful.
> Are there any experiences with backup processes in connection with duplicity?
> I googled for that but there were only small examples ...

Yes, try a smaller subset of your data, like 10 GB, where duplicity is proven to perform well.
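
One hedged way to restrict a test to such a subset is duplicity's file selection; the directory names here are made up for illustration:

# back up only /data/mail (roughly 10 GB, say); everything else under /data is excluded
duplicity --include /data/mail --exclude '**' /data file:///backup/duplicity-test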

> I will try my best. Try and error :-)
> Thank you for your help and time.
> I'll keep you informed ...

please do.. ede/duply.net

Kai-Alexander Ude (0cs935kb517wwmwa7m9428daadkyev88-mail-wz6bkyhu4uqpfausw0ege9b0y33ege6o) said :
#21

Sorry for the long delay.
I had a lot to do in the last two weeks, so no time to check the backup process :-(

But now there is another problem with duplicity.
I installed duplicity to /root/_apps/duplicity-0.6.18/ (see http://duply.net/?title=Duply-documentation under TIP).
After that I created a symlink to /usr/local/bin: ln -s /root/_apps/duplicity-0.6.18/bin/duplicity /usr/local/bin/duplicity

If I run duplicity directly from its install dir (/root/_apps/duplicity-0.6.18/bin/duplicity --version) the output is "duplicity 0.6.18". Absolutely correct. But if I run duplicity from /usr/local/bin (using the symlink) the output is:
"Traceback (most recent call last):
File "/usr/local/bin/duplicity", line 40, in <module>
from duplicity import log
ImportError: No module named duplicity"

Because of that problem I can't run duply correctly, since I can't find a way to tell duply where to find duplicity (in my case under /root/_apps/duplicity-0.6.18/bin/duplicity ...)

Please tell me what I am doing wrong, or how to install duplicity and duply correctly (not via aptitude, because only very old versions of duply and duplicity are available in the stable repository).

Thank you very much for your help!

edso (ed.so) said :
#22

On 04.05.2012 11:20, Kai-Alexander Ude wrote:
> ...

Don't symlink. Use a shell wrapper script that executes duplicity from where it is installed. I use, e.g.:

#!/bin/bash
# use the duply that sits next to this wrapper ...
DUPLY=$(dirname "$0")/duply.sh
# ... or point DUPLY at a specific duply release instead
#DUPLY=~user/release/duply_1.5.2.3/duply
# prepend the wanted duplicity version to PATH so duply (and its children) pick it up
PATH=~user/_apps/duplicity-0.6.18.boxnet/bin:$PATH
# hand all arguments straight through to duply
"$DUPLY" "$@"

There I configure the location of duply, add the duplicity binary to the PATH, and start duply with all given parameters. Modify it to your paths, save it to /usr/local/bin/duply, and make it executable.
I should probably add this to the TIP section.

You could of course also add the location of the duplicity you want to use to your PATH variable globally, but having a script like the above allows you to switch versions much more easily.
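
For example (a minimal sketch, assuming the wrapper was saved as suggested above):

chmod +x /usr/local/bin/duply
duply --version   # should now report the duplicity found via the adjusted PATH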

..ede/duply.net

Kai-Alexander Ude (0cs935kb517wwmwa7m9428daadkyev88-mail-wz6bkyhu4uqpfausw0ege9b0y33ege6o) said :
#23

Hi ede!

Thank you very much for the very quick response.
I put your script in /usr/local/bin and changed the paths to my environment.

After that I ran duply --version, and this is what I got back:
duply version 1.5.5.5
(http://duply.net)
Using installed duplicity version 0.6.18, python 2.6.6, gpg 1.4.10 (Home: ~/.gnupg), awk 'mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan'.

I hope that with your wrapper everything will work easily.
I will try to get the backup process running this evening ...

I'll keep you informed!

Chris Stankaitis (cstankaitis) said :
#24

So I am still getting unacceptably long backups.. using this command line:

[<email address hidden> dailybackup]# duplicity --asynchronous-upload --no-encryption --gpg-options "compress-level=0" --archive-dir "/var/duplicity" --tempdir "/var/tmp" semtech-db-prod-s1-9907/current file:///backup2/archive/semtech-db-prod-s1-9907/

Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Fri May 4 13:13:29 2012

--------------[ Backup Statistics ]--------------
StartTime 1336494556.51 (Tue May 8 12:29:16 2012)
EndTime 1336709802.21 (Fri May 11 00:16:42 2012)
ElapsedTime 215245.70 (59 hours 47 minutes 25.70 seconds)
SourceFiles 203
SourceFileSize 336245292477 (313 GB)
NewFiles 6
NewFileSize 24576 (24.0 KB)
DeletedFiles 0
ChangedFiles 34
ChangedFileSize 261802132524 (244 GB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 40
RawDeltaSize 17696623825 (16.5 GB)
TotalDestinationSizeChange 12006751376 (11.2 GB)
Errors 0
-------------------------------------------------

59 hours for an incremental is not good...

edso (ed.so) said :
#25

Try to single out what the issue is..
Usually it is the upload. Run your backups with raised verbosity so you can see how long it takes to build the diff volumes.

Why did you use --gpg-options "compress-level=0" (does that even exist?) and not --gpg-options "-z 0"?
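
For reference, a hedged sketch of the same command line with that substitution (everything else copied verbatim from the earlier post):

duplicity --asynchronous-upload --no-encryption --gpg-options "-z 0" --archive-dir "/var/duplicity" --tempdir "/var/tmp" semtech-db-prod-s1-9907/current file:///backup2/archive/semtech-db-prod-s1-9907/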

Observe your system during the backup. How much memory does duplicity use? Is it swapping?

For comparison, do a backup to a local file target, and some incrementals to it later, to find out whether the basic backup creation works flawlessly and performs well enough.

..ede/duply.net

Kai-Alexander Ude (0cs935kb517wwmwa7m9428daadkyev88-mail-wz6bkyhu4uqpfausw0ege9b0y33ege6o) said :
#26

First of all, thank you all for the great support!
As of about a week ago my problem is gone, and I hope it never comes back.

I think the reason was the number and size of the files that were backed up every day.
After upgrading duplicity and duply to the latest versions and dividing the single profile that used to manage the whole backup into four separate profiles, everything works fine and very fast :-)
Now every profile "belongs" to a single service (web, mail, db, ...).
The incremental backup now takes less than an hour instead of 10 hours or more.

But ten hours is nothing in comparison to cstankaitis' problem.
Are you trying to back up files which are already compressed?
I added this additional parameter:
--gpg-options "--compress-algo=bzip2 --bzip2-compress-level=1"
... in one specific profile that is used for backing up already-compressed files.

Could it be that the files are modified during the backup process?

edso (ed.so) said :
#27

On 11.05.2012 20:45, Kai-Alexander Ude wrote:
> ...

Splitting it up is probably an option here. Give it a try, Chris. When duplicity was started, in an era of expensive data storage, it was never developed with hundreds of gigabytes to test on.

On smaller datasets, however, it is proven stable and reliably fast.
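
A hedged sketch of what splitting the job could look like for Chris, reusing the source path from his earlier command; the setA/setB subdirectories are hypothetical:

# one duplicity chain per subset of the data, each with its own target directory
duplicity dbsource/current/setA file:///backup2/archive/dbsource-setA/
duplicity dbsource/current/setB file:///backup2/archive/dbsource-setB/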

..ede/duply.net
