Upload to S3 fails on larger files

Asked by Bernhard Truhnkolt

Hello!

I am trying to do a simple SQL backup to S3 on a regular basis with Duplicity.

On each run, the whole database is dumped into one file (about 200 MB), overwriting the file from the previous run. Duplicity should then upload it to S3, but the upload fails with "Broken pipe" or "Connection reset by peer" while uploading the difftar.

When I replace the dump with a smaller file (1 MB), the upload works. When I replace it with a 200 MB file consisting of zeros, it also works; when I replace it with a 200 MB file of urandom output, it fails.

Fiddling with --volsize doesn't change anything.
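For anyone trying to reproduce this: the zeros-vs-urandom difference is what you'd expect from compression, since duplicity's volumes are compressed before upload, so a 200 MB file of zeros turns into a tiny transfer while random data goes over the wire at full size. A minimal sketch of the two test files (paths are placeholders; the duplicity invocation is shown only as a comment, with a placeholder bucket):

```shell
# Build a 200 MB all-zeros file and a 200 MB random file
dd if=/dev/zero    of=/tmp/zeros.dump  bs=1M count=200 2>/dev/null
dd if=/dev/urandom of=/tmp/random.dump bs=1M count=200 2>/dev/null

# Zeros collapse under gzip (well under 1 MB); random data barely
# shrinks at all, so only the random file exercises a full-size upload
gzip -c /tmp/zeros.dump  | wc -c
gzip -c /tmp/random.dump | wc -c

# Then back each one up in turn (bucket/prefix are placeholders):
#   duplicity full /tmp s3+http://my-bucket/sql-backup
```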

I am using duplicity 0.6.18.

Is this a bug, or did I do something wrong?

Question information

Language: English
Status: Answered
For: Duplicity
Assignee: No assignee
edso (ed.so) said :
#1

On 23.11.2012 18:25, Bernhard Truhnkolt wrote:
> New question #215065 on Duplicity:
> https://answers.launchpad.net/duplicity/+question/215065

you could ask on the mailing list whether somebody has had an issue like that before.

it would help if you could provide the complete error stacks for the mentioned errors.

did you try raising --num-retries?
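For example, something like the following (the flag is from duplicity's own option list; the source path, bucket, and prefix are placeholders, and the exact retry count is just a suggestion):

```shell
# Retry each failed volume upload more times before duplicity gives up;
# source path, bucket, and prefix below are placeholders
duplicity --num-retries 20 /var/backups/sql s3+http://my-bucket/sql-backup
```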

..ede/duply.net

Dalton (dboutte) said :
#2

Have you tried the --s3-use-multiprocessing option? I found I had far fewer problems with uploads of large files using it.
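For anyone landing here later, a sketch of that invocation (the flag name is from duplicity's manpage and needs the boto backend; source path, bucket, and prefix are placeholders):

```shell
# Upload volumes with parallel worker processes instead of one stream
# (requires the boto S3 backend); all paths below are placeholders
duplicity --s3-use-multiprocessing full /var/backups/sql s3+http://my-bucket/sql-backup
```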

Bernhard Truhnkolt (meister-pfolmer) said :
#3

Sooo... nothing has happened on this issue for a long time.

All I can say is that I hadn't had time to deal with it; I just left it in my crontab, and suddenly it started working. :-o

I'll leave this question 'unanswered' so that it can be reused if someone else runs into this problem (or maybe I will again).

Thank you for your replies!

Kenneth Loafman (kenneth-loafman) said :
#4

Which duplicity version?

Bernhard Truhnkolt (meister-pfolmer) said :
#5

At the moment it's 0.6.21, but unfortunately I don't have any record of when I ran the updates.
