Amazon S3: Connection reset by peer
Hi Team,
When we try to upload a large backup, nearly 500+ GB, to Amazon S3, each time I get the following output:
Upload 's3://s3.…' failed (attempt #1, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.…' failed (attempt #2, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.…' failed (attempt #3, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.…' failed (attempt #4, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.…' failed (attempt #5, reason: error: [Errno 104] Connection reset by peer)
Giving up trying to upload s3://s3.… after 5 attempts
BackendException: Error uploading s3://s3.…
2014-08-
2014-08-
Any solution for this?
Question information
- Language: English
- Status: Answered
- For: Duplicity
- Assignee: No assignee
#1
On 26.08.2014 08:16, Gaurav Ashtikar wrote:
The connection is reset by the server, maybe a timeout or such.
1. Try raising --num-retries.
2. Check whether your boto version is outdated; there might be a bug in the S3 access libraries.
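Raising --num-retries only helps when the failure is transient. The idea behind a retry loop with backoff can be sketched like this (a generic Python illustration, not duplicity's actual code; all names here are invented):

```python
import time

def upload_with_retries(upload, num_retries=5, base_delay=1.0):
    """Retry a flaky zero-argument callable, backing off between attempts.

    Generic illustration of the idea behind --num-retries, not duplicity's code.
    """
    for attempt in range(1, num_retries + 1):
        try:
            return upload()
        except OSError as exc:
            print("failed (attempt #%d, reason: %s)" % (attempt, exc))
            if attempt == num_retries:
                raise                                        # out of retries: give up
            time.sleep(base_delay * 2 ** (attempt - 1))      # exponential backoff

# A fake upload that is reset twice, then succeeds:
attempts = []
def flaky_upload():
    attempts.append(1)
    if len(attempts) < 3:
        raise OSError(104, "Connection reset by peer")
    return "ok"

result = upload_with_retries(flaky_upload, num_retries=5, base_delay=0)
```

If every attempt fails the same way, as in this thread, the reset is deterministic and no retry count will fix it.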
..ede/duply.net
#2
I just changed --num-retries to 10; still the same error.
My boto is the latest, installed using pip install boto.
Upload 's3://s3.amazonaws.…' failed (attempt #1, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.amazonaws.…' failed (attempt #2, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.amazonaws.…' failed (attempt #3, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.amazonaws.…' failed (attempt #4, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.amazonaws.…' failed (attempt #5, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.amazonaws.…' failed (attempt #6, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.amazonaws.…' failed (attempt #7, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.amazonaws.…' failed (attempt #8, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.amazonaws.…' failed (attempt #9, reason: error: [Errno 104] Connection reset by peer)
Upload 's3://s3.amazonaws.…' failed (attempt #10, reason: error: [Errno 104] Connection reset by peer)
Giving up trying to upload s3://s3.amazonaws.… after 10 attempts
#3
You can raise num-retries to any number you want; if the connection does not recover, there might be a deeper issue.
1. What's your duplicity version?
2. How many files / how much data is your backup in total?
A signature file can grow quite big. Are you sure it fits within the S3 file size limit? The only current workaround to get a smaller signature file is to split your backup into smaller portions.
..ede/duply.net
On 26.08.2014 15:27, Gaurav Ashtikar wrote:
#4
It may be related to this bug in botocore:
https:/
How big is the file you are trying to upload? There is a limit on S3 of 5 GB, I think.
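For reference, the relevant S3 limits (as documented by AWS around that time; treat the exact numbers as assumptions to verify against current docs) are: a single PUT caps out at 5 GiB, while multipart uploads allow objects up to 5 TiB, in at most 10,000 parts of at least 5 MiB each. A quick sketch of the arithmetic:

```python
# Assumed S3 limits (from AWS documentation of the era; verify against
# current docs): single PUT <= 5 GiB; multipart objects <= 5 TiB,
# part size >= 5 MiB, at most 10,000 parts per upload.
GiB = 1024 ** 3
MiB = 1024 ** 2
SINGLE_PUT_MAX = 5 * GiB
PART_MIN, MAX_PARTS = 5 * MiB, 10_000

def needs_multipart(size_bytes):
    """True if the object is too big for a single PUT."""
    return size_bytes > SINGLE_PUT_MAX

def min_part_size(size_bytes):
    """Smallest legal part size that still fits within 10,000 parts."""
    per_part = -(-size_bytes // MAX_PARTS)  # ceiling division
    return max(PART_MIN, per_part)

# The ~6.2 GB signature file mentioned later in this thread is over the
# single-PUT limit, so it can only go up via multipart:
sig = int(6.2 * GiB)
```

So any single file above roughly 5 GB will be rejected on a plain upload, which matches the signature-file failures reported here.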
On Tue, Aug 26, 2014 at 8:27 AM, Gaurav Ashtikar <email address hidden> wrote:
> Upload 's3://s3.amazonaws.…' failed (attempt #1, reason: error: [Errno 104] Connection reset by peer)
> Upload 's3://s3.amazonaws.…' failed (attempt #2, reason: error: [Errno 104] Connection reset by peer)
> Upload 's3://s3.amazonaws.…' failed (attempt #3, reason: error: [Errno 104] Connection reset by peer)
> Upload 's3://s3.amazonaws.…' failed (attempt #4, reason: error: [Errno 104] Connection reset by peer)
> Upload 's3://s3.amazonaws.…' failed (attempt #5, reason: error: [Errno 104] Connection reset by peer)
> Upload 's3://s3.amazonaws.…' failed (attempt #6, reason: error: [Errno 104] Connection reset by peer)
> Upload 's3://s3.amazonaws.…' failed (attempt #7, reason: error: [Errno 104] Connection reset by peer)
> Upload 's3://s3.amazonaws.…' failed (attempt #8, reason: error: [Errno 104] Connection reset by peer)
> Upload 's3://s3.amazonaws.…' failed (attempt #9, reason: error: [Errno 104] Connection reset by peer)
> Upload 's3://s3.amazonaws.…' failed (attempt #10, reason: error: [Errno 104] Connection reset by peer)
> Giving up trying to upload s3://s3.amazonaws.… after 10 attempts
#5
@Kenneth,
Signature file size is nearly 6.2 GB. Is there any workaround to get duplicity working with S3?
#6
Not until 0.7.01. I'm still working on splitting the signature and manifest files.
On Tue, Aug 26, 2014 at 8:56 AM, Gaurav Ashtikar <email address hidden> wrote:
#7
We're in the final stage of uploading a very large backup to S3, and while committing the signature file we started seeing "Connection reset by peer" (we pre-set a larger number of retries). While watching the retries continue to fail, I found this post. Our files are quite a bit larger:
-rw------- 1 root root 736416 Oct 17 06:12 duplicity-
-rw------- 1 root root 21287868023 Oct 17 06:41 duplicity-
-rw------- 1 root root 27911208960 Oct 17 06:12 duplicity-
-rw-r--r-- 2 root root 0 Oct 11 02:49 lockfile.lock
This backup set took about a week and incurred significant data transfer costs from the source location.
I'm assuming the thousands of archive files are perfectly fine up in S3 now (250 MB each, and they went surprisingly well), but I expect the process to ultimately fail when the retry count is reached while committing the signature file.
Is there a way I can manually commit this particular backup set to S3 and continue with this full backup going forward? It would be quite costly for us to have to restart this in smaller sets at this point.
Thank you.
#8
As stated above, this is due to the signature file getting larger than allowed on S3.
So, as long as duplicity is not patched to split it into volume-sized parts, there is nothing you can do except split your big backup into smaller ones.
..ede/duply.net
#9
It looks like S3 may support uploads >5 GB via multipart: http://
A hint I found suggests that boto perhaps supports it too now: https:/
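The bookkeeping a multipart client has to do is plain chunking arithmetic. This sketch is illustrative only (the function name is invented and no real boto API is called); it uses the 21,287,868,023-byte signature file from comment #7 as the example size:

```python
def plan_parts(size_bytes, part_size=250 * 1024 ** 2):
    """Return (part_number, offset, length) tuples for a multipart upload.

    Pure chunking arithmetic that any multipart client must perform;
    the function name is invented, and no real boto API is used.
    """
    parts = []
    offset, number = 0, 1
    while offset < size_bytes:
        length = min(part_size, size_bytes - offset)   # last part may be short
        parts.append((number, offset, length))
        offset += length
        number += 1
    return parts

# The 21,287,868,023-byte signature file from comment #7, in 250 MiB parts:
plan = plan_parts(21_287_868_023)
```

In 250 MiB parts that file comes to 82 parts, comfortably under the 10,000-part ceiling, so multipart would handle it.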
For TODAY, I'm hoping I will be able to transfer this 20 GB+ sig file to S3 manually (I've started that process already). Assuming I can do it manually via the AWS CLI utility, could you help me understand how to complete the final "commit" so I'm not guessing / missing something important? I was left with these three files in the cache directory:
duplicity-
duplicity-
duplicity-
E.g., should I rename the .part files? What's the difference between .part and .gpg? Is there a way I can see the final stats on this job (total files/size)?
I'm very impressed that duplicity et al. proved to be robust enough to get us to this point. Any help to get this first large full backup committed, so we can continue with smaller incrementals going forward, would be most appreciated.
#10
It bothers me that you have both a .gpg and a .part file of the same basename in the cache. Hang on to both if possible.
You will need to transfer the manifest file to S3 as well to make a complete backup. Then you will need to check S3 to make sure that all the volume files made it. S3 is notorious for reporting success when in fact there was none.
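Verifying that every volume actually made it boils down to a set difference between the volume names the manifest records and what the bucket listing returns. A minimal sketch, with hypothetical filenames (real duplicity volume names include timestamps):

```python
def missing_volumes(manifest_volumes, remote_listing):
    """Return manifest volume names absent from the remote listing.

    The filenames below are hypothetical; real duplicity volumes carry
    timestamps in their names.
    """
    return sorted(set(manifest_volumes) - set(remote_listing))

expected = ["duplicity-full.vol%d.difftar.gpg" % i for i in range(1, 6)]
remote = expected[:2] + expected[3:]      # vol3 silently missing
gaps = missing_volumes(expected, remote)
```

With ~3,000 volumes, a scripted check like this is far more reliable than eyeballing a bucket listing.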
Could you post the command line used to make the backup, so I know what purpose .part played?
On Fri, Oct 17, 2014 at 1:31 PM, Northrock <email address hidden> wrote:
#11
As a BTW, would I be correct in thinking that the manifest and signature files on S3 are needed only if the local cache files are missing? I.e., incremental backups and restoration can continue without those files saved remotely as long as a local cache exists, a la a local manifest and signatures with remote archives? (Then I could nurse those files along as necessary, etc.)
Notice duplicity-… and duplicity-… are different sizes.
Does .part mean that the file was just queued for transfer (a locked state), or is it suspect? Should I just remove .part from the manifest filename?
My command:
duplicity \
--num-retries 20 \
--archive-dir "~/.cache/
--tempdir "/tmp" \
--s3-use-new-style \
--exclude "**cyrus.squat" \
--file-
--include-
--exclude "**" \
--encrypt-key {removed} \
--volsize 256 \
--verbosity 5 \
${SOURCE} ${DEST} > ${DAILYLOGFILE} 2>&1
I assume the stats from this first big job are lost.
PS: Thanks for the tip on re-checking the archives… there are about 3,000+ of them.
Thank you.
#12
You can remove the .part files at any time. Those were created prior to applying gpg and will be larger.
The naming convention is bad for those .part files; we should have used something else that did not imply transmission. Normally, .part files would appear on the receiving end and be renamed once the transfer was a success.
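The receive-side convention described here (write to a temporary name, rename into place only on success) can be sketched as follows; this is a generic illustration, not duplicity's implementation:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write to path + '.part', then rename into place only on success.

    A generic sketch of the receive-side convention, not duplicity code.
    """
    tmp = path + ".part"
    with open(tmp, "wb") as f:
        f.write(data)
    os.rename(tmp, path)   # atomic within one POSIX filesystem

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "duplicity-full.manifest")
atomic_write(target, b"manifest contents")
```

Under this convention, a lingering .part file signals an interrupted transfer, which is exactly why the sending-side use of the suffix here is confusing.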
On Sat, Oct 18, 2014 at 5:46 PM, Northrock <email address hidden> wrote:
#13
Thank you edso,
Does the latest version of duplicity and duply resolve this problem, or is the only fix to split the backup into smaller parts?
I have upgraded my duplicity version to: 0.7.01-
#14
On 13.02.2015 09:51, satish kumar wrote:
The problem is that the size of the signature file reaches the maximum object size on S3; it is currently not splittable.
So the workaround is to have duplicity generate smaller signatures by having fewer files in the backup.
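Splitting one huge backup into several smaller jobs is mostly a matter of grouping source directories so no single job indexes too many files. A rough greedy sketch, with invented directory names and an arbitrary threshold (this is not anything duplicity does itself):

```python
def split_into_jobs(dir_file_counts, max_files_per_job):
    """Greedily group top-level directories into backup jobs so that no
    job exceeds max_files_per_job files.

    Illustrative only: the directory names and threshold are made up,
    and a single directory over the limit still becomes its own job.
    """
    jobs, current, count = [], [], 0
    for name, files in sorted(dir_file_counts.items()):
        if current and count + files > max_files_per_job:
            jobs.append(current)           # close the current job
            current, count = [], 0
        current.append(name)
        count += files
    if current:
        jobs.append(current)
    return jobs

jobs = split_into_jobs({"home": 900_000, "mail": 700_000, "www": 300_000},
                       max_files_per_job=1_000_000)
```

Each resulting job would then be run as its own duplicity backup with its own (smaller) signature file.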
..ede/duply.net
#16
Hi Edeso,
I'm able to complete the backup (backup folder size is 275 GB) without any errors. I have edited a duplicity config file under /usr/share/…, line s3_use_…, and upgraded duplicity from version 0.6.18 to 0.7.01-…
Now I'm not facing the Upload 's3://s3.…' failed (attempt #1, reason: error: [Errno 104] Connection reset by peer) error.
Thanks
Satish
#17
On 26.02.2015 07:41, satish kumar wrote:
> Hi Edeso,
>
> I'm able to complete the backup (backup folder size is 275 GB) without any errors. I have edited a duplicity config file under /usr/share/…, line s3_use_…
This is what the '--s3-use-…' option does:
http://
> I have upgraded the duplicity version from version 0.6.18 to 0.7.01-…
>
> Now I'm not facing the Upload 's3://s3.…//duplicity-… failed (attempt #1, reason: error: [Errno 104] Connection reset by peer) error.
>
Good to hear. I am sceptical though: what is the current size of your full signature file? You might just accidentally be just under the maximum file size for S3 now.
..ede/duply.net
#18
Hi Edeso,
Sorry for the late response; the size of the full signature file is 5.6 GB now.
Thanks
Satish
#19
On 06.03.2015 04:31, satish kumar wrote:
>
No problem.
According to
https:/
the max size is 5 GB, which you seem to have passed already. The FAQ hints that multipart upload solves the issue, which is what you chose as well.
Please, everybody with this issue, use '--s3-use-…'.
..ede/duply.net
#20
Hi Ede,
1. I need your help/advice on increasing the full backups (currently one full backup, followed by 2 months of incremental backups).
2. I need a backup reporting option (successful backup / failed backup) via mail. Currently I'm using the cron job feature, i.e.:
mail -s "duply backup-folder backup" <email address hidden>. Do you have any better suggestion?
Thanks in advance.
Satish
#21
Hello Satish,
Are these two new questions related to the issue you were experiencing with the Amazon S3 connection reset by peer?
If not, can you please close this question and open two new questions? You may find that Launchpad will find an answer for you if you take this approach. Even if not, it will help future users with the same problem find whatever answer people provide to you.
You can always provide a link back to this question if you think there is information here that would be helpful in answering your new question.
#22
Again, Aaron is right. Please join the mailing list
https:/
and ask there, *or* open a new question/answer ticket with a proper subject here on Launchpad.
.. ede/duply.net
#23
I only just now discovered the `--s3-use-
#24
Hi, I have the same issue, and I'm also using `--s3-use-…`.
Is there something else I have to add for the backup to complete?
```
Attempt 1 failed. BackendException: Multipart upload failed. Aborted.
Attempt 2 failed. BackendException: Multipart upload failed. Aborted.
Attempt 3 failed. BackendException: Multipart upload failed. Aborted.
Attempt 4 failed. BackendException: Multipart upload failed. Aborted.
Giving up after 5 attempts. BackendException: Multipart upload failed. Aborted.
07:59:39.521 Task 'BKP' failed with exit code '50'.
```
Can you help with this problem?