Too many open files (again)
In testing a full restore, I am seeing an exception raised for "Too many open files" from within Path.open.
I am running duplicity 0.7.10 on macOS 10.12.4.
I have raised the ulimit for the number of files up to 12000.
lsof shows there are only 89 open files
sample shows a number of threads blocked on subprocesses (gpg processes)
sysctl -a shows:
kern.maxfiles: 24576
kern.maxfilesperproc:
kern.num_files: 5356
There are currently only 17 incrementals following my latest full. We usually do monthly fulls.
Is anyone else seeing this recently?
Question information
- Language: English
- Status: Solved
- For: Duplicity
- Assignee: No assignee
- Solved by: Howard Kaye
#1
Try:
kern.maxfiles: 131072
kern.maxfilesperproc:
Also, check ulimit -n in the terminal, it should be 65536.
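(Aside: `ulimit -n` only applies to that shell and its children. A minimal Python sketch, not part of duplicity, for checking and raising the limit the process itself actually runs under:)

```python
import resource

# Inspect the per-process open-file limit (what `ulimit -n` controls).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft=%d hard=%d" % (soft, hard))

# The soft limit can be raised up to the hard limit without root.
target = 65536 if hard == resource.RLIM_INFINITY else min(65536, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```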
#2
You had a second problem in there: gpg processes blocked. Did you get
duplicity from Homebrew, or somewhere else?
...Ken
#3
I got duplicity from http://
The blocked gpg processes seem to be related to the thread throwing the exception.
Why are these large values for maxfiles needed if there are only 89 open files in the process?
#4
It's just a safe number I use to keep from ever getting 'too many open
files'. Each file table node is small and won't amount to much overhead.
It's the duplicity process plus all the child processes that add up. You
could get by on less, but why bother?
#5
there's going to be a fix in 0.7.13, replacing the use of os.system('cp ...') with the native python routine, which should reduce the open files count dramatically
http://
it's explained in more detail here
https:/
..ede/duply.net
#6
That does not seem to be related to the open files.
#7
That bug applies only to runs made with --no-encryption and
--no-compression. He's got gpg processes, so he's not running that way.
Also, that bug should only affect the memory use somewhat.
You really should try my settings.
#8
Increasing the number of open files by 2 orders of magnitude allowed this
particular restore to proceed.
In debugging the issue, though, I tried catching the exception and doing an
os.open() in the handler; this succeeds, though a subsequent call to
open() does not. The process does not have an excessive number of open
files. So the root cause does not appear to be actual open files, but
perhaps a bug somewhere else.
Rather than mask it by increasing the ulimit for open files, I'd prefer to
find the root cause.
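(For reference, the error class under discussion is easy to reproduce outside duplicity. This sketch is not duplicity code; it lowers the soft RLIMIT_NOFILE and opens files until the kernel refuses, at which point open() raises OSError with errno EMFILE, "Too many open files":)

```python
import errno
import resource
import tempfile

# Save the current limits, then lower the soft limit to force exhaustion.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

held, caught = [], None
try:
    while True:
        # Each TemporaryFile consumes one descriptor until EMFILE.
        held.append(tempfile.TemporaryFile())
except OSError as e:
    caught = e.errno
finally:
    for f in held:
        f.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print(errno.errorcode[caught])  # EMFILE
```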
#9
Getting it to work is the major issue. Making it work the way it should is
kinda nice.
When you do 'lsof -p NNNN | wc -l', where NNNN is duplicity's pid, are you
getting a low count?
Unless something is broken, you should only have 2-3 gpg tasks, no more.
Where exactly did it break? Did you check lsof then?
What is your command line?
#10
Getting it to work is great, but making it not break in the future is key.
I caught the exception thrown by open in path.py:567. I had it run lsof,
sysctl, and sample, checked the getrlimit settings, even bumped them up and
retried. The highest open file descriptor was 99. There are 21 gpg
processes.
The command line is:
duplicity --archive-
--encrypt-
restore archivist:
/Users/
The way it breaks, I am starting to think there is a thread safety issue
somewhere in the file handling.
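(A handy trick for this kind of debugging: count the descriptors from inside the exception handler itself, rather than trusting an external lsof snapshot. A small Python helper, not part of duplicity:)

```python
import os

def open_fd_count():
    # Census of this process's open descriptors: /proc/self/fd on Linux,
    # /dev/fd on macOS. The listing itself briefly adds one entry.
    fd_dir = "/proc/self/fd" if os.path.isdir("/proc/self/fd") else "/dev/fd"
    return len(os.listdir(fd_dir))

print(open_fd_count())
```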
#11
Part of the problem is the number of gpg processes, you should have a max of 2 or 3, not 13. When you installed the current version, was there perhaps an older version on the system, maybe from ports or homebrew? If so, did you uninstall that version? The repo versions have different paths than the tarball version.
To double check, run "locate GnuPGInterface.py". If you have it, you probably have a very old duplicity hanging out there and will need to uninstall and purge it.
#12
The problem seems to be something going on in the tempfile Python library.
The following one-line patch fixes the issue.
*** /Users/ 2017-05-09 09:03:43.000000000 -0400
--- patchdir.py 2017-05-09 14:27:38.000000000 -0400
***************
*** 496,502 ****
      See https:/ for discussion
      of os.tmpfile() vs tempfile. Linux.
      """
!     if sys.platform.
          tempfp = os.tmpfile()
      else:
          tempfp = tempfile.
--- 496,502 ----
      See https:/ for discussion
      of os.tmpfile() vs tempfile. Linux.
      """
!     if sys.platform. 'darwin')):
          tempfp = os.tmpfile()
      else:
          tempfp = tempfile.
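(A sketch of the branch the one-line patch touches. This is a reconstruction, not the exact duplicity source: the change widens the platform test so that macOS ("darwin"), like Linux, takes the os.tmpfile() path instead of tempfile.TemporaryFile(). os.tmpfile() existed only in Python 2, hence the hasattr guard so the sketch also runs on Python 3:)

```python
import os
import sys
import tempfile

def new_temp_fileobj(temproot=None):
    # Patched condition (reconstructed): Linux and macOS use os.tmpfile(),
    # which creates an already-unlinked anonymous file; other platforms
    # fall back to tempfile.TemporaryFile(), deleted on close.
    if sys.platform.startswith(("linux", "darwin")) and hasattr(os, "tmpfile"):
        return os.tmpfile()
    return tempfile.TemporaryFile(dir=temproot)

fp = new_temp_fileobj()
fp.write(b"restore-chunk")
fp.seek(0)
print(fp.read())  # b'restore-chunk'
fp.close()
```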
#13
Thanks for the fix. I'll get it in.
Did that fix the excess gpg processes as well, or is that something OS X specific?
#14
I don't think they are excess. Maybe related to the length of the backup
chain? In any case, the processes are coming and going.
-+= 00001 root /sbin/launchd
\-+= 72002 root
/Library/
\-+= 63036 root /Library/
/dev/fd/63 -u howie restore --alsologtostderr --noshow_progress
howie-macpro2 /Users/
\-+- 63046 root /Library/
/Library/
--archive-
--log-fd=9 --v
|--- 63049 root /usr/local/
|--- 63296 root gpg --status-fd 20 --passphrase-fd 24 --logger-fd 17
--batch --no-tty --no-secmem-warning --decrypt
|--- 63297 root gpg --status-fd 24 --passphrase-fd 28 --logger-fd 21
--batch --no-tty --no-secmem-warning --decrypt
|--- 63298 root gpg --status-fd 28 --passphrase-fd 32 --logger-fd 25
--batch --no-tty --no-secmem-warning --decrypt
|--- 63299 root gpg --status-fd 32 --passphrase-fd 36 --logger-fd 29
--batch --no-tty --no-secmem-warning --decrypt
|--- 63300 root gpg --status-fd 36 --passphrase-fd 40 --logger-fd 33
--batch --no-tty --no-secmem-warning --decrypt
|--- 63301 root gpg --status-fd 40 --passphrase-fd 44 --logger-fd 37
--batch --no-tty --no-secmem-warning --decrypt
|--- 63302 root gpg --status-fd 44 --passphrase-fd 48 --logger-fd 41
--batch --no-tty --no-secmem-warning --decrypt
|--- 63303 root gpg --status-fd 48 --passphrase-fd 52 --logger-fd 45
--batch --no-tty --no-secmem-warning --decrypt
|--- 63304 root gpg --status-fd 52 --passphrase-fd 56 --logger-fd 49
--batch --no-tty --no-secmem-warning --decrypt
|--- 63305 root gpg --status-fd 56 --passphrase-fd 60 --logger-fd 53
--batch --no-tty --no-secmem-warning --decrypt
|--- 63306 root gpg --status-fd 60 --passphrase-fd 65 --logger-fd 57
--batch --no-tty --no-secmem-warning --decrypt
|--- 63308 root gpg --status-fd 65 --passphrase-fd 69 --logger-fd 61
--batch --no-tty --no-secmem-warning --decrypt
|--- 63309 root gpg --status-fd 69 --passphrase-fd 73 --logger-fd 66
--batch --no-tty --no-secmem-warning --decrypt
|--- 63310 root gpg --status-fd 73 --passphrase-fd 77 --logger-fd 70
--batch --no-tty --no-secmem-warning --decrypt
|--- 63314 root gpg --status-fd 77 --passphrase-fd 81 --logger-fd 74
--batch --no-tty --no-secmem-warning --decrypt
|--- 63316 root gpg --status-fd 81 --passphrase-fd 85 --logger-fd 78
--batch --no-tty --no-secmem-warning --decrypt
|--- 63317 root gpg --status-fd 85 --passphrase-fd 89 --logger-fd 82
--batch --no-tty --no-secmem-warning --decrypt
|--- 63318 root gpg --status-fd 89 --passphrase-fd 93 --logger-fd 86
--batch --no-tty --no-secmem-warning --decrypt
|--- 63319 root gpg --status-fd 93 --passphrase-fd 97 --logger-fd 90
--batch --no-tty --no-secmem-warning --decrypt
|--- 63320 root gpg --status-fd 97 --passphrase-fd 101 --logger-fd
94 --batch --no-tty --no-secmem-warning --decrypt
|--- 63321 root gpg --status-fd 101 --passphrase-fd 105 --logger-fd
98 --batch --no-tty --no-secmem-warning --decrypt
\--- 63376 root gpg --status-fd 18 --passphrase-fd 107 --logger-fd 4
--batch --no-tty --no-secmem-warning --decrypt
#15
I asked about GnuPGInterface.py because we had a lot of problems with package maintainers that kept substituting the official version in place of ours, not understanding that ours was patched to harvest excess gpg processes as the incremental file completed. Now, we include a differently named file to keep them from munging it.
It's possible you're keeping some incrementals open because you have files that are modified throughout the entire chain, so there's no closing of the file just yet. So long as it's not a continuous increase, all is well.
#16
Hey guys,
That looks wrong. Using os.tmpfile() may resolve the issue, but it removes the user's control over where the temp files are created. This comes up on the list regularly when users' small temp partitions overflow and they ask why duplicity is not respecting their temp dir setting.
The Python docs say:
https:/
"... will be automatically deleted once there are no file descriptors for the file."
https:/
"... It will be destroyed as soon as it is closed (including an implicit close when the object is garbage collected). ..."
Howard, could you please test whether explicitly closing obsolete TemporaryFiles in patchdir.py solves the issue as well?
The whole condition looks fishy anyway. It's the tempfile module's job to take care of cross-platform compatibility; there is no need for us to treat some platforms differently than others. ..ede/duply.net
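(The close semantics quoted above are easy to verify in plain Python, outside duplicity: a NamedTemporaryFile is destroyed the moment it is explicitly closed, with no wait for garbage collection:)

```python
import os
import tempfile

# delete=True (the default) removes the file as soon as it is closed,
# which is the behavior the docs quote above describes.
t = tempfile.NamedTemporaryFile()
t.write(b"obsolete patch data")
assert os.path.exists(t.name)   # still on disk while open
t.close()                       # explicit close: destroyed immediately
assert not os.path.exists(t.name)
print("released")
```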
#17
No GnuPGInterface.py. It doesn't keep increasing. In this case, it stays
stable at around 20. And when it completes they all go away.
Revision history for this message
|
#18 |
I can give that a try tomorrow.
On Tue, May 9, 2017 at 5:08 PM, edso wrote:
> hey guys,
>
> that looks wrong. using os.tmpfile() may resolve the issue, but it
> removes the user's control over where the temp files are created. this comes
> up on the list regularly when users' small temp partitions overflow and
> they ask why duplicity is not respecting their temp dir setting.
>
> the python docs say
>
> https:/
> "
> ... will be automatically deleted once there are no file descriptors for
> the file.
> "
> https:/
> "
> ... It will be destroyed as soon as it is closed (including an implicit
> close when the object is garbage collected). ...
> "
>
> Howard, could you please test if explicitly closing obsolete
> TemporaryFiles instead in patchdir.py solves the issue as well?
>
> the whole condition looks fishy anyway. it's the tempfile module's job to
> take care of cross-platform compatibility, no need for us to treat some
> platforms differently than others. ..ede/duply.net
>
> On 09.05.2017 21:03, Howard Kaye wrote:
> > The problem seems to be something going on in the tempfile Python
> > library.
> > The following 1 line patch fixes the issue.
> >
> > *** /Users/
> > 09:03:43.000000000 -0400
> > --- patchdir.py 2017-05-09 14:27:38.000000000 -0400
> > ***************
> > *** 496,502 ****
> > See https:/
> > discussion
> > of os.tmpfile() vs tempfile.
> /
> > Linux.
> > """
> > ! if sys.platform.
> > tempfp = os.tmpfile()
> > else:
> > tempfp =
> > tempfile.
> > --- 496,502 ----
> > See https:/
> > discussion
> > of os.tmpfile() vs tempfile.
> /
> > Linux.
> > """
> > ! if sys.platform.
> 'darwin')):
> > tempfp = os.tmpfile()
> > else:
> > tempfp =
> > tempfile.
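edso's point about the tempfile module quoted above can be checked directly. A minimal sketch (the directory here is arbitrary, standing in for duplicity's temp dir setting):

```python
import os
import tempfile

# A NamedTemporaryFile created in a user-chosen directory is deleted as
# soon as it is explicitly closed, so descriptors don't accumulate and
# the user keeps control over where the temp files live.
workdir = tempfile.mkdtemp()                   # stand-in for the temp dir setting
fp = tempfile.NamedTemporaryFile(dir=workdir)
path = fp.name
assert os.path.exists(path)     # created in the directory the user chose
fp.close()                      # removed immediately on explicit close
assert not os.path.exists(path)
os.rmdir(workdir)
```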
Revision history for this message
|
#19 |
The attached patch seems to fix the problem and keeps current behavior with
respect to temp directories.
Howie
Revision history for this message
|
#20 |
Howie, i don't see the attachment?! ..ede/duply.net
Revision history for this message
|
#21 |
Actually, it is still generating errors. Still working on it.
Revision history for this message
|
#22 |
The problem seems to be related to the patching code when the input is not
an instance of file. It seems like the librsync calls are not deallocating
the file-like object correctly.
The duplicity.
have, but I see that librsync 2.0 is available. What is the recommended
version of librsync?
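For chasing leaks like this, a descriptor count from inside the process can complement lsof. A hypothetical helper (not part of duplicity), using /dev/fd, which exists on both macOS and Linux:

```python
import os
import tempfile

def open_fd_count():
    # /dev/fd lists this process's open descriptors (it maps to
    # /proc/self/fd on Linux); the listing itself briefly adds one.
    return len(os.listdir('/dev/fd'))

before = open_fd_count()
files = [tempfile.TemporaryFile() for _ in range(5)]
assert open_fd_count() >= before + 5      # five new descriptors open
for f in files:
    f.close()
assert open_fd_count() <= before + 1      # all released after close
```

Note that this counts kernel-level descriptors, not stdio FILE * streams, which is consistent with lsof showing only 89 open files here.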
Revision history for this message
|
#23 |
I'm using librsync 2.0, so you might want to give it a try.
Revision history for this message
|
#24 |
OK, I found and fixed the root cause of this error:
Duplicity was failing in the glue code (in C) between Python and librsync.
It did an fdopen() on the file descriptor from a Python file object, and then never
closed the FILE * which was returned. Even though the Python file object (and the
file descriptor shared by the FILE * and the Python file) got closed when it was
deallocated, the FILE *s were being leaked, and eventually the stdio library ran
out of FILE *s (even though the number of open file descriptors was not that large).
The fix is to dup the file descriptor, and then close the file in the deallocator
routine in the glue code. Duping the descriptor lets the C code and the Python code
each close the file when they are done with it.
*** _librsyncmodule
--- _librsyncmodule.c 2017-05-16 11:12:58.000000000 -0400
***************
*** 23,28 ****
--- 23,29 ----
* -------
#include <Python.h>
+ #include <errno.h>
#include <librsync.h>
#define RS_JOB_BLOCKSIZE 65536
***************
*** 287,292 ****
--- 288,294 ----
PyObject_HEAD
rs_job_t *patch_job;
PyObject *basis_file;
+ FILE *cfile;
} _librsync_
/* Call with the basis file */
***************
*** 296,302 ****
_librsync_
PyObject *python_file;
int fd;
- FILE *cfile;
if (!PyArg_
return NULL;
--- 298,303 ----
***************
*** 305,318 ****
PyErr_
return NULL;
}
Py_
pm = PyObject_
if (pm == NULL) return NULL;
pm->basis_file = python_file;
! cfile = fdopen(fd, "rb");
! pm->patch_job = rs_patch_
return (PyObject*)pm;
}
--- 306,327 ----
PyErr_
return NULL;
}
+ /* get our own private copy of the file, so we can close it later. */
+ fd = dup(fd);
+ if (fd == -1) {
+ char buf[256];
+ strerror_r(errno, buf, sizeof(buf));
+ PyErr_SetString
+ return NULL;
+ }
Py_
pm = PyObject_
if (pm == NULL) return NULL;
pm->basis_file = python_file;
! pm->cfile = fdopen(fd, "rb");
! pm->patch_job = rs_patch_
return (PyObject*)pm;
}
***************
*** 323,328 ****
--- 332,340 ----
_librsync_
Py_
rs_
+ if (pm->cfile) {
+ fclose(pm->cfile);
+ }
PyObject_
}
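The dup-then-close ownership pattern in the patch above can be illustrated in Python (a sketch of the idea only; open_private_stream is a hypothetical name, not duplicity's actual glue code):

```python
import os
import tempfile

def open_private_stream(fileobj):
    # Duplicate the descriptor so this wrapper owns its own copy; closing
    # the returned stream then cannot invalidate the caller's file object.
    private_fd = os.dup(fileobj.fileno())
    return os.fdopen(private_fd, "rb")

basis = tempfile.TemporaryFile()
basis.write(b"basis data")
basis.flush()
basis.seek(0)

stream = open_private_stream(basis)
assert stream.read() == b"basis data"  # dup'd descriptor shares the offset
stream.close()                         # releases only the duplicate

basis.seek(0)
assert basis.read() == b"basis data"   # original remains fully usable
basis.close()
```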
Revision history for this message
|
#25 |
nice catch Howie!
wonder why this worked with os.tmpfile() but not with tempfile.
Revision history for this message
|
#26 |
I think it just moved the error somewhere else.
Revision history for this message
|
#27 |
Howie, Ken,
would you agree then, that the whole os.tmpfile() exception
http://
    if sys.platform.
        tempfp = os.tmpfile()
    else:
can be removed after the librsync adapter is patched?.. ede
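For reference, a sketch of what removing that branch amounts to (the exact platform condition and URL are truncated above, so this is an assumption, with a hypothetical helper name): once the librsync adapter closes its FILE*, the portable tempfile module should be sufficient on every platform, and the Python 2-only os.tmpfile() fallback can go away.

```python
import tempfile

def make_tempfp():
    """Hypothetical sketch of the cleanup being discussed.

    Old workaround (Python 2 only; os.tmpfile() no longer exists in
    Python 3 -- the exact platform condition is truncated above):

        if sys.platform.<condition truncated>:
            tempfp = os.tmpfile()
        else:
            tempfp = tempfile.TemporaryFile()

    With the FILE* leak fixed in the C glue code, the portable call
    alone is enough.
    """
    return tempfile.TemporaryFile()

fp = make_tempfp()
fp.write(b"data")
fp.seek(0)
assert fp.read() == b"data"
fp.close()
```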
Revision history for this message
|
#28 |
Yes, it is not needed at all.
Revision history for this message
|
#29 |
I agree. Will remove the changes and invalidate the bug report.