Memory requirements of duplicity with gpg and bzip2

Asked by Hannes Ebner

On one of my VMs I've started to experience problems because of low memory, see traceback below:

Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1523, in <module>
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1517, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1371, in main
    do_backup(action)
  File "/usr/bin/duplicity", line 1499, in do_backup
    incremental_backup(sig_chain)
  File "/usr/bin/duplicity", line 656, in incremental_backup
    globals.backend)
  File "/usr/bin/duplicity", line 421, in write_multivol
    globals.gpg_profile, globals.volsize)
  File "/usr/lib/python2.7/dist-packages/duplicity/gpg.py", line 332, in GPGWriteFile
    file = GPGFile(True, path.Path(filename), profile)
  File "/usr/lib/python2.7/dist-packages/duplicity/gpg.py", line 163, in __init__
    'logger': self.logger_fp})
  File "/usr/lib/python2.7/dist-packages/duplicity/gpginterface.py", line 374, in run
    create_fhs, attach_fhs)
  File "/usr/lib/python2.7/dist-packages/duplicity/gpginterface.py", line 414, in _attach_fork_exec
    process.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory

I'm using Ubuntu 12.04 with Duplicity 0.7.04. This problem only started recently; no relevant packages (gpg, bzip2, duplicity) have been updated since the last successful backup.

The VM has 1 GB of RAM and 512 MB of swap. When I increased swap from 512 MB to 1 GB the problem went away.

According to Duplicity's backup statistics the SourceFileSize is 22.7 GB.

Now to my question: the backup has apparently grown to a size where 1 GB of RAM / 512 MB of swap is insufficient (or did I come to the wrong conclusion?). That makes me curious about how much RAM is actually needed to run backups.

I'm backing up to Google Cloud Storage with --gpg-options="--compress-algo=bzip2 --bzip2-compress-level=9" --volsize=200.

Will lowering the compression level reduce the memory requirements? Does the volume size affect memory consumption?
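(For what it's worth, the bzip2 manual gives the compressor's working memory as roughly 400 kB + 8 × (level × 100 kB), so the level does bound bzip2's own buffer; a quick sanity check of that formula:)

```shell
# Rough bzip2 compressor memory per level, from the formula in the
# bzip2 manual: 400 kB + 8 x (level x 100 kB). This covers only
# bzip2's own block buffer, not duplicity's or gpg's overhead.
for level in 1 6 9; do
    echo "level ${level}: ~$(( 400 + 8 * level * 100 )) kB"
done
```

Even at level 9 that is under 8 MB, so bzip2 alone can hardly explain running out of 1.5 GB — but every bit helps on a small VM.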

Cheers,
Hannes

Question information

Language: English
Status: Solved
For: Duplicity
Assignee: No assignee
Solved by: Hannes Ebner
Revision history for this message
edso (ed.so) said :
#1

On 30.08.2015 11:41, Hannes Ebner wrote:
> Question #270906 on Duplicity changed:
> https://answers.launchpad.net/duplicity/+question/270906

you should monitor a backup and its processes' memory use to get an idea of where the most memory goes.

my gut feeling would be gpg as well. so limiting the volume size or changing the gpg compression, assuming another algorithm is more memory efficient, should help.
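something like this, run in a second shell while the backup is going, will show which process holds the memory (the process names are assumptions; check `ps aux` for the exact ones on your system):

```shell
# Print PID, RSS (resident memory, kB) and VSZ (virtual size, kB) for
# the backup's processes every 5 seconds until they exit. 'duplicity'
# and 'gpg' are assumed process names; adjust to what ps actually shows.
while sleep 5; do
    ps -o pid,rss,vsz,comm -C duplicity,gpg || break
done
```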

..ede/duply.net

Hannes Ebner (hebner) said :
#2

Lowering the bzip2 compression level from 9 to 6 solved the problem; let's see for how long.
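Concretely, the only change was the compression level in the gpg options (SOURCE and the bucket path below are placeholders, not my real paths):

```shell
# Same backup as before, with only the bzip2 level lowered from 9 to 6.
# SOURCE and gs://BUCKET/TARGET are placeholders for the real paths.
duplicity \
    --gpg-options="--compress-algo=bzip2 --bzip2-compress-level=6" \
    --volsize=200 \
    /path/to/SOURCE gs://BUCKET/TARGET
```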