BZR problem: out of memory with pull

Asked by mdb

I am trying to download a Launchpad branch for the first time on a Windows system (Vista).

Running from the command line

    bzr branch lp:leo-editor

fails with the message

    7275kB 117kB/s | Fetching revisions:Inserting stream:Estimate 44471/71492
    problem -- bzr: out of memory
    Use -Dmem_dump to dump memory to a file.

This -r version works:

    bzr branch lp:leo-editor -r100

and, iterating up through the revision numbers, I succeed with

    bzr branch lp:leo-editor -r3085

but it fails again with the same memory error at

    bzr branch lp:leo-editor -r3086

I assume this means one revision is bad or too large.

I was able to pull the whole revision set on a Linux Ubuntu machine.

Is BZR on Windows really this fickle?

Are there any workarounds, such as skipping this revision?

Question information

Language: English
Status: Answered
For: Bazaar
Assignee: No assignee
Revision history for this message
Martin Pool (mbp) said:
#1

The first thing to check is that you're using the current bzr release
- probably 2.5b2 is the best bet

mdb (mdboldin) said:
#2

Upgraded to 2.5b2, which is much faster, but it shows exactly the same
problem at the same place, -r3086.

Martin Packman (gz) said:
#3

There doesn't seem to be anything particularly surprising in that revision.

Seeing what's in your .bzr.log for one of these failed branch commands might be helpful; running `bzr version` will tell you where to find it.

If you do:

> bzr branch -r 3000 lp:leo-editor
> bzr pull -d leo-editor -r 3100
> bzr pull -d leo-editor -r 3200

...and so on, does that then get you past this problem?
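Martin's step-wise approach can also be scripted. Here is a minimal sketch (the helper name and step size are my own, and the actual `subprocess` call is left commented out so the sketch has no side effects):

```python
import subprocess  # only needed if you uncomment the real call below

def incremental_pull(branch_dir, start, stop, step=100):
    """Build (and optionally run) 'bzr pull' commands in small revision
    steps, so each fetch transfers a bounded slice of history."""
    commands = []
    for rev in range(start, stop + 1, step):
        cmd = ["bzr", "pull", "-d", branch_dir, "-r", str(rev)]
        commands.append(cmd)
        # subprocess.run(cmd, check=True)  # uncomment to actually pull
    return commands

for cmd in incremental_pull("leo-editor", 3100, 3600):
    print(" ".join(cmd))
```

This just generates the same command sequence Martin suggests; whether it helps depends on whether the problematic blob can be skipped at all.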

mdb (mdboldin) said:
#4

> bzr pull -d leo-editor -r 3100

does not solve it. Any -r30xx greater than 3086 shows a memory error.

Here is the log for a 2.5b2 run

Wed 2011-10-19 05:57:59 -0400
0.094 bazaar version: 2.5b2
0.094 bzr arguments: [u'pull', u'-r3086']
0.136 looking for plugins in C:/Users/XXX/AppData/Roaming/bazaar/2.0/plugins
0.136 looking for plugins in C:/Progra~1/Bazaar/plugins
0.200 encoding stdout as sys.stdout encoding 'cp437'
0.251 opening working tree 'C:/leo-versions/r2/leo-editor'
8.777 Using fetch logic to copy between CHKInventoryRepository('http://bazaar.launchpad.net/%7Eleo-editor-team/leo-editor/trunk/.bzr/repository/')(RepositoryFormat2a()) and CHKInventoryRepository('file:///C:/leo-versions/.bzr/repository/')(RepositoryFormat2a())
8.778 fetching: <SearchResult search:(set(['<email address hidden>']), ['<email address hidden>', '<email address hidden>', '<email address hidden>', '<email address hidden>', '<email address hidden>', ...], 3)>
15.631 25 bytes left on the HTTP socket
17.982 25 bytes left on the HTTP socket
18.622 25 bytes left on the HTTP socket
19.649 25 bytes left on the HTTP socket
21.070 25 bytes left on the HTTP socket
22.487 25 bytes left on the HTTP socket
25.396 25 bytes left on the HTTP socket
30.031 25 bytes left on the HTTP socket
35.774 Adding the key (<bzrlib.btree_index.BTreeGraphIndex object at 0x02E521D0>, 7346774, 1802272) to an LRUSizeCache failed. value 906728419 is too big to fit in a the cache with size 41943040 52428800
39.257 Transferred: 5577kB (143.8kB/s r:5555kB w:22kB)
39.257 Traceback (most recent call last):
  File "bzrlib\commands.pyo", line 923, in exception_to_return_code
  File "bzrlib\commands.pyo", line 1128, in run_bzr
  File "bzrlib\commands.pyo", line 676, in run_argv_aliases
  File "bzrlib\commands.pyo", line 698, in run
  File "bzrlib\cleanup.pyo", line 135, in run_simple
  File "bzrlib\cleanup.pyo", line 165, in _do_with_cleanups
  File "bzrlib\builtins.pyo", line 1083, in run
  File "bzrlib\decorators.pyo", line 217, in write_locked
  File "bzrlib\workingtree.pyo", line 985, in pull
  File "bzrlib\branch.pyo", line 1120, in pull
  File "bzrlib\decorators.pyo", line 217, in write_locked
  File "bzrlib\branch.pyo", line 3427, in pull
  File "bzrlib\branch.pyo", line 3557, in _pull
  File "bzrlib\decorators.pyo", line 217, in write_locked
  File "bzrlib\branch.pyo", line 3370, in _update_revisions
  File "bzrlib\decorators.pyo", line 217, in write_locked
  File "bzrlib\branch.pyo", line 3347, in fetch
  File "bzrlib\repository.pyo", line 711, in fetch
  File "bzrlib\decorators.pyo", line 217, in write_locked
  File "bzrlib\vf_repository.pyo", line 2517, in fetch
  File "bzrlib\fetch.pyo", line 76, in __init__
  File "bzrlib\fetch.pyo", line 103, in __fetch
  File "bzrlib\fetch.pyo", line 131, in _fetch_everything_for_search
  File "bzrlib\vf_repository.pyo", line 1977, in insert_stream
  File "bzrlib\vf_repository.pyo", line 2041, in insert_stream_without_locking
  File "bzrlib\groupcompress.pyo", line 1662, in insert_record_stream
  File "bzrlib\groupcompress.pyo", line 1803, in _insert_record_stream
  File "bzrlib\groupcompress.pyo", line 471, in get_bytes_as
  File "bzrlib\groupcompress.pyo", line 595, in _prepare_for_extract
  File "bzrlib\groupcompress.pyo", line 163, in _ensure_content
MemoryError

BZR 2.4.1 gave a similar error:
82.656 Adding the key (<bzrlib.btree_index.BTreeGraphIndex object at 0x02E84D10>, 7346774, 1802272) to an LRUSizeCache failed. value 906728419 is too big to fit in a the cache with size 41943040 52428800

John A Meinel (jameinel) said:
#5

Long story short:
If at all possible, get an account on Launchpad, and use 'bzr
launchpad-login $USERNAME' so that 'bzr branch lp:leo-editor' will use
bzr+ssh instead of http. I can confirm that branching over http on
Windows fails, but branching over bzr+ssh succeeds.

There is a file in the repository that is a little more than 900MB
when decompressed. This file is referenced in a revision which is no
longer in the ancestry of lp:leo-editor. My guess is the author
committed it by mistake, pushed it to Launchpad, realized the error,
uncommitted it, and then started a new branch. However, the data is
still there in the repository.

The best way to get rid of it is to get someone to branch it into
another location (which won't include the junk), delete the original
repository, and push the 'clean' branch back into place.

Long version:

On 10/20/2011 1:15 PM, mdb wrote:
> 8.777 Using fetch logic to copy between
> CHKInventoryRepository('http://bazaar.launchpad.net/%7Eleo-editor-team/leo-editor/trunk/.bzr/repository/')(RepositoryFormat2a())
> and CHKInventoryRepository('file:///C:/leo-versions/.bzr/repository/')(RepositoryFormat2a())
...
>   File "bzrlib\groupcompress.pyo", line 163, in _ensure_content
> MemoryError
>
> BZR 2.4.1 gave a similar error: 82.656 Adding the key
> (<bzrlib.btree_index.BTreeGraphIndex object at 0x02E84D10>, 7346774, 1802272)
> to an LRUSizeCache failed. value 906728419 is too big to fit in a the
> cache with size 41943040 52428800

The fact that you are accessing the repository over http might be
relevant.

35.774 Adding the key (<bzrlib.btree_index.BTreeGraphIndex object at 0x02E521D0>, 7346774, 1802272) to an LRUSizeCache failed. value 906728419 is too big to fit in a the cache with size 41943040 52428800

^- This is also a bit suspicious. It indicates there is a compressed
record blob that is 906MB in size (when uncompressed, I believe).
(..., 7346774, 1802272)
Says that there is a compressed blob at offset 7,346,774 of 1,802,272
bytes long. (it doesn't say which file, unfortunately).

And the error message indicates that when that record is uncompressed,
it expands to about 906MB in memory.
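As a quick sanity check, the two numbers from the log already imply an implausible compression ratio (the arithmetic here is mine, not bzr output):

```python
# Numbers taken from the log line above.
compressed_len = 1802272      # on-the-wire size of the blob
uncompressed_len = 906728419  # size LRUSizeCache refused to hold

ratio = uncompressed_len / compressed_len
print(round(ratio))  # -> 503; ordinary text compresses nearer 3-5:1
```

A 500:1 ratio is a strong hint that the blob is extremely repetitive junk rather than real content.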

Now, when I run the same command (over http), I seem to get a bit
farther, but I do eventually get an out-of-memory error:

215.657 Adding the key (<bzrlib.btree_index.BTreeGraphIndex object at 0x02F43810>, 7346774, 1802272) to an LRUSizeCache failed. value 906728419 is too big to fit in a the cache with size 41943040 52428800
215.775 25 bytes left on the HTTP socket
217.046 Adding the key (<bzrlib.btree_index.BTreeGraphIndex object at 0x02F43810>, 7346774, 1802272) to an LRUSizeCache failed. value 906728419 is too big to fit in a the cache with size 41943040 52428800
219.508 Transferred: 180768kB (825.0kB/s r:180616kB w:152kB)
[ 3868] 2011-10-20 14:27:16.482 INFO: Process status after command:
[ 3868] 2011-10-20 14:27:16.482 INFO: WorkingSize 101940 KiB
[ 3868] 2011-10-20 14:27:16.482 INFO: PeakWorking 627252 KiB
[ 3868] 2011-10-20 14:27:16.482 INFO: PagefileUsage 98604 KiB
[ 3868] 2011-10-20 14:27:16.483 INFO: PeakPagefileUsage 886576 KiB
[ 3868] 2011-10-20 14:27:16.483 INFO: PrivateUsage 98604 KiB
[ 3868] 2011-10-20 14:27:16.483 INFO: PageFaultCount 1365748
219.523 Traceback (most recent call last):
  File "C:\dev\bzr\bzr.dev\bzrlib\commands.py", line 923, in exception_to_return_code
    return the_callable(*args, **kwargs)
  File "C:\dev\bzr\bzr.dev\bzrlib\commands.py", line 1128, in run_bzr
    ret = run(*run_argv)
  File "C:\dev\bzr\bzr.dev\bzrlib\commands.py", line 676, in run_argv_aliases
    return self.run(**all_cmd_args)
  File "C:\dev\bzr\bzr.dev\bzrlib\commands.py", line 698, in run
    return self._operation.run_simple(*args, **kwargs)
  File "C:\dev\bzr\bzr.dev\bzrlib\cleanup.py", line 135, in run_simple
    self.cleanups, self.func, *args, **kwargs)
  File "C:\dev\bzr\bzr.dev\bzrlib\cleanup.py", line 165, in _do_with_cleanups
    result = func(*args, **kwargs)
  File "C:\dev\bzr\bzr.dev\bzrlib\builtins.py", line 1350, in run
    source_branch=br_from)
  File "C:\dev\bzr\bzr.dev\bzrlib\bzrdir.py", line 362, in sprout
    create_tree_if_local=create_tree_if_local)
  File "C:\dev\bzr\bzr.dev\bzrlib\cleanup.py", line 131, in run
    self.cleanups, self.func, self, *args, **kwargs)
  File "C:\dev\bzr\bzr.dev\bzrlib\cleanup.py", line 165, in _do_with_cleanups
    result = func(*args, **kwargs)
  File "C:\dev\bzr\bzr.dev\bzrlib\bzrdir.py", line 403, in _sprout
    result_repo.fetch(source_repository, fetch_spec=fetch_spec)
  File "C:\dev\bzr\bzr.dev\bzrlib\repository.py", line 711, in fetch
    find_ghosts=find_ghosts, fetch_spec=fetch_spec)
  File "C:\dev\bzr\bzr.dev\bzrlib\decorators.py", line 217, in write_locked
    result = unbound(self, *args, **kwargs)
  File "C:\dev\bzr\bzr.dev\bzrlib\vf_repository.py", line 2518, in fetch
    find_ghosts=find_ghosts)
  File "C:\dev\bzr\bzr.dev\bzrlib\fetch.py", line 76, in __init__
    self.__fetch()
  File "C:\dev\bzr\bzr.dev\bzrlib\fetch.py", line 103, in __fetch
    self._fetch_everything_for_search(search_result)
  File "C:\dev\bzr\bzr.dev\bzrlib\fetch.py", line 131, in _fetch_everything_for_search
    stream, from_format, [])
  File "C:\dev\bzr\bzr.dev\bzrlib\vf_repository.py", line 1978, in insert_stream
    src_format, is_resume)
  File "C:\dev\bzr\bzr.dev\bzrlib\vf_repository.py", line 2042, in insert_stream_without_locking
    self.target_repo.texts.insert_record_stream(substream)
  File "C:\dev\bzr\bzr.dev\bzrlib\groupcompress.py", line 1662, in insert_record_stream
    for _ in self._insert_record_stream(stream, random_id=False):
  File "C:\dev\bzr\bzr.dev\bzrlib\groupcompress.py", line 1803, in _insert_record_stream
    bytes = record.get_bytes_as('fulltext')
  File "C:\dev\bzr\bzr.dev\bzrlib\groupcompress.py", line 471, in get_bytes_as
    self._manager._prepare_for_extract()
  File "C:\dev\bzr\bzr.dev\bzrlib\groupcompress.py", line 595, in _prepare_for_extract
    self._block._ensure_content(self._last_byte)
  File "C:\dev\bzr\bzr.dev\bzrlib\groupcompress.py", line 163, in _ensure_content
    self._content = zlib.decompress(z_content)
MemoryError

Note especially:
  Transferred: 180768kB (825.0kB/s r:180616kB w:152kB)
and
  PeakWorking 627252 KiB

Now, if I use 'bzr+ssh://' instead of http:, things seem a lot happier:

It completes successfully in 2m35s with:
Transferred: 82638kB (537.6kB/s r:82637kB w:1kB)
PeakWorking 91672 KiB

That's 91MB peak, instead of 600+MB peak.

Now, my guess is that 'lp:leo-editor' has a bit of junk data in it.
Something that compresses entirely too well (like a 900MB file of all
0 bytes). Further, that data might not actually be referenced in the
real history, but just be present in a revision that was pushed into
the repository, but then that revision was uncommitted/etc.

When you access the data via HTTP, the bzr client has to download the
whole blob, unpack it locally, and then use whatever content it
actually wanted from the blob. In contrast, if you use 'bzr+ssh://'
the server side can notice "oh, you only want bytes XX and YY from
that blob, I'll unpack it on my side, and then only send you the bytes
you are actually going to use".

The reason we get multiple lines about "value ... is too big" is
because we see that we have a large object, we decide not to cache it,
and then we have to download it again, and notice the same thing
again. However, if I just change the max size to 2GB, it never
complains, but it still goes OOM on Windows trying to extract the
content. (It happens to be in a different spot, but still goes OOM.)
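The cache behaviour John describes can be sketched with a toy model (the class and method names here are illustrative, not bzrlib's actual LRUSizeCache API):

```python
from collections import OrderedDict

class SizeLimitedCache:
    """Toy LRU cache with a total-size budget, analogous in spirit to
    bzrlib's LRUSizeCache.  Names and API are illustrative only."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.total = 0
        self._data = OrderedDict()

    def add(self, key, value):
        size = len(value)
        if size > self.max_size:
            # Too big to ever fit: refuse to cache it.  The caller then
            # re-fetches and re-decompresses it on every access, which is
            # why the "value ... is too big" line appears repeatedly.
            return False
        # Evict least-recently-added entries until the new value fits.
        while self.total + size > self.max_size and self._data:
            _, old = self._data.popitem(last=False)
            self.total -= len(old)
        self._data[key] = value
        self.total += size
        return True

cache = SizeLimitedCache(max_size=100)
print(cache.add("small", b"x" * 10))   # True: cached
print(cache.add("huge", b"x" * 1000))  # False: rejected, like the 906MB blob
```

As John notes, raising the limit only moves the problem: the record still has to be fully decompressed somewhere, and that is what runs out of address space on 32-bit Windows.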

One problem on Windows is that you rarely get to use all 2GB of
addressable memory as a contiguous block, because it maps all sorts of
DLLs into the middle of your virtual address space. (If you allocate
1MB chunks, you can usually get close to the 2GB mark.) A 32-bit Linux
server, on the other hand, usually lets you get much closer to a 3GB
addressable space. So the Launchpad servers don't have a problem
uncompressing and then throwing away the bytes from lp:leo-editor.

Note that this isn't strictly about http, though it plays a role. If I
mirror the bytes from Launchpad down exactly as they are, and then try
to do "bzr branch local-leo-editor --no-tree test" it fails for the
same reasons, albeit a lot faster.

I did a little bit more digging, and I found the record that is
specifically problematic:
spellpyx.txt-20080228081140-oy1uzrph8dr32kq8-508
 <email address hidden>
 spellpyx.txt-20080228081140-oy1uzrph8dr32kq8-508
 <email address hidden>
 7346774 1802272 904890145 904926181

spellpyx.txt-20080228081140-oy1uzrph8dr32kq8-508
 <email address hidden>
 spellpyx.txt-20080228081140-oy1uzrph8dr32kq8-508
 <email address hidden>
 7346774 1802272 2361701 904890145

^- This says, specifically, that there is a spellpyx.txt file, created
by edreamleo, which is at bytes from 2361701 to 904890145 in the
uncompressed block.

Logging that revision we see:
      Edward K. Ream 2010-08-06
      revision-id:<email address hidden>
      completed first draft of chapter 4
      M leo/doc/LeoDocs.leo
      M leo/doc/directives.txt
      M leo/doc/glossary.txt
      M leo/doc/intro.txt
      M leo/doc/scripting.txt
      M leo/plugins/spellpyx.txt

Note that the revision does *not* have a revno, which means it isn't
in the ancestry of the lp:leo-editor branch.

Just writing the minimal python code to try to extract that file gives
me OOM on Windows (during zlib decompression). However, I can get the
raw zlib compressed bytes, and then use an iterating decompressor
(decompress the next 10k bytes, write them to disk, grab the next, etc.)
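The iterating approach can be sketched with Python's `zlib.decompressobj` (my own minimal version, not the exact code John used):

```python
import io
import zlib

def stream_decompress(src, dst, chunk_size=64 * 1024):
    """Decompress a zlib stream a chunk at a time, writing output to dst,
    so peak memory stays near chunk_size instead of the ~900MB that a
    one-shot zlib.decompress() would need.  (For a strict bound on the
    output side you would also pass max_length to decompress().)"""
    decomp = zlib.decompressobj()
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(decomp.decompress(chunk))
    dst.write(decomp.flush())

# Demo on a small, highly repetitive payload (like the junk file):
payload = b"\\" * 1_000_000
out = io.BytesIO()
stream_decompress(io.BytesIO(zlib.compress(payload)), out, chunk_size=4096)
assert out.getvalue() == payload
```

In real use `dst` would be an open file on disk, which is what lets the extraction succeed where the in-memory decompress fails.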

That leaves me with a 865MB file on disk. Then playing tricks with
truncating, etc, lets me get access to the raw content, which ends up
being...
b'b"b\'appendText\'"'
b'b\'b"b\\\'angleBrackets\\\'"\''
b'b\'b\\\'b"b\\\\\\\'aspect_j\\\\\\\'"\\\'\''
b'b\'b\\\'b\\\\\\\'b"b\\\\\\\\\\\\\\\'applescript\\\\\\\\\\\\\\\'"\\\\\\\'\\\'\''

b'b\'b\\\'b\\\\\\\'b\\\\\\\\\\\\\\\'b"b\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'apdl\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'"\\\\\\\\\\\\\\\'\\\\\\\'\\\'\''

...
And a *whole* lot more backslash characters. About 3400 lines of '\'
with the last line being 393,239 bytes long.

I have the feeling it was a generated file, and the generator went
horribly, horribly wrong.
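For what it's worth, that pattern is exactly what you get when a generator re-`repr()`s its own output on every run. A hypothetical reconstruction (my guess at the bug, not the actual leo-editor code):

```python
# Each pass takes the previous content (already a bytes repr), reprs it
# again, re-quotes it, and doubles every backslash -- so the file grows
# roughly 2x per run, just like the recovered content above.
text = b"appendText"
for _ in range(4):
    text = repr(text).encode("ascii")
    print(text)
```

After a few thousand such runs the trailing lines would be hundreds of kilobytes of backslashes, matching the 3400-line, 393KB-last-line file John recovered.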

John
=:->
