Commits taking ridiculously long

Asked by Darren

Hello,

I have a small project consisting only of a few dozen source files.

Even an ordinary commit feels slow for the amount of data involved (maybe 20-30 seconds), but one particular commit took so long that it prompted me to post for help. The change in that commit was a single line of text in one file, and the commit took well over 5 minutes.

I copied the status lines from stdout when I realised something was odd, but then stupidly lost them from my clipboard. I think it was at the 'Inserting text' stage, or something like that, and the status seemed to indicate that an amount of data on the order of dozens of megabytes was being transferred. (I may have misread this, but is there some intentional yet sporadic activity that could feasibly be taking so long?)

I am using the sftp protocol. Are there any obvious things that might be wrong with my setup? Is there another protocol that I should be using?

Thank you in advance for any suggestions,

Darren

Question information

Language: English
Status: Answered
For: Bazaar
Assignee: No assignee

Martin Pool (mbp) said:
#1

Hi Darren,

Accessing a repository over sftp can be slow, especially if the server
is a long way away on the network.

You probably saw one unusually slow commit because bzr had reached a
point where it had to repack the repository; over sftp that means
reading and then re-writing a lot of data. The time is going to be
related to the amount of data written since the last repack, not the
size of the commit that initiated it.

I have a few suggestions:

1- use bzr+ssh instead of sftp (example commands below)
2- don't use a checkout or bound branch, but rather have a regular
local branch, and push to the server from time to time
3- if you feel the time was unreasonable even considering all the
above (eg even if the whole repo is just 1MB on the server and we
transferred dozens of megabytes to repack it), please file a bug
including the results of committing to 'log+sftp://example.com/....'
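
To make those concrete, here is a rough sketch of each suggestion as
commands (the path after example.com is a placeholder, and the commit
messages are only illustrative):

  # 1: push over bzr+ssh instead of sftp
  bzr push bzr+ssh://example.com/path/to/project

  # 2: turn a bound branch (heavyweight checkout) into an ordinary
  #    local branch, commit locally, and push when convenient
  bzr unbind
  bzr commit -m "one-line change"
  bzr push bzr+ssh://example.com/path/to/project

  # 3: for a bug report, rebind through the logging transport and
  #    repeat the slow commit; the transport activity should end up
  #    in bzr's debug log (normally ~/.bzr.log)
  bzr bind log+sftp://example.com/path/to/project
  bzr commit -m "one-line change"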

Martin

John A Meinel (jameinel) said:
#2

bzr's disk layout means that periodically it needs to recombine existing
content to preserve efficiency. It generally does this at orders of
magnitude. (So after 10 commits, it recombines the info about those 10
into a single file. After 100 commits, it recombines the 10 "10-commit"
files into a single "100-commit" file).

Using 'sftp', all of this has to be done on the client: downloading
all the information about the files, combining it, and re-uploading it.

The order-of-magnitude steps mean that, infrequently, a commit will be
extra slow.
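
If you are curious, you can watch this happen in a local standalone
branch: the data lives in ordinary pack files under the repository
directory (the layout below is the usual one for current formats), and
counting them before and after commits shows small packs being
recombined into a larger one around every 10th commit:

  ls .bzr/repository/packs/ | wc -l   # count the pack files
  # ... make a small change to a versioned file, then:
  bzr commit -m "one-line change"
  ls .bzr/repository/packs/ | wc -l   # count them again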

However, if you are using "bzr+ssh://" to access the remote content,
then we tell the remote "bzr" to do the repacking, so you won't be
downloading the content locally only to push it back up again.

There are other significant advantages to using bzr+ssh, but this is
the one relevant to your particular concern.
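
As a rough sketch of acting on that (the path is a placeholder):
re-point an existing checkout at the same branch over bzr+ssh, and, if
you like, trigger the repack explicitly at a quiet moment rather than
waiting for it to happen in the middle of a commit:

  # rebind the checkout to the bzr+ssh URL of the same branch
  bzr unbind
  bzr bind bzr+ssh://example.com/path/to/project

  # optionally repack at a convenient time ('bzr pack' combines and
  # compresses the pack files, and should accept a remote location);
  # over bzr+ssh the work is done on the server, not on your machine
  bzr pack bzr+ssh://example.com/path/to/project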

John
=:->
