I can reliably crash the iprohc client

Asked by matthew

My test case.

1. Start up rohc server and client communicating via private ip tunnel using ip addresses 10.0.0.1 -> 10.0.0.11
2. Send a large amount of data client -> server.
3. Client core dumps.

Now I can do all manner of other things successfully; for instance, I can ssh from the client to the server with no problem. Here is the output from the client. Note: nothing of interest shows up in /var/log/messages when the crash occurs.

The command I used on the server side in this particular test was "scp rohcuser@10.0.0.11:/home/rohcuser/test.tar.bz2"
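For reproduction purposes, here is a minimal sketch of the transfer step. It assumes 10.0.0.11 is the client's tunnel address, as in the scp command above; the file name and size are arbitrary, any sufficiently large client-to-server transfer should do:

 $ dd if=/dev/urandom of=/home/rohcuser/bigfile bs=1M count=100   # on the client: create ~100 MB of random test data
 $ scp rohcuser@10.0.0.11:/home/rohcuser/bigfile /tmp/            # on the server: pull it through the tunnel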

*** glibc detected *** /home/rohcuser/workspace/iprohc/client/iprohc_client: double free or corruption (out): 0x00007f27e402b180 ***
======= Backtrace: =========
/lib64/libc.so.6[0x345e8760e6]
/lib64/libc.so.6[0x345e878c13]
/home/rohcuser/workspace/iprohc/common/libiprohc_common.so(tun2raw+0x40a)[0x7f27eb6cb370]
/home/rohcuser/workspace/iprohc/common/libiprohc_common.so(new_tunnel+0x5a5)[0x7f27eb6cab7b]
/lib64/libpthread.so.0[0x3dc4407851]
/lib64/libc.so.6(clone+0x6d)[0x345e8e890d]
======= Memory map: ========
00400000-00406000 r-xp 00000000 08:03 1726032 /home/rohcuser/workspace/iprohc/client/iprohc_client
00605000-00606000 rw-p 00005000 08:03 1726032 /home/rohcuser/workspace/iprohc/client/iprohc_client
012ce000-0130f000 rw-p 00000000 00:00 0 [heap]
3032c00000-3032c30000 r-xp 00000000 08:03 1472141 /usr/lib64/librohc_comp.so.0.1.0
3032c30000-3032e30000 ---p 00030000 08:03 1472141 /usr/lib64/librohc_comp.so.0.1.0
3032e30000-3032e31000 rw-p 00030000 08:03 1472141 /usr/lib64/librohc_comp.so.0.1.0
3033000000-303300a000 r-xp 00000000 08:03 1471908 /usr/lib64/librohc_common.so.0.1.0
303300a000-303320a000 ---p 0000a000 08:03 1471908 /usr/lib64/librohc_common.so.0.1.0
303320a000-303320b000 rw-p 0000a000 08:03 1471908 /usr/lib64/librohc_common.so.0.1.0
3033400000-3033430000 r-xp 00000000 08:03 1484267 /usr/lib64/librohc_decomp.so.0.1.0
3033430000-3033630000 ---p 00030000 08:03 1484267 /usr/lib64/librohc_decomp.so.0.1.0
3033630000-3033631000 rw-p 00030000 08:03 1484267 /usr/lib64/librohc_decomp.so.0.1.0
345e000000-345e020000 r-xp 00000000 08:03 916117 /lib64/ld-2.12.so
345e21f000-345e220000 r--p 0001f000 08:03 916117 /lib64/ld-2.12.so
345e220000-345e221000 rw-p 00020000 08:03 916117 /lib64/ld-2.12.so
345e221000-345e222000 rw-p 00000000 00:00 0
345e400000-345e402000 r-xp 00000000 08:03 916126 /lib64/libdl-2.12.so
345e402000-345e602000 ---p 00002000 08:03 916126 /lib64/libdl-2.12.so
345e602000-345e603000 r--p 00002000 08:03 916126 /lib64/libdl-2.12.so
345e603000-345e604000 rw-p 00003000 08:03 916126 /lib64/libdl-2.12.so
345e800000-345e98a000 r-xp 00000000 08:03 916118 /lib64/libc-2.12.so
345e98a000-345eb89000 ---p 0018a000 08:03 916118 /lib64/libc-2.12.so
345eb89000-345eb8d000 r--p 00189000 08:03 916118 /lib64/libc-2.12.so
345eb8d000-345eb8e000 rw-p 0018d000 08:03 916118 /lib64/libc-2.12.so
345eb8e000-345eb93000 rw-p 00000000 00:00 0
345fc00000-345fc15000 r-xp 00000000 08:03 916125 /lib64/libz.so.1.2.3
345fc15000-345fe14000 ---p 00015000 08:03 916125 /lib64/libz.so.1.2.3
345fe14000-345fe15000 r--p 00014000 08:03 916125 /lib64/libz.so.1.2.3
345fe15000-345fe16000 rw-p 00015000 08:03 916125 /lib64/libz.so.1.2.3
346ac00000-346ac16000 r-xp 00000000 08:03 916140 /lib64/libgcc_s-4.4.7-20120601.so.1
346ac16000-346ae15000 ---p 00016000 08:03 916140 /lib64/libgcc_s-4.4.7-20120601.so.1
346ae15000-346ae16000 rw-p 00015000 08:03 916140 /lib64/libgcc_s-4.4.7-20120601.so.1
346dc00000-346dc03000 r-xp 00000000 08:03 916141 /lib64/libgpg-error.so.0.5.0
346dc03000-346de02000 ---p 00003000 08:03 916141 /lib64/libgpg-error.so.0.5.0
346de02000-346de03000 r--p 00002000 08:03 916141 /lib64/libgpg-error.so.0.5.0
346de03000-346de04000 rw-p 00003000 08:03 916141 /lib64/libgpg-error.so.0.5.0
3470800000-3470872000 r-xp 00000000 08:03 916142 /lib64/libgcrypt.so.11.5.3
3470872000-3470a71000 ---p 00072000 08:03 916142 /lib64/libgcrypt.so.11.5.3
3470a71000-3470a72000 r--p 00071000 08:03 916142 /lib64/libgcrypt.so.11.5.3
3470a72000-3470a75000 rw-p 00072000 08:03 916142 /lib64/libgcrypt.so.11.5.3
3472000000-3472010000 r-xp 00000000 08:03 1441128 /usr/lib64/libtasn1.so.3.1.6
3472010000-347220f000 ---p 00010000 08:03 1441128 /usr/lib64/libtasn1.so.3.1.6
347220f000-3472210000 rw-p 0000f000 08:03 1441128 /usr/lib64/libtasn1.so.3.1.6
3bb3a00000-3bb3a9c000 r-xp 00000000 08:03 1441114 /usr/lib64/libgnutls.so.26.14.12
3bb3a9c000-3bb3c9c000 ---p 0009c000 08:03 1441114 /usr/lib64/libgnutls.so.26.14.12
3bb3c9c000-3bb3ca3000 rw-p 0009c000 08:03 1441114 /usr/lib64/libgnutls.so.26.14.12
3dc4400000-3dc4417000 r-xp 00000000 08:03 923980 /lib64/libpthread-2.12.so
3dc4417000-3dc4617000 ---p 00017000 08:03 923980 /lib64/libpthread-2.12.so
3dc4617000-3dc4618000 r--p 00017000 08:03 923980 /lib64/libpthread-2.12.so
3dc4618000-3dc4619000 rw-p 00018000 08:03 923980 /lib64/libpthread-2.12.so
3dc4619000-3dc461d000 rw-p 00000000 00:00 0
7f27e4000000-7f27e4037000 rw-p 00000000 00:00 0
7f27e4037000-7f27e8000000 ---p 00000000 00:00 0
7f27eacb0000-7f27eacb1000 ---p 00000000 00:00 0
7f27eacb1000-7f27eb6b8000 rw-p 00000000 00:00 0
7f27eb6c6000-7f27eb6c8000 rw-p 00000000 00:00 0
7f27eb6c8000-7f27eb6d1000 r-xp 00000000 08:03 1725987 /home/rohcuser/workspace/iprohc/common/libiprohc_common.so
7f27eb6d1000-7f27eb8d0000 ---p 00009000 08:03 1725987 /home/rohcuser/workspace/iprohc/common/libiprohc_common.so
7f27eb8d0000-7f27eb8d1000 rw-p 00008000 08:03 1725987 /home/rohcuser/workspace/iprohc/common/libiprohc_common.so
7f27eb8d1000-7f27eb8d2000 rw-p 00000000 00:00 0
7fffd3daf000-7fffd3dc4000 rw-p 00000000 00:00 0 [stack]
7fffd3dff000-7fffd3e00000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
./RunClient: line 1: 10245 Aborted (core dumped) /home/rohcuser/workspace/iprohc/client/iprohc_client --remote $1 --port 3126 --dev rohcclient --debug --p12 /home/rohcuser/certificate/ia.p12

P.S. In the logs for both the client and the server I get this message a lot:
"localhost rsyslogd-2177: imuxsock lost 708 messages from pid 6429 due to rate-limiting"

pid 6429 is the process id of my iprohc-server.

Question information

Language: English
Status: Solved
For: rohc
Assignee: No assignee
Didier Barvaux (didier-barvaux) said:
#1

Matthew,

As a foreword, I encourage you to subscribe to the mailing list directly instead of using the "questions" on the Launchpad website. You may subscribe on the page https://launchpad.net/~rohc/+join, then send your questions/problems by email to the <email address hidden> address.

My answers below.

> My test case.
>
> 1. Start up rohc server and client communicating via
> private ip tunnel using ip addresses 10.0.0.1 -> 10.0.0.11
> 2. Send a large amount of data client -> server.
> 3. Client core dumps.
>
> Now I can do all manner of other things successfully, for instance
> I can ssh from client to server and have no problem. Here is the
> text from the client. Note: nothing of interest shows up in
> /var/log/messages when the crash occurs.

Thank you for reporting the problem! I opened a dedicated ticket: https://bugs.launchpad.net/rohc/+bug/1180480
Please answer my questions and upload the files there.

What version of the ROHC library do you use? What version of the IP/ROHC application do you use? What is your Linux distribution?

> The command I used on the server side in this particular test was
> "scp rohcuser@10.0.0.11:/home/rohcuser/test.tar.bz2"

How large is the test.tar.bz2 file?

> ./RunClient: line 1: 10245 Aborted (core dumped) /home/rohcuser/workspace/iprohc/client/iprohc_client --remote $1 --port 3126 --dev rohcclient --debug --p12 /home/rohcuser/certificate/ia.p12

The client crashed, so a core dump should be available somewhere on your filesystem. You'll probably find a file named "core" in the current directory or in /.

Upload the "core" file, the /home/rohcuser/workspace/iprohc/client/iprohc_client binary and also the files listed by:
 $ ldd /home/rohcuser/workspace/iprohc/client/iprohc_client
on the ticket https://bugs.launchpad.net/rohc/+bug/1180480
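If no "core" file was created, core dumps are probably disabled by the shell's resource limits. Below is a minimal sketch for enabling them and taking a first look at the crash yourself. The paths come from this report; the offset 0x3370 is the faulting address 0x7f27eb6cb370 minus the load address 0x7f27eb6c8000 of libiprohc_common.so shown in the memory map, and useful symbol names require binaries built with debug info (-g):

 $ ulimit -c unlimited                                  # allow core files to be written in this shell
 $ ./RunClient <server address>                         # re-run the client until it crashes again
 $ gdb /home/rohcuser/workspace/iprohc/client/iprohc_client core
 (gdb) thread apply all bt                              # backtrace of every thread at the time of the crash
 $ addr2line -f -e /home/rohcuser/workspace/iprohc/common/libiprohc_common.so 0x3370
                                                        # map the address inside tun2raw back to a function and source line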

> p.s. in the logs for the client and the server I get this message a lot:
> "localhost rsyslogd-2177: imuxsock lost 708 messages from pid 6429
> due to rate-limiting"

This is probably due to the IP/ROHC client sending too many traces to the log daemon. The log daemon rsyslogd tells you that it was not quick enough and missed many messages. Avoid using the --debug option for the IP/ROHC client or server when transferring large amounts of data. The --debug option slows down the tunnel by a large factor.
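If you do want to keep --debug and still catch all the traces, rsyslog's imuxsock rate limiting can usually be relaxed. A minimal sketch, assuming rsyslog 5.7.1 or later (which should be the case on Red Hat 6); setting the interval to 0 disables the rate limit entirely:

 # add to /etc/rsyslog.conf, after the $ModLoad imuxsock line
 # (alternatively, raise $SystemLogRateLimitBurst instead):
 $SystemLogRateLimitInterval 0
 # then restart the daemon:
 $ service rsyslog restart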

Regards,
Didier

matthew (xcalibre0) said:
#2

Thanks Didier, I do belong to the mailing list, but I don't know if I have ever received an email. If you don't mind, could you explain what the difference is between submitting by email and through Launchpad? That is, what is the advantage? I'm just curious, and I will use the mailing list once I figure out how.

The version of iprohc is 0.7, I think. I looked in the changelog and saw the last two entries below (I don't know how else to check). However, when I run iprohc_client --help it prints that it's 0.4, so I don't know which is accurate.

Release 0.7 (XX Xxx 201x)
 Not published yet.

Release 0.6 (25 Mar 2013)
 Do not use static variables in threaded functions.
 At client, filter traffic on destination IP address to avoid mixing
  traffic between several clients.
 Do not print syslog

My rohc library is version 1.6 (it has to be for iprohc to compile, I think). I don't know what sub-version it might be, and I don't know how to check. Again, the last entries in the ChangeLog are:

XX XXX 201X - release 1.6.0
  Compatibility:
    TODO
  License/Authors:
    TODO
  Acknowledgments for bug reports and/or bug fixes:
    FWX, Elisabeth, Viveris Technologies, Yura.
  Main changes:
    TODO
  Build system:
    TODO
  Q&A:
    TODO
  Performances:
    TODO
  Bug fixes:
    TODO

The size of my test file is 694k; it's actually just rohc 1.5.1, zipped and tar'd, from your website :D

I will upload the core shortly.

matthew (xcalibre0) said:
#3

Oh, also, I'm using Red Hat 6.

Didier Barvaux (didier-barvaux) said:
#4

Matthew,

> Thanks Didier, I do belong to the mailing list, but I don't know if I have
> ever received an email.

It seems you're not subscribed, see [1]. Please go to [2].

> If you don't mind, could you explain what the
> difference is between submitting by email and through launchpad?
> That is, what is the advantage, I'm just curious, I will use the mailing
> list when I find the way.

It would spare us both from typing long answers in the unfriendly editor of the Launchpad website. In general, email clients have better text editors :)

> The version of the iprohc is 0.7 I think. I looked in the changelog
> and saw the last two entries as (don't know how else to check):

Version 0.7 has not been released yet. Please type the following command at the root of the IP/ROHC sources:
 $ bzr revno

> However when I use iprohc_client --help it prints that it's 0.4, so
> I don't know which is accurate.

Arg, my bad, I forgot to update that number... Now fixed in bzr. Thank you for pointing it out. I also added the bzr revision to the version output for binaries built from bzr.

> My rohc lib is version 1.6 (it has to be for iprohc to compile I think)
> I don't know what subversion it might be, I do not know how to check
> again the last entries in the ChangeLog are:
> [...]

ROHC version 1.6.0 has not been released yet, so that's probably a bzr version. Please type the following command at the root of the ROHC library sources:
 $ bzr revno

I will update the autotools configuration to append the bzr revision number to the version number, so the trunk version will be numbered X.Y.Z~rev (for example, 1.6.0~725). That will avoid confusion next time.

> The size of my test file is 694k, it's actually just rohc 1.5.1 zip and tar'd from your website :D

OK.

> Oh also I'm using RedHat 6.

OK.

Regards,
Didier

[1] https://launchpad.net/~rohc/+members#active
[2] http://launchpad.net/~rohc/+join

Didier Barvaux (didier-barvaux) said:
#5

Bug fixed.