compressor memory allocation question

Asked by teloniatis


In both the compressor and the decompressor (struct rohc_(de)comp), uint8_t rru[ROHC_MAX_MRRU] is an array of 65535 bytes allocated for every new (de)compressor. By default mrru is 0, and my application does not use ROHC segmentation, so why not use a pointer for rru instead and allocate the memory on demand?
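To make concrete what I mean, here is a minimal sketch in C. The struct and function names are mine for illustration, not the library's real API:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define ROHC_MAX_MRRU 65535

/* hypothetical sketch: replace the fixed 64 KiB array with a pointer
 * that is only allocated once the MRRU is set to a non-zero value */
struct rohc_comp_sketch {
    size_t mrru;   /* 0 means segmentation disabled */
    uint8_t *rru;  /* instead of uint8_t rru[ROHC_MAX_MRRU] */
};

/* allocate the reassembly buffer only when segmentation is enabled;
 * returns 0 on success, -1 on error */
static int sketch_set_mrru(struct rohc_comp_sketch *const comp,
                           const size_t mrru)
{
    if (mrru == 0) {
        /* segmentation disabled: release the buffer entirely */
        free(comp->rru);
        comp->rru = NULL;
        comp->mrru = 0;
        return 0;
    }
    if (mrru > ROHC_MAX_MRRU) {
        return -1;
    }
    uint8_t *const buf = realloc(comp->rru, mrru);
    if (buf == NULL) {
        return -1;
    }
    comp->rru = buf;
    comp->mrru = mrru;
    return 0;
}
```

This way a (de)compressor that never enables segmentation pays only for a pointer and a size field.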
Also, why are three CRC tables (crc_table_X) needed? Only one type of CRC applies to a given packet, right?

Also, is there any progress with this:

I am asking all this because we are designing an application with many (de)compressors / contexts, and memory allocation is quite important to us.

Best regards,

Revision history for this message
Didier Barvaux (didier-barvaux) said :


Thank you for the feedback on your use case!

You're right about the rru buffers in compressors and decompressors. I made them dynamically allocated in a dev branch, and it seems to work. Please test it on your side too. See below for details.

The CRC tables are pre-computed to speed up CRC computations. Several years ago they were not stored in the compressor/decompressor; they were global variables, a kind of singleton that had to be initialized before any compressor or decompressor. Moving them into the compressor/decompressor made the API simpler. It could be improved from a memory point of view, but that requires API changes. In short, it is not quick to do ;-)
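To illustrate the speed/memory trade-off, here is a generic table-driven CRC-8 using the RFC 3095 polynomial x^8 + x^2 + x + 1 (0x07). This is only a sketch, not the library's exact code; in particular the initial value and bit order here are assumptions:

```c
#include <stddef.h>
#include <stdint.h>

/* pre-computed lookup table: 256 bytes of memory buy a big speed-up
 * over bit-by-bit computation */
static uint8_t crc8_table[256];

/* build the table once, e.g. at compressor creation */
static void crc8_init_table(void)
{
    for (unsigned int i = 0; i < 256; i++) {
        uint8_t crc = (uint8_t)i;
        for (int bit = 0; bit < 8; bit++) {
            /* MSB-first shift with polynomial 0x07 */
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
        }
        crc8_table[i] = crc;
    }
}

/* one table lookup per input byte instead of 8 shift/xor steps */
static uint8_t crc8_compute(const uint8_t *const data, const size_t len)
{
    uint8_t crc = 0x00; /* initial value is an assumption for this sketch */
    for (size_t i = 0; i < len; i++) {
        crc = crc8_table[crc ^ data[i]];
    }
    return crc;
}
```

ROHC uses several CRC widths (CRC-3, CRC-7, CRC-8), hence the several tables; making them shared again instead of per-(de)compressor is the part that needs API changes.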

About bug #1604491, there is some work in progress on performance for ARM targets. Such tiny systems have small caches, so the size of the structures matters for performance (cache invalidation). There is some work in a dev branch that shrinks all the structures. Please test it for your setup/use case.

The dev branch is dev_improve_perfs_decomp_on_arm on GitHub. It also contains the optimization that you proposed for the RRU buffers. Tell me if it improves your use case.

Changelog of the branch:


Launchpad Janitor (janitor) said :

This question was expired because it remained in the 'Needs information' state without activity for the last 15 days.

teloniatis (teloniatis) said :

Hi again,

Sorry for the late response.
Indeed, the mentioned change to the rru buffers greatly decreases the memory footprint of the (de)compressor objects.

However, the memory allocation is still very high, because the memory footprint of the contexts is also high.
One of the reasons is the compression of lists of IPv6 extension headers, which I am not very familiar with yet.

However, by modifying some constants (e.g. ROHC_LIST_ITEM_DATA_MAX) in "common/rohc_list.h", we managed to decrease the size of the list_(de)comp structs from 70224 to 2064 bytes.
But my question is: why reserve memory for lists if no IPv6 support is needed? I am referring to the decompressor's context, since during compressor context initialization there is a check on the IP version, and "rohc_comp_list_ipv6_new" is called only for IPv6.
Why not apply the same check when initializing the decompressor's context, before calling "rohc_decomp_list_ipv6_new"?
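In other words, something like the following sketch. The names and the size are only illustrative (the real entry points are rohc_comp_list_ipv6_new / rohc_decomp_list_ipv6_new; 2064 is the reduced struct size we measured):

```c
#include <stdlib.h>

/* hypothetical sketch: mirror the compressor's IP-version test on the
 * decompressor side, so IPv6 list state is only allocated for IPv6 contexts */

enum ip_version_sketch { IPV4_SKETCH = 4, IPV6_SKETCH = 6 };

struct decomp_ctxt_sketch {
    void *list_decomp; /* IPv6 extension-header list state, NULL for IPv4 */
};

/* returns 0 on success, -1 on allocation failure */
static int sketch_decomp_ctxt_init(struct decomp_ctxt_sketch *const ctxt,
                                   const enum ip_version_sketch version)
{
    ctxt->list_decomp = NULL;
    if (version == IPV6_SKETCH) {
        /* only IPv6 contexts pay for the list decompression state */
        ctxt->list_decomp = malloc(2064);
        if (ctxt->list_decomp == NULL) {
            return -1;
        }
    }
    return 0;
}
```

With such a check, IPv4-only deployments like ours would not pay the list memory at all.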


Didier Barvaux (didier-barvaux) said :


I checked the allocated buffers for the lists of IPv6 extension headers. They are indeed quite large. I changed them for the TCP profile in November, but I forgot to do the same for the RFC 3095 profiles. I have just done so on the dev_improve_perfs_decomp_on_arm branch. Please update your copy and tell me if it improves your situation.

There is still room for optimization in struct list_comp. It still weighs about 35 kB. I'm adding this work to my TODO list.


Didier Barvaux (didier-barvaux) said :


The changes to the buffers for the RRU and the IPv6 extension headers are now merged into the master branch, along with all the other performance enhancements. They will be released soon with the next major release.

Didier Barvaux
