Accumulation of expired fernet tokens causes high memory utilization on the memcached process

Asked by chandramouli Chekuri on 2018-11-08

The memcached process configuration for item_size and max_memory is handled in the code below:

class { '::memcached':
  listen_ip  => get_network_role_property('mgmt/memcache', 'ipaddr'),
  max_memory => '50%',
  item_size  => '10m',
}
We have some questions regarding the above manifest.

1) The current max_memory is 50% of total RAM, but this is not accounted for in the dimensioning guideline with respect to the RAM of the vCIC. Is there a reason for storing the old tokens in RAM?

2) item_size is currently 10 MB, but the memcached log says: "WARNING: Setting item max size above 1MB is not recommended! Raising this limit increases the minimum memory requirements and will decrease your memory efficiency."

How big should these values be? Can you please explain the reasoning behind them?
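For reference, the actual memory consumption of memcached can be compared against the configured limit by querying the daemon's standard `stats` command. The sketch below is a minimal illustration, assuming memcached listens on the default localhost:11211; the stat names (`bytes`, `limit_maxbytes`, `evictions`) are standard memcached protocol fields.

```python
import socket

def parse_stats(raw: str) -> dict:
    """Parse the plain-text reply of memcached's 'stats' command
    into a {stat_name: value} dict."""
    stats = {}
    for line in raw.splitlines():
        parts = line.strip().split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    return stats

def fetch_stats(host: str = "127.0.0.1", port: int = 11211) -> dict:
    """Send 'stats' to a running memcached instance and parse the reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"stats\r\n")
        chunks = []
        while True:
            chunk = sock.recv(4096)
            chunks.append(chunk)
            if not chunk or b"END\r\n" in chunk:
                break
    return parse_stats(b"".join(chunks).decode())
```

Calling `fetch_stats()` on a controller and comparing `bytes` (current usage) with `limit_maxbytes` (the configured ceiling), and watching whether `evictions` is non-zero, shows whether expired tokens are actually filling the cache.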

Question information

Language: English
Status: Answered
For: Fuel for OpenStack
Assignee: No assignee
Last query: 2018-11-08
Last reply: 2018-12-24
Denis Meltsaykin (dmeltsaykin) said : #1

Previously memcached had a 95% max-memory limit; this was fixed by https://bugs.launchpad.net/fuel/+bug/1439882 because the old limit caused problems during heavy load testing.

Regarding the item_size variable: it was increased by https://bugs.launchpad.net/mos/+bug/1571626 because Keystone stores revocation lists as a single item.

These values have proven to be more or less compatible with heavy-load scenarios on 1000-node clusters during rapid user additions (thousands of users in several minutes). Although I wouldn't recommend changing them, you might find it appropriate to decrease them for your specific environment, certainly after extensive testing on a staging environment.

Anil (karri-anil) said : #2

Hi All,

We understand that the values below are compatible with 1000-node deployments.

If the values are changed as follows, will we face any issues in deployments with 150 nodes?

  max_memory => '50%'  ->  max_memory => '64mb'
  item_size  => '10m'  ->  item_size  => '1m'

Regarding max_memory => '50%':

One token reserves 12 kB, so with 64 MB we can keep roughly 5,000 tokens. Is 64 MB reasonable for 150 servers, 3K VMs, and 15K vNICs over one hour? Or please suggest a value below '50%' that is suitable for 150 nodes.
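The arithmetic above is easy to sanity-check. This uses only the figures quoted in the question (12 kB per token, 64 MB of cache), not measured values:

```python
# Rough capacity estimate from the figures quoted above:
# 12 kB reserved per token, 64 MB total cache.
token_kb = 12
cache_mb = 64

tokens = (cache_mb * 1024) // token_kb
print(tokens)  # 5461, i.e. roughly the "5k tokens" quoted above
```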

Regarding item_size => '10m':

item_size is currently 10 MB, but the memcached log says: "WARNING: Setting item max size above 1MB is not recommended!" From Launchpad bug https://bugs.launchpad.net/mos/+bug/1571626 we learned that '1m' is not sufficient for 197 nodes. Can we reduce the value below '10m', for example to '5m', or is there another value that is suitable for 150 nodes?
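As a very rough sanity check only: if one assumes the revocation-list item grows roughly linearly with node count (an assumption, not something the bug report states), the bug's single data point (more than 1 MB needed at 197 nodes) can be scaled to 150 nodes:

```python
# Assumption (not from the bug report): revocation-list item size
# grows roughly linearly with node count.
nodes_known = 197      # scale from https://bugs.launchpad.net/mos/+bug/1571626
size_known_mb = 1.0    # '1m' was already insufficient at that scale

nodes_target = 150
est_mb = size_known_mb * nodes_target / nodes_known
print(round(est_mb, 2))  # ~0.76 MB: '1m' would be marginal at 150 nodes,
                         # so a value like '5m' leaves headroom
```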

Could you please suggest appropriate values for max_memory and item_size for an environment of about 150 nodes?

Thanks and Regards,
Anil.

Denis Meltsaykin (dmeltsaykin) said : #3

These values were certified during the GA acceptance testing. Re-testing a released product is impossible, and I cannot recommend any change to the architecture; i.e., the "appropriate values" are those already set in the product. If you feel these values are non-optimal for your case, you can change them at your own risk after testing them on a staging environment.
