Running multiple carbon-caches and RabbitMQ
Hi guys,
I want to run a monitoring stack based on collectd + Graphite.
I will be pulling quite a lot of metrics: about 500 hosts monitored at a 1-minute interval (some important metrics will be collected more often, though never more often than every 10 seconds), so I estimate roughly 200k metrics/minute. Do I need multiple carbon-cache instances to handle this amount of traffic, or should a single carbon-cache be enough? (Carbon will run on a fairly powerful server with 6 SSD disks.)
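In case it matters, the multi-instance layout I have in mind is the stock carbon.conf convention of per-instance sections; a minimal sketch (the ports here are just examples):

    [cache]
    LINE_RECEIVER_PORT = 2003
    PICKLE_RECEIVER_PORT = 2004
    CACHE_QUERY_PORT = 7002

    [cache:b]
    LINE_RECEIVER_PORT = 2103
    PICKLE_RECEIVER_PORT = 2104
    CACHE_QUERY_PORT = 7102

The second instance would then be started with "carbon-cache.py --instance=b start" alongside the default one.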
My second question is about RabbitMQ. By default carbon creates exclusive queues. I want to use Rabbit for zero downtime: when the server running carbon dies, RabbitMQ should keep collecting the metrics. But when carbon disconnects, its exclusive queue disappears. Why does carbon create an exclusive queue by default? I could change the code in the AMQP section of carbon's .py files, but is that a good idea?
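What I want at the AMQP level is just different queue_declare flags. A minimal sketch of the durable, non-exclusive declaration I mean, written with pika for brevity rather than carbon's Twisted/txamqp code (the queue name and host are just examples):

    import pika

    # Connect to a local RabbitMQ broker (host is an example).
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # exclusive=False + durable=True: the queue outlives consumer
    # disconnects (and broker restarts), so metrics keep queuing up
    # while carbon is down. An exclusive queue, carbon's default, is
    # deleted as soon as the declaring connection closes.
    channel.queue_declare(
        queue="graphite_metrics",  # example queue name
        durable=True,
        exclusive=False,
        auto_delete=False,
    )
    connection.close()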
For now my idea is to run a single queue in Rabbit and use carbon-relay to spread the metrics over multiple carbon-caches. Is that a good idea?
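Concretely, the relay setup I have in mind would look something like this in carbon.conf, assuming two cache instances like the ones above (ports are just examples):

    [relay]
    LINE_RECEIVER_PORT = 2013
    PICKLE_RECEIVER_PORT = 2014
    RELAY_METHOD = consistent-hashing
    DESTINATIONS = 127.0.0.1:2004:a, 127.0.0.1:2104:b

Each destination is ip:port:instance, pointing at a cache instance's pickle receiver port, and consistent-hashing should spread the metric paths evenly across the two caches.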
There were also performance problems with the txamqp plugin in earlier versions of carbon; do those problems still occur? And is the pickle protocol still the best option for scalability?
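As I understand it, the pickle protocol is just a length-prefixed pickled list of (path, (timestamp, value)) tuples; here is the kind of sender I would be testing with, assuming a carbon pickle receiver on the default port 2004 (metric names and values are made up):

    import pickle
    import socket
    import struct
    import time

    # A batch of (metric_path, (timestamp, value)) tuples.
    now = int(time.time())
    metrics = [
        ("servers.host1.cpu.user", (now, 12.5)),
        ("servers.host1.load.shortterm", (now, 0.42)),
    ]

    payload = pickle.dumps(metrics, protocol=2)
    header = struct.pack("!L", len(payload))  # 4-byte big-endian length prefix

    sock = socket.create_connection(("127.0.0.1", 2004))
    sock.sendall(header + payload)
    sock.close()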