configuration suggestion for 400k metrics
First of all, thank you for creating Graphite; it offers a huge amount of flexibility and makes plotting time-based graphs easy.
Having worked with MRTG, RRD, and Cacti before, I admit I feel much more relaxed working with Graphite.
Recently, the number of metrics we feed to a single carbon-cache machine has almost tripled, and it will grow further in the near future. I have noticed that the graphs have started breaking up: the lines no longer look as smooth as they did when I had only 100k metrics.
Currently, the spec of the carbon-cache (0.9.9) server is:
- Intel Xeon 2.6G 24cores
- 24G Ram
- 1x1.1TB 7200rpm SATA
I just got another server with the same spec that I can use together with the first box, and I hope that once I add this machine in, it will help share the load and the graphs will look nice again.
Questions:
1. What would be a good setup for these two servers? I am thinking of running carbon-relay + carbon-cache on the existing box, and one or two carbon-cache instances on the new host.
2. How fast (in metrics/sec) can the carbon-relay listener go?
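For question 1, here is a sketch of what that two-box layout might look like in carbon.conf, assuming the DESTINATIONS / RELAY_METHOD settings from the 0.9.9 carbon.conf.example. The hostname `newbox` and all ports are illustrative placeholders; the instance-section syntax for running multiple caches may differ slightly by version, so check the carbon.conf.example shipped with your install.

```ini
# carbon.conf on the EXISTING box (hostnames/ports are examples)
[cache]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004

[relay]
LINE_RECEIVER_PORT = 2013
RELAY_METHOD = consistent-hashing
# one local cache plus two cache instances on the new host
DESTINATIONS = 127.0.0.1:2004:a, newbox:2004:a, newbox:2104:b

# carbon.conf on the NEW box: two cache instances on separate ports
[cache:a]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004

[cache:b]
LINE_RECEIVER_PORT = 2103
PICKLE_RECEIVER_PORT = 2104
```

The poller would then send everything to the relay's port (2013 here) instead of a cache directly, and consistent hashing would spread metrics across the three cache instances.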
Right now, the poller uses GNU parallel, running every 40 seconds to collect metrics from nearly 1k machines, producing almost 400k metrics that are fed to carbon-cache in one batch. Is injecting 400k metrics into carbon-cache in one batch considered bad practice? Should I break it into smaller chunks and submit them chunk by chunk?
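To illustrate the chunk-by-chunk idea, here is a minimal Python sketch that splits a list of plaintext-protocol lines ("name value timestamp") into batches and sends them with a short pause between each, rather than in one 400k-line write. The host, port, batch size, and pause are assumptions to tune, not recommendations from the Graphite docs.

```python
import socket
import time

def chunks(lines, size):
    """Yield successive batches of at most `size` lines."""
    for i in range(0, len(lines), size):
        yield lines[i:i + size]

def send_metrics(lines, host="127.0.0.1", port=2003,
                 batch_size=5000, pause=0.1):
    """Send plaintext-protocol metric lines to carbon in batches.

    Each element of `lines` is a string like "my.metric 42 1331000000".
    """
    sock = socket.create_connection((host, port))
    try:
        for batch in chunks(lines, batch_size):
            # carbon's line receiver expects newline-terminated lines
            sock.sendall(("\n".join(batch) + "\n").encode())
            time.sleep(pause)  # give carbon-cache a moment to drain
    finally:
        sock.close()
```

Smaller batches smooth out the burst the relay and caches see every 40-second polling cycle, which is one plausible cause of the gappy graphs.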
- Patrick
Question information
- Language: English
- Status: Answered
- For: Graphite
- Assignee: No assignee