Sub-second data gathering

Asked by Mohan on 2012-06-06

Can I use Graphite / StatsD to gather data at sub-second intervals? Our use case is a fairly high-speed message handler.

The alternative is to use the programming language's own facilities to write logs, but then I would have to push the data into Graphite later anyway. Our load on Graphite should not be very high because we will have only one or two servers, but the data is captured quickly. We won't be capturing it for a long time.

Thanks.

Question information

Language: English
Status: Answered
For: Graphite
Assignee: No assignee
Last query: 2012-06-06
Last reply: 2012-06-10
Mohan (radhakrishnan-mohan) said : #1

Has anyone tried gathering sub-second values in the client generator and updating Graphite in bulk? Is there an array type so that bulk values can be passed to Graphite? I know that the programming language I use can store sub-second values efficiently.

Can I bulk-update Graphite, thereby preserving the granularity while also not overwhelming Graphite?

Thanks.

Michael Leinartas (mleinartas) said : #2

Graphite can't handle sub-second granularity, unfortunately. The data format stores the epoch timestamp as an unsigned long integer, i.e. whole seconds, which leaves no room for sub-second precision. If you want Graphite to store the data, you'll need to aggregate it to per-second values using something like StatsD.
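The aggregation step Michael describes can be sketched in a few lines. This is a minimal illustration (not StatsD itself, and `aggregate_per_second` is a hypothetical helper name): sub-second samples are collapsed into per-second sums, the coarsest granularity whisper can store.

```python
from collections import defaultdict

def aggregate_per_second(samples):
    """Collapse (timestamp, value) samples with sub-second timestamps
    into per-second sums, keyed by the whole-second epoch time."""
    buckets = defaultdict(float)
    for ts, value in samples:
        buckets[int(ts)] += value  # truncate to whole seconds
    return sorted(buckets.items())

# Three sub-second samples fall into two one-second buckets.
samples = [(1339000000.25, 1), (1339000000.75, 1), (1339000001.10, 1)]
print(aggregate_per_second(samples))  # [(1339000000, 2.0), (1339000001, 1.0)]
```

A real StatsD deployment does the same bucketing server-side and flushes the totals to Graphite on its flush interval, so the clients stay fire-and-forget.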

You can do bulk updates, however. From the command line, the whisper-update.py utility can take multiple points, or you can send to the carbon daemon from Python using the pickle format (by default, the pickle receiver listens on port 2004). These are simply pickled lists of metric tuples of the form:
[(metric_name, (timestamp, value)), ...]
You can pickle.dumps() one of these lists and send it to the carbon daemon over a TCP socket.
