[rrd-users] rrdcached not flushing when used from my script
matija at serverflow.com
Thu May 1 18:31:32 CEST 2014
I'm trying to write a script that will need to do a lot of RRD writing,
so of course I need rrdcached.
But I must be doing something slightly wrong, because while rrdcached
doesn't report any errors, it also doesn't flush the data to the file.
A munin setup using the same rrdcached has no such problem, so I think
rrdcached itself is set up correctly.
What I do:
1) I do an rrdcreate (practically copied from munin, since I didn't want
that to be a problem), except that the start specification is about two
years in the past.
2) Then I write two years worth of updates (in 300 second intervals)
into the RRD. (Yes, I know that is a LOT of updates).
3) Then I do a flush
4) Then I do a rrdgraph or a rrdtool dump.
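To put a number on "a LOT of updates" in step 2, here is a quick back-of-the-envelope check (plain Python, nothing rrdtool-specific):

```python
# Number of UPDATE commands needed for two years of data at one
# sample every 300 seconds (the step used in the setup above).
SECONDS_PER_YEAR = 365 * 24 * 3600
STEP = 300  # seconds between samples

updates = (2 * SECONDS_PER_YEAR) // STEP
print(updates)  # 210240
```

So each file receives on the order of 200,000 updates before the flush.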
This works when I use RRDs without a --daemon specification, but it
doesn't work if I use RRDs *with* --daemon (although I see the update
statements in the rrdcached journal), and it doesn't work if I send the
update commands directly to rrdcached - even though I do it inside a
batch and at the end rrdcached responds without complaint. The journal
contains entries like
UPDATE /var/lib/rrdcached/db//foo3.rrd 1398957791:0.76786129
UPDATE /var/lib/rrdcached/db//foo3.rrd 1398958091:0.76782300
UPDATE /var/lib/rrdcached/db//foo3.rrd 1398958391:0.76778471
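For reference, this is roughly how I assemble the batch I send over the socket (the helper function is mine, not from any RRD binding; in real use the BATCH line is sent first and the daemon's "Go ahead" reply is read before the commands follow):

```python
def build_batch_payload(rrd_path, samples):
    """Assemble the text sent to rrdcached for one BATCH.

    samples is a list of (timestamp, value) pairs.  The wire format
    is a 'BATCH' line, one command per line, then a lone '.' line
    to terminate the batch.
    """
    lines = ["BATCH"]
    for ts, value in samples:
        lines.append(f"UPDATE {rrd_path} {ts}:{value}")
    lines.append(".")
    return "\n".join(lines) + "\n"

payload = build_batch_payload(
    "/var/lib/rrdcached/db//foo3.rrd",
    [(1398957791, "0.76786129"), (1398958091, "0.76782300")],
)
print(payload)
```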
When I do a flush for the file, I get back
0 Successfully flushed /var/lib/rrdcached/db//foo3.rrd.
However, after the flush, rrdtool dump shows a file without any of the
information, and the graphs don't reflect the info, either.
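As I understand the rrdcached protocol, every reply starts with a status number (negative on error, otherwise the count of payload lines that follow), so I check replies like the flush acknowledgement above with a small helper of my own (hypothetical, not part of any binding):

```python
def parse_status_line(reply_line):
    """Split an rrdcached reply line into (status, message).

    status < 0 indicates an error; status >= 0 is the number of
    payload lines that follow the status line.
    """
    code, _, message = reply_line.partition(" ")
    return int(code), message

status, msg = parse_status_line(
    "0 Successfully flushed /var/lib/rrdcached/db//foo3.rrd."
)
print(status, msg)
```

The flush reply parses as status 0, i.e. success with no payload lines, which is what makes the empty dump so puzzling.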
I tried this with rrdcached 1.4.7 and 1.4.8, but I didn't see any
difference in behavior.
Any pointers about what I can try? Is there a limit to the number of
updates I can send in the same batch, or a limit to the number of
updates to the same file without a flush?