[rrd-users] One-off and no-data-problem
jonatan at cmteknik.se
Thu Dec 30 18:43:57 CET 2010
On 2010-12-30 18:23, Simon Hobson wrote:
> If you keep supplying updates which are not more than heartbeat
> apart, then each update will specify the value for the interval since
> the previous update to the current one.
> As for gauge values, you need to remember that RRD tools do NOT store
> them ! RRD only deals with rates, when you specify gauge then you are
> bypassing the conversion from counter values to rates - but it's
> still assumed to be a rate.
> Think of it like this, if you were logging distance/time/speed for a
> car then you could do it two ways :
> 1) Periodically tell RRD the odometer reading - it then knows how far
> you've driven and can work out the speed as (d1 - d2)/t. This is the
> COUNTER data type.
> 2) Periodically you can tell it how fast you are going by reading the
> speedometer. If you think about it, this is less accurate - you might
> have temporarily slowed down just when a reading is taken - but RRD
> will simply assume that you were doing the stated speed for the whole
> of the previous update interval. This is the GAUGE data type.
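If I sketch that analogy in code (my own illustration of the arithmetic, not
anything from the RRDtool sources - the function names are made up):

```python
# How a rate is derived from each data-source type, per the car analogy.
# Times in seconds, odometer in metres; illustrative helpers only.

def counter_rate(t_prev, c_prev, t_now, c_now):
    """COUNTER: rate is the change in the counter over the interval."""
    return (c_now - c_prev) / (t_now - t_prev)

def gauge_rate(value):
    """GAUGE: the reading itself is taken as the rate for the whole interval."""
    return float(value)

# Odometer read 300 s apart: 10000 m, then 16000 m -> 20 m/s average.
print(counter_rate(0, 10_000, 300, 16_000))   # 20.0
# Speedometer read once: if we happened to be doing 15 m/s at that instant,
# RRD assumes 15 m/s for the entire previous interval.
print(gauge_rate(15))                          # 15.0
```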
> On update frequency. You can update as often as you like as long as
> you don't try and update more often than once per second. The most
> accurate in terms of getting out what you put in is if you update at
> the end of every step, and exactly on the step boundary. If you do
> this then normalisation becomes a null operation and you'll get out
> what you put in (given the limits of floating point storage).
> If your updates aren't exactly on the step boundaries then
> normalisation will be done - did you read Alex's tutorial ?
> Note that step boundaries are always an integer multiple of the step
> size from Unix epoch (midnight, 1st Jan 1970).
Thanks, things are starting to fall into place. I've read the tutorial,
but it's the little details that I'm still unsure about.
> If you leave a gap longer than heartbeat between updates then the
> data for that period of time is specifically unknown.
So RRDtool tracks data at a higher resolution than the step size while
building the current datapoint, then?
So if I reach a new time boundary (a new step), first throw a couple of
values at it in quick succession, and then stop supplying values, then
when the next time boundary is reached and (I guess) the current
datapoint is consolidated, it checks how much of the time within the
step had valid and invalid values, and if the valid ratio is high enough
relative to XFF, the whole datapoint is considered valid?
(Assuming the RRA has a step-size of 1.)
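In other words, I picture the check working roughly like this (my reading of
the docs, not the actual RRDtool source - `consolidate` is a made-up name,
and xff is the maximum fraction of the interval allowed to be unknown):

```python
# Sketch of the xff check at consolidation time: the datapoint is kept
# if the fraction of *unknown* time within the step does not exceed xff.

def consolidate(known_seconds, step, xff):
    """Return True if the consolidated datapoint counts as valid."""
    unknown_fraction = 1 - known_seconds / step
    return unknown_fraction <= xff

# step = 300 s, xff = 0.5: up to half the step may be unknown.
print(consolidate(200, 300, 0.5))   # True  (1/3 of the step unknown)
print(consolidate(100, 300, 0.5))   # False (2/3 of the step unknown)
```

Is that roughly the right mental model?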