[rrd-users] One-off and no-data-problem
Simon Hobson
linux at thehobsons.co.uk
Fri Dec 31 00:26:22 CET 2010
Jonatan Magnusson wrote:
>> If you leave a gap longer than heartbeat between updates then the
>> data for that period of time is specifically unknown.
>>
>So, RRDtool stores data with higher resolution than the step-size for
>the current datapoint then?
>
>So that if I reach a new time boundary (a new step) and first throw a
>couple of values with short intervals at it, and then stop supplying
>values, then finally when the next time boundary is reached and (I
>guess) the current datapoint should be consolidated, it checks how much
>of the time within the step had valid and invalid values, and if that
>ratio is greater than XFF, then the whole datapoint is considered valid?
>(assuming that the RRA has a step-size of 1)
Not quite.
Firstly, a time period is not "complete" until there has been an
update on or after the end time of that period (I think) - i.e. until
you do such an update, the value for that period is unknown.
If you think about it, suppose you have a step period of 300 and do
updates at 0, 10, and 20. You ask for output covering 0 to 300, but
the tools have no way of knowing if you are going to come back and do
another update somewhere between 20 and 300 - so it cannot give any
answer other than unknown. I think this applies irrespective of
your XFF.
Where XFF would come in is if (say) you had done no updates for
a while and then updated at x+250 and x+300 (where x is some multiple
of 300, and the step is 300). In that case, you'd have data for 250
to 300, but unknown for 0 to 250 - and XFF would be consulted to see
if the result was unknown, or derived from the data that was supplied.
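To put numbers on that decision, here is a rough sketch in Python of
how such a check might look. This is not the actual rrdtool code: the
names (consolidate, running_rate and so on) are made up purely for
illustration, the rate of 2.0 below is just a placeholder, and - as
above - it would only be consulted once an update has landed on or
after the end of the step.

def consolidate(known_seconds, unknown_seconds, running_rate, xff):
    # known_seconds   - part of the step covered by valid updates
    # unknown_seconds - part of the step with no update within heartbeat
    # running_rate    - average rate over the known part of the step
    # xff             - fraction of the step allowed to be unknown
    step = known_seconds + unknown_seconds
    if unknown_seconds / step > xff:
        return None         # too much missing data: the step stays unknown
    return running_rate     # otherwise derive the value from the known data

# The example above: data only for 250 to 300 of a 300 second step.
consolidate(50, 250, running_rate=2.0, xff=0.5)   # None, since 250/300 > 0.5
consolidate(50, 250, running_rate=2.0, xff=0.9)   # 2.0 - allowed by a lax XFF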
To do all this, rrd doesn't store higher-resolution data; it stores a
"running value". Suppose you have a counter type, a step of 300, and
update with 0 at time 0 to kick off with. I haven't looked at the
code, and this is just one way of calculating this.
Your rate so far (r1) is 0, and your time so far (t1) is 0.
At time 10 you update with value 10 - so your rate for this update
(r2) is (10-0)/10 = 1, and your interim rate ((r1*t1)+(r2*t2))/T is 1,
where t2 is the time covered by this update (10 here), and T is the
time so far in this step, or the step size if the update completes
the step.
At time 100 you update with value 1000 - a rate of (1000-10)/(100-10)
= 11. The overall rate so far is ((1*10) + (11*90))/100 = 10.
At time 400, you update with value 1750. The rate from 100 to 400 is
(1750-1000)/(400-100) = 2.5, and the end result from 0 to 300 will be:
((10*100)+(2.5*200))/300 = 5
The value 5 would be stored for the step from 0 to 300, and 2.5
stored as the "running value" for the first 100s of the next step.
So you see, for storage of partial updates, you only have to store
one tuple of data - the time for which we have data, and the "running
value" of that data.
--
Simon Hobson
Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.