[rrd-users] Re: Some values become NaN...

BAARDA, Don don.baarda at baesystems.com
Thu Jul 27 02:56:27 MEST 2000


G'day,

Please excuse the cr*ppy Outlook formatting.

> -----Original Message-----
> From:	Alex van den Bogaerdt [SMTP:alex at ergens.op.HET.NET]
> Sent:	Thursday, July 27, 2000 9:41 AM
> To:	michael.bischof at ch.uu.net
> Cc:	rrd-users at list.ee.ethz.ch
> Subject:	[rrd-users] Re: Some values become NaN...
> 
> 
> Michael Bischof wrote:
> > Hi,
> > 
> > I wrote a script that updates an rrdtool database every 5 minutes...
> > From time to time some inserted values become NaN ???
> 
> The strange thing is that there are not enough NaN ...
[...]
> > [26.07.2000 23:21:27] N:1700217451:3249272098
> > [26.07.2000 23:26:28] N:1853471652:3464823810
> 
> 5 minutes and 1 second apart.  This is 301 seconds
[...]
> > ---rrdtool database---
> > ./rrdtool create test.rrd \
> >                     DS:inp:COUNTER:300:0:U \
> >                     DS:out:COUNTER:300:0:U \
> 
> You ask for a heart beat counter of 300.  This means that there
> should be an update every 300 seconds.  If you plan to update every
> 5 minutes you should use another (higher) value for heartbeat.
> If you use a heartbeat of 300, all three updates above should fail AFAIK.
> Only one does.
[...]

Someone else will trip over this. At first I thought: "He is only out by
1 second on a few samples. Isn't RRD's data resampling supposed to fix
this?" Then I read the man page. What confused me is that the heartbeat
is _not_ the same as the step. The following is my interpretation of how
it _should_ work, based on the man page.

The step is the interval between stored samples. The heartbeat is the
maximum interval allowed between two updates. If the interval between two
recorded updates is less than the heartbeat, then all the _stored_ samples
falling in that interval are interpolated from those two updates. If the
interval is larger than the heartbeat, all the stored samples in that
interval are recorded as unknown.
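To make that concrete, here is a hypothetical sketch of the rule as I read
it (this is my own illustration, not rrdtool's actual code): the step fixes
where samples are stored, and the heartbeat decides whether a gap between
updates is trusted or written out as unknown.

```python
# Hypothetical sketch of the step/heartbeat rule as read from the man
# page -- NOT rrdtool's actual implementation.
def stored_samples(t_prev, v_prev, t_now, v_now, step, heartbeat):
    """Yield (timestamp, rate) for each step boundary in (t_prev, t_now].

    If the gap between the two updates exceeds the heartbeat, every
    stored sample in that gap is unknown (None); otherwise the rate is
    assumed constant across the gap.
    """
    gap = t_now - t_prev
    rate = None if gap > heartbeat else (v_now - v_prev) / gap
    # First step boundary strictly after the previous update.
    t = ((t_prev // step) + 1) * step
    while t <= t_now:
        yield (t, rate)
        t += step

# A 301-second gap with heartbeat 300 stores an unknown;
# the same gap with heartbeat 600 stores an interpolated rate.
print(list(stored_samples(0, 0, 301, 301, 300, 300)))
print(list(stored_samples(0, 0, 301, 301, 300, 600)))
```

With heartbeat 300 the 301-second update above yields an unknown at the
300-second boundary; raising the heartbeat to 600 yields a rate instead.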

This is pretty cool, and answers some of the problems I asked about earlier.
If you really don't want unknowns in your RRD, you can extend your heartbeat
to many times your step. Then you can miss a few updates, and the missing
steps are filled with the average rate over that period. The really cool
thing about this is that when you consolidate these averages, you don't end
up losing or approximating the data over the consolidated period. This is
much better than using a forgiving "xff" to allow for a few unknowns when
consolidating.
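For example, the create call quoted above could use a heartbeat of three
steps, so that up to two missed updates are interpolated rather than stored
as unknown. This is only a sketch; the RRA line is illustrative, since the
original script's RRAs were elided from the quote.

```shell
# Sketch only: step of 300 s, heartbeat of 900 s (3 steps), so gaps of
# up to 15 minutes between updates are interpolated, not marked unknown.
rrdtool create test.rrd --step 300 \
        DS:inp:COUNTER:900:0:U \
        DS:out:COUNTER:900:0:U \
        RRA:AVERAGE:0.5:1:600
```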

I don't know why there weren't more unknowns, but I did notice that the one
unknown happened after two consecutive drifts of 1 second, so maybe the
resampling algorithm is more forgiving than the manual suggests.

ABO

--
Unsubscribe mailto:rrd-users-request at list.ee.ethz.ch?subject=unsubscribe
Help        mailto:rrd-users-request at list.ee.ethz.ch?subject=help
Archive     http://www.ee.ethz.ch/~slist/rrd-users
WebAdmin    http://www.ee.ethz.ch/~slist/lsg2.cgi


