<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<body bgcolor="#ffffff" text="#000000">
Thanks for this extensive reply, it helped me dig into the problem and
I now have more questions.<br>
My problem is that I can't get the server to send more than one report
per five-minute period.<br>
1. Is there a way to get rrdtool to smooth over the wrap and estimate
what should be where it's now unknown?<br>
2. Is there a way to tell rrdtool that if it gets a value lower than 0,
it should use 0 as the base instead of adding unknown?<br>
I would prefer option 1, but option 2 would be enough; then I can
instruct my operator to do the reload just seconds after a statdump.<br>
Alex van den Bogaerdt wrote:
<pre wrap="">On Wed, Mar 07, 2007 at 06:24:43PM +0100, Rickard Dahlstrand wrote:
I'm trying to log load from an NSD DNS server, and it wraps the counter
every time we update the zone (every 2 hours). This introduces
unknown values in the rrd file. I would like rrdtool to figure out that
a wrap has occurred and calculate a correct value.
Is it me asking too much of rrdtool, or do I need to change the
This has been asked and answered a couple of times, even recently (last
week or so?) See the mail archive for details, sorry no pointer as I am
too lazy right now to search them myself.
<pre wrap="">Time   UnixTime    Zone   UnixZone    Counter  Diff
11:05  1173265500  09:56  1173261360  1527837
11:10  1173265800  09:56  1173261360  1580265  52428
11:15  1173266100  09:56  1173261360  1634669  54404
Somewhere between 11:15 and 11:20 you update your zone. This happens
at time "T". A counter is set to zero. You lost two important rates:
between 11:15 and T, (say values 1634669 to 1683500) and between
T and 11:20 (values 0 to 971).
<pre wrap="">11:20  1173266400  11:19  1173266394  971     -1633698
11:25  1173266700  11:19  1173266394  52998    52027
11:30  1173267000  11:19  1173266394  104549   51551
11:35  1173267300  11:19  1173266394  154445   49896
At timestamp T-2 sec., update using the last known value just before this
zone update; the closer to T, the better.
At timestamp T-1 sec., update using "U" for unknown.
At timestamp T, update using zero.
When using COUNTER, that 971 at 11:20 will be computed against the zero
at time "T", which is correct.
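The three updates above can be sketched as a small shell fragment. The
reload time T, the counter value 1683500, and the file name nsd.rrd are
hypothetical stand-ins consistent with the tables above, not values read
from a real server:

```shell
# Sketch of the three-update trick around a zone reload at time T.
# T, LAST and nsd.rrd are placeholders; substitute the real reload
# timestamp and the counter value read just before the reload.
T=1173266398
LAST=1683500

# The three update lines to feed rrdtool, in order:
echo "$((T - 2)):$LAST"   # T-2: last known counter value before the reload
echo "$((T - 1)):U"       # T-1: unknown, so no rate spans the reset
echo "$T:0"               # T:   the counter restarts at zero

# i.e. roughly:
#   rrdtool update nsd.rrd 1173266396:1683500
#   rrdtool update nsd.rrd 1173266397:U
#   rrdtool update nsd.rrd 1173266398:0
```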
You will have two seconds (T-2 .. T) per two hours unknown. Unless you
have a very rigid xff setting, you won't see those unknowns. The rate
during the other 298 seconds of this interval will be your final rate.
In addition, you lose any counter increase between your last update (T-2)
and the zone update itself (T).
#1: try using microsecond precision, reducing the unknown time to much
less than two seconds.
#2: compute the differences yourself and use another type of data source,
for instance ABSOLUTE. You can completely eliminate unknowns occurring
due to this counter wrap.
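Suggestion #2 can be sketched like this. The function name and the sample
values are taken from the tables above; how the counter is actually read
from NSD, and feeding the result to an ABSOLUTE data source, are
assumptions. Note this still loses any increase between the last
pre-reload sample and the reload itself:

```shell
# Sketch: compute the per-interval difference yourself, treating a
# counter that went down as a restart from zero (the asker's option 2).
clamped_diff() {
    prev=$1
    cur=$2
    if [ "$cur" -ge "$prev" ]; then
        echo $((cur - prev))   # normal increase
    else
        echo "$cur"            # counter was reset: count from zero again
    fi
}

# Values taken from the tables above:
clamped_diff 1527837 1580265   # normal step: prints 52428
clamped_diff 1634669 971       # across the zone reload: prints 971

# The result would then be fed to an ABSOLUTE data source, roughly:
#   rrdtool update nsd.rrd N:$(clamped_diff "$prev" "$cur")
```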
You won't be able to remove all uncertainty, unless you know how to combine
getting the statistics which are valid at the precise moment of updating the