[rrd-users] trying to understand the relationship between source data, what's in rrd and what gets plotted
Alex van den Bogaerdt
alex at ergens.op.het.net
Wed Jul 25 15:39:49 CEST 2007
On Wed, Jul 25, 2007 at 08:57:32AM -0400, Mark Seger wrote:
> so if I understand what you're suggesting I should pick a start time and
> step size such that my data will align accordingly, right? Since I have
> samples at 00:01:06, 01:12, etc that would mean I should pick a time
> that lands on a minute boundary and a step of 2 because 00:01:02, 01:04,
> 1:06, etc will still hit all my timestamps. 1 sec would work too but
> that would be overkill. I don't think 3 or 6 would do it because they
> would not all align. 00:01:06 would, but you'd never see 01:16.
Whatever you are going to do: unless you make sure everything aligns,
RRDtool will do this for you (normalization).
And unless you make sure your RRA is able to hold all data in the
resolution you feed it, RRDtool will do this for you (consolidation).
1: data input at arbitrary times.
2: compute a rate from the data you input
- COUNTER: (current_counter_value - previous_counter_value)
divided by (current_time - previous_time). If the rate
would be negative, the counter is assumed to have wrapped
and RRDtool will compensate for this.
- DERIVE: as counter, except that negative rates are allowed and
thus are not compensated for.
- ABSOLUTE: as counter, except that previous_counter_value is
assumed to be zero, which means you have reset the counter
when you last read it.
- GAUGE: the input is already a rate.
3: normalize the data. Rates are made to fit intervals of "n * step"
seconds since the unix epoch (1970-01-01 00:00:00 +0000). The
resulting rate is now a Primary Data Point (PDP). Each PDP
describes an interval which ends at "n*stepsize" and starts
"1*stepsize" before that.
4: consolidate the data. RRAs contain rows of Consolidated Data
Points (CDPs). Each RRA row needs one or more of these CDPs and
each CDP needs one or more PDPs. Each CDP describes an interval
which ends at "n*stepsize*steps_per_CDP" and starts
"stepsize*steps_per_CDP" before that.
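The rate computation in step 2 can be sketched in a few lines of Python. This is only an illustration of the rules described above, not RRDtool's actual code; the 32-bit wrap value is an assumption for the demo (real RRDtool also handles 64-bit counters).

```python
# Rough sketch of step 2, mimicking the DS-type rules described above
# (not RRDtool's actual code).

WRAP32 = 2 ** 32  # assumed 32-bit counter wrap point, for the demo

def to_rate(ds_type, prev_value, cur_value, prev_time, cur_time):
    dt = cur_time - prev_time
    if ds_type == "GAUGE":
        return cur_value                  # input is already a rate
    if ds_type == "ABSOLUTE":
        return cur_value / dt             # previous value assumed zero
    delta = cur_value - prev_value
    if ds_type == "COUNTER" and delta < 0:
        delta += WRAP32                   # assume the counter wrapped
    return delta / dt                     # COUNTER or DERIVE

print(to_rate("COUNTER", 1000, 1300, 100, 110))      # 30.0
print(to_rate("COUNTER", 2**32 - 50, 50, 100, 110))  # wrapped: 10.0
print(to_rate("DERIVE", 1300, 1000, 100, 110))       # -30.0
```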
> so let's say I have 3 samples of 100, 1000 and 100 starting at
> 00:01:06. since these are absolute numbers for 10 second intervals,
> they really represent rates of 10/sec, 100/sec and 10/sec. am I then
> correct in assuming that rrd will then normalize it into 15 slots with
> 20/slot for the first 5, 200 for the next 5 and then 20 for the next 5,
> all aligned to 00:01:00. so starting at 01:00 the data would look like
> 20 20 20 20 20 200 200 200 200 200 200 20 20 20 20 20. If I then wanted
> to see what the rate is at 01:06, rrd would see a value in that 2 second
> slot of 20 and treat it as a rate of 10/sec. the same would hold for
> any of the 200s which would be reported as 100/sec for the slots they
> occur in, right?
I couldn't quite follow this, but my guess is "no".
Why don't you experiment a bit?
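To see what step 3 does with the numbers from the quoted example, here is a rough Python sketch (again an illustration of the description above, not RRDtool's code). The key point: a PDP stores a rate, time-weighted over the part of the PDP interval each update covers, never an "amount per slot".

```python
# Sketch of step 3 (normalization). Each update interval from step 2
# carries one rate; every step-aligned PDP gets the time-weighted
# average of the rates covering it.

def normalize(intervals, step):
    """intervals: (start, end, rate) triples; returns {pdp_end: rate}.
    A PDP covers (end - step, end]; partial coverage is weighted by time."""
    pdps = {}
    for t0, t1, r in intervals:
        end = (t0 // step) * step + step   # first step boundary after t0
        while end - step < t1:
            lo, hi = max(t0, end - step), min(t1, end)
            pdps[end] = pdps.get(end, 0.0) + r * (hi - lo) / step
            end += step
    return pdps

# The quoted example: ABSOLUTE samples of 100, 1000, 100 every 10 s from
# 00:01:06 give rates 10, 100, 10 per second (times in seconds past 00:00).
pdps = normalize([(66, 76, 10.0), (76, 86, 100.0), (86, 96, 10.0)], 2)
print(pdps[68], pdps[80], pdps[96])   # 10.0 100.0 10.0 -- rates, not 20/200
```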
One input rate, 90000 seconds (just over a day), and two RRAs.
Set heartbeat to 90000 (thus not interfering with your input) and
do your three updates at, for example, 01:00:06, 01:20:00 and 23:56:12.
starttime=$(date -d 23:59:59\ yesterday +%s)
time1=$(date -d 01:00:06 +%s)
time2=$(date -d 01:20:00 +%s)
time3=$(date -d 23:56:12 +%s)

# example values; the DS and RRA lines are one reasonable setup
step=1
rate1=10
rate2=100
rate3=10

rrdtool create test.rrd \
    --start $starttime \
    --step $step \
    DS:rate:GAUGE:90000:U:U \
    RRA:AVERAGE:0.5:1:90000 \
    RRA:AVERAGE:0.5:10:9000

rrdtool update test.rrd $time1:$rate1 $time2:$rate2
rrdtool update test.rrd $time3:$rate3

rrdtool fetch test.rrd AVERAGE --end $time1 --start end-10sec
rrdtool dump test.rrd > test.xml
rrdtool fetch test.rrd AVERAGE --start midnight --end start+1d > test.output
For unix and derivatives this is cut'n'paste ready. I cannot make it
any easier for you. I didn't add MIN and MAX, because I think you still
need to figure out the basics. Have a look inside the created xml,
and notice how the two RRAs are different. Pay extra attention to
timestamps close to $time1, $time2 and $time3. But don't forget to
look at (say) 12:00:00.
Modify the example, for instance by changing "step" from 1 into 2.
Try similar experiments using MIN, MAX, and LAST.
Try the original experiment with a heartbeat of only 3600, and look
at times 01:10 and 01:30 (or 01:10 and 23:50).
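For the MIN, MAX and LAST experiments, the consolidation of step 4 boils down to folding steps_per_CDP primary data points into one consolidated data point with the RRA's function. A minimal sketch (illustrative Python, not RRDtool's code):

```python
# Sketch of step 4 (consolidation): groups of steps_per_cdp PDPs
# become one CDP each, per the RRA's consolidation function.

def consolidate(pdps, steps_per_cdp, cf):
    """pdps: list of rates in time order; returns one CDP per full group."""
    funcs = {
        "AVERAGE": lambda g: sum(g) / len(g),
        "MIN": min,
        "MAX": max,
        "LAST": lambda g: g[-1],
    }
    groups = [pdps[i:i + steps_per_cdp]
              for i in range(0, len(pdps) - steps_per_cdp + 1, steps_per_cdp)]
    return [funcs[cf](g) for g in groups]

rates = [10.0] * 5 + [100.0] * 5          # ten one-second PDPs
print(consolidate(rates, 10, "AVERAGE"))  # [55.0]
print(consolidate(rates, 10, "MAX"))      # [100.0]
print(consolidate(rates, 10, "LAST"))     # [100.0]
```

Notice how AVERAGE hides the burst that MAX preserves; that is exactly the difference you should see between RRAs in the dump.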
> btw - just to toss in an interesting wrinkle did you know if you sample
> network statistics once a second you will periodically get an invalid
> value because of the frequency at which linux updates its network
> counters? the only way I'm able to get accurate network statistics near
> that rate is to sample them every 0.9765 seconds. I can go into more
> detail if anyone really cares. 8-)
Please do so, but in a separate thread. I guess it has something
to do with numbers like 64000 and 65535, although that would result
in 0.9766, not 0.9765.
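Checking that hunch: either plausible ratio does indeed round to 0.9766 rather than the reported 0.9765.

```python
# Both candidate tick ratios round to 0.9766 at four decimals.
print(round(64000 / 65535, 4))  # 0.9766
print(round(64000 / 65536, 4))  # 0.9766
```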
Alex van den Bogaerdt