<HTML><HEAD></HEAD>
<BODY>
<P>I've read the documentation and Alex's tutorial trying to understand exactly how the AVERAGE, MIN, and MAX consolidation functions work, but I'd like to know more. Let's say that I have created the following RRD to track uptime of routers: </P>
<P> '--step=1', # one-second step <BR> 'DS:snmp_up:GAUGE:3600:0:1',<BR> 'DS:icmp_up:GAUGE:3600:0:1',<BR> 'RRA:MIN:0.9999:10:10', # 10 steps => 10 secs. per consolidated data point, 10 CDPs => 100 secs. <BR> 'RRA:MIN:0.9999:30:240', # 30 secs. per CDP, 240 half-minute points => 120 mins. of data<BR> 'RRA:MIN:0.9999:300:432', # 300 secs. per CDP (5 mins.), 432/12 = 36 hours<BR> 'RRA:MIN:0.9999:1800:336', # 30 mins. per CDP, 336/2 = 168 hours = 1 week<BR> 'RRA:MIN:0.9999:7200:432', # 120 mins. per CDP, 432/12 = 36 days, about one month of data<BR> 'RRA:MIN:0.9999:86400:425', # 1 day per CDP, 425 days, about one year of data</P>
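<P>(In case the exact call matters: a minimal Perl sketch of how I pass this definition to rrdtool via the standard RRDs module; the filename is just a placeholder.)</P>
<PRE>
use RRDs;

# Create the RRD with a one-second step and the RRAs listed above.
RRDs::create('router_uptime.rrd',
    '--step=1',
    'DS:snmp_up:GAUGE:3600:0:1',
    'DS:icmp_up:GAUGE:3600:0:1',
    'RRA:MIN:0.9999:10:10',
    'RRA:MIN:0.9999:30:240',
    'RRA:MIN:0.9999:300:432',
    'RRA:MIN:0.9999:1800:336',
    'RRA:MIN:0.9999:7200:432',
    'RRA:MIN:0.9999:86400:425');

# RRDs calls report problems through RRDs::error rather than dying.
my $err = RRDs::error;
die "RRD create failed: $err\n" if $err;
</PRE>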
<P>All the input PDPs (i.e. the values entered) are either 1 or 0 (up or down). As defined above, any update value of 0 (down) will cause the active row in each RRA to show 0, right? By changing the consolidation function to AVERAGE, it would instead show the average of all the 1's and 0's entered during that row's active period, is that so? (Strictly speaking, the average of all the interpolated primary data points in the row, not just the values entered, but with an xff of 0.9999 they should be nearly the same.) Am I correct in concluding that these RRA definitions should be changed to AVERAGE to give a better picture of uptime with these settings? </P>
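<P>(To make sure I'm picturing the consolidation right, here is a toy Perl sketch, not actual RRDtool internals, of what one 10-second CDP would hold for a made-up window of samples under each function.)</P>
<PRE>
use List::Util qw(min sum);

# One CDP's worth of 1-second PDPs: the router answered 8 of 10 polls.
my @pdps = (1, 1, 1, 0, 1, 1, 0, 1, 1, 1);

my $min_cdp = min(@pdps);                  # 0   -> "was it ever down in this interval?"
my $avg_cdp = sum(@pdps) / scalar(@pdps);  # 0.8 -> fraction of the interval it was up

printf "MIN consolidates to %g, AVERAGE consolidates to %g\n", $min_cdp, $avg_cdp;
</PRE>
<P>With MIN, a single 0 anywhere in the interval drives the whole consolidated point to 0, while AVERAGE keeps the 80% uptime figure, which is what makes me think AVERAGE is the better choice here.</P>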
<P>Jim Eshelman</P>
</BODY></HTML>