[rrd-users] peak from multiple databases over time

Alex van den Bogaerdt alex at vandenbogaerdt.nl
Mon May 6 12:23:02 CEST 2013


> DEF for each database is:
> DEF:l${NR}=${DATAFILE}.rrd:ds1:AVERAGE where NR is 0..19

Are you using only digits here?
Try changing the variable name to ds1${NR} throughout your script. If you 
suddenly get an error message, you know that you were, for instance, adding 
the literal amount 119 instead of the 20th DS.
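
The reason this matters: in RPN, any bare number is silently accepted as a 
constant, so a digit '1' slipping in where the letter 'l' was intended does 
not produce an error, it just skews the sum. A made-up illustration (the 
vnames are hypothetical):

  # intended: add the variables l18 and l19
  CDEF:ctotal=l18,l19,+
  # typo: digit '1' instead of lowercase 'L'. 119 is a valid RPN constant,
  # so rrdtool adds the number 119 instead of the data source - silently.
  CDEF:ctotal=l18,119,+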

> My CDEF is:
> cdef:ctotal=${RPNCALC} where RPNCALC is each ds added (0,l0,+,l1+,l2...)

This could be a problem area. You should have shown us the complete command, 
so that we can check that you are actually adding up 20 DSes, not 19.

Make sure to draw a line for ctotal; it should coincide with the top of the 
topmost stacked area. If it does not, there's your problem.
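
Something along these lines (untested; file names and the colour are 
placeholders, the DS name ds1 is taken from your DEF) builds all 20 DEFs and 
the RPN sum in a loop, so the operand count cannot go wrong, and draws the 
line on top of the stack:

  DEFS=""
  RPNCALC="l0"
  for NR in $(seq 0 19); do
    DEFS="$DEFS DEF:l${NR}=file${NR}.rrd:ds1:AVERAGE"
    # 20 operands need exactly 19 '+' operators
    [ "$NR" -gt 0 ] && RPNCALC="${RPNCALC},l${NR},+"
  done
  rrdtool graph total.png \
    $DEFS \
    CDEF:ctotal=$RPNCALC \
    LINE2:ctotal#FF0000:"sum of all 20" \
    VDEF:vtotal=ctotal,MAXIMUM \
    GPRINT:vtotal:"max of sum\: %.2lf %s"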

> And then I vdef' it:
> VDEF:vtotal=ctotal,MAXIMUM
>
> On this point the graph and the comment line differ: I have a graph 
> showing nearly 6 TB at its highest peak, but the GPRINT gives me only 
> 5.2 TB.

You know when, approximately, this peak happens. Create a graph showing only 
a small amount of time, around that moment. Does the problem still show up?
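
If you do not know the exact moment, a VDEF MAXIMUM carries the timestamp of 
the peak along with its value, and GPRINT can show it. A minimal sketch, with 
only two of the twenty files and made-up names:

  rrdtool graph peak.png \
    DEF:l0=file0.rrd:ds1:AVERAGE \
    DEF:l1=file1.rrd:ds1:AVERAGE \
    CDEF:ctotal=l0,l1,+ \
    LINE1:ctotal#FF0000 \
    VDEF:vtotal=ctotal,MAXIMUM \
    GPRINT:vtotal:"peak of %.2lf %s" \
    GPRINT:vtotal:"at %Y-%m-%d %H\:%M":strftime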

If you graph only one timeslot, for instance '--start end-300' with a step 
size of 300, and set end to some well-defined moment in time (not: now), you 
can GPRINT every average separately and also print vtotal, which should then 
match the sum exactly.
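
A minimal sketch of that check, again with only two of the twenty files and 
made-up names; 1367830800 stands for whatever well-defined end time you pick:

  # one single 300-second slot, ending at a fixed moment (not 'now')
  rrdtool graph check.png --end 1367830800 --start end-300 \
    DEF:l0=file0.rrd:ds1:AVERAGE \
    DEF:l1=file1.rrd:ds1:AVERAGE \
    CDEF:ctotal=l0,l1,+ \
    VDEF:v0=l0,AVERAGE \
    VDEF:v1=l1,AVERAGE \
    VDEF:vtotal=ctotal,MAXIMUM \
    GPRINT:v0:"l0    %.6lf" \
    GPRINT:v1:"l1    %.6lf" \
    GPRINT:vtotal:"total %.6lf"

With a single slot there is nothing to consolidate, so the printed l0 and l1 
must add up to exactly the printed total.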

Do mention the version of RRDtool you are using.  RRDtool had its fair share 
of off-by-one errors, which could result in computing from time range 't to 
t+300' but showing time range 't-300 to t' or vice versa.
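
Independent of the graph code, rrdtool fetch shows the exact timestamps and 
stored AVERAGE values around the peak, which makes such a shift easy to spot 
(file name and timestamp are placeholders):

  T=1367830800                      # approximate moment of the peak
  rrdtool fetch file0.rrd AVERAGE --start $((T-900)) --end $((T+900))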



