[rrd-users] RRDs Coding Question - Dynamic Multiple RRD Data Sources

Wesley Wyche wesley.wyche at verizon.com
Wed Sep 26 00:34:58 CEST 2012


I gave up trying to craft it using the RRDs Perl module.  It wouldn't accept
a single string containing multiple DEFs.

I am now building a command string in a Perl script and feeding it to a
system call, based on a dynamic list of the RRDs that need to go into the
graph.
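
The generation side, stripped down, looks roughly like this (node count,
file paths and the option list are simplified here; the real script also
appends the AREA/STACK/GPRINT arguments in the same way):

    use strict;
    use warnings;

    my $rrdtool = '/tools/bin/rrdtool';
    my $rrd_dir = '/system';
    my @nodes   = (0 .. 3);                # one RRD per collector node
    my @metrics = qw(usr nic sys idl);     # DS names are cpuusr, cpunic, ...

    my @args = ('graph', "$rrd_dir/temp-cpu-week.png",
                '-s', '-1week', '-w', '800', '-h', '275', '-a', 'PNG',
                '--title', 'cpu usage - last week');

    for my $m (@metrics) {
        my @vnames;
        for my $n (@nodes) {
            # e.g. DEF:usrdat0=/system/cpu0.rrd:cpuusr:AVERAGE
            push @args,   "DEF:${m}dat$n=$rrd_dir/cpu$n.rrd:cpu$m:AVERAGE";
            push @vnames, "${m}dat$n";
        }
        # Chain the per-node values with '+' in RPN:
        # usrdat0,usrdat1,+,usrdat2,+,usrdat3,+
        my $rpn = shift @vnames;
        $rpn .= ",$_,+" for @vnames;
        push @args, "CDEF:${m}avg=$rpn";
    }

    # Naive quoting for arguments that contain spaces, then shell out.
    my $cmd = join ' ', map { /\s/ ? qq{"$_"} : $_ } $rrdtool, @args;
    system($cmd) == 0
        or warn "rrdtool graph exited with status $?\n";

Passing the list straight to system(), i.e. system($rrdtool, @args), would
sidestep the shell quoting entirely, but at the moment I am joining
everything into one string as shown.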

However, I don't think I have the RPN correct for the CDEF.  The original
CDEFs for the single-RRD case were:

           "CDEF:usravg=usrdat,1,1,*,*",
           "CDEF:nicavg=nicdat,1,1,*,*",
           "CDEF:sysavg=sysdat,1,1,*,*",
           "CDEF:idlavg=idldat,1,1,*,*",

I don't understand why I would waste cycles multiplying by 1 twice, but that
is the code I originally found in an example script long ago.
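
Tracing that RPN by hand, the two multiplications look like a no-op
(presumably the example author just left a scaling hook in place):

           usrdat,1,1,*,*
        -> usrdat,1            (1,1,* leaves 1 on the stack)
        -> usrdat              (usrdat,1,* leaves usrdat)

so "CDEF:usravg=usrdat,1,1,*,*" should be equivalent to plain
"CDEF:usravg=usrdat".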

But, anyhoo, here is the updated, dynamically generated command passed to
the command-line rrdtool via a system call:

/tools/bin/rrdtool graph /system/temp-cpu-week.png "-i" "-b" "1024" "-s"
"-1week" "-w" 800 "-h" 275 "-a" "PNG" "-n" "DEFAULT:0:Arial" "-u 1"  "-l 0" 
"-r"  "--title"  "cpu usage - last week (Tue 2012-09-25 21:54)"
DEF:usrdat0=/system/cpu0.rrd:cpuusr:AVERAGE
DEF:usrdat1=/system/cpu1.rrd:cpuusr:AVERAGE
DEF:usrdat2=/system/cpu2.rrd:cpuusr:AVERAGE
DEF:usrdat3=/system/cpu3.rrd:cpuusr:AVERAGE 
DEF:nicdat0=/system/cpu0.rrd:cpunic:AVERAGE
DEF:nicdat1=/system/cpu1.rrd:cpunic:AVERAGE
DEF:nicdat2=/system/cpu2.rrd:cpunic:AVERAGE
DEF:nicdat3=/system/cpu3.rrd:cpunic:AVERAGE 
DEF:sysdat0=/system/cpu0.rrd:cpusys:AVERAGE
DEF:sysdat1=/system/cpu1.rrd:cpusys:AVERAGE
DEF:sysdat2=/system/cpu2.rrd:cpusys:AVERAGE
DEF:sysdat3=/system/cpu3.rrd:cpusys:AVERAGE 
DEF:idldat0=/system/cpu0.rrd:cpuidl:AVERAGE
DEF:idldat1=/system/cpu1.rrd:cpuidl:AVERAGE
DEF:idldat2=/system/cpu2.rrd:cpuidl:AVERAGE
DEF:idldat3=/system/cpu3.rrd:cpuidl:AVERAGE
"CDEF:usravg=usrdat0,1,1,*,*,usrdat1,1,1,*,*,+,usrdat2,1,1,*,*,+,usrdat3,1,1,*,*,+"
"CDEF:nicavg=nicdat0,1,1,*,*,nicdat1,1,1,*,*,+,nicdat2,1,1,*,*,+,nicdat3,1,1,*,*,+"
"CDEF:sysavg=sysdat0,1,1,*,*,sysdat1,1,1,*,*,+,sysdat2,1,1,*,*,+,sysdat3,1,1,*,*,+"
"CDEF:idlavg=idldat0,1,1,*,*,idldat1,1,1,*,*,+,idldat2,1,1,*,*,+,idldat3,1,1,*,*,+"
"AREA:sysavg#ff0000:system\g" "GPRINT:sysavg:MIN: (min\:%7.2lf,\g"
"GPRINT:sysavg:AVERAGE: avg\:%7.2lf,\g" "GPRINT:sysavg:MAX: max\:%7.2lf)\n"
"STACK:usravg#0000ff:user\g" "GPRINT:usravg:MIN:   (min\:%7.2lf,\g"
"GPRINT:usravg:AVERAGE: avg\:%7.2lf,\g" "GPRINT:usravg:MAX: max\:%7.2lf)\n"
"STACK:nicavg#ffff00:nice\g" "GPRINT:nicavg:MIN:   (min\:%7.2lf,\g"
"GPRINT:nicavg:AVERAGE: avg\:%7.2lf,\g" "GPRINT:nicavg:MAX: max\:%7.2lf)\n"
"STACK:idlavg#00ff00:idle\g" "GPRINT:idlavg:MIN:   (min\:%7.2lf,\g"
"GPRINT:idlavg:AVERAGE: avg\:%7.2lf,\g" "GPRINT:idlavg:MAX: max\:%7.2lf)\n"
"HRULE:0#000000"

I've tried it with simple addition (+) and with the double multiplication by
1. 

The graph is generated, but I get a single line in the middle.

Perhaps the problem lies with the initial retrieval in the DEF.
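
One thing I plan to try, to check that, is pulling the same window the
graph uses and counting how many of the returned samples are actually
defined, something like this (same illustrative paths as above):

    use strict;
    use warnings;
    use RRDs;

    # Fetch the week of consolidated data that the DEF would see.
    my ($start, $step, $names, $data) =
        RRDs::fetch('/system/cpu0.rrd', 'AVERAGE', '-s', '-1week');
    die 'RRDs::fetch failed: ' . RRDs::error() . "\n" unless $data;

    my ($known, $total) = (0, 0);
    for my $row (@$data) {
        for my $val (@$row) {
            $total++;
            $known++ if defined $val;    # undef means UNKNOWN in the RRD
        }
    }
    printf "cpu0.rrd: %d of %d samples defined (step %ds)\n",
           $known, $total, $step;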

Keep in mind that these 4 RRDs each hold roughly 25% of the total data.  To
further illustrate: I have a front-end load-balancer web server.  It routes
each request to one of four backend collectors, which accept the incoming
data and run an rrd update against their own local RRD (hence cpu0.rrd,
cpu1.rrd, etc.).

The LB routes each incoming request to one of the collectors, but coverage
is not always a perfect 25% apiece, since which node is next in rotation to
receive a request is effectively random.  For argument's sake, though, we'll
assume there will always be 4 RRDs.

For a simple example, say an inbound update request arrives every 60
seconds.  The LB routes the 6:01 request to node0, and node0 updates its
cpu0.rrd, inserting that data into the 6:01 slot.  The 6:02 request comes
in, the LB routes it to node3, and node3 inserts it into cpu3.rrd in the
6:02 slot.  The 6:03 request comes in, the LB routes it to node1, and node1
inserts it into cpu1.rrd.  And so on.
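
To make that concrete, here is roughly what I understand each collector to
be doing on its own host via the RRDs module (the timestamps, values, and
the DS order cpuusr:cpunic:cpusys:cpuidl are all made up for illustration):

    use strict;
    use warnings;
    use RRDs;

    # 6:01 -- the LB picked node0, so only cpu0.rrd gets this sample
    RRDs::update('/system/cpu0.rrd',
                 '--template', 'cpuusr:cpunic:cpusys:cpuidl',
                 '1348552860:12:0:3:85');

    # 6:02 -- the LB picked node3, so only cpu3.rrd gets this sample
    RRDs::update('/system/cpu3.rrd',
                 '--template', 'cpuusr:cpunic:cpusys:cpuidl',
                 '1348552920:10:0:4:86');

    # 6:03 -- node1, and so on: each time slot ends up in exactly one
    # of the four RRDs, and the other three get nothing for that slot.
    RRDs::update('/system/cpu1.rrd',
                 '--template', 'cpuusr:cpunic:cpusys:cpuidl',
                 '1348552980:11:1:3:85');

    if (my $err = RRDs::error()) {
        warn "RRDs::update failed: $err\n";
    }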

I had originally planned to use NFS to share the data tree among the various
collectors, but that failed.  The reason we have separate RRDs is that the
collectors are in completely different geographical locations: one is in
Sacramento, CA, USA; another is in Omaha, Nebraska, USA.  NFS over that wide
an area just wasn't feasible given the high volume of writes to the shared
NFS filesystem (over 50 million writes per day).  I needed local disk on
each collector node to survive the I/O thrash.

I just need to extract the average from all 4 RRDs and graph the data AS IF
there had been only a single RRD all along.
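
To spell out that intent in CDEF terms, what I am picturing is something
along the lines of the line below for the usr metric (just a sketch of the
idea, not something I have verified; the other three metrics would follow
the same pattern).  Since only one of the four RRDs actually has a value in
any given time slot, the UN/IF pairs treat the unknown samples from the
other three as 0 so they don't blank out the result:

"CDEF:usravg=usrdat0,UN,0,usrdat0,IF,usrdat1,UN,0,usrdat1,IF,+,usrdat2,UN,0,usrdat2,IF,+,usrdat3,UN,0,usrdat3,IF,+"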






