[rrd-users] Re: increase performance

Mike Wright Mike at auckland-services.freeserve.co.uk
Wed Aug 22 22:40:10 MEST 2001


At 14:23 22/08/01 +0200, Alex van den Bogaerdt wrote:

>While the monitoring box may get a performance boost, I'm wondering what
>happens to the monitored host.  It too will have to answer fewer
>calls, but at the same time it needs to deliver more data in a
>single request.

In my experience, any modern router or switch will answer many thousands of
snmpget requests without any noticeable impact on performance. It's not
something to worry about.

Putting multiple SNMP gets in a single request is definitely the most
efficient way to gather the data. For example, if you know your box has 10
interfaces, then you request ifInOctets and ifOutOctets for all 10
interfaces in the same request. The thing to be careful of is that there is
a limit to how many SNMP variables will fit in a single response.
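
As a minimal sketch of that idea with the SNMP_util module recommended
further down (the host and community string are placeholders):

use strict;
use SNMP_util;

my $host = 'public@10.1.1.1';   # placeholder "community@host" target

# Build one OID list covering both counters on all 10 interfaces
my @oids;
for my $if (1 .. 10) {
  push @oids, "ifInOctets.$if", "ifOutOctets.$if";
}

# All 20 counters come back in a single request/response round trip
my @values = snmpget($host, @oids);
print join(",", @values), "\n";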

The main limiting factor with SNMP (v1 anyway) is that it uses UDP, so a
single-threaded process has to wait for the reply to each GET request
before it can make the next one. On a moderate- to high-latency network
this adds up very quickly: at a 100ms round trip, 1000 serial requests
take 100 seconds.
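
One way around this, sketched below on the assumption that your collector
can afford a process per host, is to fork children so the waits overlap
(the targets are placeholders):

use strict;
use SNMP_util;

my @hosts = ( 'public@10.1.1.1', 'public@10.1.1.2' );

my @pids;
for my $host (@hosts) {
  my $pid = fork();
  die "fork failed: $!\n" unless defined $pid;
  if ($pid == 0) {
    # Child: one blocking GET, then exit
    my ($descr) = snmpget($host, "sysDescr.0");
    print "$host: $descr\n";
    exit 0;
  }
  push @pids, $pid;
}
# Parent: wait for all the children to finish
waitpid($_, 0) for @pids;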

As for snmpwalk vs snmpget, they create exactly the same amount of traffic.
All snmpwalk does is issue repeated snmpgetnext requests until it reaches
the end of the subtree. So doing 10 snmpgets to retrieve ifInOctets for 10
interfaces takes the same number of round trips, and the same amount of
time, as doing snmpwalk ifInOctets. (Using SNMPv1 anyway.)
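
To see what the walk gives you, a quick sketch with SNMP_util (placeholder
target; note that SNMP_util returns each row as an "instance:value"
string):

use strict;
use SNMP_util;

my $host = 'public@10.1.1.1';

# snmpwalk issues one getnext round trip per row until it
# leaves the ifInOctets subtree
for my $row (snmpwalk($host, "ifInOctets")) {
  my ($ifindex, $octets) = split /:/, $row, 2;
  print "ifInOctets.$ifindex = $octets\n";
}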

>I find that loading (/starting/whatever) perl uses many resources.
>Having a daemon that is started once and keeps running is much better
>(IMHO) than starting it every 5 minutes from cron.  Alternately you
>can fetch data without using perl.

Definitely true. The script Peter is using runs a separate snmpget shell
command in backticks for each value. That has a massive overhead and is
probably the reason it is so slow and the load average is so high. Starting
one new perl process every 5 minutes is negligible; forking a shell plus an
snmpget process for every single variable is not.

My advice is to get the "SNMP_util" library for perl. It's fast, really
simple to use and well up to the job; I use it in my collector to gather
data for about 5000 interfaces. My hardware is nothing special, a
Pentium III/500MHz with 256MB RAM and a 10GB IDE disk, and the load average
is about 3.

Also, you don't mention how you are getting the data into RRDtool. Do you
run `rrdtool update` in backticks too? That has an even bigger overhead.
You should be using the "RRDs" perl module.
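
For completeness, the RRD files can be created from perl too. A minimal
sketch, assuming a 5-minute step and three GAUGE data sources (the
filename and DS names are just illustrative, pick your own):

use strict;
use RRDs;

RRDs::create("10.1.1.1.rrd",
    "--step", "300",                 # one sample every 5 minutes
    "DS:idle:GAUGE:600:0:100",       # CPU percentages, 0-100
    "DS:user:GAUGE:600:0:100",
    "DS:system:GAUGE:600:0:100",
    "RRA:AVERAGE:0.5:1:600");        # keep ~2 days of 5-min averages
my $err = RRDs::error;
die "create failed: $err\n" if $err;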

Install the SNMP_util module and try a bit of code like this:

#!/usr/bin/perl -w
use strict;
use RRDs;
use SNMP_util;

# List of targets to query, in SNMP_util's "community@host" format
my @ipaddresses = ( 'xpasswordx@10.1.1.1' , 'xpasswordx@20.2.2.2' );

for my $ipaddress (@ipaddresses) {
  # One snmpget request retrieves all three values in a single round trip
  my ($CpuIdle,$CpuUser,$CpuSystem) = snmpget($ipaddress,
     "enterprises.ucdavis.systemStats.ssCpuIdle.0",
     "enterprises.ucdavis.systemStats.ssCpuUser.0",
     "enterprises.ucdavis.systemStats.ssCpuSystem.0");

  print "CPU: $CpuIdle,$CpuUser,$CpuSystem\n";

  # Keep the community string out of the filename
  my ($host) = $ipaddress =~ /\@(.*)/;
  RRDs::update("$host.rrd","N:$CpuIdle:$CpuUser:$CpuSystem");
  my $err = RRDs::error;
  print "update failed for $host: $err\n" if $err;
}

A little script like that should be able to query hundreds of machines per
minute with very little load on the collector system. It also makes just
one snmpget request per host to retrieve all three variables. (I think I
have the OIDs correct, but double-check them.)


Good luck!

Mike Wright
Network Engineer, Reuters.

