[rrd-users] Disk I/O

Dave Plonka plonka at doit.wisc.edu
Wed Mar 12 21:47:55 CET 2008


Hi Jeremy,

On Wed, Mar 12, 2008 at 01:06:47PM -0500, Jeremy wrote:
> 
> I'm doing a lot of graphing with RRDTOOL (using PNP for Nagios), updating
> over 10,000 RRD files every few minutes. Just over 6 GB of small .rrd files,
> about 400 writes/sec on average nonstop.

That's a pretty small number.  Consider your reads/sec as a potential
bottleneck, since at least one read is required to fetch each of the
blocks that rrdtool update modifies (unless it is already cached), and
depending on your OS and RRD tool version, the operating system's file
read-ahead can play a dramatic role.

> Starting to run into disk I/O
> issues, it's keeping up ok but the system is getting less and less
> responsive as the I/O stays pegged at near 100%.
>
> Before I throw hardware at the problem (Gigabyte ramdisks etc), I was
> wondering a few things:

You might be interested in this paper that describes a larger RRD-based
measurement system than yours and shows how RRD files are accessed
in detail:

   http://www.usenix.org/events/lisa07/tech/plonka.html

> 1) When you do an "rrdtool update" to add new data, does it have to rewrite
> the whole file, or does it just append to the file?

Neither; it selectively updates just the necessary hot blocks in the
fixed-size RRD file.  This involves reading the hot blocks (if not
already cached), then writing them back once modified.  This is shown
graphically in the context of a typical RRD file in the paper.
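To make the access pattern concrete, here is a tiny sketch of the
idea; the record layout and offsets are invented for illustration and
are not RRDtool's actual on-disk format:

   # Conceptual sketch: update one 8-byte slot inside a large fixed-size
   # file by reading and rewriting only the 4 KiB block that contains it.
   # This mimics the "hot block" access pattern, not RRDtool's real format.
   import os
   import struct

   BLOCK = 4096

   def update_slot(fd, slot_offset, value):
       block_start = (slot_offset // BLOCK) * BLOCK
       buf = bytearray(os.pread(fd, BLOCK, block_start))  # read one hot block
       struct.pack_into("<d", buf, slot_offset - block_start, value)
       os.pwrite(fd, bytes(buf), block_start)             # write it back in place

   fd = os.open("demo.dat", os.O_RDWR | os.O_CREAT, 0o644)
   os.ftruncate(fd, 1024 * 1024)       # a 1 MiB file; its size never changes
   update_slot(fd, 123456, 42.0)       # touches exactly one block
   os.close(fd)

The rest of the file is neither read nor written.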

> Since it rolls up the
> old data I guess it needs to re-write the entire file each time?

No, it essentially moves a pointer around an RRA, in a round-robin fashion,
hence the name.  This is shown graphically in the paper.
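In other words, each RRA behaves like a ring buffer.  A rough sketch
of the logic (illustrative only, not RRDtool's actual code):

   # An RRA as a ring buffer: a fixed number of rows and a pointer that
   # wraps around, so old rows are overwritten in place and the file
   # never grows or gets rewritten.
   class TinyRRA:
       def __init__(self, rows):
           self.values = [None] * rows   # fixed size, allocated once
           self.head = 0                 # the "current row" pointer

       def store(self, value):
           self.values[self.head] = value
           self.head = (self.head + 1) % len(self.values)  # round-robin

   rra = TinyRRA(rows=600)
   for sample in range(1000):   # more updates than rows: old rows are reused
       rra.store(sample)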

> Either way
> I guess this is not really anything that's configurable that we could try to
> optimize but just curious what's going on under the hood.

What is tunable for performance is the version of RRD tool you're
using (for instance, 1.2.25 has the patch mentioned in the paper),
the RRD file definitions themselves, and the operating system.
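The patch essentially uses posix_fadvise(2) to hint that RRD file
access is random, so the kernel's read-ahead doesn't drag whole files
into the cache on every update.  A minimal Python sketch of that idea,
purely for illustration (RRDtool applies the hint internally in C, so
this is not something you need to bolt on yourself; assumes Linux and
a Python with os.posix_fadvise):

   # Tell the kernel this file will be accessed randomly, suppressing
   # sequential read-ahead for the descriptor.  "example.rrd" is a
   # hypothetical file name.
   import os

   fd = os.open("example.rrd", os.O_RDWR)
   os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_RANDOM)
   # ... read/modify/write the hot blocks as usual ...
   os.close(fd)

The RRD file definitions matter because the DS and RRA arguments you
create a file with fix its size, and therefore how many blocks per
file can ever be hot.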

> 2) Is there any way to have rrdtool use a MySQL database to save all the
> data for the various files, instead of a ton of separate small files? Seems
> like that might cause less I/O overhead maybe if it was just doing a bunch
> of inserts but not rewriting all the individual files. I don't think this is
> really possible but I just want to make sure.

What I suggest you do is get enough memory in your system
so that the buffer cache can hold all the RRD file blocks that you
regularly access - i.e. the "hot" blocks.  When configured with
enough memory, this can reduce disk read I/O to near zero because
all the requisite pages are already in cache.  The paper shows the
sar-based measurements that would be of interest to you.
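As a back-of-the-envelope check, the memory needed is roughly
(hot blocks per file) x (block size) x (number of files).  The hot
block count below is an assumed figure just to show the arithmetic;
measure yours with fincore as described next:

   # Rough sizing of the page cache needed to keep every hot RRD block
   # resident.  hot_blocks_per_file is an assumption for illustration;
   # measure the real value with fincore.
   files = 10000
   hot_blocks_per_file = 8        # assumed; typically a small handful
   block_size = 4096              # bytes per page/block

   hot_bytes = files * hot_blocks_per_file * block_size
   print("approx. hot data: %.0f MiB" % (hot_bytes / 2.0**20))  # ~312 MiB

Even with a generous safety margin, that is well within reach of
commodity RAM, which is why adding memory is usually a better first
step than a ramdisk.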

You can use the fincore command to determine how many "hot" blocks
you have per RRD file.  Run it on the files immediately after running
the report(s) that read them:

   http://net.doit.wisc.edu/~plonka/fincore/
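If you can't run fincore on your platform, a rough Python
approximation of what it reports (pages of a file currently resident
in the page cache, via mincore(2)) looks like this; Linux-specific and
purely illustrative:

   # Report how many of a file's pages are resident in the page cache.
   # A rough stand-in for fincore, using mmap(2)/mincore(2) via ctypes;
   # Linux only.
   import ctypes, ctypes.util, mmap, os, sys

   libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
   libc.mmap.restype = ctypes.c_void_p
   libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                         ctypes.c_int, ctypes.c_int, ctypes.c_long]
   libc.munmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
   libc.mincore.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                            ctypes.POINTER(ctypes.c_ubyte)]

   def cached_pages(path):
       size = os.path.getsize(path)
       if size == 0:
           return 0, 0
       fd = os.open(path, os.O_RDONLY)
       try:
           addr = libc.mmap(None, size, mmap.PROT_READ, mmap.MAP_SHARED, fd, 0)
           if addr in (None, ctypes.c_void_p(-1).value):
               raise OSError(ctypes.get_errno(), "mmap failed")
           try:
               npages = (size + mmap.PAGESIZE - 1) // mmap.PAGESIZE
               vec = (ctypes.c_ubyte * npages)()
               if libc.mincore(addr, size, vec) != 0:
                   raise OSError(ctypes.get_errno(), "mincore failed")
               return sum(b & 1 for b in vec), npages
           finally:
               libc.munmap(addr, size)
       finally:
           os.close(fd)

   for path in sys.argv[1:]:
       resident, total = cached_pages(path)
       print("%s: %d of %d pages in cache" % (path, resident, total))

Run it (or fincore) against a handful of RRD files right after your
graphing run and you'll see how many blocks per file actually stay hot.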

Dave

-- 
plonka at doit.wisc.edu  http://net.doit.wisc.edu/~plonka/  Madison, WI