[rrd-users] Inquiring on how to ease disk io (linux).

Ronan Mullally ronan at iol.ie
Wed Jun 25 11:01:19 CEST 2008

Hi Raimund,

A few suggestions that I've tried on different systems in the past:

 - Upgrading your disk system is an obvious choice - more disks in
   RAID 1 (or 0 if you live dangerously) and a caching RAID controller
   will definitely help.

 - Tweaking the underlying filesystem to match your RAID layout can
   make a difference - I've seen big improvements using the stride and
   stripe-width options to mke2fs (though that was on a mail spool,
   not RRD files).  There's a worked example after this list.

 - Tweaking some of the kernel attributes in /proc/sys/vm changes when
   and how often data cached by the OS is flushed to disk (or to the
   cache in the disk controller).  I found this helped slightly on a
   filesystem with about 5,000 RRDs being hit every 5 minutes; some
   starting points are below.

 - Tweaking the disk scheduler parameters in /sys/block/<disk>/queue
   can improve things a bit, depending on your traffic profile (again,
   example below).
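
As an example of the filesystem tuning: on a hypothetical 4-disk
RAID 10 with 64 KiB chunks and a 4 KiB block size (your geometry
will differ, and /dev/md0 is just a placeholder), the numbers work
out as:

  # stride = chunk size / block size = 64 KiB / 4 KiB = 16
  # stripe-width = stride * data-bearing disks = 16 * 2 = 32
  mke2fs -j -b 4096 -E stride=16,stripe-width=32 /dev/md0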

I hope these are useful.  The last two can be tried on the fly; the
others require downtime.
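
For the /proc/sys/vm knobs, these are the ones I'd start with.  The
figures are only illustrative - how far you can push them depends on
how much dirty data you can afford to lose in a crash:

  # let more dirty pages accumulate before writeback kicks in
  echo 10 > /proc/sys/vm/dirty_background_ratio
  echo 40 > /proc/sys/vm/dirty_ratio
  # keep dirty data around longer so repeated writes to the same
  # RRD blocks coalesce (units are hundredths of a second, 6000 = 60s)
  echo 6000 > /proc/sys/vm/dirty_expire_centisecs
  echo 1500 > /proc/sys/vm/dirty_writeback_centisecs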

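For the scheduler, something along these lines (sda is just an
example device):

  # see which schedulers are available and which is active
  cat /sys/block/sda/queue/scheduler
  # deadline often behaves better than cfq for lots of small writes
  echo deadline > /sys/block/sda/queue/scheduler
  # a deeper queue gives the elevator more chances to merge requests
  echo 512 > /sys/block/sda/queue/nr_requests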

On Wed, 25 Jun 2008, Raimund Berger wrote:

> Hi
> I'm currently evaluating collectd on linux for monitoring (with an
> rrdtool backend, of course), and while the default 10s update interval
> is pretty cool, the server's disks start to rattle noticeably as the
> number of monitored systems grows.  Understandably so, I guess, since
> rrdtool isn't meant to provide the kind of (sequential disk) write
> caching that SQL databases usually sport.
> Right now I'm testing a workaround: stack-mounting a tmpfs over the
> real rrd database directory via aufs, which pleasantly resolves the
> disk io issue.  It has drawbacks though:
> * as data is written only to the tmpfs, everything written since the
>   last mount/copy to disk will be lost on a crash or power outage
> * the stacking filesystems (aufs, unionfs) apparently provide no way
>   to automatically sync data back to disk at specified intervals.
> That's at least how I see it now, so my solution isn't yet completely
> satisfactory.
> Hence the question: has anybody here run into a similar issue and
> perhaps tackled it in a better way?  If so, I'd appreciate some hints.
> Thanks, R.
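
On the second drawback you mention: nothing stops you from syncing
back yourself from cron.  A minimal sketch, with made-up paths
(adjust the tmpfs and on-disk locations to your own layout):

  # /etc/cron.d/rrd-sync: copy the tmpfs layer back to disk every
  # 15 minutes, so a crash costs you at most that window of data
  */15 * * * * root rsync -a /var/lib/rrd.tmpfs/ /var/lib/rrd.disk/

After a reboot you'd copy the on-disk data back into the tmpfs
before collectd starts.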
