[rrd-developers] rrdupdates per second

Scott Brumbaugh scottb at prolexic.com
Fri Jun 20 22:53:46 CEST 2008

Hi Ian,

On Fri, Jun 20, 2008 at 09:43:08AM +1000, Ian Holsman (Lists) wrote:
> Do you have any ideas on how to make the data structure on disk more 
> write-friendly?
> my thinking was that we could have an RRD daemon running that the 
> CLI would talk to instead of accessing the files directly.
> that way we could cache writes, share the meta data across files (I'm 
> guessing for most people the 30/100k files are all close to identical, 
> and the meta-data component of them could be shared, resulting in less 
> disk being used), and be network friendly so that the machines can 
> become data-collection servers and we could use standard horizontal 
> partitioning techniques from standard databases.
> Sadly in my case I don't have the time to do it.
> -Ian

No, I have not looked at file structure changes. Your idea sounds like
a good one though, and I will tell you our approach to the disk
bottleneck.  It is important to note that we are using CentOS 5 Linux
with kernel 2.6.18; our approach to handling the disk cache might not
work on other platforms.

Our requirement is to have good update speed and also timely readout
after the update.  Because of that we don't use an application cache
where the updates are stored until a graph using them is requested.
Since we are constantly reading out current rates for most of our
metrics any application cache would be flushed immediately after an
update anyway.

Assuming the system has enough memory, when rrd_update writes to the
file system the changes go to the disk cache and updated blocks are
marked dirty.  Periodically (30 sec by default), the Linux pdflushd
kernel threads run and find all dirty blocks and sync them to disk.
That works fine until the number of dirty blocks overwhelms pdflushd
and it slows down other processes on the system including processes
doing rrd_updates.  We find that the problem is really not the number
of dirty blocks getting written to disk but the fact that pdflushd
wants to write them all at the same time and blocks other io and
processes while doing so.
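The writeback pressure described above can be watched directly; the
kernel exposes the current dirty-page totals in /proc/meminfo (a quick
Linux check, not something from the original setup):

```shell
# Show the system-wide dirty page counters (Linux; not rrdtool-specific).
#   Dirty     = pages waiting to be written back
#   Writeback = pages being written back right now
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

Watching these values during an update cycle shows how far the dirty
set grows before pdflushd catches up.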

To get around this, we first set the pdflushd cache expiry time to a
relatively high value, say 15 minutes:

 echo 90000 > /proc/sys/vm/dirty_expire_centisecs

This keeps pdflushd from immediately writing out dirty blocks after an
rrd_update.  The data is in the disk cache so it can be read out
quickly, and because of the way rrd files are structured most updates
hit the same blocks in the cache, so the cache does not grow much
across subsequent update cycles.
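The same setting can be made persistent across reboots via sysctl.
The fragment below is a sketch: the expiry value is the one from
above, while the commented companion knobs are related settings worth
checking, with defaults that vary by kernel version (they are not
values from our setup):

```
# /etc/sysctl.conf fragment (sketch)
vm.dirty_expire_centisecs = 90000   # 15 min, as above

# Related knobs that also affect writeback behaviour; defaults vary
# by kernel, so verify on your system before changing them:
# vm.dirty_background_ratio = 10    # % of memory dirty before background writeback starts
# vm.dirty_ratio = 40               # % of memory dirty before writers themselves block
```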

In order to keep pdflushd from blocking the system when it finds all
these dirty rrd blocks, we run an auxiliary daemon that walks the rrd
file directory tree and identifies all rrd files with blocks in core
memory.  The daemon individually flushes the dirty blocks to disk at a
controlled rate, usually about a 10 min cycle for all rrd files against
a pdflushd expiry of 15 min.  This way, when pdflushd runs it does not
find any dirty rrd blocks but still takes care of the rest of the
system's dirty pages.
We have found that this technique distributes the cache
synchronization over a much longer time period and allows us to update
more rrd files without slowing down the system.
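A minimal shell sketch of such a paced flusher is below.  The function
name, the per-file `sync` (GNU coreutils >= 8.24), and the pacing
scheme are assumptions for illustration; our actual daemon also checks
which blocks are resident in core before flushing, which this sketch
omits:

```shell
# flush_rrds DIR [CYCLE_SECONDS]
# Sync every .rrd file under DIR, pacing the writes so one full pass
# takes roughly CYCLE_SECONDS; prints the number of files flushed.
flush_rrds() {
    dir="$1"
    cycle="${2:-600}"                   # default: 10 min per pass
    total=$(find "$dir" -name '*.rrd' -type f | wc -l)
    if [ "$total" -eq 0 ]; then
        echo 0
        return 0
    fi
    # Per-file delay so the whole pass spreads across the cycle.
    delay=$(awk -v c="$cycle" -v t="$total" 'BEGIN { printf "%.3f", c / t }')
    find "$dir" -name '*.rrd' -type f | while read -r f; do
        sync "$f"                       # flush this file's dirty pages
        sleep "$delay"                  # spread writes over the cycle
    done
    echo "$total"
}
```

Run from cron or a simple loop, e.g. `flush_rrds /var/lib/rrd 600`
(the path is an example, not a fixed location).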

Scott B
