[rrd-developers] Accelerator Daemon

Florian Forster rrdtool at nospam.verplant.org
Mon Jun 30 10:52:23 CEST 2008


Hi Scott,

thanks for your feedback, I really appreciate it :)

On Sun, Jun 29, 2008 at 02:21:10PM -0700, Scott Brumbaugh wrote:
> When a large number of rrd updates occur at the same time the
> filesystem tries to write all updates to disk file at the same time.
> This slows the system.  I may be missing something but it looks like
> that will happen with this patch if a client is both writing and
> reading large numbers of rrd files simultaneously.

Yes, that's probably right. I didn't expect anyone with such a huge
number of RRD files to generate graphs periodically/statically instead
of on demand/dynamically. Is anyone with a large number of files doing
that? If so: why?

I was going to propose doing the flushing more intelligently: we could
flush the file only if the oldest data not yet written to disk is older
than
  flush_since = now - ((now - begin) / width)
and lose at most the rightmost pixel (assuming that you generate ``from
one {day,week,month,year} ago until now'' graphs). But if you write to
the RRD files every 300 seconds and generate 400-pixel-wide graphs of
the last 24 hours, the per-pixel interval `(now - begin) / width' is
only 216 seconds - less than the update interval - so you'll flush to
disk every time nonetheless.
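To make the arithmetic concrete, here is a small sketch of that flush
heuristic (the function and variable names are mine, not anything in
rrdtool): flush only if skipping it would affect more than the
rightmost pixel of a ``begin until now'' graph.

```python
def needs_flush(now, oldest_unwritten, begin, width):
    """Return True if the oldest data point not yet on disk is older
    than one pixel's worth of time on a graph covering [begin, now]."""
    seconds_per_pixel = (now - begin) / width   # 86400 / 400 = 216 for the example
    flush_since = now - seconds_per_pixel
    return oldest_unwritten < flush_since

# The example from above: 400-pixel graph of the last 24 hours,
# updates every 300 seconds -> 300 s > 216 s, so we flush every time.
now = 1_000_000
print(needs_flush(now, now - 300, now - 86400, 400))
```

With a wider time range (say, one week at 400 pixels, i.e. 1512 seconds
per pixel) the same 300-second-old data would not trigger a flush.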

I'm not sure how that could be solved at all. You need a lot of data
from disk when generating the graphs, so unless you have _all_ the data
you need in memory, you'll still run into IO problems.

I think that, under the assumption that we cannot keep all data in
memory, optimizing for many updates and sporadic graphing is the best
we can do. And in my experience that's a common situation.

> Maybe an architecture could cache the 'hot' rrd disk blocks in
> application memory instead of the filesystem cache. Rrd update would
> update the blocks in the application cache. Rrd data fetch calls would
> first look for rrd blocks in this application cache, and with a miss
> read from disk using the filesystem.

Is writing to disk first and graphing afterwards really so much slower
than just reading a bunch of data from disk? After the write, some of
the data you need is already in the filesystem cache and doesn't have
to be read again. I'm not saying it makes no difference, but I'm
sceptical. I'd do some performance tests first and only invest the work
if the tests confirm that you can save a lot of IO this way.

Regards,
-octo
-- 
Florian octo Forster
Hacker in training
GnuPG: 0x91523C3D
http://verplant.org/