[rrd-developers] rrdupdates per second
Bernard Li
bernard at vanhpc.org
Tue Jun 10 07:16:15 CEST 2008
Hi Tobias:
On Mon, Jun 9, 2008 at 9:51 PM, Tobias Oetiker <tobi at oetiker.ch> wrote:
> the key here is where you store the rrd files. If you store them on
> disk, the update speed will be much lower ... the 22k updates are
> in memory ...
>
> also, note that for high performance you have to use language
> bindings for the update as starting rrdupdate for every update is
> quite costly.
>
> the main performance gain for large numbers of rrds (on disk) comes
> from the fact that more rrd-hotblocks can stay in cache with the
> 1.3 release which means that you can scale to a larger number of
> rrd files without the performance dropping.
>
> Note that running at 250 Up/s you get to update 150k RRDs in a
> 5-minute interval ...
Thanks for the reply.
I am one of the developers of Ganglia http://www.ganglia.info. The
issue we have is that once Ganglia monitors beyond a certain number
of hosts (> 1k nodes), the system grinds to a halt under the volume
of I/O requests (from rrdupdates).
Typically Ganglia monitors 32 unique metrics per host, each stored
in its own rrd file. So with 1k nodes you essentially have over
30,000 rrd files that need to be updated periodically.
The current "workaround" for this issue is to place the rrd files in
tmpfs. It would be nice to have a better solution than this.
I did some tests with RRDtool 1.3. While it does provide some I/O
enhancements over 1.2.x, a system with a standard 7200 RPM SATA disk
is still bogged down by I/O and becomes really sluggish.
I was wondering if you have any suggestions on how to tune the
performance programmatically. The Ganglia code responsible for doing
the rrdupdate is written in C. Our frontend, written in PHP, does
call rrdgraph via a system call, but that is not really the
bottleneck for us.
Cheers,
Bernard