[rrd-users] [rrd-developers] rrdupdates per second
Dmitry B. Bigunayk
icestar at inbox.ru
Tue Jun 10 08:02:43 CEST 2008
>
> Thanks for the reply.
>
> I am one of the developers of Ganglia http://www.ganglia.info. The
> issue we have is that once Ganglia starts monitoring more than a
> certain number of hosts (> 1k nodes), the system grinds to a halt
> under a flood of I/O requests (from rrdupdates).
>
> Typically Ganglia monitors 32 unique metrics for each host, and each
> metric translates to an individual rrd file. So with 1k nodes, you
> essentially have over 30,000 rrd files that need to be updated
> periodically.
>
> The current "workaround" for this issue is to place the rrd files in
> tmpfs. It would be nice to have a better solution than this.
>
> I did some tests with RRDtool 1.3. While it does provide some I/O
> enhancements over 1.2.x, a system with a standard 7200 RPM SATA hard
> drive is still bogged down by I/O and becomes really sluggish.
>
> I was wondering if you have any suggestions on how to programmatically
> tune the performance? The Ganglia code responsible for doing the
> rrdupdate is written in C. Our frontend, written in PHP, does call
> rrdgraph via a system call, but that is not really the bottleneck for
> us.
>
> Cheers,
>
> Bernard
>
Why are you storing each unique metric in a separate rrd file? Is the
data perhaps collected at different times? I think the best way to
increase performance is to accumulate all the metrics from one host in
a single rrd file.
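
For example, here is a rough sketch of what that could look like
against librrd's C API (which Ganglia already links against). The
metric names, step, heartbeat and RRA below are invented for
illustration only; adjust them to your real collection interval:

#include <stdio.h>
#include <unistd.h>   /* optind */
#include <rrd.h>      /* rrd_create(), rrd_update() */

int main(void)
{
    /* rrd_create() takes a command-line-style argv:
     * one DS per metric, instead of one file per metric. */
    char *create_argv[] = {
        "create", "host1.rrd", "--step", "15",
        "DS:cpu_user:GAUGE:30:0:U",
        "DS:cpu_system:GAUGE:30:0:U",
        "DS:load_one:GAUGE:30:0:U",
        /* ... one DS for each of the 32 metrics Ganglia collects ... */
        "RRA:AVERAGE:0.5:1:5760",   /* one day of 15-second samples */
    };
    optind = 0;   /* librrd parses with getopt(); reset between calls */
    if (rrd_create(sizeof(create_argv) / sizeof(*create_argv),
                   create_argv) != 0) {
        fprintf(stderr, "create: %s\n", rrd_get_error());
        rrd_clear_error();
        return 1;
    }

    /* One rrd_update() call now writes every metric for the host:
     * values are colon-separated, in DS definition order. */
    char *update_argv[] = { "update", "host1.rrd", "N:12.5:3.1:0.42" };
    optind = 0;
    if (rrd_update(sizeof(update_argv) / sizeof(*update_argv),
                   update_argv) != 0) {
        fprintf(stderr, "update: %s\n", rrd_get_error());
        rrd_clear_error();
        return 1;
    }
    return 0;
}

That turns 32 small random writes per host into one, but note it only
works cleanly if all metrics for a host arrive with the same
timestamp, which is why I ask whether your data is collected at
different times.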
--
Dima: Nosce te ipsum
e-mail: icestar at inbox.ru