[rrd-developers] rrdupdates per second

Ian Holsman (Lists) lists at holsman.net
Fri Jun 20 01:43:08 CEST 2008


Do you have any ideas on how to make the data structure on disk more 
write-friendly?

My thinking was that we could have an RRD daemon running that the
CLI would talk to instead of accessing the files directly.

That way we could cache writes, share the metadata across files (I'm
guessing that for most people the 30/100k files are all close to
identical, and the metadata component of them could be shared,
resulting in less disk being used) and be network friendly, so that
the machines can become data-collection servers and we could use the
standard horizontal partitioning techniques of conventional databases.
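
For illustration, the client side of such a daemon could be as small
as this (a sketch; the socket path and the "update" line protocol are
invented for the example):

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  /* hand an update line to a local write-caching daemon instead of
   * calling rrd_update() on the file directly */
  int send_update(const char *file, const char *values)
  {
      struct sockaddr_un addr;
      char buf[512];
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);
      if (fd < 0)
          return -1;
      memset(&addr, 0, sizeof(addr));
      addr.sun_family = AF_UNIX;
      strncpy(addr.sun_path, "/var/run/rrdd.sock",
              sizeof(addr.sun_path) - 1);
      if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
          close(fd);
          return -1;
      }
      /* e.g. "update foo.rrd N:42"; the daemon queues samples per
       * file in RAM and flushes them to disk in batches */
      snprintf(buf, sizeof(buf), "update %s %s\n", file, values);
      write(fd, buf, strlen(buf));
      close(fd);
      return 0;
  }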

Sadly in my case I don't have the time to do it.

-Ian

Scott Brumbaugh wrote:
> Hi Bernard,
>
> Sorry for the double mail.  The first one went from a different email
> address.
>
> On Mon, Jun 09, 2008 at 10:42:16PM -0700, Bernard Li wrote:
>> Hi Tobias:
>>
>> On Mon, Jun 9, 2008 at 10:31 PM, Tobias Oetiker <tobi at oetiker.ch> wrote:
>>
>>> the prime method to gain performance is to have more cache
>>> available, since this will enable the system to keep all
>>> hot rrd blocks in memory and thus only 'modify-write' disk blocks
>>> as opposed to 'read-modify-write' them. (It will have to do reads
>>> too, but only occasionally.)
>>>
>>> Since 1.2.24 or so, rrdtool tries to inform the OS (tested on
>>> linux) that it should not read ahead when accessing rrd files, but
>>> just keep the current block around. This way we use less cache per
>>> rrd file and thus can keep more hot blocks in memory.
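>>>
>>> Concretely, the hint boils down to posix_fadvise; a sketch of the
>>> idea, not the actual rrdtool source:
>>>
>>>   #include <fcntl.h>
>>>
>>>   /* tell the kernel access will be random, so it should not
>>>    * read ahead past the block we actually touch */
>>>   int open_rrd(const char *path)
>>>   {
>>>       int fd = open(path, O_RDWR);
>>>       if (fd >= 0)
>>>           posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);
>>>       return fd;
>>>   }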
>>>
>>> If you do not see this improvement then maybe your OS does not
>>> support the fadvise call, or configure did not pick it up
>>> correctly.
>>>
>>> In 1.3 the whole fadvise/madvise handling has been further tuned,
>>> and on top of that all the IO is mmapped, which can also help.
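>>>
>>> Schematically, the mmap side looks something like this (a
>>> simplified sketch, not the 1.3 source):
>>>
>>>   #include <fcntl.h>
>>>   #include <sys/mman.h>
>>>   #include <sys/stat.h>
>>>   #include <unistd.h>
>>>
>>>   /* map an rrd file; dirty pages are written back by the kernel
>>>    * rather than through explicit write() calls */
>>>   void *map_rrd(const char *path, size_t *len)
>>>   {
>>>       struct stat st;
>>>       void *p;
>>>       int fd = open(path, O_RDWR);
>>>       if (fd < 0)
>>>           return NULL;
>>>       if (fstat(fd, &st) < 0) {
>>>           close(fd);
>>>           return NULL;
>>>       }
>>>       *len = st.st_size;
>>>       p = mmap(NULL, *len, PROT_READ | PROT_WRITE, MAP_SHARED,
>>>                fd, 0);
>>>       close(fd);              /* the mapping survives the close */
>>>       if (p == MAP_FAILED)
>>>           return NULL;
>>>       madvise(p, *len, MADV_RANDOM);  /* same no-read-ahead hint */
>>>       return p;
>>>   }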
>> There are noticeable improvements with 1.3, but not enough to
>> eliminate the need for the tmpfs workaround.
>>
>> I will dig deeper to confirm whether my OS (CentOS 4.x) supports
>> fadvise and whether the rrdtool RPM was built correctly with that
>> option enabled.
>>
>>> Does this mean ganglia calls rrd_update and links librrd, or does
>>> it call out?
>> Yes, that is the case AFAIK.  The code in question is viewable here:
>>
>> http://ganglia.svn.sourceforge.net/viewvc/ganglia/branches/monitor-core-3.1/gmetad/rrd_helpers.c?view=markup
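>>
>> For reference, calling rrd_update() through librrd looks roughly
>> like this (the file name and value are made up; the optind/opterr
>> reset is needed because rrd_update getopt-parses its argument
>> vector):
>>
>>   #include <rrd.h>
>>   #include <unistd.h>
>>
>>   /* push one sample through librrd instead of exec'ing rrdtool */
>>   int push_sample(void)
>>   {
>>       char *argv[] = { "update", "cpu_user.rrd", "N:42" };
>>       optind = 0;   /* reset the global getopt state between calls */
>>       opterr = 0;
>>       rrd_clear_error();
>>       return rrd_update(3, argv);
>>   }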
>>
>> Thanks,
>>
>> Bernard
>
> The general problem we faced with rrd was how to scale the number of
> files.  There are different ways to tune the rrd files, disks and
> filesystem for better performance, but regardless, at some point the
> system will hit a maximum.  The solution we are using is to distribute
> the rrd storage load across multiple servers; when one reaches
> capacity we can add another in parallel.
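>
> (The partitioning itself can be as simple as a stable hash of the
> metric or host name; a sketch, with a made-up server list:)
>
>   /* map a metric name onto one of N storage hosts */
>   static const char *servers[] = { "rrd1", "rrd2", "rrd3" };
>
>   const char *pick_server(const char *metric)
>   {
>       unsigned long h = 5381;             /* djb2 string hash */
>       while (*metric)
>           h = h * 33 + (unsigned char)*metric++;
>       return servers[h % (sizeof servers / sizeof servers[0])];
>   }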
>
> The simplest way to do this is by using separate hosts for polling and
> using NFS to mount the rrd storage partitions onto a central host for
> graphing.  We experimented with this technique for a while but there
> are some disadvantages.  For one, the storage hosts need to be located
> close to the grapher on a fast LAN or NFS can have problems.  This
> will be an issue if your poller servers are separated across the
> Internet or a WAN.
>
> We took a different approach and separated the rrd file readout code
> from the graph creation code and added a network communication layer
> between them.  This allows us to store rrd files on multiple hosts and
> access them transparently with the rrd_graph command.
>
> There is a patch on this list that contains this code:
>
>   http://thread.gmane.org/gmane.comp.db.rrdtool.devel/2216
>
> The network messaging is based on a middleware framework called ICE,
> which has a lot of performance features built in; the patch requires
> it.
>
> Thanks,
>
>
> Scott B
>
> _______________________________________________
> rrd-developers mailing list
> rrd-developers at lists.oetiker.ch
> https://lists.oetiker.ch/cgi-bin/listinfo/rrd-developers


