[rrd-users] Multiple Updates Per Second
Rick Jones
rick.jones2 at hp.com
Tue Oct 2 22:46:52 CEST 2012
On 10/02/2012 01:32 PM, Wesley Wyche wrote:
> I am still faced with the problem of handling a huge amount of data. After
> trying several routes to a solution, I'm down to this final issue: I need to
> be able to process a large number of updates per second.
>
> My benchmark is 250 updates per second. Any fewer and I won't be able to
> keep up with the volume of inbound metrics data being collected.
>
> I have rrdcached running, so I'm routing all my update calls to the daemon
> socket. Disk I/O shouldn't be a problem, since I'm handing that off to
> rrdcached to deal with whenever it needs to.
>
> I've tried threading the updates using Perl threads, and the best I can
> achieve is about 20 per second, making system calls to rrdtool update. I've
> tried the RRDs Perl module, but it crashes with a segfault (due to its lack
> of thread safety). I have tried multiple approaches within the Perl script
> (backgrounded system calls with &, fork and exec, etc.), but none work any
> better.
How many I/Os per second is your mass storage attempting to do, and is
that perhaps its limit?
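If the ceiling turns out to be process creation rather than the disk, one
thing worth a try is keeping a single rrdtool alive in its remote-control
mode (rrdtool invoked with "-" as its only argument reads commands on stdin
and answers on stdout) and feeding it updates over a pipe. A rough sketch,
with the RRD path, timestamp and value made up for illustration:

  #!/usr/bin/perl
  # Rough sketch: one long-lived "rrdtool -" (remote-control mode) process
  # instead of a fork/exec per update.  The file name and value below are
  # placeholders.
  use strict;
  use warnings;
  use IPC::Open2;

  # open2 hands back the child's stdout and stdin; the write side is
  # autoflushed for us.
  my $pid = open2(my $rrd_out, my $rrd_in, 'rrdtool', '-');

  sub rrd_update {
      my ($file, $ts, @values) = @_;
      print {$rrd_in} "update $file $ts:" . join(':', @values) . "\n";
      my $reply = <$rrd_out>;       # one "OK ..." or "ERROR: ..." line back
      warn "rrdtool said: $reply" if defined $reply && $reply !~ /^OK/;
  }

  rrd_update('/var/rrd/example.rrd', time(), 42);

  close $rrd_in;                    # EOF ends the rrdtool process
  waitpid $pid, 0;

That takes the parse-and-exec cost of one rrdtool invocation per data point
out of the loop, and the update command still accepts --daemon if you want
rrdcached to stay in the write path. Whether it gets you to 250/s still
depends on what the disk is doing underneath, hence the question above.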
> Is there anyone who has had success in dealing with large numbers of
> updates per second, and if so, what solution are you using?
Isn't that what rrdcached was created to address? Minimizing the I/Os for
RRD updates.
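The other thing I would look at is skipping the rrdtool binary entirely and
speaking to the rrdcached socket yourself; its protocol is plain text, and
BATCH mode lets you stream many UPDATE lines over a single connection. Very
much a sketch, and the socket path plus next_metric() are stand-ins for
whatever your setup and collector actually provide:

  #!/usr/bin/perl
  # Sketch only: one persistent connection to rrdcached, updates pushed in
  # BATCH mode.  Adjust the socket path to whatever you gave rrdcached -l.
  use strict;
  use warnings;
  use Socket qw(SOCK_STREAM);
  use IO::Socket::UNIX;

  my $sock = IO::Socket::UNIX->new(
      Type => SOCK_STREAM,
      Peer => '/var/run/rrdcached.sock',   # assumed unix: socket address
  ) or die "connect to rrdcached: $!";
  $sock->autoflush(1);

  print $sock "BATCH\n";
  my $go = <$sock>;                        # expect a "0 Go ahead ..." line
  die "rrdcached refused BATCH: $go" unless defined $go && $go =~ /^0/;

  # next_metric() stands in for your collector's queue of data points.
  while (my $m = next_metric()) {
      printf $sock "UPDATE %s %d:%s\n", $m->{file}, $m->{time}, $m->{value};
  }
  print $sock ".\n";                       # terminate the batch

  my $status = <$sock>;                    # "<n> errors", then n detail lines
  if (defined $status and $status =~ /^(\d+)/) {
      my $n = $1;
      warn scalar <$sock> while $n-- > 0;
  }

The wire protocol is documented in the rrdcached(1) man page. The point is
simply that one process and one socket on your side should have no trouble
pushing a few hundred short text lines a second, with no threads and no RRDs
segfaults involved.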
rick jones