[rrd-users] Re: Who has the most RRD files (or data sources)?
happy at usg.edu
Sun Mar 19 03:48:07 MET 2006
Peter Valdemar Mørch <swp5jhu02 at sneakemail.com> writes:
> Mark Plaksin happy-at-usg.edu |Lists| wrote:
>> We are using a home-grown system to update over 20,000 RRD files. Each RRD
>> file has a single data source. RRD files are using a total of 45G of disk space.
> We just migrated our solution from having one DS per file to having on
> average 100 DSes per file. About 2GB worth.
> That changed the "updating time" from 13 minutes to 12 seconds (on
> standard PC desktop hardware)!!!! Load on the machine went from 20 to
> 0.9. Having individual DSes each in their own files is *not* the way to
> go for performance!
Do you know why multiple DSes per RRD is better? I'm willing to accept
that it is but I'm curious about what makes it better. It's definitely
easier to manage one DS per RRD.
> So now instead we've implemented a way to add, remove and move DSes in
> and out of individual files using a dump->parse->load cycle.... Parsing
> the XML with a real perl XML parser just took forever, so instead we're
> using raw perl regular expressions / string operations. Not GPLed,
> though. If there is interest, I'll ask if it can be released as GPL.
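The approach described above can be sketched without the original (unreleased) script. Below is a minimal illustration of the regex/string-operation idea: pulling one DS's header block and its per-row `<v>` column out of dump-style XML without a full XML parser. The sketch is in Python rather than Perl, and the sample XML is deliberately abbreviated and hypothetical; a real `rrdtool dump` is far larger and has more fields, so this only demonstrates the technique, not the actual tool.

```python
import re

# Hypothetical, heavily abbreviated dump-style XML. The element names echo
# rrdtool's dump output, but the layout here is illustrative only.
dump = """<rrd>
<ds><name> temp </name><type> GAUGE </type></ds>
<ds><name> humidity </name><type> GAUGE </type></ds>
<rra>
<database>
<row><v> 21.5 </v><v> 40.1 </v></row>
<row><v> 21.7 </v><v> 39.8 </v></row>
</database>
</rra>
</rrd>"""

def extract_ds(xml, index):
    """Pull the <ds> header block at position `index`, plus the matching
    <v> column from every <row>, using plain regexes instead of a real
    XML parser -- the speed trick described above."""
    ds_blocks = re.findall(r"<ds>.*?</ds>", xml, re.S)
    header = ds_blocks[index]
    column = []
    for row in re.findall(r"<row>(.*?)</row>", xml, re.S):
        values = re.findall(r"<v>(.*?)</v>", row)
        column.append(values[index].strip())
    return header, column

header, column = extract_ds(dump, 1)
# header is the "humidity" <ds> block; column == ["40.1", "39.8"]
```

Moving a DS between files would then mean splicing the extracted header and column into another dump and feeding the result back through a restore step; the point of the regex approach is that these are cheap string operations on a large flat file, with no DOM ever built.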
GPL is always good. Is the Perl script very complex? That is, I think I
can imagine a quick Perl script which would probably do the trick, but maybe
the job is more complex than I imagine.
Also, do you have a feel for why doing the operation via XML is so slow?
Is it the nature of XML? Some specific problem with your XML library?
Does your Perl script ever fail or miss some important data?