[rrd-developers] rrdupdates per second
rrdtool at nospam.verplant.org
Tue Jun 10 10:06:03 CEST 2008
On Mon, Jun 09, 2008 at 10:42:16PM -0700, Bernard Li wrote:
> > the prime method to gain performance is to have more cache
> > available, since this will enable the system to keep all hot rrd
> > blocks in memory and thus only 'modify-write' disk blocks as
> > opposed to 'read-modify-write' them. (It will have to do reads
> > too, but only occasionally).
Since our box writing RRD files was at its limit IO-wise, I upgraded
from the vanilla Debian Etch version of librrd to the latest 1.2.* at
the time, 1.2.27 IIRC. That version includes the fadvise fixes, and
indeed disk reads dropped dramatically. It did not improve write
throughput, though: I don't have a graph at hand, but other metrics
such as CPU time spent in IO-wait and ``system load'' stayed the same.
The big performance improvement has been caching within the daemon
that writes the files to disk (collectd in our case). Writing 1 or 30
updates to one file at once makes basically no difference
performance-wise, but you only have to do 1/30th of the writes and can
thus handle 30 times as many files.
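To make the batching idea concrete, here is a minimal sketch (not
collectd's actual code; the names, the BATCH_SIZE constant, and the
`flushed` list are all made up for illustration) of caching updates per
file and writing them as one batch:

```python
# Minimal sketch: buffer updates per RRD file and flush them in one
# batch once enough have accumulated, so one disk write covers many
# updates (e.g. a single `rrdtool update foo.rrd ts1:v1 ts2:v2 ...`).
BATCH_SIZE = 30   # updates per file before writing (matches the 1/30th figure)

cache = {}        # filename -> list of "timestamp:value" strings
flushed = []      # record of (filename, args) batches that would hit the disk

def enqueue_update(filename, timestamp, value):
    """Buffer one update; write the whole batch once BATCH_SIZE is reached."""
    cache.setdefault(filename, []).append(f"{timestamp}:{value}")
    if len(cache[filename]) >= BATCH_SIZE:
        flush(filename)

def flush(filename):
    """One write covers all cached updates for this file."""
    args = cache.pop(filename, [])
    if args:
        flushed.append((filename, args))  # stand-in for the real rrd_update() call

# Feed 60 ten-second samples into one file: only two writes result.
for i in range(60):
    enqueue_update("foo.rrd", 1212990000 + 10 * i, i)
print(len(flushed))  # 2 batches of 30 updates each
```

The point is that the cost of an update is dominated by touching the
disk blocks at all, not by how many values one call carries.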
The downside is obvious: on average it takes 15 times as long until
you see new data. For us that's merely annoying, because we use a
10-second step, so you'll see data after 2:30 minutes on average. But
if you collect values less often, say with the popular 300-second
step, aggregating several writes is even more annoying and may be out
of the question entirely. Plus, the admins don't like waiting up to 5
minutes for new data to be available ;)
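The delay figures above follow from simple arithmetic; a quick sketch
(the variable names are my own, the numbers are the ones from this
mail):

```python
step = 10    # seconds between collected values
batch = 30   # updates per file written at once

# A freshly collected value waits, on average, for half a batch to fill
# up before it reaches the disk:
avg_delay = batch * step / 2          # 15 steps of 10 s = 150 s = 2:30
worst_delay = (batch - 1) * step      # the first value of a batch waits longest
print(avg_delay, worst_delay)         # 150.0 290
```

With a 300-second step the same batch size would push the average
delay to 75 minutes, which is why aggregation stops being an option
there.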
The solution we're implementing at the moment (to make the admins
happy, because they don't have to wait ;) is to have the daemon
`flush' the data just before the graphs are created. The rrdtool
plugin of collectd caches all values, and once enough have been
aggregated for one file, those values are inserted into an update
queue. A separate thread continuously dequeues updates and writes them
to disk. Using a control socket, a user or a script can
unconditionally enqueue a file's values at the _head_ of that queue,
so that it will be updated within the next 1/<writes per sec> seconds.
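The queue-jumping trick can be sketched in a few lines; this is a
hypothetical illustration of the mechanism (collectd implements it in
C, and the function names here are invented), using a double-ended
queue where the writer drains from the head:

```python
import threading
from collections import deque

queue = deque()            # pending (filename, values) batches
lock = threading.Lock()    # the writer thread and enqueuers share the queue

def enqueue(filename, values):
    """Normal path: append the batch to the tail of the write queue."""
    with lock:
        queue.append((filename, values))

def flush_now(filename, values):
    """Control-socket path: jump the queue so this file is written next."""
    with lock:
        queue.appendleft((filename, values))

def writer_step():
    """One iteration of the writer thread: pop the head entry and write it."""
    with lock:
        if queue:
            return queue.popleft()   # the real code would call rrd_update() here
    return None

enqueue("a.rrd", ["100:1"])
enqueue("b.rrd", ["100:2"])
flush_now("graph-me.rrd", ["100:3"])
print(writer_step()[0])   # graph-me.rrd is written first
```

A flushed file thus waits only for the write currently in progress,
while everything else keeps its place in line.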
So in conclusion: Stay away from disks, keep as much information in
memory, do many updates at once and, ideally, ``own'' the cache.
P. S.: One thing I've wondered for some time: does updating several
files in parallel increase write performance? Does anybody know?
Maybe the disk controller can do something clever to minimize head
movement or combine writes or something..?
Florian octo Forster
Hacker in training