<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Thorsten von Eicken wrote:
<blockquote cite="mid:4AD2CC2F.80008@voneicken.com" type="cite">
- I'm wondering how we could overcome the RRD working set issue. Even
with rrdcached and long cache periods (e.g. I use 1 hour) it seems that
the system slows to a crawl once the RRD working set exceeds memory. One
idea that came to mind is to use the caching in rrdcached to convert
the random small writes that are typical for RRDs to more of a
sequential access pattern. If we could tweak the RRD creation and the
cache write-back algorithm such that RRDs are always accessed in the
same order, and we manage to get the RRDs allocated on disk in that
order, then we could use the cache to essentially do one sweep through
the disk per cache flush period (e.g. per hour in my case). Of course
on-demand flushes and other things would interrupt this sweep, but the
bulk of accesses could end up being more or less sequential. I believe
that doing the cache write-back in a specific order is not too
difficult; what I'm not sure of is how to ensure that the RRD
files get allocated on disk in that order too. Any thoughts?<br>
<br>
</blockquote>
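The fixed-order write-back idea above could be sketched roughly as follows. This is a hypothetical illustration, not rrdcached code; it assumes the RRD files were created (and hence roughly allocated on disk) in lexicographic filename order, so flushing in that same order approximates one sequential sweep per flush period:

```python
# Sketch of ordered cache write-back (illustration only, not rrdcached's
# actual flush logic). flush_in_order and write_back are hypothetical.

def flush_in_order(dirty, write_back):
    """dirty: dict mapping RRD path -> pending updates.

    Flushing in a fixed, repeatable order means each flush period
    walks the files (and, if allocation matches, the disk) once.
    """
    for path in sorted(dirty):          # always the same order
        write_back(path, dirty[path])   # one mostly-sequential sweep
    dirty.clear()

# Example: record the order in which files would be written back.
order = []
dirty = {"/rrd/cpu.rrd": [1], "/rrd/net.rrd": [2], "/rrd/disk.rrd": [3]}
flush_in_order(dirty, lambda path, updates: order.append(path))
# order == ["/rrd/cpu.rrd", "/rrd/disk.rrd", "/rrd/net.rrd"]
```

On-demand flushes would still jump around, as noted above, but the bulk of the write-back stays in one direction.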
One further thought: instead of trying to allocate RRDs sequentially,
if there is a way to query or detect where each RRD file sits on
disk, then rrdcached could sort the dirty tree nodes by disk location
and write them out in that order. I don't know whether Linux (or FreeBSD)
has a way to query a file's disk location, or at least to infer it.<br>
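On Linux, the physical location of a file's first block can in principle be queried with the FIBMAP ioctl (which needs root) or the newer FIEMAP ioctl from linux/fiemap.h; I don't know of a portable FreeBSD equivalent. The sorting step itself is simple either way; here is a sketch with the location probe left pluggable and the block numbers made up:

```python
# Sketch: sort dirty nodes by physical disk location before write-back.
# first_block is a stand-in for a real FIBMAP/FIEMAP query; the paths
# and block numbers below are invented for illustration.

def order_by_location(paths, first_block):
    """Return paths sorted by the physical block of their first extent."""
    return sorted(paths, key=first_block)

# Hypothetical probe results (path -> physical block of first extent).
blocks = {"/rrd/a.rrd": 9120, "/rrd/b.rrd": 150, "/rrd/c.rrd": 4410}
sweep = order_by_location(blocks, blocks.get)
# sweep == ["/rrd/b.rrd", "/rrd/c.rrd", "/rrd/a.rrd"]
```

Writing the dirty nodes in `sweep` order would turn each flush into roughly one elevator-style pass over the disk, regardless of how the files happened to be allocated.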
<br>
TvE<br>
</body>
</html>