[rrd-users] Caching question
leeb at ratnaling.org
Mon Mar 28 03:54:47 CEST 2011
I wonder if somebody has already addressed this. I'm unable to find
anything so far.
Here's my environment:
Internet Edge Server (Wombat) running CentOS.
Cacti server (central to many sites) separated by a 1.5Mbps T1 link.
I have an application I've written that captures interface and QoS
statistics every 15 seconds on the CentOS machine:
16 RRDs, each about 1 MB in size, for QoS
19 RRDs, each about 1 MB in size, for interfaces
Now, I don't need Cacti to be updated every 15 seconds. In fact, for
normal use, receiving the previous 5 minutes' worth of data once every
5 minutes is fine.
It seems my options are:
1. Run RRDCACHED on the Cacti machine, but then I'm pushing traffic at it
every 15 seconds over that slow link, and I'm not sure how well that
traffic compresses.
2. Update the local CentOS RRDs every 15 seconds. Every 5 minutes I can
compress and copy the RRDs to Cacti with rsync.
3. The CentOS machine has a Postgres database that the Cacti RRDs could be
fed from. Again the concern is data size and time to transfer.
Currently, option 2 seems best: it keeps the traffic small and sends it
in bursts.
Option 3 has the appeal that I may be able to refresh the Cacti web page
and get current data (testing required).
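To put the size/time concern in numbers, a back-of-the-envelope calculation for option 2, using the figures from above (35 RRDs of ~1 MB over a 1.5 Mbps T1). The compression ratio is a guess, and rsync's delta transfer would shrink the real number further since most of each RRD is unchanged:

```python
# Rough transfer time for copying the RRDs over the T1.
# 35 files x ~1 MB, 1.5 Mbps link; compression ratio is an assumption.

def transfer_seconds(n_files, mb_per_file, link_mbps, compression=1.0):
    bits = n_files * mb_per_file * 8 * 1024 * 1024 / compression
    return bits / (link_mbps * 1_000_000)

print(f"raw copy: {transfer_seconds(35, 1.0, 1.5):.0f} s")       # ~196 s
print(f"3:1 gzip: {transfer_seconds(35, 1.0, 1.5, 3.0):.0f} s")  # ~65 s
```

So an uncompressed full copy would eat most of a 5-minute window by itself, which is why compression (and rsync only sending changed blocks) matters here.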
Then there's the extended cache idea I have:
What if RRDCACHED were modified to act as a cache to a *remote* RRDCACHED
instance rather than a local rrd file?
rrdtool update --> RRDCACHED  ----------->  RRDCACHED --> rrdtool update
      ( CentOS machine )                       ( Cacti machine )
So here the CentOS machine issues updates every 15 seconds to its local
RRDCACHED, which holds the data for 5 minutes and then pushes it out to
the Cacti machine for inclusion.
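The behaviour that chained-RRDCACHED idea needs is plain store-and-forward: accept updates immediately, flush them upstream in one burst every 5 minutes. A minimal sketch in Python; `send` stands in for whatever pushes a batch to the remote side (e.g. replaying UPDATE lines to the remote rrdcached) and is an assumption, not an existing rrdtool hook:

```python
import time

# Store-and-forward buffer: queue updates locally, flush them upstream
# in one burst every `interval` seconds. `send` is a stand-in for the
# remote push (hypothetical -- not part of rrdtool/rrdcached).

class UpdateRelay:
    def __init__(self, send, interval=300, clock=time.monotonic):
        self.send = send            # callable taking a list of update lines
        self.interval = interval
        self.clock = clock          # injectable for testing
        self.buffer = []
        self.last_flush = clock()

    def update(self, line):
        """Queue one 'filename timestamp:value' update; flush if due."""
        self.buffer.append(line)
        if self.clock() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []
        self.last_flush = self.clock()
```

With a 15-second update cadence and a 300-second interval, each burst carries about 20 updates per RRD, which is the traffic shape the diagram above is after.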
I'm sure others have come across this, and I wonder how they implemented
their solutions. Thoughts and comments welcome.