[rrd-users] rrdtool HA

Matthew Chambers matthew.chambers at vanderbilt.edu
Wed May 30 22:28:03 CEST 2007


Ganglia's RRD updating tool, gmetad, is designed for scalability and to
avoid being a single point of failure.  You can have multiple gmetads on
different machines all monitoring the same gmond (an XML source of the
data to update the RRDs with), and each will end up with RRDs containing
more or less the same information.  I suggest a similar solution: have
multiple machines, each with their own set of RRDs, and use some kind of
fault-tolerance/round-robin setup when you go to access your RRDs.
Ganglia uses a pull model, where gmetad queries the gmond for data, but
it could also be a push model.  If it were a push model, it could use
multicast/broadcast so the source wouldn't have to send the same data
twice.
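
For what it's worth, here is a rough, untested sketch of what the push
side could look like in Python.  The multicast group, port, RRD
directory, and the plain "name:value" datagram format are all made up
for the example, and it assumes the RRDs already exist on each box
(rrdtool create is not shown):

    import socket
    import struct
    import subprocess

    MCAST_GROUP = "239.192.0.1"   # placeholder site-local multicast group
    MCAST_PORT = 9999             # placeholder port
    RRD_DIR = "/var/lib/rrd"      # placeholder: where each box keeps its RRDs

    def send_sample(name, value):
        # Source side: multicast one "name:value" sample.  It is sent once
        # and received by every collector that has joined the group.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
        sock.sendto(("%s:%s" % (name, value)).encode(),
                    (MCAST_GROUP, MCAST_PORT))
        sock.close()

    def run_collector():
        # Collector side: join the multicast group and feed each sample
        # into this box's own RRD via the rrdtool command line
        # ("N" = timestamp with the local clock).
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", MCAST_PORT))
        mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP),
                           socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, _ = sock.recvfrom(1024)
            name, value = data.decode().split(":", 1)
            subprocess.call(["rrdtool", "update",
                             "%s/%s.rrd" % (RRD_DIR, name), "N:%s" % value])

Each box ends up with its own set of RRDs, timestamped with its local
clock, so the copies won't be bit-identical but will carry more or less
the same data, just like multiple gmetads watching one gmond.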

HTH,
Matt Chambers


> -----Original Message-----
> From: rrd-users-bounces at lists.oetiker.ch [mailto:rrd-users-
> bounces at lists.oetiker.ch] On Behalf Of Dylan Vanderhoof
> Sent: Wednesday, May 30, 2007 3:09 PM
> To: rrd-users at lists.oetiker.ch
> Subject: [rrd-users] rrdtool HA
> 
> I've been trying to figure out a way to do this usefully and haven't had
> much success due to the nature of RRDs.
> 
> I have an environment where I need some sort of high availability for
> my RRD collector boxes.  While it's possible to use a SAN, that still
> leaves me with a single point of failure on the SAN box, which is less
> than ideal.  Unfortunately, it doesn't appear I can replicate RRDs
> without copying the entire file from one machine to another at certain
> intervals, which is a massive I/O hit.
> 
> I've heard rumblings of a SQL backend, but from what I've seen so far
> that only seems to be for fetching data from SQL, not updating.
> 
> Does anybody have any suggestions for how one might approach
> active/standby redundancy with a large number of RRDs?  We only have
> about 15k RRDs currently, but I expect to be seeing over 100k in the
> next 6-8 months, and I'd like to solve this problem before I have that
> kind of volume.
> 
> Thanks,
> Dylan
> 


