[rrd-users] [unsure] max DS per rrd file
mikel
infoeuskadi at gmail.com
Sat Apr 20 11:42:04 CEST 2013
Thanks for your quick response.
A bit more background information follows:
-Some hosts have 50-60 metrics (DSs)
-Other hosts have up to 200 metrics (DSs)
-All metrics are different
-Some metrics are updated with fresh data every 5 minutes, but not all
of them at the same time (see the update sketch after this list)
-Updates are local to each host, and so are queries; that is to say,
each RRD file lives on the monitored machine's local disk and is not
shared with anything else.
-Some metrics may get queried occasionally, but this is far less common
than updates; queries run roughly ten times per week. However, each
query needs to scan all metrics in a given RRD file.
-I cannot run an rrdcached agent on each of these hosts.
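
For concreteness, here is a minimal sketch of what such a file could
look like (assuming the python-rrdtool binding; the file name host.rrd,
the DS names metric_000..metric_199 and the RRA layout are placeholders,
not my real schema). The --template option is what allows updating only
the metrics that have fresh data at a given step:

    # Create one RRD file with 200 data sources, then update a subset.
    import rrdtool

    N_DS = 200
    STEP = 300  # seconds; matches the 5-minute update interval above

    rrdtool.create(
        "host.rrd",
        "--step", str(STEP),
        # heartbeat is 2*STEP; raising it tolerates longer gaps between
        # updates of a given DS before its values go UNKNOWN
        *[f"DS:metric_{i:03d}:GAUGE:{2 * STEP}:U:U" for i in range(N_DS)],
        "RRA:AVERAGE:0.5:1:8928",   # ~1 month at 5-minute resolution
        "RRA:AVERAGE:0.5:12:8760",  # ~1 year at hourly resolution
    )

    # Update just two of the 200 DSs; the rest stay UNKNOWN this step.
    rrdtool.update(
        "host.rrd",
        "--template", "metric_003:metric_017",
        "N:42:7",
    )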
I know this is a very uncommon setup.
As you can see, this is a monitoring setup for a largely distributed
cluster, so the only way I can see to keep a long historical record of
so many metrics is to keep them distributed.
Obviously I am building a mechanism to query the nodes; however, this
is not the main problem.
What I really would like to hear about is:
-Even when tolerating the highest possible "data-loss" for a given RRD
file, does anybody have experience querying/updating somewhere between
fifty and 200 data sources (DSs) inside a single RRD file? (See the
fetch sketch below.)
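
On the query side, this is roughly the weekly scan I have in mind (same
assumptions and placeholder names as the sketch above). A single fetch
already returns every DS in the file, so one file with 200 DSs means
one sequential read per query rather than 200 separate ones:

    # Fetch the past week of AVERAGE data for all DSs at once.
    import rrdtool
    import time

    end = int(time.time())
    start = end - 7 * 24 * 3600

    (first, last, step), ds_names, rows = rrdtool.fetch(
        "host.rrd", "AVERAGE", "--start", str(start), "--end", str(end)
    )

    # rows holds one tuple per step, one value per DS (None = UNKNOWN;
    # the last row may be all None if the final step is still open).
    for name, value in zip(ds_names, rows[-1]):
        print(name, value)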
I am open to hearing any comments, no matter how crazy they are.
Thanks again,
m