[rrd-users] Input values normalization
ENTRESSANGLE, ERIC (ERIC)
eric.entressangle at alcatel-lucent.com
Wed Feb 19 17:15:34 CET 2014
Sure, it helps!
With a --step of 1 and cacti's poller frequency set to 1 minute:
Initially I had RRA:AVERAGE:0.5:1:2592000 (for 30 days), and I changed it to RRA:MAX:0.5:60:43200 as you suggested. It works: there is no normalization, and only one value per minute in the rrd files.
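For reference, a create command along these lines might look like the following sketch. The filename, DS name, and heartbeat are assumptions for illustration; only the RRA line comes from the thread:

```shell
# Sketch only: filename, DS name and heartbeat are made up for illustration.
# 60 one-second PDPs per CDP, 43200 rows = 30 days of per-minute MAX values.
rrdtool create traffic.rrd --step 1 \
  DS:value:GAUGE:120:U:U \
  RRA:MAX:0.5:60:43200
```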
Just to be sure I fully understand what happens when the poller's frequency is 60 seconds and the step is 1 second: for the RRA:MAX:0.5:60:43200, from which 60 PDPs is the CDP calculated, since we have only 1 sample value?
And what exactly do you mean by "As long as RRDtool is fed with integer timestamps"?
Thanks a lot
From: rrd-users-bounces+eric.entressangle=alcatel-lucent.com at lists.oetiker.ch [mailto:rrd-users-bounces+eric.entressangle=alcatel-lucent.com at lists.oetiker.ch] On behalf of Alex van den Bogaerdt
Sent: Tuesday, 18 February 2014 18:44
To: rrd-users at lists.oetiker.ch
Subject: Re: [rrd-users] Input values normalization
> But the accuracy will still only be as good as the collection
> frequency (and its relation to the rate of change of the measured
> value). If the measured value can (for example) rise sharply and drop
> back again in between samples then even the max function won't tell
> you anything about it.
Although your point is valid, it is (IMHO) irrelevant for this particular question. In fact, the higher the sampling resolution, the smaller the problem of missing such spikes.
In the case at hand the OP wants to keep his discrete values. Average won't do, so MIN, MAX or LAST remain, and then I would choose to use both MIN and MAX. Maybe the OP will choose differently.
With proper heartbeat settings there is no need to take a sample every second either. It could remain at every 5 minutes or so.
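As a hedged sketch of the heartbeat point: with '--step 1', the DS heartbeat just needs to be comfortably larger than the polling interval, so samples every 5 minutes or so remain valid. Filename, DS name, and the 600-second heartbeat are assumptions, not from the thread:

```shell
# Illustration only: a 600-second heartbeat tolerates ~5-minute polling
# even though the RRD step is 1 second.
rrdtool create sample.rrd --step 1 \
  DS:value:GAUGE:600:U:U \
  RRA:MAX:0.5:300:1200
```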
Anyway, the main point of my answer was to have the 'best' RRA with 'steps'
larger than 1. I'll elaborate.
> The solution I found is to set a data source step to 1 second to avoid
> normalization, but this produces big rrd files with a lot of redundant data.
> I did not find a satisfactory solution up to now, thanks for any hint.
Something has to give, so if not increasing the size of the database is a must, then somehow RRDtool needs to combine several of its input values into one. Averaging them is undesirable, then choose one or any of MIN, MAX and LAST.
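Choosing both MIN and MAX, as suggested above, could be sketched like this (filename, DS name and heartbeat are made up for illustration):

```shell
# Sketch only: keep both the smallest and the largest discrete value
# seen in each 300-second consolidation window.
rrdtool create minmax.rrd --step 1 \
  DS:value:GAUGE:600:U:U \
  RRA:MIN:0.5:300:1200 \
  RRA:MAX:0.5:300:1200
```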
Instead of having each RRA row cover 1 times 300 seconds in an RRD with 'step==300', the same amount of time is stored (and thus the database is not bigger) when each RRA row covers 300 times 1 second, in an RRD with 'step==1'.
Let's assume the original database was like this:
create the database with "--step 300" (could be left out, as it is the default):
RRA:AVERAGE:0.5:1:1200 (100 hours: 300 seconds per row, 1200 rows)
Now when creating the database with "--step 1", without increasing its size:
do not specify RRA:AVERAGE:0.5:1:360000 (100 hours: 1 second per row),
but instead specify RRA:AVERAGE:0.5:300:1200 (100 hours: 300 seconds per row).
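The size argument can be checked with a little shell arithmetic: both layouts hold 1200 rows, and both cover the same 360000 seconds (100 hours), so the file does not grow:

```shell
# Time span covered by each layout, in seconds:
old_span=$((300 * 1200))       # step 300 s, 1 PDP per row, 1200 rows
new_span=$((1 * 300 * 1200))   # step 1 s, 300 PDPs per row, 1200 rows
echo "$old_span $new_span"     # prints "360000 360000", i.e. 100 hours each
```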
This would still have fractions in the database, so the next step is to alter the consolidation function as suggested before. As long as RRDtool is fed with integer timestamps and has '--step 1', normalization is a no-op, and the input is untouched in this phase. Then during consolidation the integer values are kept, which I believe was the goal.
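What "integer timestamps" means in practice: feed updates whose timestamps are whole seconds, so each sample lands exactly on a 1-second step boundary and is stored as-is. A sketch (filename, timestamp and value are made up for illustration):

```shell
# Illustration only: an explicit whole-second timestamp lands exactly on a
# 1-second step boundary, so nothing is interpolated across steps.
rrdtool update sample.rrd 1392745200:42
# rrdtool also accepts fractional timestamps (e.g. 1392745200.5:42);
# those would reintroduce normalization even with --step 1.
```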