[mrtg] Trying to find a scaling method that produces readable graphs against spike loads

Stock, David cdstock at ccis.edu
Wed Oct 10 20:01:39 CEST 2007


Have you tried the "secondmean" option?  It looks like that should focus
the scaling on the lower data.
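For reference, here is roughly where that would go in the config — a
sketch only, with a made-up target name ("router.example"); check the
mrtg-reference page for your version, since secondmean was only added in
later MRTG releases:

```cfg
# Hypothetical target name; substitute your own.
# secondmean bases the Y-axis scale on a mean of the collected data
# rather than on the absolute peak, so a brief daily spike no longer
# forces the whole graph to scale to ~10 million.
Options[router.example]: growright, secondmean
```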


-----Original Message-----
From: mrtg-bounces at lists.oetiker.ch
[mailto:mrtg-bounces at lists.oetiker.ch] On Behalf Of Bill Moran
Sent: Wednesday, October 10, 2007 12:07 PM
To: mrtg at lists.oetiker.ch
Subject: [mrtg] Trying to find a scaling method that produces readable
graphs against spike loads

I have some graphs that become unreadable because spike loads are
throwing off the scaling.

A daily cron job causes the graphs to spike to ~10 million for a brief
time every day, which forces the graphs to scale to that peak.
The data I'm _really_ interested in seeing is between 1000 and 100000,
which gets squashed to the bottom of the graph in such a way that it
can't really be read (it's just a flat line with a few little bumps).

I've switched to log scaling, but it's not enough in these cases.

I wouldn't mind losing the detail off the top of the graph in order to
get the graphs to scale to where I can see the other data.  I already
know the cron jobs max out the resource; what I really need to see is
the activity in between the cron jobs.

Will MaxBytes accomplish this by completely ignoring the high values?
I tried it, but it simply draws a red line across the graph at the
MaxBytes value.  Will this scale out correctly over time?
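As I understand the MRTG reference (worth verifying against your
version), MaxBytes does work this way: samples above MaxBytes are
treated as invalid and discarded unless AbsMax is also set, so a low
MaxBytes effectively clips the cron-job spikes out of the data.  A
sketch, with a placeholder target name ("cronhost"):

```cfg
# Sketch only; "cronhost" is a hypothetical target name.
# Samples above MaxBytes are rejected as invalid, so the graph
# scales to the remaining (lower-valued) data.
MaxBytes[cronhost]: 100000
# Alternatively, to keep accepting the spike samples while still
# documenting the nominal ceiling, set AbsMax to the true peak:
# AbsMax[cronhost]: 10000000
```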

Any advice is appreciated.

Bill Moran
Collaborative Fusion Inc.

wmoran at collaborativefusion.com
Phone: 412-422-3463x4023
