> I have one problem. When the server application is restarted, the
> counters are all zeroed. This causes RRD to assume that the counter
> has rolled over at some specific word size and to compute the average
> throughput based on that assumption. Since the server never gets
> anywhere close to filling the word size of an unsigned long int, the
> RRD's assumptions are way, way off. This shows up on the graph as a
> huge ugly spike, which is wrong and makes it look like my server's
> rate control is faulty.
>
> When I create the .rrd file, I use
>
>     "DS:readbytes:COUNTER:600:0:1250000",
>
> for each client, because the server is connected to a 10bT ethernet,
> and 1250000 bytes per second is the maximum theoretical throughput.
> This limits the spikes to 10Mbps, but since the actual data is far
> closer to 1Mbps, the graphs still look horrible. MRTG never had this
> problem.
>
> Is there a workaround?
> Dave

To be exact, you are using a workaround right now. The proper way to
handle this is to log an "U"nknown value.

Assume you take a sample and the counter is 12345; the next time you
take a sample the counter has been reset and reads 123. What do you
know? Nothing, apart from the fact that a reset occurred. You don't
know at which point the counter reset. Was it at 12356? 12367? Your
front end should notice the reset and insert the "U" value.

Regards,
Alex
--
* To unsubscribe from the rrd-users mailing list, send a message with
  the subject: unsubscribe to rrd-users-request@list.ee.ethz.ch
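The reset detection Alex describes can be sketched as a small front-end helper. This is a minimal illustration, not the poster's actual collector: the function names and the `traffic.rrd` path are hypothetical, though `rrdtool update <file> N:<value>` and the special value `U` are standard rrdtool usage. A COUNTER reading smaller than the previous one here means the server restarted, so we log `U` rather than let rrdtool assume a 32- or 64-bit wrap:

```python
import subprocess

def reading_for_update(prev, cur):
    """Return the value string to feed to 'rrdtool update' for this sample.

    prev is the previous counter reading (or None on the first sample);
    cur is the current reading. A current value below the previous one
    is treated as a counter reset, and 'U' (unknown) is logged so rrdtool
    does not compute a bogus rate from an assumed wrap.
    """
    if prev is not None and cur < prev:
        return "U"
    return str(cur)

def update_rrd(rrd_path, prev, cur):
    # Builds e.g. 'rrdtool update traffic.rrd N:U' after a restart,
    # or 'rrdtool update traffic.rrd N:12400' in the normal case.
    value = reading_for_update(prev, cur)
    subprocess.run(["rrdtool", "update", rrd_path, "N:" + value], check=True)
```

The front end would remember the last reading it sampled (e.g. in a state file) and call `update_rrd("traffic.rrd", prev, cur)` each polling interval; after a server restart the single `U` sample marks the interval as unknown instead of producing the spike.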