<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Yes, you are right ... that was a bit strong :-)<br>
<br>
However, I started by reading the intro, the tutorial etc., and came
away with a certain impression.<br>
I then went through the rather steep learning curve (well, started on
it - I'm not finished by a long way yet!), got something running, and
it didn't behave at all like the mental picture I had built up while
working through those intro/tutorial notes.<br>
<br>
I was ready to give up, or hack the code (which, although I am an
open source believer, I don't think is really the right answer,
unless my needs are really far away from the original objectives).<br>
<br>
Anyway, thanks to your suggestion I now have it doing what I want
(though I still think the way it actually behaves for GAUGE is broken...).<br>
<br>
My data-gathering script is in Perl.<br>
It has an inner loop which sleeps for 30 seconds, which means that the
processing time will take it over the 30-second boundary.<br>
<br>
What I did was:<br>
<blockquote>$t = time();<br>
$t = $t - ($t % 30);<br>
</blockquote>
and used $t in place of "N".<br>
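<br>
In case it helps anyone else, the relevant bit of the loop now looks
roughly like this (the rrd file path and the value are just
placeholders for whatever your own script actually gathers):<br>
<blockquote><pre>#!/usr/bin/perl
use strict;
use warnings;

my $value = 42;    # placeholder for the real measurement

# round the current time down to the previous 30-second boundary
my $t = time();
$t -= $t % 30;

# use that timestamp in place of "N" in the update
system("rrdtool", "update", "/path/to/my.rrd", "$t:$value");</pre>
</blockquote>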
<br>
Looking at the stored values, they are now integers (well,
floating-point representations thereof).<br>
<br>
Thanks again for the suggestion.<br>
<br>
Philip<br>
<br>
---------------------------<br>
On 8/19/2010 9:04 AM, Simon Hobson wrote:
<blockquote cite="mid:p06240822c89302dce913@simon.thehobsons.co.uk"
type="cite">
<pre wrap="">At 08:37 -0700 19/8/10, Philip Peake wrote:
</pre>
<blockquote type="cite">
<blockquote type="cite">
<pre wrap="">And something no-one has mentioned so far - you do not have to use
"now" in your update statement. Taking the example above, you can
compute what the time was at the last integral multiple of "step" and
use that. E.g., if you end up calling rrdtool at 00:00:07, then
instead of being 7 seconds late, you could use a timestamp of
00:00:00.
Obviously you may lose some precision in timing, but if your primary
concern is to avoid normalisation then overall you gain.
</pre>
</blockquote>
<pre wrap="">
That may actually be the best solution.
I have no control over how fast the remote systems respond with their
data, and under heavy load they will be slower.
Doing it this way, I can tolerate quite a slow response...
The other problem I was mulling over is that the data-gathering
process might (will) be restarted from time to time -- how would it
ever find out the exact time of the first update so that it could
time its own updates to be exactly N minutes, to the second, later?
Forcing the first entry onto some convenient boundary, then calculating
boundary multiples and using those, would fix my problem.
</pre>
</blockquote>
<pre wrap="">
And adding to what you wrote earlier:
At 07:07 -0700 19/8/10, Philip Peake wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Tell me how I synchronize a data source to RDD's concept of sample times?
Actually, what is RDD's concept of sample times? How does it determine
the start?
I would assume time starts with the first entry (or is it the time it is
created?).
</pre>
</blockquote>
<pre wrap="">
As Tobias wrote earlier, all times in RRD are relative to the Unix epoch
- midnight, 1st Jan 1970 (UTC). You need know nothing of when the database
was created to know when the next/previous step boundaries occurred -
they are simply an integer number of step periods from the Unix epoch.
Thus 300s steps will always fall on the hour, 5 past, 10 past and so
on; 2-hour steps will always fall at midnight, 2am, 4am, and so on (all
UTC).
A quick look shows that in my graphing routines (a Bash script), I use
the following to get graph end times that fall on a step boundary
(${Step} is set elsewhere according to the graph being drawn):
Etime=`/bin/date +%s`
Etime=$(( ${Etime} / ${Step} * ${Step} ))
</pre>
<blockquote type="cite">
<pre wrap=""> Then for a huge class of problems, actually, IMHO, most real world
problems not involving rate of change, its useless.
</pre>
</blockquote>
<pre wrap="">
That's a bit strong, I think. RRD Tool isn't intended to be the right
tool for all jobs; it is designed to do ONE job WELL. It is known and
already acknowledged that it isn't the correct tool for a great many
jobs - and I'd suggest it is NOT the right tool for the task that
started this thread.
The fact that other outside constraints mean that using another tool
is either difficult or not possible doesn't give you grounds for
complaining that RRD Tool isn't the right tool for your job.
And I would very much dispute your assertion that "most real-world
problems not involving rate of change" are not suitable for RRD Tool.
It handles gauge data types just fine - as long as your requirements
fit in with the normalisation and consolidation model RRD Tool uses.
I use RRD Tool for two main jobs at work - one is rate-based (network
traffic flows), the other is gauge-based (temperatures). It works
equally well for both.
Your argument is like saying "I don't have any drills; it's the
hammer's fault that it doesn't make tidy holes".
</pre>
</blockquote>
<br>
</body>
</html>