<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
<font face="Bitstream Vera Sans Mono">Peter,<br>
<br>
Thank you for your reply.<br>
</font><br>
Peter Kristolaitis wrote:<br>
<blockquote cite="mid:485167F2.6070008@alter3d.ca" type="cite">You may
want to look into using the master/slave functionality to distribute
the load rather than making more probes on a single host.<br>
</blockquote>
Yes, we have looked at doing a master/slave setup, so we may try that
in the future.<br>
<br>
<blockquote cite="mid:485167F2.6070008@alter3d.ca" type="cite"><br>
Also, disks aren't faster just by virtue of being on a SAN. Similarly
configured volumes (same number of disks, same RAID level, same amount
of controller cache) are almost invariably faster with DAS, as you
don't have the added latency of the FC (or, even worse, iSCSI) packet
switching network (plus a SAN typically has a shared bus and cache).<br>
</blockquote>
I didn't mean to imply that a SAN is automatically faster than internal
drives, but ours would be, based on the array and RAID level we
normally use. We would need to convince our SAN admins to give us some
disk, however.<br>
<br>
<blockquote cite="mid:485167F2.6070008@alter3d.ca" type="cite"><br>
Is there a real, technical reason to use exactly 30 pings? For example,
do you need that level of granularity for % loss? Could you get by with
15 or 20? You'd still get host up/down notifications and improved
performance, at the cost of less granularity for loss % -- which often
(though not always) isn't an important metric when monitoring branches
(2 packets lost out of 30 = 6.7%, 2 packets out of 20 = 10% -- does that
difference actually matter in your case? Could it be compensated for by
slightly increasing loss % thresholds for alerts?)<br>
</blockquote>
We would like as many pings in the shortest interval we can manage,
because a branch connection could be down for a few minutes and we
would only see some packet loss over the 5-minute interval instead of
seeing the node as actually down. The last time I reconfigured, the
network group had requested more granularity (maybe they just wanted to
see more smoke on the graphs), but I think what they really wanted all
along was a smaller interval.<br>
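For what it's worth, the granularity tradeoff above is easy to put numbers on: each lost packet moves the reported loss by 100/N percentage points, where N is the ping count. A quick sketch (plain Python, not part of any Nagios config; the numbers are just the ones from this thread):<br>
<br>

```python
# Each lost packet out of N pings shifts the reported loss % by 100/N,
# so fewer pings means a coarser loss metric.
def loss_pct(lost, total):
    """Return packet loss as a percentage, rounded to one decimal."""
    return round(100.0 * lost / total, 1)

for total in (30, 20, 15):
    print(f"{total} pings, 2 lost -> {loss_pct(2, total)}% loss "
          f"(step size {loss_pct(1, total)}% per packet)")
```

So 2 lost out of 30 reports 6.7% while 2 out of 20 reports 10.0%; whether that difference matters depends on where the alert thresholds sit.<br>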
<br>
<br>
</body>
</html>