[rrd-users] Disk I/O - use tmpfs
Ofer Inbar
cos at aaaaa.org
Thu Mar 13 17:17:20 CET 2008
Last fall we reached a point where gmetad was doing too many writes
and causing the server it was running on to spend much of its time in
I/O wait, slowing everything else down. After some suggestions from
this list, I moved our Ganglia database to using tmpfs, which has been
working beautifully since then.
Excerpts from the emails I sent about it then:
----------------------------------------------------------------------
Here are my additions to /etc/init.d/gmetad:
TMPFS=/ganglia/rrds
At the beginning of the start section:
    # restore from disk to tmpfs if necessary
    if [ "$TMPFS" -a ! -d $TMPFS ]; then
        echo -n "Restoring /ganglia/rrds from /var/lib/ganglia/rrds..."
        rsync -azH /var/lib/ganglia/rrds /ganglia/
        echo "done."
    fi
At the end of the stop section:
    # back up from tmpfs to disk
    if [ "$TMPFS" -a $RETVAL -eq 0 -a -d $TMPFS/__SummaryInfo__ ]; then
        echo -n "Saving /ganglia/rrds to /var/lib/ganglia/rrds..."
        rsync -azH /ganglia/rrds /var/lib/ganglia/
        echo "done."
    fi
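For reference, here is a quick way to sanity-check those additions
(assuming the same paths as above; the ls is just an illustrative spot
check):
    # stop gmetad: the stop section should rsync /ganglia/rrds back to disk
    service gmetad stop
    ls /var/lib/ganglia/rrds    # the on-disk copy should now be current
    # start gmetad: the start section restores the tmpfs copy from disk,
    # but only if /ganglia/rrds does not already exist (e.g. after a reboot)
    service gmetad start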
And here is my cron job to back up from tmpfs to disk every 10min:
0,10,20,30,40,50 * * * * if [ -d /ganglia/rrds/__SummaryInfo__ ]; then rsync -azH /ganglia/rrds/ /var/lib/ganglia/rrds/; fi 2>&1 | /usr/local/bin/cronmail sysadmin
cronmail is a very short script I wrote that emails the output piped
to it, with a subject line that includes the host, time, and first
line of the output, but only if that output was non-blank; if
everything it read was whitespace, it silently does nothing:
http://thwip.sysadmin.org/cronmail
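(For the curious, a minimal sketch of the idea; this is not the real
script, and the subject format here is just illustrative:)
    #!/bin/sh
    # cronmail-like wrapper (illustrative sketch): read stdin, and if it
    # contains any non-whitespace, mail it to $1 with a subject built from
    # the host, the time, and the first line of the output.
    RECIPIENT="$1"
    INPUT=$(cat)
    if [ -n "$(printf '%s' "$INPUT" | tr -d '[:space:]')" ]; then
        FIRSTLINE=$(printf '%s\n' "$INPUT" | head -n 1)
        printf '%s\n' "$INPUT" | mail -s "$(hostname) $(date): $FIRSTLINE" "$RECIPIENT"
    fi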
Note: Because I run the rsync without --delete, if you delete
something from the active Ganglia database on the tmpfs partition, it
will linger on disk unless you delete it there too. If you reboot,
the previously deleted files will reappear in the tmpfs copy as well.
On the other hand, this protects you from losing anything in the
on-disk (backup) copy; the drawback is that if you want to delete
something, you have to delete it from both copies. Or you could use
rsync --delete, as sketched below.
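(If you would rather have the disk copy mirror the tmpfs copy exactly,
deletions included, the cron rsync could instead look something like
this, at the cost of the safety net just described:)
    # mirror tmpfs to disk, propagating deletions
    rsync -azH --delete /ganglia/rrds/ /var/lib/ganglia/rrds/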
----------------------------------------------------------------------
Ben Hartshorne <ganglia at green.hartshorne.net> wrote:
> I had to edit grub.conf to adjust the size of the ramdisk. By default
> they're 64MB, but with an argument to the kernel start line, you can set
> it to whatever size you need. I chose 4x the current RRD directory, to
> accommodate new hosts and more metrics. It is unfortunate that a reboot
> is required to change the size of the ramdisk.
Which is much easier to do with tmpfs, see below...
Seth Graham <sether at fnal.gov> wrote:
> Once I switched to tmpfs it became rock solid. tmpfs has the added
> advantage of being easier to configure.. no editing kernel boot
> arguments, just pass the mount options you want and it does it all for you.
>
> Ramdisk is probably better on a busy system where you don't want to risk a
> bunch of swapping, but on a dedicated gmetad host I recommend tmpfs.
Either way, you have to allocate some RAM to the RAMdisk or tmpfs.
With tmpfs, you can set an upper limit by putting a size option in
/etc/fstab, like so:
none /ganglia tmpfs size=1024M,mode=755,uid=nobody,gid=root 0 0
Pick an amount of memory that makes sense for you. If you use a
RAMdisk, I assume you'll have to allocate all of that RAM to it
whether you're using it or not, whereas with tmpfs, it'll only use as
much as is needed to store the files, until it reaches the upper limit.
That means that if you're memory limited, tmpfs with size= is actually
better than RAMdisk. You won't risk any more swapping than you would
with a RAMdisk of the same size as your tmpfs size=, and you'll likely
do better because you're not using all the space you set aside.
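One handy side effect is that you can watch actual usage grow toward
that limit with ordinary tools; for example (assuming the fstab entry
above):
    df -h /ganglia          # reports the size= value as the filesystem size, plus current usage
    du -sh /ganglia/rrds    # how much the RRD tree actually occupies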
If you ever want to resize the /ganglia partition, with my setup:
1. service gmetad stop
   umount /ganglia
2. edit /etc/fstab, change the size= value for /ganglia
3. mount /ganglia
   service gmetad start
That's it. Do it quickly and you probably won't even see a gap.
(You could actually edit fstab *before* stopping gmetad and umounting.)
----------------------------------------------------------------------
-- Cos