[rrd-users] Adding new data source - malloc failure

Wes wespvp at msg.bt.com
Tue Jul 10 22:53:18 CEST 2007

This old horse just won't die...  Nothing in the archive seems to help.

I have a 1 GB RRD database.  Using add_ds, I tried to add some data sources
I forgot to include the first time around - after I already had a lot of
production data in it.  I had originally intended to create a number of dummy
data sources that could be renamed later, but they were omitted.  When I run
add_ds, it aborts with "allocating ds_values".

Looking at the source code for restore, I see this is a malloc failure.
Watching 'top', I see the restore process grow to about 2.3 GB before it
aborts.  I copied everything over to a system I know is running 64-bit
RedHat; this time it aborted at 3 GB.  The same thing happens if I try to
restore an unmodified dump file.

I guess the questions are:

- Why does restore need at least 3x the size of the RRD database (and how
much more than that?)?  I understand restore builds the entire database in
memory, but why so much more memory than the actual file size?

- Would it be reasonable to memory map the RRD file instead of using
in-memory data structures?

- How can I resolve this problem now?  Creating a new RRD is not an
attractive option since I'd lose my historical data.  Putting the new data
sources in a new database (i.e. keeping both the old and new ones live)
would also not be a good option.
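One stopgap I'm considering: since the dump is line-oriented, patch it as a
stream instead of loading it whole the way add_ds/restore apparently do.
A minimal, untested sketch - the element names (<ds>, <cdp_prep>, <row>,
<v>) come from rrdtool's dump format, but the helper itself (add_dummy_ds)
is hypothetical and glosses over version differences in the cdp_prep
fields:

```python
import sys

def add_dummy_ds(lines, ds_name="dummy"):
    """Stream an rrdtool XML dump, appending one GAUGE data source.

    Hypothetical sketch: inserts a new <ds> definition before the first
    <rra>, a <ds> stub into each <cdp_prep>, and an extra NaN <v> in
    every <row>.  Works line by line, so memory use stays flat no matter
    how large the dump is.
    """
    new_ds = (
        "\t<ds>\n"
        f"\t\t<name> {ds_name} </name>\n"
        "\t\t<type> GAUGE </type>\n"
        "\t\t<minimal_heartbeat> 600 </minimal_heartbeat>\n"
        "\t\t<min> NaN </min>\n"
        "\t\t<max> NaN </max>\n"
        "\t\t<last_ds> U </last_ds>\n"
        "\t\t<value> NaN </value>\n"
        "\t\t<unknown_sec> 0 </unknown_sec>\n"
        "\t</ds>\n"
    )
    cdp_stub = (
        "\t\t\t<ds><value> NaN </value>"
        "<unknown_datapoints> 0 </unknown_datapoints></ds>\n"
    )
    inserted_header = False
    for line in lines:
        if "<rra>" in line and not inserted_header:
            yield new_ds            # definition goes before the first RRA
            inserted_header = True
        if "</cdp_prep>" in line:
            yield cdp_stub          # per-RRA state for the new DS
        if "</row>" in line:
            # every archived row gets one more (unknown) value
            line = line.replace("</row>", "<v> NaN </v></row>")
        yield line

if __name__ == "__main__":
    # e.g.: rrdtool dump old.rrd | python add_dummy_ds.py > patched.xml
    sys.stdout.writelines(add_dummy_ds(sys.stdin))
```

The patched XML would then go through a plain "rrdtool restore" - which of
course still hits the same restore-side memory problem, so this only helps
if restore itself can cope once add_ds is out of the picture.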
