
DatabaseConversionSpecification

Proposal for conversion of PerfParse binary data.


Members of perfparse-devel at lists.sf.net
2004-10-29 Revision 1.


1. Brief.


Binary performance data extracted from Nagios is currently stored in a single table holding the time, host, service and metric name, together with other information gathered with the metric value. This suffers vast duplication of information as well as a less than ideal key structure. It is proposed to convert this table into a new format that is more efficient with storage space and faster to query.

2. Requirements.

The functional requirements of the new structure are as follows:

2.1 Hold the metric value as a DOUBLE. Allow NULL to indicate a missing
value, which should be shown as a gap on a graph.

2.2 Store the State of the metric value. This allows SQL queries on the
state. This may be NULL if it cannot be calculated.

2.3 Store the time in a format that is easy to program against and portable
across different DBMSs.

2.4 Store the maximum and minimum range of the metric with every value. This
may be NULL if not recorded.

2.5 Store the warning and critical ranges, each as either an inside or an
outside range. A bound of NULL will indicate infinity. A range type of
NULL will indicate no range.

2.6 Offer the correct keys to support likely queries, including:

6.1 Extraction of an ordered series of data between two times.

6.2 Extraction of the last entered value for each metric (see the sketch
after this list).

6.3 Extraction of data relative to a specific host, service or metric.

2.7 Store the data in an efficient format that is small and fast to extract.

2.8 Use ANSI SQL wherever possible.

2.9 Referential integrity against the data tables will not be important.

2.10 Duplicate data should be impossible to add.
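
As an illustration of requirement 2.6.2, the last entered value for each
metric could be extracted with a query along the following lines. This is
only a sketch, not code taken from PerfParse, and it assumes a DBMS with
sub-query support:

SELECT b.host, b.service, b.metric, b.ctime, b.value
FROM perfdata_bin b
JOIN (
    SELECT host, service, metric, MAX(ctime) AS last_ctime
    FROM perfdata_bin
    GROUP BY host, service, metric
) latest
  ON  b.host    = latest.host
  AND b.service = latest.service
  AND b.metric  = latest.metric
  AND b.ctime   = latest.last_ctime;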



3. Table Schema

The two new tables will be defined as:

3.1 perfdata_bin

CREATE TABLE IF NOT EXISTS perfdata_bin (
host INT NOT NULL,
INDEX perfdata_bin_ix0 (host),
FOREIGN KEY (host)
REFERENCES perfdata_host (host_id)
ON DELETE RESTRICT,
service INT NOT NULL,
metric INT NOT NULL,
ctime DATETIME NOT NULL,
PRIMARY KEY (host, service, metric, ctime),
value DOUBLE,
state TINYINT NOT NULL,
bin_extra_id INT
) TYPE=InnoDB;

3.2 perfdata_bin_extra

CREATE TABLE IF NOT EXISTS perfdata_bin_extra (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
metric INT NOT NULL,
data_hash BIGINT UNSIGNED NOT NULL,
INDEX extra_id0 (metric, data_hash),
warn_lower DOUBLE,
warn_upper DOUBLE,
warn_range ENUM ('inner','outer'),
crit_lower DOUBLE,
crit_upper DOUBLE,
crit_range ENUM ('inner','outer'),
min DOUBLE,
max DOUBLE
) TYPE=InnoDB;


4. Proposed Use Snapshots

4.1 The extraction of data for a graph between two dates can be completed as:

SELECT *
FROM perfdata_bin
JOIN perfdata_bin_extra
ON perfdata_bin.bin_extra_id = perfdata_bin_extra.id
WHERE host = 1 AND service = 2 AND perfdata_bin.metric = 3
AND ctime BETWEEN '2004-01-01' AND '2004-12-31'
ORDER BY ctime;

4.2 Addition of data will be completed as follows (a sketch in SQL is given
after this list):

2.1 Calculation of the hash of the extra data, e.g. MD5(";1;2;3;4"). This
value will be supplied by the parser.

2.2 A check will be made against the key 'extra_id0' to find out whether
this extra data has already been entered. If it has not, it will be
inserted now. In either case, the 'id' of the extra row will be
recorded.

2.3 The perfdata_bin table will be correctly populated with the new value.
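
A minimal SQL sketch of this sequence, assuming the parser has already
reduced the hash to a BIGINT; the identifiers and values below are purely
illustrative:

-- Step 2.2: look up the extra row by its (metric, data_hash) key.
SELECT id FROM perfdata_bin_extra
WHERE metric = 3 AND data_hash = 1234567890;

-- Only if no row was returned: insert the extra data.
INSERT INTO perfdata_bin_extra
    (metric, data_hash, warn_lower, warn_upper, warn_range,
     crit_lower, crit_upper, crit_range, min, max)
VALUES (3, 1234567890, 0, 80, 'outer', 0, 90, 'outer', 0, 100);

-- Step 2.3: add the new reading, referencing whichever 'id' was found
-- or just created.
INSERT INTO perfdata_bin (host, service, metric, ctime, value, state, bin_extra_id)
VALUES (1, 2, 3, '2004-10-29 12:00:00', 42.5, 0, LAST_INSERT_ID());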

4.3 Sample ranges.

3.1 A range with a single value, n. This implies an outside range, from
negative infinity to zero and from n to infinity:

warn_lower = 0, warn_upper = n, warn_range = 'outer'

3.2 A critical range from 10 to 20:

crit_lower = 10, crit_upper = 20, crit_range = 'inner'

3.3 A critical range from 100 to infinity:

crit_lower = 100, crit_upper = NULL, crit_range = 'inner'
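
As an illustration of how these ranges might be read back in SQL, the
following sketch flags a value as critical, treating a NULL bound as
infinity. This is only an interpretation of the semantics above, not
necessarily how PerfParse computes the stored state:

SELECT b.value,
       CASE e.crit_range
         WHEN 'inner' THEN
               (e.crit_lower IS NULL OR b.value >= e.crit_lower)
           AND (e.crit_upper IS NULL OR b.value <= e.crit_upper)
         WHEN 'outer' THEN
               (e.crit_lower IS NOT NULL AND b.value < e.crit_lower)
            OR (e.crit_upper IS NOT NULL AND b.value > e.crit_upper)
       END AS critical        -- NULL when no critical range is defined
FROM perfdata_bin b
JOIN perfdata_bin_extra e ON b.bin_extra_id = e.id;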


5. Conversion process.

The conversion of users' data will take place in two stages:

i) Conversion of all programs and database schema.
ii) Conversion of data.

5.1 The conversion of the programs will require an update sympathetic to the
new schema. The database schema change will be applied as a set of SQL
operations using the existing perfparse-db-update tool (an illustrative
sketch is given below):

1.1 The numeric indexes for the hosts, services and metrics will be added.

1.2 The foreign key references to the old tables will be removed.

1.3 The new tables will be created.

After these stages, PerfParse should function without error, albeit with
no historical data. New data may be added without issue.
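
The actual statements are issued by the perfparse-db-update tool; purely as
an illustration, the operations could resemble the following. The old-schema
table, column and constraint names used here are assumptions, not the real
PerfParse schema:

-- 1.1 Add a numeric index for host lookups (index name illustrative).
ALTER TABLE perfdata_host ADD INDEX perfdata_host_ix0 (host_id);

-- 1.2 Drop a foreign key reference to one of the old tables
--     (table and constraint names assumed).
ALTER TABLE perfdata_service DROP FOREIGN KEY perfdata_service_fk0;

-- 1.3 Create the new tables exactly as defined in sections 3.1 and 3.2
--     (CREATE TABLE statements as given above).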

5.2 The conversion of the data will take place using a suite of tools
provided for this one-hit operation. The method is designed to be fast
and sympathetic to users with limited disk space (an illustrative
extraction query follows this list):

2.1 The data will be extracted into an efficient text format, one file
for each metric. This may be done in one hit, or one metric at a
time.

2.2 The data will be imported into the new schema by piping the
extracted data into an import program.

2.3 The original table will be dropped, thereby recovering the
table space.
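
The extraction and import tools belong to the conversion suite itself;
purely as an illustration of step 2.1, a per-metric extraction from the old
combined table might resemble the query below. The table and column names
are assumptions, not the real PerfParse schema:

SELECT ctime, value
  INTO OUTFILE '/tmp/metric_00001.txt'
  FIELDS TERMINATED BY ';'
  FROM perfdata_service_metric_data
 WHERE host_name = 'www1'
   AND service_description = 'HTTP'
   AND metric = 'response_time'
 ORDER BY ctime;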




