Among the different possibilities for Zabbix storage, one piqued my curiosity: TimescaleDB.

TimescaleDB is a time-series database extension for PostgreSQL. Joël Cattin already blogged about it here from a general point of view. In this blog post, I will focus on the Zabbix setup and the visible improvements.

Overview

As a reminder, my Zabbix was installed with a PostgreSQL database backend, so the migration to TimescaleDB should be straightforward. I will follow these steps:

  • Install TimescaleDB
  • Enable TimescaleDB extension in PostgreSQL
  • Migrate tables to hypertables
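
Before going further, it is worth double-checking which PostgreSQL version the Zabbix database runs on, since the TimescaleDB package must match it. A quick way to do so, reusing the same postgres OS user as in the rest of this post:

echo "SELECT version();" | sudo -u postgres psql zabbix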

TimescaleDB Installation

Before starting anything, we must stop the Zabbix services:

systemctl stop zabbix-java-gateway.service
systemctl stop zabbix-agent.service
systemctl stop zabbix-server.service
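
Optionally, we can confirm that nothing is left writing to the database before touching it; systemctl is-active simply prints the state of each unit:

systemctl is-active zabbix-server.service zabbix-agent.service zabbix-java-gateway.service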

Next, we can add the TimescaleDB repository:

tee /etc/yum.repos.d/timescale_timescaledb.repo <<EOL
[timescale_timescaledb]
name=timescale_timescaledb
baseurl=https://packagecloud.io/timescale/timescaledb/el/$(rpm -E %{rhel})/\$basearch
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/timescale/timescaledb/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
EOL

Run a simple yum update, then install the package:

yum install timescaledb-2-postgresql-15
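
Depending on the repository, the timescaledb-tools package (which provides timescaledb-tune) may come along or can be installed separately. Running it is not required for this walkthrough, but it can suggest tuning values on top of the change we will do manually. The paths below are assumptions for a PGDG-style installation and must be adapted to your layout:

timescaledb-tune --pg-config /usr/pgsql-15/bin/pg_config --conf-path /u02/pgdata/15/zabbix/postgresql.conf --dry-run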

Enable TimescaleDB Extension

The next step is to make PostgreSQL preload the newly installed library by adding timescaledb to the shared_preload_libraries parameter in the /u02/pgdata/15/zabbix/postgresql.conf configuration file:

shared_preload_libraries = 'timescaledb'

On a standard Zabbix setup, there is no existing shared_preload_libraries line. If you already have one, append timescaledb to the existing value rather than replacing it completely.
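
For example, if pg_stat_statements were already preloaded (a purely hypothetical value for illustration), the line would become:

shared_preload_libraries = 'pg_stat_statements,timescaledb'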

Then, we need to restart the database:

systemctl restart postgres-zabbix.service

And we should not forget to check that it is running afterwards:

$ systemctl status postgres-zabbix.service
● postgres-zabbix.service - PostgreSQL for Zabbix
   Loaded: loaded (/etc/systemd/system/postgres-zabbix.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2023-09-04 15:08:58 CEST; 1s ago
 Main PID: 12370 (postgres)
    Tasks: 8 (limit: 23220)
   Memory: 21.6M
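
Since the service is back up, we can also verify that PostgreSQL really picked up the library; the output of this quick sanity check should list timescaledb:

echo "SHOW shared_preload_libraries;" | sudo -u postgres psql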

And one last check to confirm the extension is present1:

# select extname, extversion from pg_extension;
   extname   | extversion
-------------+------------
 plpgsql     | 1.0
 timescaledb | 2.11.2
(2 rows)

Subsequently, we must enable the extension for the zabbix database2.

echo "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;" | sudo -u postgres psql zabbix

The expected output is the following:

WARNING:
WELCOME TO
 _____ _                               _     ____________
|_   _(_)                             | |    |  _  \ ___ \
  | |  _ _ __ ___   ___  ___  ___ __ _| | ___| | | | |_/ /
  | | | |  _ ` _ \ / _ \/ __|/ __/ _` | |/ _ \ | | | ___ \
  | | | | | | | | |  __/\__ \ (_| (_| | |  __/ |/ /| |_/ /
  |_| |_|_| |_| |_|\___||___/\___\__,_|_|\___|___/ \____/
               Running version 2.11.2
For more information on TimescaleDB, please visit the following links:

 1. Getting started: https://docs.timescale.com/timescaledb/latest/getting-started
 2. API reference documentation: https://docs.timescale.com/api/latest
 3. How TimescaleDB is designed: https://docs.timescale.com/timescaledb/latest/overview/core-concepts

Note: TimescaleDB collects anonymous reports to better understand and assist our users.
For more information and how to disable, please see our docs https://docs.timescale.com/timescaledb/latest/how-to-guides/configuration/telemetry.

CREATE EXTENSION

If the extension is not preloaded, a message like this will be displayed:

FATAL:  extension "timescaledb" must be preloaded
HINT:  Please preload the timescaledb library via shared_preload_libraries.
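
As the welcome banner mentions, TimescaleDB sends anonymous telemetry by default. If this is not desired, it can be switched off with a single parameter in postgresql.conf, followed by a configuration reload:

timescaledb.telemetry_level = off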

Migrate Tables

Finally, we can run the SQL script provided with Zabbix to migrate the existing tables to hypertables.

$ cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb.sql | sudo -u zabbix psql zabbix
NOTICE:  PostgreSQL version 15.4 is valid
NOTICE:  TimescaleDB extension is detected
NOTICE:  TimescaleDB version 2.11.2 is valid
NOTICE:  migrating data to chunks
DETAIL:  Migration might take a while depending on the amount of data.
NOTICE:  migrating data to chunks
DETAIL:  Migration might take a while depending on the amount of data.
WARNING:  column type "character varying" used for "source" does not follow best practices
HINT:  Use datatype TEXT instead.
NOTICE:  migrating data to chunks
DETAIL:  Migration might take a while depending on the amount of data.
WARNING:  column type "character varying" used for "value" does not follow best practices
HINT:  Use datatype TEXT instead.
NOTICE:  migrating data to chunks
DETAIL:  Migration might take a while depending on the amount of data.
NOTICE:  migrating data to chunks
DETAIL:  Migration might take a while depending on the amount of data.
NOTICE:  migrating data to chunks
DETAIL:  Migration might take a while depending on the amount of data.
NOTICE:  TimescaleDB is configured successfully
DO

The database is migrated, so we can start the Zabbix services again:

systemctl start zabbix-java-gateway.service
systemctl start zabbix-agent.service
systemctl start zabbix-server.service
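
To make sure the server really detected TimescaleDB at startup, we can grep its log file. The path below is the default for RHEL-family packages and is an assumption; adapt it if your setup logs elsewhere:

grep -i timescaledb /var/log/zabbix/zabbix_server.log | tail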

Verifications

My first check is to list the created hypertables:

# select hypertable_name from timescaledb_information.hypertables where owner = 'zabbix';
 hypertable_name
-----------------
 history
 history_uint
 history_str
 history_text
 history_log
 trends
 trends_uint
(7 rows)
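
Behind the scenes, each hypertable is split into time-based chunks. Counting them per hypertable gives a feel for how the data is laid out, using the standard timescaledb_information.chunks view:

# select hypertable_name, count(*) as chunks from timescaledb_information.chunks group by hypertable_name order by hypertable_name;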

I can also list the jobs managed by TimescaleDB3:

# select job_id, application_name, hypertable_name FROM timescaledb_information.jobs;
 job_id |        application_name        | hypertable_name
--------+--------------------------------+-----------------
      2 | Error Log Retention Policy [2] |
      1 | Telemetry Reporter [1]         |
   1000 | Compression Policy [1000]      | history
   1001 | Compression Policy [1001]      | history_uint
   1002 | Compression Policy [1002]      | history_str
   1003 | Compression Policy [1003]      | history_text
   1004 | Compression Policy [1004]      | history_log
   1005 | Compression Policy [1005]      | trends
   1006 | Compression Policy [1006]      | trends_uint
(9 rows)
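
The timescaledb_information.job_stats view complements this with the execution history of each job, which is handy to confirm that the compression policies actually run:

# select job_id, last_run_status, last_successful_finish, total_runs, total_failures from timescaledb_information.job_stats order by job_id;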

My Zabbix server does not ingest that many values (i.e. trends, events, tags), so the storage gain might not be as significant as on a more heavily loaded server. Nevertheless, I can already see some benefits:

# SELECT pg_size_pretty(before_compression_total_bytes) as "before compression", pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('trends');
 before compression | after compression
--------------------+-------------------
 1272 kB            | 672 kB
(1 row)

# SELECT pg_size_pretty(before_compression_total_bytes) as "before compression", pg_size_pretty(after_compression_total_bytes) as "after compression" FROM hypertable_compression_stats('history_uint');
 before compression | after compression
--------------------+-------------------
 872 kB             | 528 kB
(1 row)
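
The same figures can also be pulled for all hypertables in one query instead of table by table. This is only a sketch combining the hypertables view with the stats function; hypertables without compressed chunks will show NULL values:

# SELECT h.hypertable_name,
       pg_size_pretty(c.before_compression_total_bytes) as "before compression",
       pg_size_pretty(c.after_compression_total_bytes) as "after compression"
  FROM timescaledb_information.hypertables h,
       LATERAL hypertable_compression_stats(format('%I.%I', h.hypertable_schema, h.hypertable_name)::regclass) c
  WHERE h.owner = 'zabbix';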

I will add more agents to my Zabbix setup and follow these compression figures closely.


  1. Use the command below to connect to the database:
    sudo -u postgres psql
    ↩︎
  2. Use the psql command below to switch to the zabbix database:
    \c zabbix ↩︎
  3. Jobs are created by the Zabbix housekeeper, which runs every hour by default. ↩︎