{"id":8642,"date":"2016-07-27T14:02:24","date_gmt":"2016-07-27T12:02:24","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/"},"modified":"2016-07-27T14:02:24","modified_gmt":"2016-07-27T12:02:24","slug":"elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/","title":{"rendered":"Elasticsearch, Kibana, Logstash and Filebeat &#8211; Centralize all your database logs (and even more)"},"content":{"rendered":"<p>When it comes to centralizing logs of various sources (operating systems, databases, webservers, etc.) the ELK stack is becoming more and more popular in the open source world. ELK stands for Elasticsearch, Logstash and Kibana. <a href=\"https:\/\/www.elastic.co\/products\/elasticsearch\" target=\"_blank\">Elasticsearch<\/a> is based on <a href=\"https:\/\/lucene.apache.org\/\" target=\"_blank\">Apache Lucene<\/a> and the primary goal is to provide distributed search and analytic functions. <a href=\"https:\/\/www.elastic.co\/products\/logstash\" target=\"_blank\">Logstash<\/a> is responsible to collect logs from a variety of systems and is able to forward these to Elasticsearch. <a href=\"https:\/\/www.elastic.co\/products\/kibana\" target=\"_blank\">Kibana<\/a> is the data visualization platform on top of Elasticsearch. Nowadays the term ELK seems not be used anymore and people speak about the Elastic Stack. In this post I&#8217;ll look at how you can use these tools to centralize your PostgreSQL log file(s) into the Elastic Stack. <\/p>\n<p><!--more--><\/p>\n<p>As usual my VMs are running CentOS although that should no be very important for the following (except for the yum commands). 
As Elasticsearch and Logstash are based on Java you&#8217;ll need to install Java before starting (Kibana ships with its own Node.js runtime). There are yum and apt repositories available but I&#8217;ll use the manual way for getting the pieces up and running.<\/p>\n<p>The goal is to have one VM running Elasticsearch, Logstash and Kibana and another VM which will run the PostgreSQL instance and Filebeat. Let&#8217;s start with the first one by installing Java and setting up a dedicated user for running the stack:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@elk ~]# yum install -y java-1.8.0-openjdk\n[root@elk ~]# groupadd elk\n[root@elk ~]# useradd -g elk elk\n[root@elk ~]# passwd elk\n[root@elk ~]# mkdir -p \/opt\/elk\n[root@elk ~]# chown elk:elk \/opt\/elk\n[root@elk ~]# su - elk\n[elk@elk ~]$ cd \/opt\/elk\n<\/pre>\n<p>The first of the products we&#8217;re going to install is Elasticsearch, which is quite easy (we&#8217;ll not set up a distributed mode, only a single node for the scope of this post). All we need to do is to download the tar file, extract it and start:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk ~]$ wget https:\/\/download.elastic.co\/elasticsearch\/release\/org\/elasticsearch\/distribution\/tar\/elasticsearch\/2.3.4\/elasticsearch-2.3.4.tar.gz\n[elk@elk elk]$ tar -axf elasticsearch-2.3.4.tar.gz\n[elk@elk elk]$ ls -l\ntotal 26908\ndrwxrwxr-x. 6 elk elk     4096 Jul 27 09:17 elasticsearch-2.3.4\n-rw-rw-r--. 
1 elk elk 27547169 Jul  7 15:05 elasticsearch-2.3.4.tar.gz\n[elk@elk elk]$ cd elasticsearch-2.3.4\n[elk@elk elasticsearch-2.3.4]$ bin\/elasticsearch &amp;\n<\/pre>\n<p>This will start up Elasticsearch and print some messages to the screen:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[2016-07-27 09:20:10,529][INFO ][node                     ] [Shinchuko Lotus] version[2.3.4], pid[10112], build[e455fd0\/2016-06-30T11:24:31Z]\n[2016-07-27 09:20:10,534][INFO ][node                     ] [Shinchuko Lotus] initializing ...\n[2016-07-27 09:20:11,090][INFO ][plugins                  ] [Shinchuko Lotus] modules [reindex, lang-expression, lang-groovy], plugins [], sites []\n[2016-07-27 09:20:11,114][INFO ][env                      ] [Shinchuko Lotus] using [1] data paths, mounts [[\/ (rootfs)]], net usable_space [46.8gb], net total_space [48.4gb], spins? [unknown], types [rootfs]\n[2016-07-27 09:20:11,115][INFO ][env                      ] [Shinchuko Lotus] heap size [1015.6mb], compressed ordinary object pointers [true]\n[2016-07-27 09:20:11,115][WARN ][env                      ] [Shinchuko Lotus] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]\n[2016-07-27 09:20:12,637][INFO ][node                     ] [Shinchuko Lotus] initialized\n[2016-07-27 09:20:12,637][INFO ][node                     ] [Shinchuko Lotus] starting ...\n[2016-07-27 09:20:12,686][INFO ][transport                ] [Shinchuko Lotus] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}\n[2016-07-27 09:20:12,690][INFO ][discovery                ] [Shinchuko Lotus] elasticsearch\/zc26XSa5SA-f_Kvm_jfthA\n[2016-07-27 09:20:15,769][INFO ][cluster.service          ] [Shinchuko Lotus] new_master {Shinchuko Lotus}{zc26XSa5SA-f_Kvm_jfthA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)\n[2016-07-27 09:20:15,800][INFO ][gateway                  ] [Shinchuko Lotus] 
recovered [0] indices into cluster_state\n[2016-07-27 09:20:15,803][INFO ][http                     ] [Shinchuko Lotus] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}\n[2016-07-27 09:20:15,803][INFO ][node                     ] [Shinchuko Lotus] started\n<\/pre>\n<p>The default port is 9200 and you should now be able to talk to Elasticsearch:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk elasticsearch-2.3.4]$ curl -X GET http:\/\/localhost:9200\/\n{\n  \"name\" : \"Shinchuko Lotus\",\n  \"cluster_name\" : \"elasticsearch\",\n  \"version\" : {\n    \"number\" : \"2.3.4\",\n    \"build_hash\" : \"e455fd0c13dceca8dbbdbb1665d068ae55dabe3f\",\n    \"build_timestamp\" : \"2016-06-30T11:24:31Z\",\n    \"build_snapshot\" : false,\n    \"lucene_version\" : \"5.5.0\"\n  },\n  \"tagline\" : \"You Know, for Search\"\n}\n<\/pre>\n<p>Looks good. The next product we&#8217;ll need to install is Kibana. The setup itself is as easy as setting up Elasticsearch:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk elasticsearch-2.3.4]$ cd \/opt\/elk\/\n[elk@elk elk]$ wget https:\/\/download.elastic.co\/kibana\/kibana\/kibana-4.5.3-linux-x64.tar.gz\n[elk@elk elk]$ tar -axf kibana-4.5.3-linux-x64.tar.gz\n[elk@elk elk]$ cd kibana-4.5.3-linux-x64\n[elk@elk kibana-4.5.3-linux-x64]$ grep elasticsearch.url config\/kibana.yml \nelasticsearch.url: \"http:\/\/localhost:9200\"\n[elk@elk kibana-4.5.3-linux-x64]$ bin\/kibana &amp;\n<\/pre>\n<p>Similar to Elasticsearch the startup messages are written to the screen:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n  log   [09:27:30.208] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready\n  log   [09:27:30.237] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch\n  log   [09:27:30.239] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to 
green - Ready\n  log   [09:27:30.242] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready\n  log   [09:27:30.253] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready\n  log   [09:27:30.257] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready\n  log   [09:27:30.261] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready\n  log   [09:27:30.263] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready\n  log   [09:27:30.270] [info][listening] Server running at http:\/\/0.0.0.0:5601\n  log   [09:27:35.320] [info][status][plugin:elasticsearch] Status changed from yellow to yellow - No existing Kibana index found\n[2016-07-27 09:27:35,513][INFO ][cluster.metadata         ] [Shinchuko Lotus] [.kibana] creating index, cause [api], templates [], shards [1]\/[1], mappings [config]\n[2016-07-27 09:27:35,938][INFO ][cluster.routing.allocation] [Shinchuko Lotus] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).\n  log   [09:27:38.746] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready\n<\/pre>\n<p>To check if Kibana is really working point your browser to http:\/\/[hostname]:5601 (192.168.22.173 in my case):<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01.png\" alt=\"kibana_01\" width=\"1912\" height=\"552\" class=\"aligncenter size-full wp-image-9937\" \/><\/a><\/p>\n<p>The third product we&#8217;ll need is Logstash. 
It is almost the same procedure for getting it up and running:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk kibana-4.5.3-linux-x64]$ cd \/opt\/elk\/\n[elk@elk elk]$ wget https:\/\/download.elastic.co\/logstash\/logstash\/logstash-all-plugins-2.3.4.tar.gz\n[elk@elk elk]$ tar -axf logstash-all-plugins-2.3.4.tar.gz\n[elk@elk elk]$ cd logstash-2.3.4\n<\/pre>\n<p>To test if Logstash is running fine, start a very <a href=\"https:\/\/www.elastic.co\/guide\/en\/logstash\/current\/first-event.html\" target=\"_blank\">simple pipeline<\/a>:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ bin\/logstash -e 'input { stdin { } } output { stdout {} }'\nSettings: Default pipeline workers: 1\nPipeline main started\n<\/pre>\n<p>Once this is up, type something on the command line to check if Logstash is responding:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\nyipphea\n2016-07-27T07:52:43.607Z elk yipphea\n<\/pre>\n<p>Looks good as well. For now we can stop Logstash again with &#8220;Control-c&#8221;:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n^CSIGINT received. Shutting down the agent. {:level=&gt;:warn}\nstopping pipeline {:id=&gt;\"main\"}\nReceived shutdown signal, but pipeline is still waiting for in-flight events\nto be processed. Sending another ^C will force quit Logstash, but this may cause data loss. {:level=&gt;:warn}\n\nPipeline main has been shutdown\n<\/pre>\n<p>Now we need to do some configuration to prepare Logstash for receiving our PostgreSQL log file(s) through <a href=\"https:\/\/www.elastic.co\/downloads\/beats\/filebeat\" target=\"_blank\">Filebeat<\/a>. 
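<\/p>\n<p>Before adding the input it does not hurt to verify that nothing else is already listening on the port we are going to use for the beats input (a quick sketch using ss, which ships with CentOS 7; no output from grep means the port is free):<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ ss -ltn | grep 5044\n[elk@elk logstash-2.3.4]$ echo $?\n1\n<\/pre>\n<p>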
Filebeat will be responsible for forwarding the PostgreSQL log file(s) to Logstash.<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ pwd\n\/opt\/elk\/logstash-2.3.4\n[elk@elk logstash-2.3.4]$ mkdir conf.d\n[elk@elk logstash-2.3.4]$ cat conf.d\/02-beats-input.conf\ninput {\n  beats {\n    port =&gt; 5044\n    ssl =&gt; false\n  }\n}\n<\/pre>\n<p>This tells Logstash to create a new input of type <a href=\"https:\/\/www.elastic.co\/products\/beats\" target=\"_blank\">beats<\/a> and to listen on port 5044 for incoming data. In addition to this <a href=\"https:\/\/www.elastic.co\/guide\/en\/logstash\/current\/input-plugins.html\" target=\"_blank\">input plugin<\/a> Logstash will need an <a href=\"https:\/\/www.elastic.co\/guide\/en\/logstash\/current\/output-plugins.html\" target=\"_blank\">output plugin<\/a> to know what to do with incoming data. As we want to send all the data to Elasticsearch we need to specify this:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ cat conf.d\/10-elasticsearch-output.conf\noutput {\n  elasticsearch {\n    hosts =&gt; [\"localhost:9200\"]\n    sniffing =&gt; true\n    manage_template =&gt; false\n    index =&gt; \"%{[@metadata][beat]}-%{+YYYY.MM.dd}\"\n    document_type =&gt; \"%{[@metadata][type]}\"\n  }\n}\n<\/pre>\n<p>Let&#8217;s test if the configuration is fine (Logstash will read all configuration files in order):<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ bin\/logstash --config \/opt\/elk\/logstash-2.3.4\/conf.d\/ --configtest\nConfiguration OK\n<\/pre>\n<p>As all seems fine we can start Logstash with our new configuration:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ bin\/logstash --config \/opt\/elk\/logstash-2.3.4\/conf.d\/ &amp;\nSettings: Default pipeline workers: 1\nPipeline main started\n<\/pre>\n<p>To be able to easily use 
the Filebeat index patterns in Kibana we&#8217;ll load the <a href=\"https:\/\/www.elastic.co\/guide\/en\/beats\/libbeat\/current\/load-kibana-dashboards.html\" target=\"_blank\">template dashboards<\/a> provided by Elastic:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ cd \/opt\/elk\/\n[elk@elk elk]$ wget http:\/\/download.elastic.co\/beats\/dashboards\/beats-dashboards-1.2.3.zip\n[elk@elk elk]$ unzip beats-dashboards-1.2.3.zip\n[elk@elk elk]$ cd beats-dashboards-1.2.3\n[elk@elk beats-dashboards-1.2.3]$ .\/load.sh\n<\/pre>\n<p>Time to switch to the PostgreSQL VM to install Filebeat:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/u01\/app\/postgres\/product\/ [PG1] pwd\n\/u01\/app\/postgres\/product\npostgres@centos7:\/u01\/app\/postgres\/product\/ [PG1] wget https:\/\/download.elastic.co\/beats\/filebeat\/filebeat-1.2.3-x86_64.tar.gz\npostgres@centos7:\/u01\/app\/postgres\/product\/ [PG1] tar -axf filebeat-1.2.3-x86_64.tar.gz\n<\/pre>\n<p>Filebeat comes with an index template for Elasticsearch which we now need to transfer to the host where Elasticsearch runs in order to load it:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/u01\/app\/postgres\/product\/filebeat-1.2.3-x86_64\/ [PG1] ls *template*\nfilebeat.template.json\npostgres@centos7:\/u01\/app\/postgres\/product\/filebeat-1.2.3-x86_64\/ [PG1] scp filebeat.template.json elk@192.168.22.173:\/var\/tmp\/\nelk@192.168.22.173's password: \nfilebeat.template.json                                                      100%  814     0.8KB\/s   00:00  \n<\/pre>\n<p>On the host where Elasticsearch runs we can now load the template into Elasticsearch:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk elk]$ curl -XPUT 'http:\/\/localhost:9200\/_template\/filebeat' -d@\/var\/tmp\/filebeat.template.json\n{\"acknowledged\":true}\n[elk@elk elk]$ \n<\/pre>\n<p>Back 
to the PostgreSQL VM we need to configure Filebeat itself by adapting the filebeat.yml configuration file:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/u01\/app\/postgres\/product\/filebeat-1.2.3-x86_64\/ [PG1] ls\nfilebeat  filebeat.template.json  filebeat.yml\n<\/pre>\n<p>There are only a few important points to configure. The first one is to tell Filebeat where to look for the PostgreSQL log files:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\nfilebeat:\n  # List of prospectors to fetch data.\n  prospectors:\n    # Each - is a prospector. Below are the prospector specific configurations\n    -\n      # Paths that should be crawled and fetched. Glob based paths.\n      # To fetch all \".log\" files from a specific level of subdirectories\n      # \/var\/log\/*\/*.log can be used.\n      # For each file found under this path, a harvester is started.\n      # Make sure no file is defined twice as this can lead to unexpected behaviour.\n      paths:\n        - \/u02\/pgdata\/PG1\/pg_log\/*.log\n<\/pre>\n<p>Afterwards make sure the Elasticsearch output plugin under the &#8220;output&#8221; section is disabled (commented out):<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n  ### Elasticsearch as output\n  #elasticsearch:\n    # Array of hosts to connect to.\n    # Scheme and port can be left out and will be set to the default (http and 9200)\n    # In case you specify an additional path, the scheme is required: http:\/\/localhost:9200\/path\n    # IPv6 addresses should always be defined as: https:\/\/[2001:db8::1]:9200\n    #hosts: [\"localhost:9200\"]\n<\/pre>\n<p>Finally enable the Logstash output plugin in the same section (provide the host and port where your Logstash is running):<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n       ### Logstash as output\n  logstash:\n    # The Logstash hosts\n    hosts: [\"192.168.22.173:5044\"]\n<\/pre>\n<p>The host and port specified here 
must match what we specified in 02-beats-input.conf when we configured Logstash above. This should be sufficient to start up Filebeat:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/u01\/app\/postgres\/product\/filebeat-1.2.3-x86_64\/ [PG1] .\/filebeat &amp;\n<\/pre>\n<p>If everything is working fine we should now be able to ask Elasticsearch for our data from the PostgreSQL log file(s):<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk elk]$ curl -XGET 'http:\/\/localhost:9200\/filebeat-*\/_search?pretty'\n{\n  \"took\" : 4,\n  \"timed_out\" : false,\n  \"_shards\" : {\n    \"total\" : 5,\n    \"successful\" : 5,\n    \"failed\" : 0\n  },\n  \"hits\" : {\n    \"total\" : 971,\n    \"max_score\" : 1.0,\n    \"hits\" : [ {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1F\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        \"message\" : \"2016-05-15 06:57:34.030 CEST - 2 - 20831 -  - @ LOG:  MultiXact member wraparound protections are now enabled\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : \"2016-07-27T08:31:44.940Z\",\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"type\" : \"log\",\n        \"input_type\" : \"log\",\n        \"fields\" : null,\n        \"count\" : 1,\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : \"centos7.local\"\n        },\n        \"offset\" : 112,\n        \"host\" : \"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    }, {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1M\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        \"message\" : \"2016-05-15 07:20:46.060 CEST - 2 - 20835 -  - @ LOG:  autovacuum launcher shutting down\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : 
\"2016-07-27T08:31:44.940Z\",\n        \"fields\" : null,\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : \"centos7.local\"\n        },\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"offset\" : 948,\n        \"type\" : \"log\",\n        \"count\" : 1,\n        \"input_type\" : \"log\",\n        \"host\" : \"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    }, {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1P\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        \"message\" : \"2016-05-15 07:20:46.919 CEST - 5 - 20832 -  - @ LOG:  checkpoint complete: wrote 0 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.000 s, sync=0.000 s, total=0.678 s; sync files=0, longest=0.000 s, average=0.000 s; distance=10908 kB, estimate=10908 kB\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : \"2016-07-27T08:31:44.940Z\",\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"input_type\" : \"log\",\n        \"count\" : 1,\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : \"centos7.local\"\n        },\n        \"type\" : \"log\",\n        \"fields\" : null,\n        \"offset\" : 1198,\n        \"host\" : \"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    }, {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1R\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        \"message\" : \"2016-05-15 09:36:34.600 CEST - 1 - 2878 -  - @ LOG:  database system was shut down at 2016-05-15 07:20:46 CEST\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : \"2016-07-27T08:31:44.940Z\",\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : 
\"centos7.local\"\n        },\n        \"offset\" : 1565,\n        \"input_type\" : \"log\",\n        \"count\" : 1,\n        \"fields\" : null,\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"type\" : \"log\",\n        \"host\" : \"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    }, {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1X\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        \"message\" : \"2016-05-15 09:39:21.313 CEST - 3 - 3048 - [local] - postgres@postgres STATEMENT:  insert into t1 generate_series(1,1000000);\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : \"2016-07-27T08:31:44.940Z\",\n        \"offset\" : 2216,\n        \"type\" : \"log\",\n        \"input_type\" : \"log\",\n        \"count\" : 1,\n        \"fields\" : null,\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : \"centos7.local\"\n        },\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"host\" : \"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    }, {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1e\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        \"message\" : \"2016-05-15 09:43:24.366 CEST - 3 - 3397 - [local] - postgres@postgres CONTEXT:  while updating tuple (0,1) in relation \\\"t1\\\"\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : \"2016-07-27T08:31:44.940Z\",\n        \"offset\" : 3165,\n        \"type\" : \"log\",\n        \"input_type\" : \"log\",\n        \"count\" : 1,\n        \"fields\" : null,\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : \"centos7.local\"\n        },\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"host\" : 
\"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    }, {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1l\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        \"message\" : \"2016-05-15 09:43:46.776 CEST - 10 - 3397 - [local] - postgres@postgres CONTEXT:  while updating tuple (0,1) in relation \\\"t1\\\"\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : \"2016-07-27T08:31:44.940Z\",\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : \"centos7.local\"\n        },\n        \"offset\" : 4045,\n        \"type\" : \"log\",\n        \"count\" : 1,\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"input_type\" : \"log\",\n        \"fields\" : null,\n        \"host\" : \"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    }, {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1r\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        \"message\" : \"2016-05-15 09:45:39.837 CEST - 9 - 3048 - [local] - postgres@postgres ERROR:  type \\\"b\\\" does not exist at character 28\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : \"2016-07-27T08:31:44.940Z\",\n        \"input_type\" : \"log\",\n        \"count\" : 1,\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : \"centos7.local\"\n        },\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"offset\" : 4799,\n        \"type\" : \"log\",\n        \"fields\" : null,\n        \"host\" : \"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    }, {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1w\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        
\"message\" : \"2016-05-15 09:45:49.843 CEST - 14 - 3048 - [local] - postgres@postgres ERROR:  current transaction is aborted, commands ignored until end of transaction block\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : \"2016-07-27T08:31:44.940Z\",\n        \"type\" : \"log\",\n        \"fields\" : null,\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : \"centos7.local\"\n        },\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"offset\" : 5400,\n        \"input_type\" : \"log\",\n        \"count\" : 1,\n        \"host\" : \"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    }, {\n      \"_index\" : \"filebeat-2016.07.27\",\n      \"_type\" : \"log\",\n      \"_id\" : \"AVYrey_1IfCoJOBMaP1x\",\n      \"_score\" : 1.0,\n      \"_source\" : {\n        \"message\" : \"2016-05-15 09:45:49.843 CEST - 15 - 3048 - [local] - postgres@postgres STATEMENT:  alter table t1 add column b int;\",\n        \"@version\" : \"1\",\n        \"@timestamp\" : \"2016-07-27T08:31:44.940Z\",\n        \"offset\" : 5559,\n        \"count\" : 1,\n        \"fields\" : null,\n        \"beat\" : {\n          \"hostname\" : \"centos7.local\",\n          \"name\" : \"centos7.local\"\n        },\n        \"source\" : \"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Sun.log\",\n        \"type\" : \"log\",\n        \"input_type\" : \"log\",\n        \"host\" : \"centos7.local\",\n        \"tags\" : [ \"beats_input_codec_plain_applied\" ]\n      }\n    } ]\n  }\n}\n<\/pre>\n<p>Quite a lot of information, so it is really working \ud83d\ude42 When we can ask Elasticsearch we should be able to use Kibana on the same data, too, shouldn&#8217;t we? Fire up your browser and point it to your Kibana URL (192.168.22.173:5601 in my case). 
You should see the &#8220;filebeat-*&#8221; index pattern in the upper left:<\/p>\n<p><a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_02.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_02.png\" alt=\"kibana_02\" width=\"1752\" height=\"642\" class=\"aligncenter size-full wp-image-9940\" \/><\/a><\/p>\n<p>Select the &#8220;filebeat-*&#8221; index pattern:<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_03.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_03.png\" alt=\"kibana_03\" width=\"1897\" height=\"826\" class=\"aligncenter size-full wp-image-9942\" \/><\/a><\/p>\n<p>To make this index the default one click on the green star:<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_04.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_04.png\" alt=\"kibana_04\" width=\"1570\" height=\"228\" class=\"aligncenter size-full wp-image-9943\" \/><\/a><\/p>\n<p>Time to discover our data by using the &#8220;Discover&#8221; menu on the top:<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_05.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_05.png\" alt=\"kibana_05\" width=\"1314\" height=\"372\" class=\"aligncenter size-full wp-image-9944\" \/><\/a><\/p>\n<p>The result of that should be that you can see all the PostgreSQL log messages on the screen:<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_06.png\"><img loading=\"lazy\" decoding=\"async\" 
src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_06.png\" alt=\"kibana_06\" width=\"1905\" height=\"894\" class=\"aligncenter size-full wp-image-9945\" \/><\/a><\/p>\n<p>Try to search something, e.g. &#8220;checkpoint complete&#8221;:<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_07.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_07.png\" alt=\"kibana_07\" width=\"1909\" height=\"838\" class=\"aligncenter size-full wp-image-9946\" \/><\/a><\/p>\n<p>Not much happening on my instance so lets do some checkpoints:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/u01\/app\/postgres\/product\/filebeat-1.2.3-x86_64\/ [PG1] sqh\npsql (9.6beta1 dbi services build)\nType \"help\" for help.\n\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.403 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.379 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.364 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.321 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.370 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.282 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.411 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 101.166 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.392 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.322 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.367 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.320 ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.328 
ms\n(postgres@[local]:5432) [postgres] &gt; checkpoint;\nCHECKPOINT\nTime: 100.285 ms\n<\/pre>\n<p>What is the picture now:<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_08.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_08.png\" alt=\"kibana_08\" width=\"1784\" height=\"597\" class=\"aligncenter size-full wp-image-9947\" \/><\/a><\/p>\n<p>Isn&#8217;t that great? All the information in near real time, just by using a browser. In my case logs are coming from a single PostgreSQL instance but logs could be coming from hundreds of instances. Logs could also be coming from webservers, application servers, operating system, network, &#8230; . All centralized in one place ready to analyze. Even better, you could use <a href=\"https:\/\/www.elastic.co\/products\/watcher\" target=\"_blank\">Watcher<\/a> to alert on changes in your data. <\/p>\n<p>Ah, I can already hear it: But I need to see my performance metrics as well. No problem, there is the <a href=\"https:\/\/www.elastic.co\/guide\/en\/logstash\/current\/plugins-inputs-jdbc.html\" target=\"_blank\">jdbc input plugin for Logstash<\/a>. What can you do with that? Once configured you can query whatever you want from your database. Let&#8217;s do a little demo.<\/p>\n<p>As we downloaded Logstash with all plugins included, the jdbc input plugin is already there:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ pwd\n\/opt\/elk\/logstash-2.3.4\n[elk@elk logstash-2.3.4]$ find . 
-name *jdbc*\n.\/vendor\/bundle\/jruby\/1.9\/gems\/jdbc-sqlite3-3.8.11.2\n.\/vendor\/bundle\/jruby\/1.9\/gems\/jdbc-sqlite3-3.8.11.2\/jdbc-sqlite3.gemspec\n.\/vendor\/bundle\/jruby\/1.9\/gems\/jdbc-sqlite3-3.8.11.2\/lib\/jdbc\n.\/vendor\/bundle\/jruby\/1.9\/gems\/jdbc-sqlite3-3.8.11.2\/lib\/sqlite-jdbc-3.8.11.2.jar\n.\/vendor\/bundle\/jruby\/1.9\/gems\/logstash-input-jdbc-3.1.0\n.\/vendor\/bundle\/jruby\/1.9\/gems\/logstash-input-jdbc-3.1.0\/lib\/logstash\/inputs\/jdbc.rb\n.\/vendor\/bundle\/jruby\/1.9\/gems\/logstash-input-jdbc-3.1.0\/lib\/logstash\/plugin_mixins\/jdbc.rb\n.\/vendor\/bundle\/jruby\/1.9\/gems\/logstash-input-jdbc-3.1.0\/logstash-input-jdbc.gemspec\n.\/vendor\/bundle\/jruby\/1.9\/gems\/sequel-4.36.0\/lib\/sequel\/adapters\/jdbc\n.\/vendor\/bundle\/jruby\/1.9\/gems\/sequel-4.36.0\/lib\/sequel\/adapters\/jdbc\/jdbcprogress.rb\n.\/vendor\/bundle\/jruby\/1.9\/gems\/sequel-4.36.0\/lib\/sequel\/adapters\/jdbc.rb\n.\/vendor\/bundle\/jruby\/1.9\/specifications\/jdbc-sqlite3-3.8.11.2.gemspec\n.\/vendor\/bundle\/jruby\/1.9\/specifications\/logstash-input-jdbc-3.1.0.gemspec\n<\/pre>\n<p>What you&#8217;ll need to provide in addition is the jdbc driver for the database you want to connect to. 
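Conceptually, the jdbc input plugin is little more than a scheduled polling loop: run the configured statement, turn every row into an event and ship it off. The following is a minimal, hypothetical Python sketch of that pattern (Logstash itself is implemented in JRuby on top of the Sequel library, as the find output above suggests); sqlite3 stands in for the PostgreSQL JDBC driver so the sketch is self-contained:

```python
# Hypothetical sketch of what the jdbc input plugin does on each
# scheduled run: execute a statement and emit one JSON event per row.
# sqlite3 is a stand-in for the real PostgreSQL connection.
import json
import sqlite3

def poll_once(conn, statement):
    """Run the configured statement and return one JSON event per row,
    roughly in the shape the json_lines output codec prints."""
    cur = conn.execute(statement)
    columns = [d[0] for d in cur.description]
    # Logstash tags every event with @version; we mimic that here.
    return [json.dumps(dict(zip(columns, row), **{"@version": "1"}))
            for row in cur.fetchall()]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE pg_stat_activity (pid INTEGER, state TEXT)")
    conn.execute("INSERT INTO pg_stat_activity VALUES (3887, 'active')")
    for event in poll_once(conn, "SELECT * FROM pg_stat_activity"):
        print(event)
```

In real Logstash the schedule option drives this loop with cron-style syntax, and @timestamp is added to each event automatically; the sketch only mimics the @version field.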
In my case for PostgreSQL:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ cd \/opt\/elk\/\n[elk@elk elk]$ mkdir jdbc\n[elk@elk elk]$ cd jdbc\/\n[elk@elk jdbc]$ wget https:\/\/jdbc.postgresql.org\/download\/postgresql-9.4.1209.jar\n[elk@elk jdbc]$ ls\npostgresql-9.4.1209.jar\n<\/pre>\n<p>All we need to do from here on is to configure another input and output plugin for Logstash:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk conf.d]$ pwd\n\/opt\/elk\/logstash-2.3.4\/conf.d\n[elk@elk conf.d]$ cat 03-jdbc-postgres-input.conf\ninput {\n    jdbc {\n        jdbc_connection_string =&gt; \"jdbc:postgresql:\/\/192.168.22.99:5432\/postgres\"\n        jdbc_user =&gt; \"postgres\"\n        jdbc_password =&gt; \"postgres\"\n        jdbc_validate_connection =&gt; true\n        jdbc_driver_library =&gt; \"\/opt\/elk\/jdbc\/postgresql-9.4.1209.jar\"\n        jdbc_driver_class =&gt; \"org.postgresql.Driver\"\n        statement =&gt; \"SELECT * from pg_stat_activity\"\n    }\n}\noutput {\n    stdout { codec =&gt; json_lines }\n}\n<\/pre>\n<p>Re-test if the configuration is fine:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ bin\/logstash --config \/opt\/elk\/logstash-2.3.4\/conf.d\/ --configtest\nConfiguration OK\n<\/pre>\n<p>And then kill and restart Logstash:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ bin\/logstash --config \/opt\/elk\/logstash-2.3.4\/conf.d\/\n<\/pre>\n<p>You should see data from pg_stat_activity right on the screen:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[elk@elk logstash-2.3.4]$ bin\/logstash --config \/opt\/elk\/logstash-2.3.4\/conf.d\/ \nSettings: Default pipeline workers: 1\nPipeline main 
started\n{\"datid\":13322,\"datname\":\"postgres\",\"pid\":3887,\"usesysid\":10,\"usename\":\"postgres\",\"application_name\":\"\",\"client_addr\":{\"type\":\"inet\",\"value\":\"192.168.22.173\"},\"client_hostname\":null,\"client_port\":58092,\"backend_start\":\"2016-07-27T13:15:25.421Z\",\"xact_start\":\"2016-07-27T13:15:25.716Z\",\"query_start\":\"2016-07-27T13:15:25.718Z\",\"state_change\":\"2016-07-27T13:15:25.718Z\",\"wait_event_type\":null,\"wait_event\":null,\"state\":\"active\",\"backend_xid\":null,\"backend_xmin\":{\"type\":\"xid\",\"value\":\"1984\"},\"query\":\"SELECT * from pg_stat_activity\",\"@version\":\"1\",\"@timestamp\":\"2016-07-27T13:15:26.712Z\"}\n{\"message\":\"2016-07-27 15:15:25.422 CEST - 1 - 3887 - 192.168.22.173 - [unknown]@[unknown] LOG:  connection received: host=192.168.22.173 port=58092\",\"@version\":\"1\",\"@timestamp\":\"2016-07-27T13:15:32.054Z\",\"source\":\"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Wed.log\",\"offset\":84795,\"type\":\"log\",\"input_type\":\"log\",\"count\":1,\"beat\":{\"hostname\":\"centos7.local\",\"name\":\"centos7.local\"},\"fields\":null,\"host\":\"centos7.local\",\"tags\":[\"beats_input_codec_plain_applied\"]}\n{\"message\":\"2016-07-27 15:15:25.454 CEST - 2 - 3887 - 192.168.22.173 - postgres@postgres LOG:  connection authorized: user=postgres database=postgres\",\"@version\":\"1\",\"@timestamp\":\"2016-07-27T13:15:32.054Z\",\"source\":\"\/u02\/pgdata\/PG1\/pg_log\/postgresql-Wed.log\",\"fields\":null,\"beat\":{\"hostname\":\"centos7.local\",\"name\":\"centos7.local\"},\"count\":1,\"offset\":84932,\"type\":\"log\",\"input_type\":\"log\",\"host\":\"centos7.local\",\"tags\":[\"beats_input_codec_plain_applied\"]}\n<\/pre>\n<p>As we want to have this data in Elasticsearch and analyze it with Kibana, adjust the configuration to look like this and then restart Logstash:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\ninput {\n    jdbc {\n        jdbc_connection_string =&gt; 
\"jdbc:postgresql:\/\/192.168.22.99:5432\/postgres\"\n        jdbc_user =&gt; \"postgres\"\n        jdbc_password =&gt; \"postgres\"\n        jdbc_validate_connection =&gt; true\n        jdbc_driver_library =&gt; \"\/opt\/elk\/jdbc\/postgresql-9.4.1209.jar\"\n        jdbc_driver_class =&gt; \"org.postgresql.Driver\"\n        statement =&gt; \"SELECT * from pg_stat_activity\"\n        schedule =&gt; \"* * * * *\"\n    }\n}\noutput {\n    elasticsearch {\n        index =&gt; \"pg_stat_activity\"\n        document_type =&gt; \"pg_stat_activity\"\n        document_id =&gt; \"%{uid}\"\n        hosts =&gt; [\"localhost:9200\"]\n    }\n}\n<\/pre>\n<p>Once you&#8217;ve restarted, head over to Kibana and create a new index:<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_09.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_09.png\" alt=\"kibana_09\" width=\"1919\" height=\"698\" class=\"aligncenter size-full wp-image-9954\" \/><\/a><\/p>\n<p>When you go to &#8220;Discover&#8221; you should see the data from pg_stat_activity:<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_10.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_10.png\" alt=\"kibana_10\" width=\"1928\" height=\"918\" class=\"aligncenter size-full wp-image-9955\" \/><\/a><\/p>\n<p>Have fun with your data &#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When it comes to centralizing logs of various sources (operating systems, databases, webservers, etc.) the ELK stack is becoming more and more popular in the open source world. ELK stands for Elasticsearch, Logstash and Kibana. Elasticsearch is based on Apache Lucene and the primary goal is to provide distributed search and analytic functions. 
Logstash is [&hellip;]<\/p>\n","protected":false},"author":29,"featured_media":8653,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229],"tags":[86,889,88,890,90,77],"type_dbi":[],"class_list":["post-8642","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-database-administration-monitoring","tag-elasticsearch","tag-filebeat","tag-kibana","tag-log","tag-logstash","tag-postgresql"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Elasticsearch, Kibana, Logstash and Filebeat - Centralize all your database logs (and even more) - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Elasticsearch, Kibana, Logstash and Filebeat - Centralize all your database logs (and even more)\" \/>\n<meta property=\"og:description\" content=\"When it comes to centralizing logs of various sources (operating systems, databases, webservers, etc.) the ELK stack is becoming more and more popular in the open source world. ELK stands for Elasticsearch, Logstash and Kibana. Elasticsearch is based on Apache Lucene and the primary goal is to provide distributed search and analytic functions. 
Logstash is [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2016-07-27T12:02:24+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1912\" \/>\n\t<meta property=\"og:image:height\" content=\"552\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Daniel Westermann\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@westermanndanie\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Westermann\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"18 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/\"},\"author\":{\"name\":\"Daniel Westermann\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"headline\":\"Elasticsearch, Kibana, Logstash and Filebeat &#8211; Centralize all your database logs (and even more)\",\"datePublished\":\"2016-07-27T12:02:24+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/\"},\"wordCount\":1122,\"commentCount\":0,\"image\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png\",\"keywords\":[\"Elasticsearch\",\"Filebeat\",\"Kibana\",\"log\",\"Logstash\",\"PostgreSQL\"],\"articleSection\":[\"Database Administration &amp; 
Monitoring\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/\",\"name\":\"Elasticsearch, Kibana, Logstash and Filebeat - Centralize all your database logs (and even more) - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png\",\"datePublished\":\"2016-07-27T12:02:24+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#primaryimage
\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png\",\"contentUrl\":\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png\",\"width\":1912,\"height\":552},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Elasticsearch, Kibana, Logstash and Filebeat &#8211; Centralize all your database logs (and even more)\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\",\"name\":\"Daniel Westermann\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"caption\":\"Daniel Westermann\"},\"description\":\"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. 
He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.\",\"sameAs\":[\"https:\/\/x.com\/westermanndanie\"],\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Elasticsearch, Kibana, Logstash and Filebeat - Centralize all your database logs (and even more) - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/","og_locale":"en_US","og_type":"article","og_title":"Elasticsearch, Kibana, Logstash and Filebeat - Centralize all your database logs (and even more)","og_description":"When it comes to centralizing logs of various sources (operating systems, databases, webservers, etc.) the ELK stack is becoming more and more popular in the open source world. ELK stands for Elasticsearch, Logstash and Kibana. Elasticsearch is based on Apache Lucene and the primary goal is to provide distributed search and analytic functions. Logstash is [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/","og_site_name":"dbi Blog","article_published_time":"2016-07-27T12:02:24+00:00","og_image":[{"width":1912,"height":552,"url":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png","type":"image\/png"}],"author":"Daniel Westermann","twitter_card":"summary_large_image","twitter_creator":"@westermanndanie","twitter_misc":{"Written by":"Daniel Westermann","Est. 
reading time":"18 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/"},"author":{"name":"Daniel Westermann","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"headline":"Elasticsearch, Kibana, Logstash and Filebeat &#8211; Centralize all your database logs (and even more)","datePublished":"2016-07-27T12:02:24+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/"},"wordCount":1122,"commentCount":0,"image":{"@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#primaryimage"},"thumbnailUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png","keywords":["Elasticsearch","Filebeat","Kibana","log","Logstash","PostgreSQL"],"articleSection":["Database Administration &amp; Monitoring"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/","url":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/","name":"Elasticsearch, Kibana, Logstash and Filebeat - Centralize all your database logs (and even more) - dbi 
Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#primaryimage"},"image":{"@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#primaryimage"},"thumbnailUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png","datePublished":"2016-07-27T12:02:24+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#primaryimage","url":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png","contentUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/kibana_01-1.png","width":1912,"height":552},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Elasticsearch, Kibana, Logstash and Filebeat &#8211; Centralize all your database logs (and even 
more)"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66","name":"Daniel Westermann","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","caption":"Daniel Westermann"},"description":"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. 
Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.","sameAs":["https:\/\/x.com\/westermanndanie"],"url":"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/8642","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/29"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=8642"}],"version-history":[{"count":0,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/8642\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media\/8653"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=8642"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=8642"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=8642"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json
\/wp\/v2\/type_dbi?post=8642"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}