{"id":8987,"date":"2016-10-01T13:18:59","date_gmt":"2016-10-01T11:18:59","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/"},"modified":"2016-10-01T13:18:59","modified_gmt":"2016-10-01T11:18:59","slug":"running-postgresql-on-zfs-on-linux-compression","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/","title":{"rendered":"Running PostgreSQL on ZFS on Linux &#8211; Compression"},"content":{"rendered":"<p>In the last posts in this little series we looked at <a href=\"http:\/\/dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux\/\" target=\"_blank\" rel=\"noopener\">how to get a ZFS file system up and running on a CentOS 7 host<\/a> and <a href=\"http:\/\/dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-fun-with-snapshots-and-clones\/\" target=\"_blank\" rel=\"noopener\">how snapshots and clones can be used to simply processes<\/a> such as testing and cloning PostgreSQL instances. 
In this post we&#8217;ll look at another feature of ZFS: compression.<\/p>\n<p><!--more--><\/p>\n<p>The current status of my ZFS file systems is:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@centos7 ~] zfs list\nNAME            USED  AVAIL  REFER  MOUNTPOINT\npgpool          170M  9.46G  20.5K  \/pgpool\npgpool\/pgdata   169M  9.46G   169M  \/pgpool\/pgdata\n<\/pre>\n<p>To check if compression is enabled:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@centos7 ~] zfs get compression pgpool\/pgdata\nNAME           PROPERTY     VALUE     SOURCE\npgpool\/pgdata  compression  off       default\n<\/pre>\n<p>Let&#8217;s create another file system and enable compression for it:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@centos7 ~] zfs create pgpool\/pgdatacompressed\n[root@centos7 ~] zfs list\nNAME                      USED  AVAIL  REFER  MOUNTPOINT\npgpool                    170M  9.46G  20.5K  \/pgpool\npgpool\/pgdata             169M  9.46G   169M  \/pgpool\/pgdata\npgpool\/pgdatacompressed    19K  9.46G    19K  \/pgpool\/pgdatacompressed\n[root@centos7 ~] zfs get compression pgpool\/pgdatacompressed\nNAME                     PROPERTY     VALUE     SOURCE\npgpool\/pgdatacompressed  compression  off       default\n[root@centos7 ~] zfs set compression=on pgpool\/pgdatacompressed\n[root@centos7 ~] zfs get compression pgpool\/pgdatacompressed\nNAME                     PROPERTY     VALUE     SOURCE\npgpool\/pgdatacompressed  compression  on        local\n<\/pre>\n<p>You can ask ZFS to report the compression ratio for a file system:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@centos7 ~] zfs get compressratio pgpool\/pgdatacompressed\nNAME                     PROPERTY       VALUE  SOURCE\npgpool\/pgdatacompressed  compressratio  1.00x  -\n[root@centos7 ~] chown postgres:postgres \/pgpool\/pgdatacompressed\/\n<\/pre>\n<p>The ratio is 1.00x because we do not have any data yet. 
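<\/p>\n<p>A side note on the algorithm: compression=on selects the default algorithm of your ZFS release (lzjb on older ZFS on Linux versions). If your version supports it, lz4 is usually a better choice because it is very fast and gives up early on incompressible data. Switching would look like this (not applied for the tests below):<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@centos7 ~] zfs set compression=lz4 pgpool\/pgdatacompressed\n[root@centos7 ~] zfs get compression pgpool\/pgdatacompressed\nNAME                     PROPERTY     VALUE     SOURCE\npgpool\/pgdatacompressed  compression  lz4       local\n<\/pre>\n<p>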
Let&#8217;s copy the PostgreSQL cluster from the uncompressed file system into our new compressed file system:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/home\/postgres\/ [PG1] cp -pr \/pgpool\/pgdata\/* \/pgpool\/pgdatacompressed\/\npostgres@centos7:\/home\/postgres\/ [PG1] ls -l \/pgpool\/pgdatacompressed\/\ntotal 30\ndrwx------. 6 postgres postgres     6 Sep 29 14:00 base\ndrwx------. 2 postgres postgres    54 Sep 29 14:27 global\ndrwx------. 2 postgres postgres     3 Sep 28 15:11 pg_clog\ndrwx------. 2 postgres postgres     2 Sep 28 15:11 pg_commit_ts\ndrwx------. 2 postgres postgres     2 Sep 28 15:11 pg_dynshmem\n-rw-------. 1 postgres postgres  4468 Sep 28 15:11 pg_hba.conf\n-rw-------. 1 postgres postgres  1636 Sep 28 15:11 pg_ident.conf\ndrwxr-xr-x. 2 postgres postgres     2 Sep 28 15:11 pg_log\ndrwx------. 4 postgres postgres     4 Sep 28 15:11 pg_logical\ndrwx------. 4 postgres postgres     4 Sep 28 15:11 pg_multixact\ndrwx------. 2 postgres postgres     3 Sep 29 14:27 pg_notify\ndrwx------. 2 postgres postgres     2 Sep 28 15:11 pg_replslot\ndrwx------. 2 postgres postgres     2 Sep 28 15:11 pg_serial\ndrwx------. 2 postgres postgres     2 Sep 28 15:11 pg_snapshots\ndrwx------. 2 postgres postgres     5 Sep 29 14:46 pg_stat\ndrwx------. 2 postgres postgres     2 Sep 29 14:46 pg_stat_tmp\ndrwx------. 2 postgres postgres     3 Sep 28 15:11 pg_subtrans\ndrwx------. 2 postgres postgres     2 Sep 28 15:11 pg_tblspc\ndrwx------. 2 postgres postgres     2 Sep 28 15:11 pg_twophase\n-rw-------. 1 postgres postgres     4 Sep 28 15:11 PG_VERSION\ndrwx------. 3 postgres postgres     8 Sep 29 14:26 pg_xlog\n-rw-------. 1 postgres postgres    88 Sep 28 15:11 postgresql.auto.conf\n-rw-------. 1 postgres postgres 21270 Sep 28 15:11 postgresql.conf\n-rw-------. 
1 postgres postgres    69 Sep 29 14:27 postmaster.opts\n<\/pre>\n<p>We should already see a difference, shouldn&#8217;t we?<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/home\/postgres\/ [PG1] df -h | grep pgdata\npgpool\/pgdata            9.6G  170M  9.4G   2% \/pgpool\/pgdata\npgpool\/pgdatacompressed  9.5G   82M  9.4G   1% \/pgpool\/pgdatacompressed\n<\/pre>\n<p>Not bad: less than half the size. We should now see a compression ratio other than 1.00x: <\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@centos7 ~] zfs get compressratio pgpool\/pgdatacompressed\nNAME                     PROPERTY       VALUE  SOURCE\npgpool\/pgdatacompressed  compressratio  1.93x  -\n<\/pre>\n<p>Let&#8217;s generate some data in our two PostgreSQL instances and check the time it takes, as well as the size of the file systems afterwards. As in the last post, the second instance just gets a different port; everything else is identical:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/home\/postgres\/ [PG1] pg_ctl start -D \/pgpool\/pgdata\npostgres@centos7:\/home\/postgres\/ [PG1] sed -i 's\/#port = 5432\/port=5433\/g' \/pgpool\/pgdatacompressed\/postgresql.conf\npostgres@centos7:\/home\/postgres\/ [PG1] FATAL:  data directory \"\/pgpool\/pgdatacompressed\" has group or world access\npostgres@centos7:\/home\/postgres\/ [PG1] chmod o-rwx,g-rwx \/pgpool\/pgdatacompressed\/\npostgres@centos7:\/home\/postgres\/ [PG1] pg_ctl start -D \/pgpool\/pgdatacompressed\/\n<\/pre>\n<p>This is the script to generate some data:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">\n\\timing\n\\c postgres\ndrop database if exists dataload;\ncreate database dataload;\n\\c dataload\ncreate table dataload ( a bigint\n                      , b varchar(100)\n                      , c timestamp\n                      );\nwith \n  data_generator_num as\n     ( select *\n         from generate_series ( 1\n                         
     , 1000000 ) nums\n     ) \ninsert into dataload\nselect data_generator_num.nums\n     , md5(data_generator_num.nums::varchar)\n     , current_date+data_generator_num.nums\n from data_generator_num;\n<\/pre>\n<p>I will run the script two times on each instance. For the instance on the uncompressed file system:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">\n-- FIRST RUN\npostgres=# \\i generate_data.sql\nTiming is on.\nYou are now connected to database \"postgres\" as user \"postgres\".\nDROP DATABASE\nTime: 720.626 ms\nCREATE DATABASE\nTime: 4631.212 ms\nYou are now connected to database \"dataload\" as user \"postgres\".\nCREATE TABLE\nTime: 6.517 ms\nINSERT 0 1000000\nTime: 28668.343 ms\n-- SECOND RUN\ndataload=# \\i generate_data.sql\nTiming is on.\nYou are now connected to database \"postgres\" as user \"postgres\".\nDROP DATABASE\nTime: 774.061 ms\nCREATE DATABASE\nTime: 2721.169 ms\nYou are now connected to database \"dataload\" as user \"postgres\".\nCREATE TABLE\nTime: 7.374 ms\nINSERT 0 1000000\nTime: 32168.043 ms\ndataload=# \n<\/pre>\n<p>For the instance on the compressed file system:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">\n-- FIRST RUN\npostgres=# \\i generate_data.sql\nTiming is on.\nYou are now connected to database \"postgres\" as user \"postgres\".\npsql:generate_data.sql:3: NOTICE:  database \"dataload\" does not exist, skipping\nDROP DATABASE\nTime: 0.850 ms\nCREATE DATABASE\nTime: 4281.965 ms\nYou are now connected to database \"dataload\" as user \"postgres\".\nCREATE TABLE\nTime: 5.120 ms\nINSERT 0 1000000\nTime: 30606.966 ms\n-- SECOND RUN\ndataload=# \\i generate_data.sql\nTiming is on.\nYou are now connected to database \"postgres\" as user \"postgres\".\nDROP DATABASE\nTime: 2359.120 ms\nCREATE DATABASE\nTime: 3267.151 ms\nYou are now connected to database \"dataload\" as user \"postgres\".\nCREATE TABLE\nTime: 8.665 ms\nINSERT 0 1000000\nTime: 23474.290 ms\ndataload=# \n<\/pre>\n<p>Even though the 
numbers are quite bad (several seconds to create an empty database), the fastest load was the second one on the compressed file system. So at least compression does not slow things down. I have to admit that I did not do any tuning on the file systems, and my VM does not have much memory (512MB), which is far too little when you work with ZFS (ZFS needs a lot of memory, at least 1GB). <\/p>\n<p>So, what about the size of the data? First, let&#8217;s check what PostgreSQL is telling us:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">\n-- instance on the uncompressed file system\ndataload=# select * from pg_size_pretty ( pg_relation_size ( 'dataload' ));\n pg_size_pretty \n----------------\n 81 MB\n(1 row)\n-- instance on the compressed file system\ndataload=# select * from pg_size_pretty ( pg_relation_size ( 'dataload' ));\n pg_size_pretty \n----------------\n 81 MB\n(1 row)\n<\/pre>\n<p>Exactly the same, which is not surprising as PostgreSQL sees the files as if they were uncompressed (please be aware that the my_app_table from the <a href=\"http:\/\/dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-fun-with-snapshots-and-clones\/\" target=\"_blank\" rel=\"noopener\">last post<\/a> is still there, which is why the file system usage in total is larger than you might expect). It is interesting how the size is reported on the compressed file system, depending on how you ask.<\/p>\n<p>You can use <a href=\"https:\/\/www.postgresql.org\/docs\/current\/static\/oid2name.html\" target=\"_blank\" rel=\"noopener\">oid2name<\/a> to map the file name to a table name:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/pgpool\/pgdatacompressed\/base\/24580\/ [PG1] oid2name -d dataload -p 5433 -f 24581\nFrom database \"dataload\":\n  Filenode  Table Name\n----------------------\n     24581    dataload\n<\/pre>\n<p>File 24581 is the table we generated. 
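<\/p>\n<p>Before looking at the individual files: you can also ask ZFS for the logical (uncompressed) versus physical (compressed) space of the whole dataset. The logicalused property should be available in all recent ZFS on Linux releases:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@centos7 ~] zfs get used,logicalused,compressratio pgpool\/pgdatacompressed\n<\/pre>\n<p>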
When you ask for the size by using &#8220;du&#8221; you get:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/pgpool\/pgdatacompressed\/base\/24580\/ [PG1] du -h 24581\n48M\t24581\n<\/pre>\n<p>This is the compressed size. When you use &#8220;ls&#8221; you get the uncompressed size:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/pgpool\/pgdatacompressed\/base\/24580\/ [PG1] ls -lh 24581\n-rw-------. 1 postgres postgres 81M Sep 30 10:43 24581\n<\/pre>\n<p>What does &#8220;df&#8221; tell us?<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@centos7:\/home\/postgres\/ [PG1] df -h | grep pgdata\npgpool\/pgdata            9.5G  437M  9.1G   5% \/pgpool\/pgdata\npgpool\/pgdatacompressed  9.2G  165M  9.1G   2% \/pgpool\/pgdatacompressed\n<\/pre>\n<p>Not bad: 437M of uncompressed data shrinks to 165M compressed. So, if you are short on space, compression really can be an option.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the last posts in this little series we looked at how to get a ZFS file system up and running on a CentOS 7 host and how snapshots and clones can be used to simplify processes such as testing and cloning PostgreSQL instances. 
In this post we&#8217;ll look at another feature of zfs: Compression.<\/p>\n","protected":false},"author":29,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229],"tags":[611,73,77,935],"type_dbi":[],"class_list":["post-8987","post","type-post","status-publish","format-standard","hentry","category-database-administration-monitoring","tag-filesytem","tag-linux","tag-postgresql","tag-zfs"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Running PostgreSQL on ZFS on Linux - Compression - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Running PostgreSQL on ZFS on Linux - Compression\" \/>\n<meta property=\"og:description\" content=\"In the last posts in this little series we looked at how to get a ZFS file system up and running on a CentOS 7 host and how snapshots and clones can be used to simply processes such as testing and cloning PostgreSQL instances. 
In this post we&#8217;ll look at another feature of zfs: Compression.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2016-10-01T11:18:59+00:00\" \/>\n<meta name=\"author\" content=\"Daniel Westermann\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@westermanndanie\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Westermann\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/\"},\"author\":{\"name\":\"Daniel Westermann\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"headline\":\"Running PostgreSQL on ZFS on Linux &#8211; Compression\",\"datePublished\":\"2016-10-01T11:18:59+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/\"},\"wordCount\":461,\"commentCount\":0,\"keywords\":[\"filesytem\",\"Linux\",\"PostgreSQL\",\"ZFS\"],\"articleSection\":[\"Database Administration &amp; 
Monitoring\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/\",\"name\":\"Running PostgreSQL on ZFS on Linux - Compression - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"datePublished\":\"2016-10-01T11:18:59+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Running PostgreSQL on ZFS on Linux &#8211; Compression\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi 
Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\",\"name\":\"Daniel Westermann\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"caption\":\"Daniel Westermann\"},\"description\":\"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. 
Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.\",\"sameAs\":[\"https:\/\/x.com\/westermanndanie\"],\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Running PostgreSQL on ZFS on Linux - Compression - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/","og_locale":"en_US","og_type":"article","og_title":"Running PostgreSQL on ZFS on Linux - Compression","og_description":"In the last posts in this little series we looked at how to get a ZFS file system up and running on a CentOS 7 host and how snapshots and clones can be used to simply processes such as testing and cloning PostgreSQL instances. In this post we&#8217;ll look at another feature of zfs: Compression.","og_url":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/","og_site_name":"dbi Blog","article_published_time":"2016-10-01T11:18:59+00:00","author":"Daniel Westermann","twitter_card":"summary_large_image","twitter_creator":"@westermanndanie","twitter_misc":{"Written by":"Daniel Westermann","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/"},"author":{"name":"Daniel Westermann","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"headline":"Running PostgreSQL on ZFS on Linux &#8211; Compression","datePublished":"2016-10-01T11:18:59+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/"},"wordCount":461,"commentCount":0,"keywords":["filesytem","Linux","PostgreSQL","ZFS"],"articleSection":["Database Administration &amp; Monitoring"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/","url":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/","name":"Running PostgreSQL on ZFS on Linux - Compression - dbi 
Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"datePublished":"2016-10-01T11:18:59+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/running-postgresql-on-zfs-on-linux-compression\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Running PostgreSQL on ZFS on Linux &#8211; Compression"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66","name":"Daniel Westermann","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","caption":"Daniel Westermann"},"description":"Daniel Westermann is Principal Consultant and Technology Leader Open 
Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). 
His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.","sameAs":["https:\/\/x.com\/westermanndanie"],"url":"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/8987","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/29"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=8987"}],"version-history":[{"count":0,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/8987\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=8987"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=8987"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=8987"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=8987"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}