{"id":10825,"date":"2018-02-13T19:21:00","date_gmt":"2018-02-13T18:21:00","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/"},"modified":"2018-02-13T19:21:00","modified_gmt":"2018-02-13T18:21:00","slug":"how-we-build-our-customized-postgresql-docker-image","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/","title":{"rendered":"How we build our customized PostgreSQL Docker image"},"content":{"rendered":"<p>Docker is becoming more and more popular these days, and a lot of companies are starting to really use it. On one project we decided to build our own customized Docker image instead of using the <a href=\"https:\/\/hub.docker.com\/_\/postgres\/\" target=\"_blank\" rel=\"noopener\">official PostgreSQL one<\/a>. The main reason is that we wanted to compile from source so that we only get what is really required. Why have PostgreSQL compiled with Tcl support when nobody will ever use it? 
Here is how we did it &#8230;<\/p>\n<p><!--more--><\/p>\n<p>To dig in right away, this is the simplified Dockerfile:<\/p>\n<pre class=\"brush: text; gutter: true; first-line: 1\">\nFROM debian\n\n# make the \"en_US.UTF-8\" locale so postgres will be utf-8 enabled by default\nENV LANG en_US.utf8\nENV PG_MAJOR 10\nENV PG_VERSION 10.1\nENV PG_SHA256 3ccb4e25fe7a7ea6308dea103cac202963e6b746697366d72ec2900449a5e713\nENV PGDATA \/u02\/pgdata\nENV PGDATABASE=\"\" \\\n    PGUSERNAME=\"\" \\\n    PGPASSWORD=\"\"\n\nCOPY docker-entrypoint.sh \/\n\nRUN set -ex \\\n        &amp;&amp; apt-get update &amp;&amp; apt-get install -y \\\n           ca-certificates \\\n           curl \\\n           procps \\\n           sysstat \\\n           libldap2-dev \\\n           libpython-dev \\\n           libreadline-dev \\\n           libssl-dev \\\n           bison \\\n           flex \\\n           libghc-zlib-dev \\\n           libcrypto++-dev \\\n           libxml2-dev \\\n           libxslt1-dev \\\n           bzip2 \\\n           make \\\n           gcc \\\n           unzip \\\n           python \\\n           locales \\\n        &amp;&amp; rm -rf \/var\/lib\/apt\/lists\/* \\\n        &amp;&amp; localedef -i en_US -c -f UTF-8 en_US.UTF-8 \\\n        &amp;&amp; mkdir \/u01\/ \\\n        &amp;&amp; groupadd -r postgres --gid=999 \\\n        &amp;&amp; useradd -m -r -g postgres --uid=999 postgres \\\n        &amp;&amp; chown postgres:postgres \/u01\/ \\\n        &amp;&amp; mkdir -p \"$PGDATA\" \\\n        &amp;&amp; chown -R postgres:postgres \"$PGDATA\" \\\n        &amp;&amp; chmod 700 \"$PGDATA\" \\\n        &amp;&amp; curl -o \/home\/postgres\/postgresql.tar.bz2 \"https:\/\/ftp.postgresql.org\/pub\/source\/v$PG_VERSION\/postgresql-$PG_VERSION.tar.bz2\" \\\n        &amp;&amp; echo \"$PG_SHA256  \/home\/postgres\/postgresql.tar.bz2\" | sha256sum -c - \\\n        &amp;&amp; mkdir -p \/home\/postgres\/src \\\n        &amp;&amp; chown -R postgres:postgres \/home\/postgres \\\n        &amp;&amp; su postgres -c \"tar \\\n
                --extract \\\n                --file \/home\/postgres\/postgresql.tar.bz2 \\\n                --directory \/home\/postgres\/src \\\n                --strip-components 1\" \\\n        &amp;&amp; rm \/home\/postgres\/postgresql.tar.bz2 \\\n        &amp;&amp; cd \/home\/postgres\/src \\\n        &amp;&amp; su postgres -c \".\/configure \\\n                --enable-integer-datetimes \\\n                --enable-thread-safety \\\n                --with-pgport=5432 \\\n                --prefix=\/u01\/app\/postgres\/product\/$PG_VERSION \\\n                --with-ldap \\\n                --with-python \\\n                --with-openssl \\\n                --with-libxml \\\n                --with-libxslt\" \\\n        &amp;&amp; su postgres -c \"make -j 4 all\" \\\n        &amp;&amp; su postgres -c \"make install\" \\\n        &amp;&amp; su postgres -c \"make -C contrib install\" \\\n        &amp;&amp; rm -rf \/home\/postgres\/src \\\n        &amp;&amp; apt-get update &amp;&amp; apt-get purge --auto-remove -y \\\n           libldap2-dev \\\n           libpython-dev \\\n           libreadline-dev \\\n           libssl-dev \\\n           libghc-zlib-dev \\\n           libcrypto++-dev \\\n           libxml2-dev \\\n           libxslt1-dev \\\n           bzip2 \\\n           gcc \\\n           make \\\n           unzip \\\n        &amp;&amp; apt-get install -y libxml2 \\\n        &amp;&amp; rm -rf \/var\/lib\/apt\/lists\/*\n\nENV LANG en_US.utf8\nUSER postgres\nEXPOSE 5432\nENTRYPOINT [\"\/docker-entrypoint.sh\"]\n<\/pre>\n<p>We based the image on the latest Debian image (line 1). The following lines define the PostgreSQL version we will use and some environment variables we will use later. What follows is basically installing all the packages required for building PostgreSQL from source, adding the operating system user and group, preparing the directories, fetching the PostgreSQL source code, and then configure, make and make install. Pretty much straightforward. 
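As an aside, the `sha256sum -c -` idiom used in the Dockerfile is easy to try outside of a build: it reads `&lt;digest&gt;  &lt;file&gt;` lines from stdin and exits non-zero if a file does not match. A minimal self-contained sketch (the throwaway file and the `result` variable are just for illustration):

```shell
# Verify a file against a known SHA-256 digest, the same way the
# Dockerfile checks the downloaded PostgreSQL tarball.
tmpfile=$(mktemp)
printf 'hello\n' > "$tmpfile"

# compute the digest once (standing in for the pinned PG_SHA256 value)
digest=$(sha256sum "$tmpfile" | awk '{print $1}')

# feed "digest  file" to sha256sum -c; it exits non-zero on mismatch
if echo "$digest  $tmpfile" | sha256sum -c - >/dev/null 2>&1; then
    result=ok
else
    result=corrupt
fi
echo "$result"

rm -f "$tmpfile"
```

Because the RUN instruction starts with `set -ex`, a failed check aborts the build, so a tampered or truncated tarball never ends up in the image.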
Finally, to shrink the image, we remove all the packages that are no longer required after PostgreSQL was compiled and installed.<\/p>\n<p>The final setup of the PostgreSQL instance happens in the docker-entrypoint.sh script, which is referenced at the very end of the Dockerfile:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n#!\/bin\/bash\n\n# these are the environment variables that need to be set\nPGDATA=${PGDATA}\/${PG_MAJOR}\nPGHOME=\"\/u01\/app\/postgres\/product\/${PG_VERSION}\"\nPGAUTOCONF=${PGDATA}\/postgresql.auto.conf\nPGHBACONF=${PGDATA}\/pg_hba.conf\nPGDATABASENAME=${PGDATABASE}\nPGUSERNAME=${PGUSERNAME}\nPGPASSWD=${PGPASSWORD}\n\n# create the database and the user\n_pg_create_database_and_user()\n{\n    ${PGHOME}\/bin\/psql -c \"create user ${PGUSERNAME} with login password '${PGPASSWD}'\" postgres\n    ${PGHOME}\/bin\/psql -c \"create database ${PGDATABASENAME} with owner = ${PGUSERNAME}\" postgres\n}\n\n# start the PostgreSQL instance\n_pg_prestart()\n{\n    ${PGHOME}\/bin\/pg_ctl -D ${PGDATA} -w start\n}\n\n# start postgres and do not disconnect\n# required for docker\n_pg_start()\n{\n    ${PGHOME}\/bin\/postgres \"-D\" \"${PGDATA}\"\n}\n\n# stop the PostgreSQL instance\n_pg_stop()\n{\n    ${PGHOME}\/bin\/pg_ctl -D ${PGDATA} stop -m fast\n}\n\n# initdb a new cluster\n_pg_initdb()\n{\n    ${PGHOME}\/bin\/initdb -D ${PGDATA} --data-checksums\n}\n\n\n# adjust the postgresql parameters\n_pg_adjust_config() {\n    # PostgreSQL parameters\n    echo \"shared_buffers='128MB'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"effective_cache_size='128MB'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"listen_addresses = '*'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"logging_collector = 'on'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_truncate_on_rotation = 'on'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_filename = 'postgresql-%a.log'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_rotation_age = '1440'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_line_prefix = '%m - %l - %p - %h - 
%u@%d '\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_directory = 'pg_log'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_min_messages = 'WARNING'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_autovacuum_min_duration = '60s'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_min_error_statement = 'NOTICE'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_min_duration_statement = '30s'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_checkpoints = 'on'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_statement = 'none'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_lock_waits = 'on'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_temp_files = '0'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_timezone = 'Europe\/Zurich'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_connections=on\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_disconnections=on\" &gt;&gt; ${PGAUTOCONF}\n    echo \"log_duration=off\" &gt;&gt; ${PGAUTOCONF}\n    echo \"client_min_messages = 'WARNING'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"wal_level = 'replica'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"hot_standby_feedback = 'on'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"max_wal_senders = '10'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"cluster_name = '${PGDATABASENAME}'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"max_replication_slots = '10'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"work_mem=8MB\" &gt;&gt; ${PGAUTOCONF}\n    echo \"maintenance_work_mem=64MB\" &gt;&gt; ${PGAUTOCONF}\n    echo \"wal_compression=on\" &gt;&gt; ${PGAUTOCONF}\n    echo \"max_wal_senders=20\" &gt;&gt; ${PGAUTOCONF}\n    echo \"shared_preload_libraries='pg_stat_statements'\" &gt;&gt; ${PGAUTOCONF}\n    echo \"autovacuum_max_workers=6\" &gt;&gt; ${PGAUTOCONF}\n    echo \"autovacuum_vacuum_scale_factor=0.1\" &gt;&gt; ${PGAUTOCONF}\n    echo \"autovacuum_vacuum_threshold=50\" &gt;&gt; ${PGAUTOCONF}\n    # Authentication settings in pg_hba.conf\n    echo \"host    all             all             0.0.0.0\/0            md5\" &gt;&gt; ${PGHBACONF}\n}\n\n# initialize and start a new cluster\n_pg_init_and_start()\n{\n    # initialize a new 
cluster\n    _pg_initdb\n    # set params and access permissions\n    _pg_adjust_config\n    # start the new cluster\n    _pg_prestart\n    # set username and password\n    _pg_create_database_and_user\n}\n\n# check if $PGDATA exists\nif [ -e ${PGDATA} ]; then\n    # when $PGDATA exists we need to check if there are files\n    # because when there are files we do not want to initdb\n    if [ -e \"${PGDATA}\/base\" ]; then\n        # when there is a base directory this\n        # probably is a valid PostgreSQL cluster\n        # so we just start it\n        _pg_prestart\n    else\n        # when there is no base directory then we\n        # should be able to initialize a new cluster\n        # and then start it\n        _pg_init_and_start\n    fi\nelse\n    # initialize and start the new cluster\n    # (initdb itself creates ${PGDATA})\n    _pg_init_and_start\n    # make sure the log directory exists\n    mkdir -p ${PGDATA}\/pg_log\nfi\n# restart and do not disconnect from the postgres daemon\n_pg_stop\n_pg_start\n<\/pre>\n<p>The important point here is: PGDATA is a persistent volume that is linked into the Docker container. When the container comes up we need to check whether something that looks like a PostgreSQL data directory is already there. If yes, we just start the instance with what is there. If not, we create a new instance. Remember: this is just a template and you might need to do more checks in your case. The same is true for what we add to pg_hba.conf here: this is nothing you should do on real systems, but it can be handy for testing.<\/p>\n<p>Hope this helps &#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Docker becomes more and more popular these days and a lot of companies start to really use it. At one project we decided to build our own customized Docker image instead of using the official PostgreSQL one. 
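The start-or-initialize decision in the entrypoint can be exercised in isolation. A minimal sketch, assuming nothing from the real image (the `decide_action` helper is hypothetical, and a temporary directory stands in for the persistent volume):

```shell
# Mirror the entrypoint's logic: an existing ${PGDATA}/base means
# "valid cluster, just start it"; anything else means "run initdb".
decide_action() {
    pgdata="$1"
    if [ -e "$pgdata" ] && [ -e "$pgdata/base" ]; then
        echo "start"
    else
        echo "initdb"
    fi
}

tmp=$(mktemp -d)
first=$(decide_action "$tmp/pgdata")   # volume is still empty
mkdir -p "$tmp/pgdata/base"            # simulate an initialized cluster
second=$(decide_action "$tmp/pgdata")
echo "$first $second"                  # prints: initdb start
rm -rf "$tmp"
```

This is exactly why a restarted container with an already-populated volume comes up with its old data instead of wiping it with a fresh initdb.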
The main reason for that is that we wanted to compile from source so that we only get [&hellip;]<\/p>\n","protected":false},"author":29,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229],"tags":[601,77],"type_dbi":[],"class_list":["post-10825","post","type-post","status-publish","format-standard","hentry","category-database-administration-monitoring","tag-docker","tag-postgresql"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>How we build our customized PostgreSQL Docker image - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How we build our customized PostgreSQL Docker image\" \/>\n<meta property=\"og:description\" content=\"Docker becomes more and more popular these days and a lot of companies start to really use it. At one project we decided to build our own customized Docker image instead of using the official PostgreSQL one. 
The main reason for that is that we wanted to compile from source so that we only get [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2018-02-13T18:21:00+00:00\" \/>\n<meta name=\"author\" content=\"Daniel Westermann\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@westermanndanie\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Westermann\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/\"},\"author\":{\"name\":\"Daniel Westermann\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"headline\":\"How we build our customized PostgreSQL Docker image\",\"datePublished\":\"2018-02-13T18:21:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/\"},\"wordCount\":317,\"commentCount\":0,\"keywords\":[\"Docker\",\"PostgreSQL\"],\"articleSection\":[\"Database Administration &amp; 
Monitoring\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/\",\"name\":\"How we build our customized PostgreSQL Docker image - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"datePublished\":\"2018-02-13T18:21:00+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How we build our customized PostgreSQL Docker image\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi 
Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\",\"name\":\"Daniel Westermann\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"caption\":\"Daniel Westermann\"},\"description\":\"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. 
Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.\",\"sameAs\":[\"https:\/\/x.com\/westermanndanie\"],\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"How we build our customized PostgreSQL Docker image - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/","og_locale":"en_US","og_type":"article","og_title":"How we build our customized PostgreSQL Docker image","og_description":"Docker becomes more and more popular these days and a lot of companies start to really use it. At one project we decided to build our own customized Docker image instead of using the official PostgreSQL one. The main reason for that is that we wanted to compile from source so that we only get [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/","og_site_name":"dbi Blog","article_published_time":"2018-02-13T18:21:00+00:00","author":"Daniel Westermann","twitter_card":"summary_large_image","twitter_creator":"@westermanndanie","twitter_misc":{"Written by":"Daniel Westermann","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/"},"author":{"name":"Daniel Westermann","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"headline":"How we build our customized PostgreSQL Docker image","datePublished":"2018-02-13T18:21:00+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/"},"wordCount":317,"commentCount":0,"keywords":["Docker","PostgreSQL"],"articleSection":["Database Administration &amp; Monitoring"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/","url":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/","name":"How we build our customized PostgreSQL Docker image - dbi 
Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"datePublished":"2018-02-13T18:21:00+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/how-we-build-our-customized-postgresql-docker-image\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"How we build our customized PostgreSQL Docker image"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66","name":"Daniel Westermann","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","caption":"Daniel Westermann"},"description":"Daniel Westermann is Principal Consultant and Technology 
Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). 
His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.","sameAs":["https:\/\/x.com\/westermanndanie"],"url":"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/10825","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/29"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=10825"}],"version-history":[{"count":0,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/10825\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=10825"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=10825"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=10825"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=10825"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}