{"id":38391,"date":"2025-05-18T21:51:53","date_gmt":"2025-05-18T19:51:53","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=38391"},"modified":"2025-05-19T11:32:19","modified_gmt":"2025-05-19T09:32:19","slug":"postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\/","title":{"rendered":"PostgreSQL 17-18 major upgrade &#8211; blue-green migration with minimal downtime"},"content":{"rendered":"\n<p>As PostgreSQL 18 Beta 1 has been released (<a href=\"https:\/\/www.postgresql.org\/about\/news\/postgresql-18-beta-1-released-3070\/\">link<\/a>) and is packed with cool features such as improved IO on reads, I wanted to test the new pg_createsubscriber and pg_upgrade options and see how they help us perform smoother major upgrades using a &#8220;blue-green&#8221; migration similar to the one available on AWS (<a href=\"https:\/\/docs.aws.amazon.com\/AmazonRDS\/latest\/UserGuide\/blue-green-deployments.html\">Using Amazon RDS Blue\/Green Deployments for database updates &#8211; Amazon Relational Database Service<\/a>). <br>The underlying technology is still the same, logical replication, still with the same <a href=\"https:\/\/www.postgresql.org\/docs\/18\/logical-replication-restrictions.html\">limitations and constraints<\/a>, but with a much smoother setup and implementation, to the point that it has never been so easy to automate such tasks. <br><br><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-pg-createsubscriber-simplifying-logical-replication-setup\">pg_createsubscriber: simplifying logical replication setup<\/h3>\n\n\n\n<p><code>pg_createsubscriber<\/code> automates the logical replication setup by converting a physical standby server into a logical replica. 
It creates the necessary publications and subscriptions for specified databases without copying the initial table data, focusing solely on synchronizing changes from a specific point in time. This approach is particularly valuable for large databases, where copying existing data can be resource-intensive and time-consuming. <br>PostgreSQL 18 Beta 1 introduces an <code>--all<\/code> flag to <code>pg_createsubscriber<\/code>, allowing us to create logical replicas for all databases in an instance with a single command. This simplifies the replication setup process, especially in environments with multiple databases, by eliminating the need to specify each database individually.<\/p>\n\n\n\n<p>The latest version of PostgreSQL offers some new pg_upgrade capabilities like the <code>--swap<\/code> option, which brings a nice speed improvement. If you want to know more about it, Daniel Westermann wrote a <a href=\"https:\/\/www.dbi-services.com\/blog\/postgresql-18-swap-mode-for-pg_upgrade\/\">nice blog post<\/a> about it. <br>PostgreSQL 18 is one of the most significant releases in years, packed with improvements. I won&#8217;t talk about all the nice new capabilities here, but one small addition caught my attention while preparing this blog: the <code>pg_stat_subscription_stats<\/code> view can now track the write conflicts that occur during logical replication. This is really nice and means that we get one step closer to having multi-master capability in PostgreSQL.  <br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-step-by-step-blue-green-migration\">Step-by-step blue-green migration<\/h2>\n\n\n\n<p>In my lab setup I have two servers, pg-blue (IP: 10.1.0.4) and pg-green (IP: 10.1.0.5). Both servers start with a PostgreSQL 17.5 instance. The blue server simulates the production instance that I want to migrate with minimal downtime to a new PostgreSQL 18 Beta 1 instance. 
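To illustrate the new <code>--all<\/code> flag mentioned above, here is a minimal sketch of what the single-command setup could look like on the green standby. Paths, host and credentials are placeholders taken from this lab; the command is only assembled and printed here, not executed against a live cluster:

```shell
# Sketch only: assemble the PostgreSQL 18 pg_createsubscriber invocation
# using the new --all flag. With --all you do not pass -d/--publication/
# --subscription: one subscription per database is generated automatically.
# Paths, host and user below are lab placeholders.
PGBIN=/u01/app/postgres/product/18/db_beta1/bin
CMD="$PGBIN/pg_createsubscriber --all \
  --pgdata=/u02/pgdata/17/db_5 \
  --publisher-server='host=10.1.0.4 port=5432 user=replicator' \
  -p 5432 -U postgres"
# Dry run: print the command instead of running it.
echo "$CMD"
```

On a real standby you would run the printed command as the postgres user, exactly as shown for the single-database case later in this post.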
<br><br><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">\ud83d\udfe6 Step 1: Set Up the Blue (Primary) Environment with PostgreSQL 17<\/h3>\n\n\n\n<p><strong>Install PostgreSQL 17:<\/strong><\/p>\n\n\n\n<p>Install the instance using your preferred method; in my case I will install from source, as I usually do. <\/p>\n\n\n\n<p><strong>Initialize the database and populate with sample data:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo -i -u postgres\ncreatedb pgbench_test\npgbench -i -s 500 pgbench_test<\/code><\/pre>\n\n\n\n<p><strong>Configure replication settings:<\/strong><\/p>\n\n\n\n<p>Edit <code>postgresql.conf<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># on blue \nwal_level = replica\nmax_wal_senders = 10\nmax_replication_slots = 10<\/code><\/pre>\n\n\n\n<p><strong>Create a replication user:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># on blue\npsql -c \"CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'yourpassword';\"<\/code><\/pre>\n\n\n\n<p><strong>Configure <code>pg_hba.conf<\/code> for replication access:<\/strong><\/p>\n\n\n\n<p>Edit <code>pg_hba.conf<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># on blue\nhost    replication     replicator     10.1.0.5\/32       scram-sha-256<\/code><\/pre>\n\n\n\n<p><strong>Restart PostgreSQL to apply changes:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo systemctl restart postgresql<\/code><\/pre>\n\n\n\n<p><strong>Verify replication settings:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>psql -c \"SELECT * FROM pg_stat_replication;\"<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-step-2-create-a-physical-replica-green-environment\">\ud83d\udfe9 Step 2: Create a Physical Replica (Green Environment) <\/h3>\n\n\n\n<p><strong>On the replica server:<\/strong><\/p>\n\n\n\n<p><strong>Stop PostgreSQL service:<\/strong><\/p>\n\n\n\n<pre 
class=\"wp-block-code\"><code>sudo systemctl stop postgresql<\/code><\/pre>\n\n\n\n<p><strong>Back up the existing data directory:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo cp -rf \/u02\/pgdata\/17\/db_5\/  \/u99\/pgdata\/17\/<\/code><\/pre>\n\n\n\n<p><strong>Remove the existing data directory:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo rm -rf \/u02\/pgdata\/17\/db_5\/*<\/code><\/pre>\n\n\n\n<p><strong>Perform base backup from primary:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>postgres@PG-green:\/u02\/pgdata\/17\/db_5\/ &#091;pg175] pg_basebackup -h 10.1.0.4 -D \/u02\/pgdata\/17\/db_5 -U replicator -W -P --wal-method=stream\nPassword:\n7687298\/7687298 kB (100%), 1\/1 tablespace<\/code><\/pre>\n\n\n\n<p><strong>Enable standby mode:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>touch \/u02\/pgdata\/17\/db_5\/standby.signal<\/code><\/pre>\n\n\n\n<p><strong>Configure primary connection info:<\/strong><\/p>\n\n\n\n<p>Edit <code>\/u02\/pgdata\/17\/db_5\/postgresql.conf<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>primary_conninfo = 'host=10.1.0.4 port=5432 user=replicator password=yourpassword'<\/code><\/pre>\n\n\n\n<p><strong>Start PostgreSQL service:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo systemctl start postgresql<\/code><\/pre>\n\n\n\n<p><strong>Verify replication status:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>08:28:41 postgres@PG-blue:\/u02\/pgdata\/17\/db_5\/ &#091;pg175] sqh\npsql (17.5 dbi services build)\nType \"help\" for help.\n\npostgres=# SELECT * FROM pg_stat_replication;\n-&#091; RECORD 1 ]----+------------------------------\npid              | 2014\nusesysid         | 16388\nusename          | replicator\napplication_name | pg17blue\nclient_addr      | 10.1.0.5\nclient_hostname  |\nclient_port      | 45162\nbackend_start    | 2025-05-18 16:34:20.991611+00\nbackend_xmin     | 765\nstate            | streaming\nsent_lsn         | 
1\/6D0000A0\nwrite_lsn        | 1\/6D0000A0\nflush_lsn        | 1\/6D0000A0\nreplay_lsn       | 1\/6D0000A0\nwrite_lag        | 00:00:00.190877\nflush_lag        | 00:00:00.195751\nreplay_lag       | 00:00:00.195752\nsync_priority    | 0\nsync_state       | async\nreply_time       | 2025-05-18 16:34:21.207837+00\n\n\n08:28:59 postgres@PG-green:\/u02\/pgdata\/17\/db_5\/ &#091;pg175] sqh\npsql (17.5 dbi services build)\nType \"help\" for help.\n\npostgres=# SELECT pg_is_in_recovery()\n;\n pg_is_in_recovery\n-------------------\n t\n(1 row)\n\npostgres=#<\/code><\/pre>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">\ud83d\udd04 Step 3: Transition from Physical to Logical Replication<\/h3>\n\n\n\n<p><strong>On the primary server:<\/strong><\/p>\n\n\n\n<p><strong>Set <code>wal_level<\/code> to <code>logical<\/code>:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># On blue\npsql -c \"ALTER SYSTEM SET wal_level = logical;\"<\/code><\/pre>\n\n\n\n<p><strong>Restart PostgreSQL:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># On blue\nsudo systemctl restart postgresql<\/code><\/pre>\n\n\n\n<p><strong>Create a publication for all tables:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># On blue\n\nveryolddatabasethatimigratedsincepgversion9=# CREATE PUBLICATION my_pub FOR ALL TABLES;\nCREATE PUBLICATION\n\nveryolddatabasethatimigratedsincepgversion9=# select *  FROM pg_catalog.pg_publication\n WHERE pubname = 'my_pub';\n-&#091; RECORD 1 ]+-------\noid          | 16413\npubname      | my_pub\npubowner     | 10\npuballtables | t\npubinsert    | t\npubupdate    | t\npubdelete    | t\npubtruncate  | t\npubviaroot   | f<\/code><\/pre>\n\n\n\n<p><strong>On the replica server:<\/strong><\/p>\n\n\n\n<p><strong>Use <code>pg_createsubscriber<\/code> to set up logical replication:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># On green\n17:29:36 postgres@PG-green:\/u02\/pgdata\/17\/db_5\/ &#091;pg175] \/u01\/app\/postgres\/product\/17\/db_5\/bin\/pg_createsubscriber -d 
veryolddatabasethatimigratedsincepgversion9   --pgdata=\/u02\/pgdata\/17\/db_5\/   --config-file=\/u02\/pgdata\/17\/db_5\/postgresql.conf   --publisher-server='user=replicator password=yourpassword host=10.1.0.4'   -p 5432 -U postgres --publication=my_pub --subscription=my_sub\n2025-05-18 19:30:04.173 CEST - 1 - 3182 -  - @ - 0LOG:  redirecting log output to logging collector process\n2025-05-18 19:30:04.173 CEST - 2 - 3182 -  - @ - 0HINT:  Future log output will appear in directory \"pg_log\".\n2025-05-18 19:30:04.682 CEST - 1 - 3195 -  - @ - 0LOG:  redirecting log output to logging collector process\n2025-05-18 19:30:04.682 CEST - 2 - 3195 -  - @ - 0HINT:  Future log output will appear in directory \"pg_log\".\n17:30:06 postgres@PG-green:\/u02\/pgdata\/17\/db_5\/ &#091;pg175] pgstart\nwaiting for server to start.... done\nserver started\n  <\/code><\/pre>\n\n\n\n<p><strong>Monitor replication status:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\n# On green \n\nSELECT * FROM pg_stat_subscription;\n-&#091; RECORD 1 ]---------+------------------------------\nsubid                 | 24605\nsubname               | my_sub\nworker_type           | apply\npid                   | 3229\nleader_pid            |\nrelid                 |\nreceived_lsn          | 1\/70000CB8\nlast_msg_send_time    | 2025-05-18 17:50:25.344741+00\nlast_msg_receipt_time | 2025-05-18 17:50:25.345546+00\nlatest_end_lsn        | 1\/70000CB8\nlatest_end_time       | 2025-05-18 17:50:25.344741+00\n\n# On Blue\nveryolddatabasethatimigratedsincepgversion9=# SELECT * FROM pg_stat_replication;\n-&#091; RECORD 1 ]----+------------------------------\npid | 2809\nusesysid | 16388\nusename | replicator\napplication_name | my_sub\nclient_addr | 10.1.0.5\nclient_hostname |\nclient_port | 33324\nbackend_start | 2025-05-18 17:30:46.257715+00\nbackend_xmin |\nstate | streaming\nsent_lsn | 1\/70000CB8\nwrite_lsn | 1\/70000CB8\nflush_lsn | 1\/70000CB8\nreplay_lsn | 1\/70000CB8\nwrite_lag |\nflush_lag 
|\nreplay_lag |\nsync_priority | 0\nsync_state | async\nreply_time | 2025-05-18 17:48:45.220612+00<\/code><\/pre>\n\n\n\n<p><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">\u2699\ufe0f Step 4: Upgrade the Green Environment to PostgreSQL 18 Using <code>pg_upgrade<\/code> with <code>--swap<\/code><\/h3>\n\n\n\n<p><strong>On the replica server:<\/strong><\/p>\n\n\n\n<p><strong>Install PostgreSQL 18 binaries:<\/strong><\/p>\n\n\n\n<p>You can download the source with the following command to build from source, or install from the official PostgreSQL repositories. <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>wget https:\/\/ftp.postgresql.org\/pub\/source\/v18beta1\/postgresql-18beta1.tar.bz2<\/code><\/pre>\n\n\n\n<p>In my case I want to use new PostgreSQL features like io_uring, NUMA awareness and OAuth, so I pass the following options to my meson configure: <\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>...\n-Dlibcurl=enabled \\\n-Dliburing=enabled \\\n-Dlibnuma=enabled \\\n...<\/code><\/pre>\n\n\n\n<p><strong>Stop PostgreSQL service:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo systemctl stop postgresql<\/code><\/pre>\n\n\n\n<p><strong>Run <code>pg_upgrade<\/code> with the <code>--swap<\/code> method:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>export PGDATAOLD=\/u02\/pgdata\/17\/db_5\nexport PGBINOLD=\/u01\/app\/postgres\/product\/17\/db_5\/bin\nexport PGDATANEW=\/u02\/pgdata\/18\/db_beta1\nexport PGBINNEW=\/u01\/app\/postgres\/product\/18\/db_beta1\/bin\n\n18:32:55 postgres@PG-green:\/home\/postgres\/build\/ &#091;pg175] \/u01\/app\/postgres\/product\/18\/db_beta1\/bin\/pg_upgrade \\\n  --old-bindir=$PGBINOLD --new-bindir=$PGBINNEW \\\n  --old-datadir=$PGDATAOLD --new-datadir=$PGDATANEW \\\n  --check\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions                                     ok\nChecking database connection settings   
                      ok\nChecking database user is the install user                    ok\nChecking for prepared transactions                            ok\nChecking for contrib\/isn with bigint-passing mismatch         ok\nChecking for valid logical replication slots                  ok\nChecking for subscription state                               ok\nChecking data type usage                                      ok\nChecking for objects affected by Unicode update               ok\nChecking for presence of required libraries                   ok\nChecking database user is the install user                    ok\nChecking for prepared transactions                            ok\nChecking for new cluster tablespace directories               ok\nChecking for new cluster configuration for subscriptions      ok\n\n*Clusters are compatible*\n\n\n18:33:02 postgres@PG-green:\/home\/postgres\/build\/ &#091;pg175] \/u01\/app\/postgres\/product\/18\/db_beta1\/bin\/pg_upgrade \\\n  --old-bindir=$PGBINOLD --new-bindir=$PGBINNEW \\\n  --old-datadir=$PGDATAOLD --new-datadir=$PGDATANEW \\\n  --swap\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions                                     ok\nChecking database connection settings                         ok\nChecking database user is the install user                    ok\nChecking for prepared transactions                            ok\nChecking for contrib\/isn with bigint-passing mismatch         ok\nChecking for valid logical replication slots                  ok\nChecking for subscription state                               ok\nChecking data type usage                                      ok\nChecking for objects affected by Unicode update               ok\nCreating dump of global objects                               ok\nCreating dump of database schemas\n                                                              ok\nChecking for presence of required libraries                   ok\nChecking 
database user is the install user                    ok\nChecking for prepared transactions                            ok\nChecking for new cluster tablespace directories               ok\nChecking for new cluster configuration for subscriptions      ok\n\nIf pg_upgrade fails after this point, you must re-initdb the\nnew cluster before continuing.\n\nPerforming Upgrade\n------------------\nSetting locale and encoding for new cluster                   ok\nAnalyzing all rows in the new cluster                         ok\nFreezing all rows in the new cluster                          ok\nDeleting files from new pg_xact                               ok\nCopying old pg_xact to new server                             ok\nSetting oldest XID for new cluster                            ok\nSetting next transaction ID and epoch for new cluster         ok\nDeleting files from new pg_multixact\/offsets                  ok\nCopying old pg_multixact\/offsets to new server                ok\nDeleting files from new pg_multixact\/members                  ok\nCopying old pg_multixact\/members to new server                ok\nSetting next multixact ID and offset for new cluster          ok\nResetting WAL archives                                        ok\nSetting frozenxid and minmxid counters in new cluster         ok\nRestoring global objects in the new cluster                   ok\nRestoring database schemas in the new cluster\n                                                              ok\nAdding \".old\" suffix to old \"global\/pg_control\"               ok\n\nBecause \"swap\" mode was used, the old cluster can no longer be\nsafely started.\nSwapping data directories\n                                                              ok\nSetting next OID for new cluster                              ok\nSync data directory to disk                                   ok\nCreating script to delete old cluster                         ok\nChecking for extension updates                       
         ok\n\nUpgrade Complete\n----------------\nSome statistics are not transferred by pg_upgrade.\nOnce you start the new server, consider running these two commands:\n    \/u01\/app\/postgres\/product\/18\/db_beta1\/bin\/vacuumdb --all --analyze-in-stages --missing-stats-only\n    \/u01\/app\/postgres\/product\/18\/db_beta1\/bin\/vacuumdb --all --analyze-only\nRunning this script will delete the old cluster's data files:\n    .\/delete_old_cluster.sh\n\n\n\/u01\/app\/postgres\/product\/18\/db_beta1\/bin\/vacuumdb \\\n  --all --analyze-in-stages --missing-stats-only\n<\/code><\/pre>\n\n\n\n<p><strong>Note:<\/strong> The <code>--swap<\/code> option swaps the old and new data directories instead of copying or linking the data files, which can make the upgrade significantly faster.<\/p>\n\n\n\n<p><strong>Start PostgreSQL 18 service:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>18:34:34 postgres@PG-green:\/home\/postgres\/ &#091;pg18b] pgstart\nwaiting for server to start.... done\nserver started\n18:34:37 postgres@PG-green:\/home\/postgres\/ &#091;pg18b] sqh\npsql (18beta1 dbi services build)\nType \"help\" for help.\n\npostgres=# SELECT * FROM pg_stat_subscription;\n-&#091; RECORD 1 ]---------+------------------------------\nsubid                 | 16406\nsubname               | my_sub2\nworker_type           | apply\npid                   | 22582\nleader_pid            |\nrelid                 |\nreceived_lsn          | 1\/70000CB8\nlast_msg_send_time    | 2025-05-18 18:35:07.905706+00\nlast_msg_receipt_time | 2025-05-18 18:35:07.905066+00\nlatest_end_lsn        | 1\/70000CB8\nlatest_end_time       | 2025-05-18 18:35:07.905706+00\npostgres=#<\/code><\/pre>\n\n\n\n<p><\/p>\n\n\n\n<p>This is one of the key results. <br>The logical replication setup survives the pg_upgrade process and keeps the synchronization going. <br>Note, though, that on large instances you obviously still need WAL retention to cover the duration of the pg_upgrade process.  
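With the subscription running on the upgraded node, the PG18 conflict counters can be inspected. A query sketch follows; the confl_* column names (e.g. confl_insert_exists) are assumed from the 18 beta documentation and should be verified against your build, and the query string is only printed here rather than run against a cluster:

```shell
# Sketch: inspect logical replication statistics on the upgraded green node.
# apply_error_count / sync_error_count exist in pg_stat_subscription_stats
# since PG15; the confl_* counters for write conflicts are the PG18 addition.
# confl_insert_exists is an assumed column name; verify on your 18 build.
SQL="SELECT subname, apply_error_count, sync_error_count, confl_insert_exists
       FROM pg_stat_subscription_stats;"
# On a live node this would be run as, for example:
#   psql -d veryolddatabasethatimigratedsincepgversion9 -c "$SQL"
echo "$SQL"
```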
<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h3 class=\"wp-block-heading\">\ud83d\udd01 Step 5: Perform Failover and Cutover to the Upgraded Green Environment<\/h3>\n\n\n\n<p><strong>On the primary server:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># On blue\nveryolddatabasethatimigratedsincepgversion9=# ALTER SYSTEM SET default_transaction_read_only = on;\nALTER SYSTEM\n\nveryolddatabasethatimigratedsincepgversion9=# select pg_reload_conf();\n pg_reload_conf\n----------------\n t\n(1 row)\n\n\nveryolddatabasethatimigratedsincepgversion9=# SELECT pg_current_wal_lsn() AS current_lsn;\n current_lsn\n-------------\n 1\/70009120\n(1 row)<\/code><\/pre>\n\n\n\n<p>Here I blocked new write transactions on the primary (note that sessions can still override <code>default_transaction_read_only<\/code> explicitly) and noted the current LSN. <\/p>\n\n\n\n<p><strong>On the replica (now upgraded) server:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>veryolddatabasethatimigratedsincepgversion9=# SELECT subname,\n       received_lsn,\n       latest_end_lsn,\n       latest_end_time\n  FROM pg_stat_subscription\n WHERE subname = 'my_sub2';\n subname | received_lsn | latest_end_lsn |        latest_end_time\n---------+--------------+----------------+-------------------------------\n my_sub | 1\/70009120   | 1\/70009120     | 2025-05-18 18:51:02.166188+00\n(1 row)\n\nveryolddatabasethatimigratedsincepgversion9=# DROP SUBSCRIPTION my_sub;\nNOTICE:  dropped replication slot \"my_sub\" on publisher\nDROP SUBSCRIPTION\n<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>SELECT pg_promote();<\/code><\/pre>\n\n\n\n<p>Once the latest LSN has been applied on my PG18 instance, I can drop my subscription, fail over the application traffic to my shiny new upgraded instance, and enjoy all its new features. 
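For a scripted cutover, the manual LSN comparison performed above can be wrapped in a small helper. This is a sketch assuming the lab hosts and the my_sub subscription name; the function is only defined here, not executed, and ports and credentials would need to be adapted:

```shell
# Sketch: wait until the subscriber has applied the publisher's final LSN
# before running DROP SUBSCRIPTION. Hosts (10.1.0.4 = blue publisher,
# 10.1.0.5 = green subscriber) and subname are the lab values.
wait_for_catchup() {
  # LSN on the (now read-only) publisher
  target=$(psql -h 10.1.0.4 -At -c "SELECT pg_current_wal_lsn();")
  while :; do
    # LSN last applied by the subscription on the green node
    applied=$(psql -h 10.1.0.5 -At -c \
      "SELECT latest_end_lsn FROM pg_stat_subscription WHERE subname = 'my_sub';")
    [ "$applied" = "$target" ] && return 0
    sleep 1
  done
}
```

Such a helper fits naturally into an Ansible or shell cutover playbook, called right after writes are blocked on blue and right before the subscription is dropped.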
<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-conclusion\">Conclusion<\/h2>\n\n\n\n<p>In my demo, I didn&#8217;t use the new &#8220;--all&#8221; flag of pg_createsubscriber, which creates logical replicas for all databases at once and will be really convenient for instances hosting many databases. <br>As PostgreSQL 18 moves from Beta to GA, you can adapt this blueprint for any major version jump, but note that pg_createsubscriber is available from version 17 onward. <br>Whether you\u2019re on bare metal or in the cloud, blue-green migrations like this give you a safe way to ship critical major upgrades with zero surprises and minimal downtime. <br>On large infrastructures, I would advise scripting these tasks with Ansible playbooks and integrating health checks at each step. <br>Additionally, involve your development team in running tests after the pg_upgrade process, so you avoid surprises on the application side, and keep the aforementioned limitations of logical replication in mind.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As PostgreSQL 18 Beta 1 has been released (link) and is packed with lots of cool features like improved IO on reads for example, I wanted to test out the new pg_createsubscriber and pg_upgrade options and how they are going to help us perform smoother major upgrades using the &#8220;blue-green&#8221; migration similar to the ones [&hellip;]<\/p>\n","protected":false},"author":153,"featured_media":37151,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[83],"tags":[3617,77,2601],"type_dbi":[2749],"class_list":["post-38391","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-postgresql","tag-pg18","tag-postgresql","tag-upgrade-2","type-postgresql"],"acf":[],"yoast_head":"<!-- This site is optimized with the 
Yoast SEO Premium plugin v27.2 (Yoast SEO v27.4) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>PostgreSQL 17-18 major upgrade - blue-green migration with minimal downtime - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"PostgreSQL 17-18 major upgrade - blue-green migration with minimal downtime\" \/>\n<meta property=\"og:description\" content=\"As PostgreSQL 18 Beta 1 has been released (link) and is packed with lots of cool features like improved IO on reads for example, I wanted to test out the new pg_createsubscriber and pg_upgrade options and how they are going to help us performing smoother major upgrades using the &#8220;blue-green&#8221; migration similar to the ones [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-05-18T19:51:53+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-05-19T09:32:19+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/elephant-1822636_1280.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1280\" \/>\n\t<meta property=\"og:image:height\" content=\"873\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Adrien Obernesser\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" 
\/>\n\t<meta name=\"twitter:data1\" content=\"Adrien Obernesser\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/\"},\"author\":{\"name\":\"Adrien Obernesser\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/fd2ab917212ce0200c7618afaa7fdbcd\"},\"headline\":\"PostgreSQL 17-18 major upgrade &#8211; blue-green migration with minimal downtime\",\"datePublished\":\"2025-05-18T19:51:53+00:00\",\"dateModified\":\"2025-05-19T09:32:19+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/\"},\"wordCount\":873,\"commentCount\":2,\"image\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2025\\\/02\\\/elephant-1822636_1280.jpg\",\"keywords\":[\"PG18\",\"PostgreSQL\",\"upgrade\"],\"articleSection\":[\"PostgreSQL\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/\",\"url\":\"htt
ps:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/\",\"name\":\"PostgreSQL 17-18 major upgrade - blue-green migration with minimal downtime - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2025\\\/02\\\/elephant-1822636_1280.jpg\",\"datePublished\":\"2025-05-18T19:51:53+00:00\",\"dateModified\":\"2025-05-19T09:32:19+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/fd2ab917212ce0200c7618afaa7fdbcd\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-17-18-major-upgrade-blue-green-migration-with-minimal-downtime\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2025\\\/02\\\/elephant-1822636_1280.jpg\",\"contentUrl\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2025\\\/02\\\/elephant-1822636_1280.jpg\",\"width\":1280,\"height\":873},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/postgresql-