{"id":23111,"date":"2023-03-02T09:50:57","date_gmt":"2023-03-02T08:50:57","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=23111"},"modified":"2023-03-02T09:50:59","modified_gmt":"2023-03-02T08:50:59","slug":"starting-postgresql-logical-replication-for-some-tables-based-on-a-dump","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/","title":{"rendered":"Starting PostgreSQL logical replication for some tables based on a dump"},"content":{"rendered":"\n<p>Recently at a customer we faced a somehow strange issue: We&#8217;ve setup logical replication from a public cloud managed PostgreSQL service to a PostgreSQL instance on a VM hosted by the same public cloud provider. For most of the tables that went fine, but for a few of them, throughput for synchronizing the initial snapshot (or initial load) dropped to a degree that it would have taken weeks for this process to complete. PostgreSQL as a managed service in public clouds is fine, as long as all goes smooth. Once you need to troubleshoot an issue, you are limited to what the cloud provider gives you. Usually the limitations start inside PostgreSQL because you don&#8217;t get a real super user. When you want to troubleshoot something from the operating system, you&#8217;re anyway lost. So the only options you have if you really don&#8217;t know what the real issue is, are the dashboards you get from the provider, or creating support tickets, or the PostgreSQL log file. Those can, or cannot, give you the details you need.<\/p>\n\n\n\n<p>In our case, we wanted to move fast and explored the options we had left:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Creating a physical replica to a self managed PostgreSQL on a VM just for being able to troubleshoot the real issue from there on: First we though that might be an option, but it quickly turned it is not: No superuser access<\/li>\n\n\n\n<li>Separating the problematic tables into separate subscriptions and publications did not help as well, we saw the same slowdown in transfer speed<\/li>\n\n\n\n<li>As only four out of a dozen tables had issues we didn&#8217;t want to totally get rid of logical replication<\/li>\n\n\n\n<li>Can we dump \/ reload those four tables and start the logical replication based on the time of the dump? This way we would not loose the benefit of logical replication at all. It turned out this is possible, and this is what this post is about.<\/li>\n<\/ul>\n\n\n\n<p>The source in the customer&#8217;s setup was PostgreSQL 11, so we&#8217;ll use the same version here. For this simple demo we&#8217;ll use <a href=\"https:\/\/www.postgresql.org\/docs\/current\/pgbench.html\" target=\"_blank\" rel=\"noreferrer noopener\">pgbench<\/a> to initialize a simple schema and use these standard tables for the replication:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,6,23]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                              version                                               \n----------------------------------------------------------------------------------------------------\n PostgreSQL 11.12 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\npostgres=# \\! pgbench -i -s 10 \ndropping old tables...\ncreating tables...\ngenerating data...\n100000 of 1000000 tuples (10%) done (elapsed 0.10 s, remaining 0.93 s)\n200000 of 1000000 tuples (20%) done (elapsed 0.33 s, remaining 1.31 s)\n300000 of 1000000 tuples (30%) done (elapsed 0.52 s, remaining 1.22 s)\n400000 of 1000000 tuples (40%) done (elapsed 0.74 s, remaining 1.10 s)\n500000 of 1000000 tuples (50%) done (elapsed 0.96 s, remaining 0.96 s)\n600000 of 1000000 tuples (60%) done (elapsed 1.20 s, remaining 0.80 s)\n700000 of 1000000 tuples (70%) done (elapsed 1.44 s, remaining 0.62 s)\n800000 of 1000000 tuples (80%) done (elapsed 1.61 s, remaining 0.40 s)\n900000 of 1000000 tuples (90%) done (elapsed 1.74 s, remaining 0.19 s)\n1000000 of 1000000 tuples (100%) done (elapsed 1.98 s, remaining 0.00 s)\nvacuuming...\ncreating primary keys...\ndone.\npostgres=# \\d\n              List of relations\n Schema |       Name       | Type  |  Owner   \n--------+------------------+-------+----------\n public | pgbench_accounts | table | postgres\n public | pgbench_branches | table | postgres\n public | pgbench_history  | table | postgres\n public | pgbench_tellers  | table | postgres\n(4 rows)\n<\/pre><\/div>\n\n\n<p>Let&#8217;s assume the problematic table is the large one, which is pgbench_accounts. The other three tables replicate fine and we can put them into the same <a href=\"https:\/\/www.postgresql.org\/docs\/current\/logical-replication-publication.html\" target=\"_blank\" rel=\"noreferrer noopener\">publication<\/a>:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,3]; title: ; notranslate\" title=\"\">\npostgres=# create publication pub_test for table pgbench_branches,pgbench_history,pgbench_tellers;\nCREATE PUBLICATION\npostgres=# select * from pg_publication;\n pubname  | pubowner | puballtables | pubinsert | pubupdate | pubdelete | pubtruncate \n----------+----------+--------------+-----------+-----------+-----------+-------------\n pub_test |       10 | f            | t         | t         | t         | t\n(1 row)\n<\/pre><\/div>\n\n\n<p>On the target instance, which is PostgreSQL 15 in this case, we need to prepare the same table structures as on the source. You can either do that manually, use <a href=\"https:\/\/www.postgresql.org\/docs\/current\/app-pgdump.html\" target=\"_blank\" rel=\"noreferrer noopener\">pg_dump<\/a> to create only the schema, or just use pgbench and truncate the tables:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,6,14,16,18,20]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                                       version                                                        \n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\npostgres=# \\! pgbench -i -s 1\ndropping old tables...\ncreating tables...\ngenerating data (client-side)...\n100000 of 100000 tuples (100%) done (elapsed 0.19 s, remaining 0.00 s)\nvacuuming...\ncreating primary keys...\ndone in 0.44 s (drop tables 0.00 s, create tables 0.02 s, client-side generate 0.21 s, vacuum 0.04 s, primary keys 0.16 s).\npostgres=# truncate table pgbench_accounts;\nTRUNCATE TABLE\npostgres=# truncate table pgbench_branches;\nTRUNCATE TABLE\npostgres=# truncate table pgbench_history;\nTRUNCATE TABLE\npostgres=# truncate table pgbench_tellers;\nTRUNCATE TABLE\npostgres=# \n<\/pre><\/div>\n\n\n<p>Once we have the structures, we can create the <a href=\"https:\/\/www.postgresql.org\/docs\/current\/logical-replication-subscription.html\" target=\"_blank\" rel=\"noreferrer noopener\">subscription<\/a> to attach to the just created publication on the source, and wait for the initial snapshot \/ load to complete:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,3,9]; title: ; notranslate\" title=\"\">\npostgres=# create subscription sub_test connection &#039;host=localhost port=5435 user=postgres dbname=postgres&#039; publication pub_test;\nCREATE SUBSCRIPTION\npostgres=# select * from pg_subscription;\n  oid  | subdbid | subskiplsn | subname  | subowner | subenabled | subbinary | substream | subtwophasestate | subdisableonerr |     &gt;\n-------+---------+------------+----------+----------+------------+-----------+-----------+------------------+-----------------+-----&gt;\n 16589 |   16559 | 0\/0        | sub_test |       10 | t          | f         | f         | d                | f               | host&gt;\n(1 row)\n\npostgres=# select count(*) from pgbench_branches;\n count \n-------\n    10\n(1 row)\n<\/pre><\/div>\n\n\n<p>The synchronization for the three tables is up and running and changes are replicated on the fly. This can easily be verified by inserting a row on the source:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,7]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                              version                                               \n----------------------------------------------------------------------------------------------------\n PostgreSQL 11.12 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\n\npostgres=# insert into pgbench_branches values (-1,-1,&#039;aa&#039;);\nINSERT 0 1\n<\/pre><\/div>\n\n\n<p>&#8230; and check for the same row in the target:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,7]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                                       version                                                        \n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\n\npostgres=# select * from pgbench_branches where bid = -1;\n bid | bbalance |                                          filler                                          \n-----+----------+------------------------------------------------------------------------------------------\n  -1 |       -1 | aa                                                                                      \n(1 row)\n<\/pre><\/div>\n\n\n<p>Now we need to handle the remaining table. The first step to do is to create a publication for that table in the source:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; first-line: 1; highlight: [1,6,8]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                              version                                               \n----------------------------------------------------------------------------------------------------\n PostgreSQL 11.12 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\npostgres=# create publication pub_test_2 for table pgbench_accounts;\nCREATE PUBLICATION\npostgres=# select * from pg_publication;\n  pubname   | pubowner | puballtables | pubinsert | pubupdate | pubdelete | pubtruncate \n------------+----------+--------------+-----------+-----------+-----------+-------------\n pub_test   |       10 | f            | t         | t         | t         | t\n pub_test_2 |       10 | f            | t         | t         | t         | t\n(2 rows)\n\n<\/pre><\/div>\n\n\n<p>For being able to start the replication from a specific point in time, we need a snapshot in the source to start from. This can be done by creating a <a href=\"https:\/\/www.postgresql.org\/docs\/current\/libpq-connect.html\" target=\"_blank\" rel=\"noreferrer noopener\">replication connection<\/a> to the source database and then define a logical replication slot. The important point here is, that you need to keep this connection open until the replication is fully setup. Otherwise you&#8217;ll loose the snapshot:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,7,8,12]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                              version                                               \n----------------------------------------------------------------------------------------------------\n PostgreSQL 11.12 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\n\npostgres=# \\q\npostgres@debian11pg:\/home\/postgres\/ &#x5B;1112] psql &quot;dbname=postgres replication=database&quot;\npsql (11.12)\nType &quot;help&quot; for help.\n\npostgres=# CREATE_REPLICATION_SLOT my_logical_repl_slot LOGICAL pgoutput;\n      slot_name       | consistent_point |    snapshot_name    | output_plugin \n----------------------+------------------+---------------------+---------------\n my_logical_repl_slot | 0\/37404478       | 00000003-00000097-1 | pgoutput\n(1 row)\n\npostgres=# \n<\/pre><\/div>\n\n\n<p>This gives us the snapshot (00000003-00000097-1) we can export from with pg_dump:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\npostgres@debian11pg:\/home\/postgres\/ &#x5B;1112] pg_dump --snapshot=00000003-00000097-1 -a -t public.pgbench_accounts &gt; pgbench_accounts.sql\n<\/pre><\/div>\n\n\n<p>On the target we&#8217;ll load the pgbench_accounts table:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [0,1,6,22]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                                       version                                                        \n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\npostgres=# \\i pgbench_accounts.sql\nSET\nSET\nSET\nSET\nSET\n set_config \n------------\n \n(1 row)\n\nSET\nSET\nSET\nSET\nCOPY 1000000\npostgres=# select count(*) from public.pgbench_accounts;\n  count  \n---------\n 1000000\n(1 row)\n<\/pre><\/div>\n\n\n<p>At this point, at least in real life, changes are still going on in the pgbench_accounts table in the source and we do not have them in the target yet. Now we need a new subscription which attaches to the logical replication slot we&#8217;ve created in the replication connection. The important point here is, that this subscription does not load the initial data automatically, which can be specified using the following options:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,6]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                                       version                                                        \n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\npostgres=# create subscription sub_test_2 connection &#039;host=localhost port=5435 user=postgres dbname=postgres&#039; publication pub_test_2 with ( slot_name = &#039;my_logical_repl_slot&#039;, create_slot=&#039;false&#039; , enabled=&#039;false&#039;, copy_data=&#039;false&#039;);\nCREATE SUBSCRIPTION\n<\/pre><\/div>\n\n\n<p>Having that in place we can start the replication:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                                       version                                                        \n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\npostgres=# alter subscription sub_test_2 enable;\nALTER SUBSCRIPTION\n<\/pre><\/div>\n\n\n<p>Adding data on the source should now trigger the replication of the new data to the target:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,7]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                              version                                               \n----------------------------------------------------------------------------------------------------\n PostgreSQL 11.12 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\n\npostgres=# insert into pgbench_accounts select i,i,i,i::text from generate_series(1000001,1000100) i;\nINSERT 0 100\npostgres=# \n<\/pre><\/div>\n\n\n<p>On the target we should see 100 additional rows in the pgbench_accounts table:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; highlight: [1,6]; title: ; notranslate\" title=\"\">\npostgres=# select version();\n                                                       version                                                        \n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\npostgres=# select count(*) from public.pgbench_accounts ;\n  count  \n---------\n 1000100\n(1 row)\n<\/pre><\/div>\n\n\n<p>Works as expected. This can be a nice workaround if you have tables which , for whatever reason, cannot be initially replicated in a time which is acceptable for the project.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Recently at a customer we faced a somehow strange issue: We&#8217;ve setup logical replication from a public cloud managed PostgreSQL service to a PostgreSQL instance on a VM hosted by the same public cloud provider. For most of the tables that went fine, but for a few of them, throughput for synchronizing the initial snapshot [&hellip;]<\/p>\n","protected":false},"author":29,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229,198],"tags":[2862],"type_dbi":[],"class_list":["post-23111","post","type-post","status-publish","format-standard","hentry","category-database-administration-monitoring","category-database-management","tag-postgresl"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Starting PostgreSQL logical replication for some tables based on a dump - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Starting PostgreSQL logical replication for some tables based on a dump\" \/>\n<meta property=\"og:description\" content=\"Recently at a customer we faced a somehow strange issue: We&#8217;ve setup logical replication from a public cloud managed PostgreSQL service to a PostgreSQL instance on a VM hosted by the same public cloud provider. For most of the tables that went fine, but for a few of them, throughput for synchronizing the initial snapshot [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-02T08:50:57+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-02T08:50:59+00:00\" \/>\n<meta name=\"author\" content=\"Daniel Westermann\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@westermanndanie\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Westermann\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/\"},\"author\":{\"name\":\"Daniel Westermann\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"headline\":\"Starting PostgreSQL logical replication for some tables based on a dump\",\"datePublished\":\"2023-03-02T08:50:57+00:00\",\"dateModified\":\"2023-03-02T08:50:59+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/\"},\"wordCount\":774,\"commentCount\":4,\"keywords\":[\"PostgreSL\"],\"articleSection\":[\"Database Administration &amp; Monitoring\",\"Database management\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/\",\"name\":\"Starting PostgreSQL logical replication for some tables based on a dump - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"datePublished\":\"2023-03-02T08:50:57+00:00\",\"dateModified\":\"2023-03-02T08:50:59+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Starting PostgreSQL logical replication for some tables based on a dump\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\",\"name\":\"Daniel Westermann\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"caption\":\"Daniel Westermann\"},\"description\":\"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.\",\"sameAs\":[\"https:\/\/x.com\/westermanndanie\"],\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Starting PostgreSQL logical replication for some tables based on a dump - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/","og_locale":"en_US","og_type":"article","og_title":"Starting PostgreSQL logical replication for some tables based on a dump","og_description":"Recently at a customer we faced a somehow strange issue: We&#8217;ve setup logical replication from a public cloud managed PostgreSQL service to a PostgreSQL instance on a VM hosted by the same public cloud provider. For most of the tables that went fine, but for a few of them, throughput for synchronizing the initial snapshot [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/","og_site_name":"dbi Blog","article_published_time":"2023-03-02T08:50:57+00:00","article_modified_time":"2023-03-02T08:50:59+00:00","author":"Daniel Westermann","twitter_card":"summary_large_image","twitter_creator":"@westermanndanie","twitter_misc":{"Written by":"Daniel Westermann","Est. reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/"},"author":{"name":"Daniel Westermann","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"headline":"Starting PostgreSQL logical replication for some tables based on a dump","datePublished":"2023-03-02T08:50:57+00:00","dateModified":"2023-03-02T08:50:59+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/"},"wordCount":774,"commentCount":4,"keywords":["PostgreSL"],"articleSection":["Database Administration &amp; Monitoring","Database management"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/","url":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/","name":"Starting PostgreSQL logical replication for some tables based on a dump - dbi Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"datePublished":"2023-03-02T08:50:57+00:00","dateModified":"2023-03-02T08:50:59+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/starting-postgresql-logical-replication-for-some-tables-based-on-a-dump\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Starting PostgreSQL logical replication for some tables based on a dump"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66","name":"Daniel Westermann","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","caption":"Daniel Westermann"},"description":"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.","sameAs":["https:\/\/x.com\/westermanndanie"],"url":"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/23111","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/29"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=23111"}],"version-history":[{"count":32,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/23111\/revisions"}],"predecessor-version":[{"id":23163,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/23111\/revisions\/23163"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=23111"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=23111"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=23111"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=23111"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}