{"id":31682,"date":"2024-03-07T15:19:29","date_gmt":"2024-03-07T14:19:29","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=31682"},"modified":"2024-03-07T15:19:32","modified_gmt":"2024-03-07T14:19:32","slug":"getting-started-with-greenplum-5-recovering-from-failed-segment-nodes","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/getting-started-with-greenplum-5-recovering-from-failed-segment-nodes\/","title":{"rendered":"Getting started with Greenplum \u2013 5 &#8211; Recovering from failed segment nodes"},"content":{"rendered":"\n<p>This is the next post in this little Greenplum series. This time we&#8217;ll look at how we can recover from a failed segment. If you are looking for the previous post, they are here: <a href=\"https:\/\/www.dbi-services.com\/blog\/getting-started-with-greenplum-1-installation\/\" target=\"_blank\" rel=\"noreferrer noopener\">Getting started with Greenplum \u2013 1 \u2013 Installation<\/a>, <a href=\"https:\/\/www.dbi-services.com\/blog\/getting-started-with-greenplum-2-initializing-and-bringing-up-the-cluster\/\" target=\"_blank\" rel=\"noreferrer noopener\">Getting started with Greenplum \u2013 2 \u2013 Initializing and bringing up the cluster<\/a>, <a href=\"https:\/\/www.dbi-services.com\/blog\/getting-started-with-greenplum-3-behind-the-scenes\/\" target=\"_blank\" rel=\"noreferrer noopener\">Getting started with Greenplum \u2013 3 \u2013 Behind the scenes<\/a>, <a href=\"https:\/\/www.dbi-services.com\/blog\/getting-started-with-greenplum-4-backup-restore-databases\/\" target=\"_blank\" rel=\"noreferrer noopener\">Getting started with Greenplum \u2013 4 \u2013 Backup &amp; Restore \u2013 databases<\/a>.<\/p>\n\n\n\n<p>Let&#8217;s quickly come back to what we&#8217;ve deployed currently:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\n                                        |-------------------|\n                             |------6000|primary---------   |\n                             |          |     Segment 1 |   |\n                             |      7000|mirror&amp;lt;------| |   |\n                             |          |-------------------|\n                             |                        | |\n            |-------------------|                     | |\n            |                   |                     | |\n        5432|   Coordinator     |                     | |\n            |                   |                     | |\n            |-------------------|                     | |\n                             |                        | |\n                             |          |-------------------|\n                             |------6000|primary ------ |   |\n                                        |     Segment 2 |   |\n                                    7000|mirror&amp;lt;--------|   |\n                                        |-------------------|\n<\/pre><\/div>\n\n\n<p>The coordinator host is the entry point for the application and requests are routed to the segment hosts. The idea behind this is, that you can use the power of multiple (segment) hosts to deliver what you&#8217;ve asked for. The more segment hosts you add, the more compute resources you can use.<\/p>\n\n\n\n<p>The questions is: How can you recover from a failed segment node? 
With the current deployment this would reduce the compute resources by 50% and you probably want to have this back online as soon as possible.<\/p>\n\n\n\n<p>To get the current status of your segments you can use &#8220;gpstate&#8221;:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpstate\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-Starting gpstate with args: \n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-Gathering data from segments...\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-Greenplum instance status summary\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Coordinator instance                                      = Active\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Coordinator standby                                       = No coordinator standby configured\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total segment instance count from metadata                = 4\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Primary Segment Status\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total primary segments                                    = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total primary segment valid (at coordinator)              = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total primary segment failures (at coordinator)           = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of postmaster.pid files missing              = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of postmaster.pid files found                = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of postmaster.pid PIDs missing               = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of postmaster.pid PIDs found                 = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of \/tmp lock files missing                   = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of \/tmp lock files found                     = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number postmaster processes missing                 = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number postmaster processes found                   = 2\n20240307:11:03:16:001723 
gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Mirror Segment Status\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total mirror segments                                     = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total mirror segment valid (at coordinator)               = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total mirror segment failures (at coordinator)            = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of postmaster.pid files missing              = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of postmaster.pid files found                = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of postmaster.pid PIDs missing               = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of postmaster.pid PIDs found                 = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of \/tmp lock files missing                   = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number of \/tmp lock files found                     = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number postmaster processes missing                 = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number postmaster processes found                   = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number mirror segments acting as primary segments   = 0\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Total number mirror segments acting as mirror segments    = 2\n20240307:11:03:16:001723 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n\n<\/pre><\/div>\n\n\n<p>This confirms that all is fine as of now. There are two primary and two mirror segments and all of them are up and running. 
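If you prefer to look at the catalog directly, the same information is available in &#8220;gp_segment_configuration&#8221; on the coordinator (a minimal example, the column list is only trimmed for readability):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ psql -c &quot;select content, role, preferred_role, mode, status, port, hostname from gp_segment_configuration order by content, role&quot; postgres\n<\/pre><\/div>\n\n\n<p>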
You can also ask &#8220;gpstate&#8221; to only check for segments which have issues:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpstate -e\n20240307:11:10:15:001862 gpstate:cdw:gpadmin-&#x5B;INFO]:-Starting gpstate with args: -e\n20240307:11:10:15:001862 gpstate:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:11:10:15:001862 gpstate:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:11:10:15:001862 gpstate:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:11:10:15:001862 gpstate:cdw:gpadmin-&#x5B;INFO]:-Gathering data from segments...\n20240307:11:10:15:001862 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:10:15:001862 gpstate:cdw:gpadmin-&#x5B;INFO]:-Segment Mirroring Status Report\n20240307:11:10:15:001862 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:10:15:001862 gpstate:cdw:gpadmin-&#x5B;INFO]:-All segments are running normally\n<\/pre><\/div>\n\n\n<p>There are several failure scenarios for a segment: You can lose the whole node, for whatever reason. A specific PostgreSQL instance, either a primary or a mirror segment, can stop running. Finally, the PGDATA of a specific segment can get corrupted.<\/p>\n\n\n\n<p>Let&#8217;s start with a segment instance which went down. To force this, let&#8217;s kill the mirror instance on the first segment node (sdw1):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,8,9]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@sdw1 ~]$ ps -ef | egrep &quot;7000|mirror&quot; | grep -v grep\ngpadmin     1343       1  0 10:50 ?        00:00:00 \/usr\/local\/greenplum-db-7.1.0\/bin\/postgres -D \/data\/mirror\/gpseg1 -c gp_role=execute\ngpadmin     1344    1343  0 10:50 ?        00:00:00 postgres:  7000, logger process   \ngpadmin     1346    1343  0 10:50 ?        00:00:00 postgres:  7000, startup   recovering 000000010000000000000004\ngpadmin     1348    1343  0 10:50 ?        00:00:00 postgres:  7000, checkpointer   \ngpadmin     1349    1343  0 10:50 ?        00:00:00 postgres:  7000, background writer   \ngpadmin     1372    1343  0 10:50 ?        
00:00:01 postgres:  7000, walreceiver   streaming 0\/127CD2A8\n&#x5B;gpadmin@sdw1 ~]$ kill -9 1372 1349 1348 1346 1344 1343\n&#x5B;gpadmin@sdw1 ~]$ ps -ef | egrep &quot;7000|mirror&quot; | grep -v grep\n&#x5B;gpadmin@sdw1 ~]$ \n\n<\/pre><\/div>\n\n\n<p>The coordinator should become aware of this:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpstate -e\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-Starting gpstate with args: -e\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-Gathering data from segments...\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;WARNING]:-pg_stat_replication shows no standby connections\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-Segment Mirroring Status Report\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-Unsynchronized Segment Pairs\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Current Primary   Port   WAL sync remaining bytes   Mirror   Port\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-   sdw2              6000   Unknown                    sdw1     7000\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-Downed Segments (may include segments where status could not be retrieved)\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Segment   Port   Config status   Status\n20240307:11:44:25:002261 gpstate:cdw:gpadmin-&#x5B;INFO]:-   sdw1      7000   Down            Down in configuration\n\n<\/pre><\/div>\n\n\n<p>This confirms that the segment is down and the coordinator is aware of it. 
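If you first want to see what &#8220;gprecoverseg&#8221; would do without changing anything, you can let it write a recovery configuration file, review it, and feed it back in afterwards (a small sketch, the file name is just an example):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,2,3]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gprecoverseg -o \/tmp\/recover_config    # only generate the recovery configuration file, nothing is recovered yet\n&#x5B;gpadmin@cdw ~]$ cat \/tmp\/recover_config                 # review which segments would be recovered\n&#x5B;gpadmin@cdw ~]$ gprecoverseg -i \/tmp\/recover_config -a  # run the recovery non-interactively based on that file\n<\/pre><\/div>\n\n\n<p>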
In the most simple case you can just use &#8220;gprecoverseg&#8221; to recover any failed segments like this:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gprecoverseg\n20240307:11:45:33:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Starting gprecoverseg with args: \n20240307:11:45:33:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:11:45:33:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:11:45:33:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully finished pg_controldata \/data\/primary\/gpseg1 for dbid 3:\nstdout: pg_control version number:            12010700\nCatalog version number:               302307241\nDatabase system identifier:           7340990201631847636\nDatabase cluster state:               in production\npg_control last modified:             Thu 07 Mar 2024 10:55:32 AM CET\nLatest checkpoint location:           0\/127CD1E8\nLatest checkpoint&#039;s REDO location:    0\/127CD1B0\nLatest checkpoint&#039;s REDO WAL file:    000000010000000000000004\nLatest checkpoint&#039;s TimeLineID:       1\nLatest checkpoint&#039;s PrevTimeLineID:   1\nLatest checkpoint&#039;s full_page_writes: on\nLatest checkpoint&#039;s NextXID:          0:545\nLatest checkpoint&#039;s NextGxid:         25\nLatest checkpoint&#039;s NextOID:          17451\nLatest checkpoint&#039;s NextRelfilenode:  16392\nLatest checkpoint&#039;s NextMultiXactId:  1\nLatest checkpoint&#039;s NextMultiOffset:  0\nLatest checkpoint&#039;s oldestXID:        529\nLatest checkpoint&#039;s oldestXID&#039;s DB:   13719\nLatest checkpoint&#039;s oldestActiveXID:  545\nLatest checkpoint&#039;s oldestMultiXid:   1\nLatest checkpoint&#039;s oldestMulti&#039;s DB: 13720\nLatest checkpoint&#039;s oldestCommitTsXid:0\nLatest checkpoint&#039;s newestCommitTsXid:0\nTime of latest checkpoint:            Thu 07 Mar 2024 10:55:32 AM CET\nFake LSN counter for unlogged rels:   0\/3E8\nMinimum recovery ending location:     0\/0\nMin recovery ending loc&#039;s timeline:   0\nBackup start location:                0\/0\nBackup end location:                  0\/0\nEnd-of-backup record required:        no\nwal_level setting:                    replica\nwal_log_hints setting:                off\nmax_connections setting:              750\nmax_worker_processes setting:         12\nmax_wal_senders setting:              10\nmax_prepared_xacts setting:           250\nmax_locks_per_xact setting:           128\ntrack_commit_timestamp setting:       off\nMaximum data alignment:               8\nDatabase block size:                  32768\nBlocks per segment of large relation: 32768\nWAL block size:                       32768\nBytes per WAL segment:                67108864\nMaximum length of identifiers:        64\nMaximum columns in an index:          32\nMaximum size of a TOAST chunk:        8140\nSize of a large-object chunk:         8192\nDate\/time type storage:               64-bit 
integers\nFloat4 argument passing:              by value\nFloat8 argument passing:              by value\nData page checksum version:           1\nMock authentication nonce:            ae03e3dc891309211ede650a2e28c1b7dac2e510970a912d9f428761f243896e\n\nstderr: \n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully finished pg_controldata \/data\/mirror\/gpseg1 for dbid 5:\nstdout: pg_control version number:            12010700\nCatalog version number:               302307241\nDatabase system identifier:           7340990201631847636\nDatabase cluster state:               in archive recovery\npg_control last modified:             Thu 07 Mar 2024 11:00:32 AM CET\nLatest checkpoint location:           0\/127CD1E8\nLatest checkpoint&#039;s REDO location:    0\/127CD1B0\nLatest checkpoint&#039;s REDO WAL file:    000000010000000000000004\nLatest checkpoint&#039;s TimeLineID:       1\nLatest checkpoint&#039;s PrevTimeLineID:   1\nLatest checkpoint&#039;s full_page_writes: on\nLatest checkpoint&#039;s NextXID:          0:545\nLatest checkpoint&#039;s NextGxid:         25\nLatest checkpoint&#039;s NextOID:          17451\nLatest checkpoint&#039;s NextRelfilenode:  16392\nLatest checkpoint&#039;s NextMultiXactId:  1\nLatest checkpoint&#039;s NextMultiOffset:  0\nLatest checkpoint&#039;s oldestXID:        529\nLatest checkpoint&#039;s oldestXID&#039;s DB:   13719\nLatest checkpoint&#039;s oldestActiveXID:  545\nLatest checkpoint&#039;s oldestMultiXid:   1\nLatest checkpoint&#039;s oldestMulti&#039;s DB: 13720\nLatest checkpoint&#039;s oldestCommitTsXid:0\nLatest checkpoint&#039;s newestCommitTsXid:0\nTime of latest checkpoint:            Thu 07 Mar 2024 10:55:32 AM CET\nFake LSN counter for unlogged rels:   0\/3E8\nMinimum recovery ending location:     0\/127CD2A8\nMin recovery ending loc&#039;s timeline:   1\nBackup start location:                0\/0\nBackup end location:                  0\/0\nEnd-of-backup record required:        no\nwal_level setting:                    replica\nwal_log_hints setting:                off\nmax_connections setting:              750\nmax_worker_processes setting:         12\nmax_wal_senders setting:              10\nmax_prepared_xacts setting:           250\nmax_locks_per_xact setting:           128\ntrack_commit_timestamp setting:       off\nMaximum data alignment:               8\nDatabase block size:                  32768\nBlocks per segment of large relation: 32768\nWAL block size:                       32768\nBytes per WAL segment:                67108864\nMaximum length of identifiers:        64\nMaximum columns in an index:          32\nMaximum size of a TOAST chunk:        8140\nSize of a large-object chunk:         8192\nDate\/time type storage:               64-bit integers\nFloat4 argument passing:              by value\nFloat8 argument passing:              by value\nData page checksum version:           1\nMock authentication nonce:            ae03e3dc891309211ede650a2e28c1b7dac2e510970a912d9f428761f243896e\n\nstderr: \n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Heap checksum setting is consistent between coordinator and the segments that are candidates for recoverseg\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Greenplum instance recovery parameters\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovery type              = 
Standard\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovery 1 of 1\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Synchronization mode                 = Incremental\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance host                 = sdw1\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance address              = sdw1\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance directory            = \/data\/mirror\/gpseg1\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance port                 = 7000\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance host        = sdw2\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance address     = sdw2\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance directory   = \/data\/primary\/gpseg1\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance port        = 6000\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Target                      = in-place\n20240307:11:45:34:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n\nContinue with segment recovery procedure Yy|Nn (default=N):\n<\/pre><\/div>\n\n\n<p>Once we confirm this, the failed mirror instance on sdw1 is recovered from its primary segment, which runs on the other segment host (sdw2):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [2]; title: ; notranslate\" title=\"\">\nContinue with segment recovery procedure Yy|Nn (default=N):\n&gt; y\n20240307:11:48:44:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Starting to create new pg_hba.conf on primary segments\n20240307:11:48:44:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-killing existing walsender process on primary sdw2:6000 to refresh replication connection\n20240307:11:48:44:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully modified pg_hba.conf on primary segments to allow replication connections\n20240307:11:48:44:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-1 segment(s) to recover\n20240307:11:48:44:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Ensuring 1 failed segment(s) are stopped\n20240307:11:48:45:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Setting up the required segments for recovery\n20240307:11:48:45:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Updating configuration for mirrors\n20240307:11:48:45:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Initiating segment recovery. 
Upon completion, will start the successfully recovered segments\n20240307:11:48:45:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-era is 5519b53b4b2c1dab_240307105028\nsdw1 (dbid 5): skipping pg_rewind on mirror as standby.signal is present\n20240307:11:48:46:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Triggering FTS probe\n20240307:11:48:46:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-********************************\n20240307:11:48:46:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Segments successfully recovered.\n20240307:11:48:46:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-********************************\n20240307:11:48:46:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovered mirror segments need to sync WAL with primary segments.\n20240307:11:48:46:002349 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Use &#039;gpstate -e&#039; to check progress of WAL sync remaining bytes\n\n<\/pre><\/div>\n\n\n<p>Asking for any failed segments again confirms that all went well and the failed instance is back online:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpstate -e\n20240307:11:49:16:002438 gpstate:cdw:gpadmin-&#x5B;INFO]:-Starting gpstate with args: -e\n20240307:11:49:16:002438 gpstate:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:11:49:16:002438 gpstate:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:11:49:16:002438 gpstate:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:11:49:16:002438 gpstate:cdw:gpadmin-&#x5B;INFO]:-Gathering data from segments...\n20240307:11:49:16:002438 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:49:16:002438 gpstate:cdw:gpadmin-&#x5B;INFO]:-Segment Mirroring Status Report\n20240307:11:49:16:002438 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:11:49:16:002438 gpstate:cdw:gpadmin-&#x5B;INFO]:-All segments are running normally\n\n<\/pre><\/div>\n\n\n<p>This was the easy case. What happens if we remove the PGDATA of a primary segment? We&#8217;ll use the same node for this test and remove the PGDATA of the primary instance on node sdw1:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,12,13]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@sdw1 ~]$ ps -ef | egrep &quot;6000|primary&quot; | grep -v grep\ngpadmin     1329       1  0 10:50 ?        00:00:00 \/usr\/local\/greenplum-db-7.1.0\/bin\/postgres -D \/data\/primary\/gpseg0 -c gp_role=execute\ngpadmin     1345    1329  0 10:50 ?        00:00:00 postgres:  6000, logger process   \ngpadmin     1352    1329  0 10:50 ?        00:00:00 postgres:  6000, checkpointer   \ngpadmin     1353    1329  0 10:50 ?        00:00:00 postgres:  6000, background writer   \ngpadmin     1354    1329  0 10:50 ?        00:00:00 postgres:  6000, walwriter   \ngpadmin     1355    1329  0 10:50 ?        00:00:00 postgres:  6000, autovacuum launcher   \ngpadmin     1356    1329  0 10:50 ?        
00:00:00 postgres:  6000, stats collector   \ngpadmin     1357    1329  0 10:50 ?        00:00:00 postgres:  6000, logical replication launcher   \ngpadmin     1360    1329  0 10:50 ?        00:00:00 postgres:  6000, walsender gpadmin 192.168.122.202(40808) streaming 0\/127E5FE8\n&#x5B;gpadmin@sdw1 ~]$ rm -rf \/data\/primary\/gpseg0\/*\n&#x5B;gpadmin@sdw1 ~]$ ps -ef | egrep &quot;6000|primary&quot; | grep -v grep\n&#x5B;gpadmin@sdw1 ~]$ \n<\/pre><\/div>\n\n\n<p>Again, the coordinator node is of course aware of that:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpstate -e\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-Starting gpstate with args: -e\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-Gathering data from segments...\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;WARNING]:-pg_stat_replication shows no standby connections\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-Segment Mirroring Status Report\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-Segments with Primary and Mirror Roles Switched\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Current Primary   Port   Mirror   Port\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-   sdw2              7000   sdw1     6000\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-Unsynchronized Segment Pairs\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Current Primary   Port   WAL sync remaining bytes   Mirror   Port\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-   sdw2              7000   Unknown                    sdw1     6000\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-Downed Segments (may include segments where status could not be retrieved)\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Segment   Port   Config status   Status\n20240307:15:00:20:004247 gpstate:cdw:gpadmin-&#x5B;INFO]:-   sdw1      6000   Down            Down in configuration\n<\/pre><\/div>\n\n\n<p>Trying to recover in the same way as before:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gprecoverseg \n20240307:15:01:37:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Starting gprecoverseg with args: \n20240307:15:01:37:004431 
gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:15:01:37:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:15:01:37:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully finished pg_controldata \/data\/mirror\/gpseg0 for dbid 4:\nstdout: pg_control version number:            12010700\nCatalog version number:               302307241\nDatabase system identifier:           7340990201624057424\nDatabase cluster state:               in production\npg_control last modified:             Thu 07 Mar 2024 02:59:57 PM CET\nLatest checkpoint location:           0\/127E6088\nLatest checkpoint&#039;s REDO location:    0\/127E6018\nLatest checkpoint&#039;s REDO WAL file:    000000020000000000000004\nLatest checkpoint&#039;s TimeLineID:       2\nLatest checkpoint&#039;s PrevTimeLineID:   2\nLatest checkpoint&#039;s full_page_writes: on\nLatest checkpoint&#039;s NextXID:          0:545\nLatest checkpoint&#039;s NextGxid:         25\nLatest checkpoint&#039;s NextOID:          17451\nLatest checkpoint&#039;s NextRelfilenode:  16392\nLatest checkpoint&#039;s NextMultiXactId:  1\nLatest checkpoint&#039;s NextMultiOffset:  0\nLatest checkpoint&#039;s oldestXID:        529\nLatest checkpoint&#039;s oldestXID&#039;s DB:   13719\nLatest checkpoint&#039;s oldestActiveXID:  545\nLatest checkpoint&#039;s oldestMultiXid:   1\nLatest checkpoint&#039;s oldestMulti&#039;s DB: 13720\nLatest checkpoint&#039;s oldestCommitTsXid:0\nLatest checkpoint&#039;s newestCommitTsXid:0\nTime of latest checkpoint:            Thu 07 Mar 2024 02:59:57 PM CET\nFake LSN counter for unlogged rels:   0\/3E8\nMinimum recovery ending location:     0\/0\nMin recovery ending loc&#039;s timeline:   0\nBackup start location:                0\/0\nBackup end location:                  0\/0\nEnd-of-backup record required:        no\nwal_level setting:                    replica\nwal_log_hints setting:                off\nmax_connections setting:              750\nmax_worker_processes setting:         12\nmax_wal_senders setting:              10\nmax_prepared_xacts setting:           250\nmax_locks_per_xact setting:           128\ntrack_commit_timestamp setting:       off\nMaximum data alignment:               8\nDatabase block size:                  32768\nBlocks per segment of large relation: 32768\nWAL block size:                       32768\nBytes per WAL segment:                67108864\nMaximum length of identifiers:        64\nMaximum columns in an index:          32\nMaximum size of a TOAST chunk:        8140\nSize of a large-object chunk:         8192\nDate\/time type storage:               64-bit integers\nFloat4 argument passing:              by value\nFloat8 argument passing:              by value\nData page checksum version:           1\nMock authentication nonce:            300975858d3712b8d6ecd6583814e3bc603e12304aa764d1c659721e205dc0ad\n\nstderr: \n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;WARNING]:-cannot access pg_controldata for dbid 2 on host sdw1\n20240307:15:01:38:004431 
gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Heap checksum setting is consistent between coordinator and the segments that are candidates for recoverseg\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Greenplum instance recovery parameters\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovery type              = Standard\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovery 1 of 1\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Synchronization mode                 = Incremental\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance host                 = sdw1\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance address              = sdw1\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance directory            = \/data\/primary\/gpseg0\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance port                 = 6000\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance host        = sdw2\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance address     = sdw2\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance directory   = \/data\/mirror\/gpseg0\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance port        = 7000\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Target                      = in-place\n20240307:15:01:38:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n\nContinue with segment recovery procedure Yy|Nn (default=N):\n&gt; y\n20240307:15:01:52:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Starting to create new pg_hba.conf on primary segments\n20240307:15:01:52:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-killing existing walsender process on primary sdw2:7000 to refresh replication connection\n20240307:15:01:52:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully modified pg_hba.conf on primary segments to allow replication connections\n20240307:15:01:52:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-1 segment(s) to recover\n20240307:15:01:52:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Ensuring 1 failed segment(s) are stopped\n20240307:15:01:52:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Setting up the required segments for recovery\n20240307:15:01:53:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Updating configuration for mirrors\n20240307:15:01:53:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Initiating segment recovery. 
Upon completion, will start the successfully recovered segments\n20240307:15:01:53:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-era is 5519b53b4b2c1dab_240307105028\nsdw1 (dbid 2): pg_rewind: fatal: could not open file &quot;\/data\/primary\/gpseg0\/global\/pg_control&quot; for reading: No such file or directory\n20240307:15:01:53:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------------\n20240307:15:01:53:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Failed to recover the following segments. You must run either gprecoverseg --differential or gprecoverseg -F for all incremental failures\n20240307:15:01:53:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:- hostname: sdw1; port: 6000; logfile: \/home\/gpadmin\/gpAdminLogs\/pg_rewind.20240307_150152.dbid2.out; recoverytype: incremental\n20240307:15:01:53:004431 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Triggering FTS probe\n20240307:15:01:53:004431 gprecoverseg:cdw:gpadmin-&#x5B;ERROR]:-gprecoverseg failed. Please check the output for more details.\n\n<\/pre><\/div>\n\n\n<p>This fails because &#8220;pg_control&#8221; is not available anymore. By default &#8220;gprecoverseg&#8221; does an incremental recovery, and this can no longer work. Now we need to do a full recovery:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gprecoverseg -F\n20240307:15:03:15:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Starting gprecoverseg with args: -F\n20240307:15:03:15:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:15:03:15:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:15:03:15:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully finished pg_controldata \/data\/mirror\/gpseg0 for dbid 4:\nstdout: pg_control version number:            12010700\nCatalog version number:               302307241\nDatabase system identifier:           7340990201624057424\nDatabase cluster state:               in production\npg_control last modified:             Thu 07 Mar 2024 03:01:53 PM CET\nLatest checkpoint location:           0\/127E6180\nLatest checkpoint&#039;s REDO location:    0\/127E6148\nLatest checkpoint&#039;s REDO WAL file:    000000020000000000000004\nLatest checkpoint&#039;s TimeLineID:       2\nLatest checkpoint&#039;s PrevTimeLineID:   2\nLatest checkpoint&#039;s full_page_writes: on\nLatest checkpoint&#039;s NextXID:          0:545\nLatest checkpoint&#039;s NextGxid:         25\nLatest checkpoint&#039;s NextOID:          17451\nLatest checkpoint&#039;s NextRelfilenode:  16392\nLatest checkpoint&#039;s NextMultiXactId:  1\nLatest checkpoint&#039;s NextMultiOffset:  0\nLatest checkpoint&#039;s oldestXID:        529\nLatest checkpoint&#039;s oldestXID&#039;s DB:   13719\nLatest checkpoint&#039;s oldestActiveXID:  545\nLatest checkpoint&#039;s oldestMultiXid:   1\nLatest checkpoint&#039;s oldestMulti&#039;s DB: 13720\nLatest checkpoint&#039;s oldestCommitTsXid:0\nLatest checkpoint&#039;s 
newestCommitTsXid:0\nTime of latest checkpoint:            Thu 07 Mar 2024 03:01:53 PM CET\nFake LSN counter for unlogged rels:   0\/3E8\nMinimum recovery ending location:     0\/0\nMin recovery ending loc&#039;s timeline:   0\nBackup start location:                0\/0\nBackup end location:                  0\/0\nEnd-of-backup record required:        no\nwal_level setting:                    replica\nwal_log_hints setting:                off\nmax_connections setting:              750\nmax_worker_processes setting:         12\nmax_wal_senders setting:              10\nmax_prepared_xacts setting:           250\nmax_locks_per_xact setting:           128\ntrack_commit_timestamp setting:       off\nMaximum data alignment:               8\nDatabase block size:                  32768\nBlocks per segment of large relation: 32768\nWAL block size:                       32768\nBytes per WAL segment:                67108864\nMaximum length of identifiers:        64\nMaximum columns in an index:          32\nMaximum size of a TOAST chunk:        8140\nSize of a large-object chunk:         8192\nDate\/time type storage:               64-bit integers\nFloat4 argument passing:              by value\nFloat8 argument passing:              by value\nData page checksum version:           1\nMock authentication nonce:            300975858d3712b8d6ecd6583814e3bc603e12304aa764d1c659721e205dc0ad\n\nstderr: \n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;WARNING]:-cannot access pg_controldata for dbid 2 on host sdw1\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Heap checksum setting is consistent between coordinator and the segments that are candidates for recoverseg\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Greenplum instance recovery parameters\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovery type              = Standard\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovery 1 of 1\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Synchronization mode                 = Full\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance host                 = sdw1\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance address              = sdw1\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance directory            = \/data\/primary\/gpseg0\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance port                 = 6000\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance host        = sdw2\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance address     = sdw2\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance directory   = \/data\/mirror\/gpseg0\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance port        = 7000\n20240307:15:03:16:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Target                      = in-place\n20240307:15:03:16:004491 
gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n\nContinue with segment recovery procedure Yy|Nn (default=N):\n&gt; y\n20240307:15:03:21:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Starting to create new pg_hba.conf on primary segments\n20240307:15:03:21:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-killing existing walsender process on primary sdw2:7000 to refresh replication connection\n20240307:15:03:21:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully modified pg_hba.conf on primary segments to allow replication connections\n20240307:15:03:21:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-1 segment(s) to recover\n20240307:15:03:21:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Ensuring 1 failed segment(s) are stopped\n20240307:15:03:21:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Setting up the required segments for recovery\n20240307:15:03:22:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Updating configuration for mirrors\n20240307:15:03:22:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Initiating segment recovery. Upon completion, will start the successfully recovered segments\n20240307:15:03:22:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-era is 5519b53b4b2c1dab_240307105028\nsdw1 (dbid 2): pg_basebackup: base backup completed\n20240307:15:03:25:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Triggering FTS probe\n20240307:15:03:25:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-********************************\n20240307:15:03:25:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Segments successfully recovered.\n20240307:15:03:25:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-********************************\n20240307:15:03:25:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovered mirror segments need to sync WAL with primary segments.\n20240307:15:03:25:004491 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Use &#039;gpstate -e&#039; to check progress of WAL sync remaining bytes\n<\/pre><\/div>\n\n\n<p>This worked but now we see this when we ask for failed segment nodes:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpstate -e\n20240307:15:04:35:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-Starting gpstate with args: -e\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-Gathering data from segments...\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-Segment Mirroring Status Report\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-Segments with Primary and Mirror Roles Switched\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-   Current Primary   Port   
Mirror   Port\n20240307:15:04:36:004552 gpstate:cdw:gpadmin-&#x5B;INFO]:-   sdw2              7000   sdw1     6000\n<\/pre><\/div>\n\n\n<p>The reason is, that not all instances are in their preferred role anymore:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; highlight: [1,7,8]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ psql -c &quot;select * from gp_segment_configuration&quot; postgres\n dbid | content | role | preferred_role | mode | status | port | hostname | address |          datadir          \n------+---------+------+----------------+------+--------+------+----------+---------+---------------------------\n    1 |      -1 | p    | p              | n    | u      | 5432 | cdw      | cdw     | \/data\/coordinator\/gpseg-1\n    3 |       1 | p    | p              | s    | u      | 6000 | sdw2     | sdw2    | \/data\/primary\/gpseg1\n    5 |       1 | m    | m              | s    | u      | 7000 | sdw1     | sdw1    | \/data\/mirror\/gpseg1\n    4 |       0 | p    | m              | s    | u      | 7000 | sdw2     | sdw2    | \/data\/mirror\/gpseg0\n    2 |       0 | m    | p              | s    | u      | 6000 | sdw1     | sdw1    | \/data\/primary\/gpseg0\n\n<\/pre><\/div>\n\n\n<p>When you are in such a state you should re-balance the segments:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gprecoverseg -r\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Starting gprecoverseg with args: -r\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Greenplum instance recovery parameters\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovery type              = Rebalance\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Unbalanced segment 1 of 2\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Unbalanced instance host        = sdw2\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Unbalanced instance address     = sdw2\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Unbalanced instance directory   = \/data\/mirror\/gpseg0\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Unbalanced instance port        = 7000\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Balanced role                   = Mirror\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Current role      
              = Primary\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Unbalanced segment 2 of 2\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Unbalanced instance host        = sdw1\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Unbalanced instance address     = sdw1\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Unbalanced instance directory   = \/data\/primary\/gpseg0\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Unbalanced instance port        = 6000\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Balanced role                   = Primary\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Current role                    = Mirror\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;WARNING]:-This operation will cancel queries that are currently executing.\n20240307:15:08:32:004692 gprecoverseg:cdw:gpadmin-&#x5B;WARNING]:-Connections to the database however will not be interrupted.\n\nContinue with segment rebalance procedure Yy|Nn (default=N):\n&gt; y\n20240307:15:09:53:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Determining primary and mirror segment pairs to rebalance\n20240307:15:09:53:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Allowed replay lag during rebalance is 10 GB\n20240307:15:09:53:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Stopping unbalanced primary segments...\n.\n20240307:15:09:55:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Triggering segment reconfiguration\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Starting segment synchronization\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-=============================START ANOTHER RECOVER=========================================\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully finished pg_controldata \/data\/primary\/gpseg0 for dbid 2:\nstdout: pg_control version number:            12010700\nCatalog version number:               302307241\nDatabase system identifier:           7340990201624057424\nDatabase cluster state:               in production\npg_control last modified:             Thu 07 Mar 2024 03:10:00 PM CET\nLatest checkpoint location:           0\/18000280\nLatest checkpoint&#039;s REDO location:    0\/18000210\nLatest checkpoint&#039;s REDO WAL file:    000000030000000000000006\nLatest checkpoint&#039;s TimeLineID:       3\nLatest checkpoint&#039;s PrevTimeLineID:   3\nLatest checkpoint&#039;s 
full_page_writes: on\nLatest checkpoint&#039;s NextXID:          0:545\nLatest checkpoint&#039;s NextGxid:         25\nLatest checkpoint&#039;s NextOID:          17451\nLatest checkpoint&#039;s NextRelfilenode:  16392\nLatest checkpoint&#039;s NextMultiXactId:  1\nLatest checkpoint&#039;s NextMultiOffset:  0\nLatest checkpoint&#039;s oldestXID:        529\nLatest checkpoint&#039;s oldestXID&#039;s DB:   13719\nLatest checkpoint&#039;s oldestActiveXID:  545\nLatest checkpoint&#039;s oldestMultiXid:   1\nLatest checkpoint&#039;s oldestMulti&#039;s DB: 13720\nLatest checkpoint&#039;s oldestCommitTsXid:0\nLatest checkpoint&#039;s newestCommitTsXid:0\nTime of latest checkpoint:            Thu 07 Mar 2024 03:10:00 PM CET\nFake LSN counter for unlogged rels:   0\/3E8\nMinimum recovery ending location:     0\/0\nMin recovery ending loc&#039;s timeline:   0\nBackup start location:                0\/0\nBackup end location:                  0\/0\nEnd-of-backup record required:        no\nwal_level setting:                    replica\nwal_log_hints setting:                off\nmax_connections setting:              750\nmax_worker_processes setting:         12\nmax_wal_senders setting:              10\nmax_prepared_xacts setting:           250\nmax_locks_per_xact setting:           128\ntrack_commit_timestamp setting:       off\nMaximum data alignment:               8\nDatabase block size:                  32768\nBlocks per segment of large relation: 32768\nWAL block size:                       32768\nBytes per WAL segment:                67108864\nMaximum length of identifiers:        64\nMaximum columns in an index:          32\nMaximum size of a TOAST chunk:        8140\nSize of a large-object chunk:         8192\nDate\/time type storage:               64-bit integers\nFloat4 argument passing:              by value\nFloat8 argument passing:              by value\nData page checksum version:           1\nMock authentication nonce:            300975858d3712b8d6ecd6583814e3bc603e12304aa764d1c659721e205dc0ad\n\nstderr: \n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully finished pg_controldata \/data\/mirror\/gpseg0 for dbid 4:\nstdout: pg_control version number:            12010700\nCatalog version number:               302307241\nDatabase system identifier:           7340990201624057424\nDatabase cluster state:               shut down\npg_control last modified:             Thu 07 Mar 2024 03:09:54 PM CET\nLatest checkpoint location:           0\/18000158\nLatest checkpoint&#039;s REDO location:    0\/18000158\nLatest checkpoint&#039;s REDO WAL file:    000000020000000000000006\nLatest checkpoint&#039;s TimeLineID:       2\nLatest checkpoint&#039;s PrevTimeLineID:   2\nLatest checkpoint&#039;s full_page_writes: on\nLatest checkpoint&#039;s NextXID:          0:545\nLatest checkpoint&#039;s NextGxid:         25\nLatest checkpoint&#039;s NextOID:          17451\nLatest checkpoint&#039;s NextRelfilenode:  16392\nLatest checkpoint&#039;s NextMultiXactId:  1\nLatest checkpoint&#039;s NextMultiOffset:  0\nLatest checkpoint&#039;s oldestXID:        529\nLatest checkpoint&#039;s oldestXID&#039;s DB:   13719\nLatest checkpoint&#039;s oldestActiveXID:  0\nLatest checkpoint&#039;s oldestMultiXid:   1\nLatest checkpoint&#039;s oldestMulti&#039;s DB: 13720\nLatest checkpoint&#039;s oldestCommitTsXid:0\nLatest checkpoint&#039;s newestCommitTsXid:0\nTime of latest checkpoint:            Thu 07 Mar 2024 03:09:54 PM CET\nFake LSN counter for unlogged rels:   0\/3E8\nMinimum recovery ending 
location:     0\/0\nMin recovery ending loc&#039;s timeline:   0\nBackup start location:                0\/0\nBackup end location:                  0\/0\nEnd-of-backup record required:        no\nwal_level setting:                    replica\nwal_log_hints setting:                off\nmax_connections setting:              750\nmax_worker_processes setting:         12\nmax_wal_senders setting:              10\nmax_prepared_xacts setting:           250\nmax_locks_per_xact setting:           128\ntrack_commit_timestamp setting:       off\nMaximum data alignment:               8\nDatabase block size:                  32768\nBlocks per segment of large relation: 32768\nWAL block size:                       32768\nBytes per WAL segment:                67108864\nMaximum length of identifiers:        64\nMaximum columns in an index:          32\nMaximum size of a TOAST chunk:        8140\nSize of a large-object chunk:         8192\nDate\/time type storage:               64-bit integers\nFloat4 argument passing:              by value\nFloat8 argument passing:              by value\nData page checksum version:           1\nMock authentication nonce:            300975858d3712b8d6ecd6583814e3bc603e12304aa764d1c659721e205dc0ad\n\nstderr: \n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Heap checksum setting is consistent between coordinator and the segments that are candidates for recoverseg\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Greenplum instance recovery parameters\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovery type              = Standard\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovery 1 of 1\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Synchronization mode                 = Incremental\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance host                 = sdw2\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance address              = sdw2\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance directory            = \/data\/mirror\/gpseg0\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Failed instance port                 = 7000\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance host        = sdw1\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance address     = sdw1\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance directory   = \/data\/primary\/gpseg0\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Source instance port        = 6000\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-   Recovery Target                      = in-place\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:----------------------------------------------------------\n20240307:15:10:02:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Starting to create new pg_hba.conf on primary segments\n20240307:15:10:03:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-killing existing 
walsender process on primary sdw1:6000 to refresh replication connection\n20240307:15:10:03:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Successfully modified pg_hba.conf on primary segments to allow replication connections\n20240307:15:10:03:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-1 segment(s) to recover\n20240307:15:10:03:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Ensuring 1 failed segment(s) are stopped\n20240307:15:10:03:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Setting up the required segments for recovery\n20240307:15:10:03:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Updating configuration for mirrors\n20240307:15:10:03:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Initiating segment recovery. Upon completion, will start the successfully recovered segments\n20240307:15:10:03:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-era is 5519b53b4b2c1dab_240307105028\nsdw2 (dbid 4): pg_rewind: no rewind required\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Triggering FTS probe\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-********************************\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Segments successfully recovered.\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-********************************\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Recovered mirror segments need to sync WAL with primary segments.\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-Use &#039;gpstate -e&#039; to check progress of WAL sync remaining bytes\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-==============================END ANOTHER RECOVER==========================================\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-******************************************************************\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-The rebalance operation has completed successfully.\n20240307:15:10:04:004692 gprecoverseg:cdw:gpadmin-&#x5B;INFO]:-******************************************************************\n<\/pre><\/div>\n\n\n<p>Once this completed we are back to normal operations:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpstate -e\n20240307:15:11:15:004815 gpstate:cdw:gpadmin-&#x5B;INFO]:-Starting gpstate with args: -e\n20240307:15:11:15:004815 gpstate:cdw:gpadmin-&#x5B;INFO]:-local Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240307:15:11:15:004815 gpstate:cdw:gpadmin-&#x5B;INFO]:-coordinator Greenplum Version: &#039;PostgreSQL 12.12 (Greenplum Database 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), 64-bit compiled on Jan 19 2024 06:51:45 Bhuvnesh C.&#039;\n20240307:15:11:15:004815 gpstate:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240307:15:11:15:004815 gpstate:cdw:gpadmin-&#x5B;INFO]:-Gathering data from segments...\n20240307:15:11:16:004815 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:15:11:16:004815 gpstate:cdw:gpadmin-&#x5B;INFO]:-Segment Mirroring Status Report\n20240307:15:11:16:004815 gpstate:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240307:15:11:16:004815 
gpstate:cdw:gpadmin-&#x5B;INFO]:-All segments are running normally\n\n&#x5B;gpadmin@cdw ~]$ psql -c &quot;select * from gp_segment_configuration&quot; postgres\n dbid | content | role | preferred_role | mode | status | port | hostname | address |          datadir          \n------+---------+------+----------------+------+--------+------+----------+---------+---------------------------\n    1 |      -1 | p    | p              | n    | u      | 5432 | cdw      | cdw     | \/data\/coordinator\/gpseg-1\n    3 |       1 | p    | p              | s    | u      | 6000 | sdw2     | sdw2    | \/data\/primary\/gpseg1\n    5 |       1 | m    | m              | s    | u      | 7000 | sdw1     | sdw1    | \/data\/mirror\/gpseg1\n    2 |       0 | p    | p              | s    | u      | 6000 | sdw1     | sdw1    | \/data\/primary\/gpseg0\n    4 |       0 | m    | m              | s    | u      | 7000 | sdw2     | sdw2    | \/data\/mirror\/gpseg0\n(5 rows)\n<\/pre><\/div>\n\n\n<p>What we did here was an &#8220;in-place&#8221; recovery, which means the segment was recovered on the same node. Recovery onto another node is possible as well, as long as the new node comes with the same configuration as the current one (Greenplum release, OS version, &#8230;).<\/p>\n\n\n\n<p>The important point is that you definitely should go with mirror segments. Of course you need double the space per node, but it makes recovery a lot easier.<\/p>\n
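\n\n\n<p>As a small addition to this: instead of scanning the gpstate output you can also query the gp_segment_configuration catalog shown above directly. A segment which is down carries the status &#8220;d&#8221;, and a segment which is not running in its preferred role points to an unbalanced cluster, which is exactly what &#8220;gprecoverseg -r&#8221; fixes. A minimal sketch of both checks:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# list segments the coordinator currently considers down\n&#x5B;gpadmin@cdw ~]$ psql -c &quot;select dbid, content, role, preferred_role, hostname, port, datadir from gp_segment_configuration where status = &#039;d&#039;&quot; postgres\n# list segments running outside their preferred role (cluster is unbalanced)\n&#x5B;gpadmin@cdw ~]$ psql -c &quot;select dbid, content, role, preferred_role, hostname, port from gp_segment_configuration where role != preferred_role&quot; postgres\n<\/pre><\/div>\n\n\n<p>Beside that, if a failed segment host cannot be repaired at all, gprecoverseg can also move the failed segments to a spare host instead of recovering in-place. The spare host must be prepared in the same way as the existing segment hosts (same Greenplum release, OS version, gpadmin user and data directories). The host name &#8220;sdw3&#8221; and the file \/tmp\/recover_config below are only examples, so take this as a sketch rather than a ready-made procedure:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# generate a sample recovery configuration file and adjust it to point to the spare host\n&#x5B;gpadmin@cdw ~]$ gprecoverseg -o \/tmp\/recover_config\n&#x5B;gpadmin@cdw ~]$ gprecoverseg -i \/tmp\/recover_config\n# or let gprecoverseg relocate all failed segments to the spare host in one go\n&#x5B;gpadmin@cdw ~]$ gprecoverseg -p sdw3\n# once the mirrors are synchronized again, bring every segment back to its preferred role\n&#x5B;gpadmin@cdw ~]$ gprecoverseg -r\n<\/pre><\/div>\n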