{"id":31478,"date":"2024-02-29T13:11:05","date_gmt":"2024-02-29T12:11:05","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=31478"},"modified":"2024-02-29T13:11:07","modified_gmt":"2024-02-29T12:11:07","slug":"getting-started-with-greenplum-2-initializing-and-bringing-up-the-cluster","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/getting-started-with-greenplum-2-initializing-and-bringing-up-the-cluster\/","title":{"rendered":"Getting started with Greenplum \u2013 2 \u2013 Initializing and bringing up the cluster"},"content":{"rendered":"\n<p>In the <a href=\"https:\/\/www.dbi-services.com\/blog\/getting-started-with-greenplum-1-installation\/\" target=\"_blank\" rel=\"noreferrer noopener\">last post<\/a> we configured the operating system for Greenplum and completed the installation. In this post we&#8217;ll create the so-called &#8220;Data Storage Areas&#8221; (which are just mount points or directories) and initialize the cluster. All the work is performed on the &#8220;Coordinator Host&#8221; and &#8220;gpssh&#8221; is used to perform the work on the remote systems.<\/p>\n\n\n\n<p>For this playground environment we&#8217;ll just use a directory for the storage area. In a real setup you should of course use a dedicated, separate mount point. 
We start on the coordinator node:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,2]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ sudo mkdir -p \/data\/coordinator\n&#x5B;gpadmin@rocky9-gp7-master ~]$ sudo chown gpadmin:gpadmin \/data\/coordinator\/\n<\/pre><\/div>\n\n\n<p>Using &#8220;gpssh&#8221; we do the same on the two segment hosts:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,3,5,7]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpssh -h rocky9-gp7-segment1 -e &quot;sudo mkdir -p \/data\/coordinator&quot;\n&#x5B;rocky9-gp7-segment1] sudo mkdir -p \/data\/coordinator\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpssh -h rocky9-gp7-segment2 -e &quot;sudo mkdir -p \/data\/coordinator&quot;\n&#x5B;rocky9-gp7-segment2] sudo mkdir -p \/data\/coordinator\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpssh -h rocky9-gp7-segment1 -e &quot;sudo chown gpadmin:gpadmin \/data\/coordinator\/&quot;\n&#x5B;rocky9-gp7-segment1] sudo chown gpadmin:gpadmin \/data\/coordinator\/\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpssh -h rocky9-gp7-segment2 -e &quot;sudo chown gpadmin:gpadmin \/data\/coordinator\/&quot;\n&#x5B;rocky9-gp7-segment2] sudo chown gpadmin:gpadmin \/data\/coordinator\/\n<\/pre><\/div>\n\n\n<p>This storage area is used to store system catalog tables and metadata. It is not used to store any user data.<\/p>\n\n\n\n<p>The storage areas on the segment hosts will store user data, so they need to be bigger. All of the segment nodes should provide a storage area for the so-called &#8220;primary segments&#8221;. Those segments are the active ones and will be used by default for serving client requests. In addition, there should be a storage area for so-called &#8220;mirror segments&#8221;. Those segments will be used in case the primary segment becomes unavailable. 
For that reason a mirror segment must always be on another host than its primary segment (more on that later).<\/p>\n\n\n\n<p>Before we use &#8220;gpssh&#8221; to do this, let&#8217;s create a file which only contains the host names of the segment hosts:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,2]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ echo &quot;rocky9-gp7-segment1\nrocky9-gp7-segment2&quot; &gt; ~\/hostfile_gpssh_segonly\n<\/pre><\/div>\n\n\n<p>Having this in place, we can easily create the directories on the segment nodes:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,4,7]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpssh -f hostfile_gpssh_segonly -e &#039;sudo mkdir -p \/data\/primary&#039;\n&#x5B;rocky9-gp7-segment1] sudo mkdir -p \/data\/primary\n&#x5B;rocky9-gp7-segment2] sudo mkdir -p \/data\/primary\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpssh -f hostfile_gpssh_segonly -e &#039;sudo mkdir -p \/data\/mirror&#039;\n&#x5B;rocky9-gp7-segment1] sudo mkdir -p \/data\/mirror\n&#x5B;rocky9-gp7-segment2] sudo mkdir -p \/data\/mirror\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpssh -f hostfile_gpssh_segonly -e &#039;sudo chown gpadmin:gpadmin \/data\/*&#039;\n&#x5B;rocky9-gp7-segment2] sudo chown gpadmin:gpadmin \/data\/*\n&#x5B;rocky9-gp7-segment1] sudo chown gpadmin:gpadmin \/data\/*\n<\/pre><\/div>\n\n\n<p>Greenplum comes with a utility you can use to validate your systems when it comes to network, disk and memory performance. 
The utility is called &#8220;gpcheckperf&#8221; and this, for example, will run a network performance test:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpcheckperf -f hostfile_exkeys -r N -d \/tmp\n&#x5B;INFO] --buffer-size value is not specified or invalid. Using default (8 kilobytes)\n\/usr\/local\/greenplum-db-7.1.0\/bin\/gpcheckperf -f hostfile_exkeys -r N -d \/tmp\n-------------------\n--  NETPERF TEST\n-------------------\n\n====================\n==  RESULT 2024-02-28T16:41:39.049314\n====================\nNetperf bisection bandwidth test\nrocky9-gp7-master -&amp;gt; rocky9-gp7-segment1 = 1971.150000\nrocky9-gp7-segment2 -&amp;gt; rocky9-gp7-master = 1688.660000\nrocky9-gp7-segment1 -&amp;gt; rocky9-gp7-master = 1310.830000\nrocky9-gp7-master -&amp;gt; rocky9-gp7-segment2 = 1377.070000\n\nSummary:\nsum = 6347.71 MB\/sec\nmin = 1310.83 MB\/sec\nmax = 1971.15 MB\/sec\navg = 1586.93 MB\/sec\nmedian = 1688.66 MB\/sec\n\n&#x5B;Warning] connection between rocky9-gp7-segment2 and rocky9-gp7-master is no good\n&#x5B;Warning] connection between rocky9-gp7-segment1 and rocky9-gp7-master is no good\n&#x5B;Warning] connection between rocky9-gp7-master and rocky9-gp7-segment2 is no good\n<\/pre><\/div>\n\n\n<p>I don&#8217;t care about these warnings because this is just a test; you should care if you do a real setup, of course. Running a disk I\/O test can be done like this (this will run <a href=\"https:\/\/en.wikipedia.org\/wiki\/Dd_(Unix)\" target=\"_blank\" rel=\"noreferrer noopener\">dd<\/a> tests on all the segment nodes):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpcheckperf -f hostfile_gpssh_segonly -r ds -D -d \/data\/primary -d \/data\/mirror\n&#x5B;INFO] --buffer-size value is not specified or invalid. 
Using default (8 kilobytes)\n\/usr\/local\/greenplum-db-7.1.0\/bin\/gpcheckperf -f hostfile_gpssh_segonly -r ds -D -d \/data\/primary -d \/data\/mirror\n&#x5B;Warning] Using 7650140160 bytes for disk performance test. This might take some time\n--------------------\n--  DISK WRITE TEST\n--------------------\n--------------------\n--  DISK READ TEST\n--------------------\n--------------------\n--  STREAM TEST\n--------------------\n\n====================\n==  RESULT 2024-02-28T16:49:58.607351\n====================\n\n disk write avg time (sec): 109.30\n disk write tot bytes: 15300296704\n disk write tot bandwidth (MB\/s): 133.51\n disk write min bandwidth (MB\/s): 66.31 &#x5B;rocky9-gp7-segment1]\n disk write max bandwidth (MB\/s): 67.19 &#x5B;rocky9-gp7-segment2]\n -- per host bandwidth --\n    disk write bandwidth (MB\/s): 66.31 &#x5B;rocky9-gp7-segment1]\n    disk write bandwidth (MB\/s): 67.19 &#x5B;rocky9-gp7-segment2]\n\n\n disk read avg time (sec): 58.48\n disk read tot bytes: 15300296704\n disk read tot bandwidth (MB\/s): 250.04\n disk read min bandwidth (MB\/s): 119.41 &#x5B;rocky9-gp7-segment1]\n disk read max bandwidth (MB\/s): 130.63 &#x5B;rocky9-gp7-segment2]\n -- per host bandwidth --\n    disk read bandwidth (MB\/s): 130.63 &#x5B;rocky9-gp7-segment2]\n    disk read bandwidth (MB\/s): 119.41 &#x5B;rocky9-gp7-segment1]\n\n\n stream tot bandwidth (MB\/s): 66240.30\n stream min bandwidth (MB\/s): 32732.80 &#x5B;rocky9-gp7-segment1]\n stream max bandwidth (MB\/s): 33507.50 &#x5B;rocky9-gp7-segment2]\n -- per host bandwidth --\n    stream bandwidth (MB\/s): 32732.80 &#x5B;rocky9-gp7-segment1]\n    stream bandwidth (MB\/s): 33507.50 &#x5B;rocky9-gp7-segment2]\n\n<\/pre><\/div>\n\n\n<p>Assuming that we&#8217;re happy with the performance statistics we can proceed and initialize the cluster. 
With a community PostgreSQL installation you would do this with <a href=\"https:\/\/www.postgresql.org\/docs\/current\/app-initdb.html\">initdb<\/a>, and actually initdb and many other utilities you know from PostgreSQL are available on the system:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ ls -la \/usr\/local\/greenplum-db\/bin\/\ntotal 81796\ndrwxr-xr-x  8 gpadmin gpadmin     4096 Feb 28 16:36 .\ndrwxr-xr-x 11 gpadmin gpadmin     4096 Feb 28 14:52 ..\n-rwxr-xr-x  1 gpadmin gpadmin    66665 Feb  8 21:01 analyzedb\n-rwxr-xr-x  1 gpadmin gpadmin   259104 Feb  8 21:01 clusterdb\n-rwxr-xr-x  1 gpadmin gpadmin   254416 Feb  8 21:01 createdb\n-rwxr-xr-x  1 gpadmin gpadmin   265176 Feb  8 21:01 createuser\n-rwxr-xr-x  1 gpadmin gpadmin   238480 Feb  8 21:01 dropdb\n-rwxr-xr-x  1 gpadmin gpadmin   238352 Feb  8 21:01 dropuser\n-rwxr-xr-x  1 gpadmin gpadmin  2754648 Feb  8 21:01 ecpg\n-rwxr-xr-x  1 gpadmin gpadmin    17248 Feb  8 21:01 gpactivatestandby\n-rwxr-xr-x  1 gpadmin gpadmin      494 Feb  8 21:01 gpaddmirrors\n-rwxr-xr-x  1 gpadmin gpadmin   137764 Feb  8 21:01 gpcheckcat\ndrwxr-xr-x  3 gpadmin gpadmin     4096 Feb 28 14:52 gpcheckcat_modules\n-rwxr-xr-x  1 gpadmin gpadmin    29980 Feb  8 21:01 gpcheckperf\n-rwxr-xr-x  1 gpadmin gpadmin     6682 Feb  8 21:01 gpcheckresgroupimpl\n-rwxr-xr-x  1 gpadmin gpadmin     3230 Feb  8 21:01 gpcheckresgroupv2impl\n-rwxr-xr-x  1 gpadmin gpadmin    23374 Feb  8 21:01 gpconfig\ndrwxr-xr-x  3 gpadmin gpadmin     4096 Feb 28 14:52 gpconfig_modules\n-rwxr-xr-x  1 gpadmin gpadmin    13754 Feb  8 21:01 gpdeletesystem\n-rwxr-xr-x  1 gpadmin gpadmin   114969 Feb  8 21:01 gpexpand\n-rwxr-xr-x  1 gpadmin gpadmin   407208 Feb  8 21:01 gpfdist\n-rwxr-xr-x  1 gpadmin gpadmin    34959 Feb  8 21:01 gpinitstandby\n-rwxr-xr-x  1 gpadmin gpadmin    83564 Feb  8 21:01 gpinitsystem\n-rwxr-xr-x  1 gpadmin gpadmin      189 
Feb  8 21:01 gpload\n-rw-r--r--  1 gpadmin gpadmin      202 Feb  8 21:01 gpload.bat\n-rwxr-xr-x  1 gpadmin gpadmin   113900 Feb  8 21:01 gpload.py\n-rwxr-xr-x  1 gpadmin gpadmin    21018 Feb  8 21:01 gplogfilter\n-rwxr-xr-x  1 gpadmin gpadmin    15333 Feb  8 21:01 gpmemreport\n-rwxr-xr-x  1 gpadmin gpadmin     8032 Feb  8 21:01 gpmemwatcher\n-rwxr-xr-x  1 gpadmin gpadmin    21646 Feb  8 21:01 gpmovemirrors\n-rwxr-xr-x  1 gpadmin gpadmin      548 Feb  8 21:01 gprecoverseg\n-rwxr-xr-x  1 gpadmin gpadmin     1162 Feb  8 21:01 gpreload\n-rwxr-xr-x  1 gpadmin gpadmin    10723 Feb  8 21:01 gpsd\n-rwxr-xr-x  1 gpadmin gpadmin     9258 Feb  8 21:01 gpssh\n-rwxr-xr-x  1 gpadmin gpadmin    32516 Feb  8 21:01 gpssh-exkeys\ndrwxr-xr-x  3 gpadmin gpadmin       70 Feb 28 14:52 gpssh_modules\n-rwxr-xr-x  1 gpadmin gpadmin    37579 Feb  8 21:01 gpstart\n-rwxr-xr-x  1 gpadmin gpadmin      422 Feb  8 21:01 gpstate\n-rwxr-xr-x  1 gpadmin gpadmin    45588 Feb  8 21:01 gpstop\n-rwxr-xr-x  1 gpadmin gpadmin     4074 Feb  8 21:01 gpsync\n-rwxr-xr-x  1 gpadmin gpadmin   528656 Feb  8 21:01 initdb\ndrwxr-xr-x  4 gpadmin gpadmin     4096 Feb 28 14:52 lib\n-rwxr-xr-x  1 gpadmin gpadmin    17611 Feb  8 21:01 minirepro\n-rwxr-xr-x  1 gpadmin gpadmin   163568 Feb  8 21:01 pg_archivecleanup\n-rwxr-xr-x  1 gpadmin gpadmin   459656 Feb  8 21:01 pg_basebackup\n-rwxr-xr-x  1 gpadmin gpadmin   667784 Feb  8 21:01 pgbench\n-rwxr-xr-x  1 gpadmin gpadmin   224176 Feb  8 21:01 pg_checksums\n-rwxr-xr-x  1 gpadmin gpadmin   150736 Feb  8 21:01 pg_config\n-rwxr-xr-x  1 gpadmin gpadmin   177072 Feb  8 21:01 pg_controldata\n-rwxr-xr-x  1 gpadmin gpadmin   235296 Feb  8 21:01 pg_ctl\n-rwxr-xr-x  1 gpadmin gpadmin  1591264 Feb  8 21:01 pg_dump\n-rwxr-xr-x  1 gpadmin gpadmin   371784 Feb  8 21:01 pg_dumpall\n-rwxr-xr-x  1 gpadmin gpadmin   239264 Feb  8 21:01 pg_isready\n-rwxr-xr-x  1 gpadmin gpadmin   327200 Feb  8 21:01 pg_receivewal\n-rwxr-xr-x  1 gpadmin gpadmin   331168 Feb  8 21:01 
pg_recvlogical\n-rwxr-xr-x  1 gpadmin gpadmin   211880 Feb  8 21:01 pg_resetwal\n-rwxr-xr-x  1 gpadmin gpadmin   764392 Feb  8 21:01 pg_restore\n-rwxr-xr-x  1 gpadmin gpadmin   480400 Feb  8 21:01 pg_rewind\n-rwxr-xr-x  1 gpadmin gpadmin   171944 Feb  8 21:01 pg_test_fsync\n-rwxr-xr-x  1 gpadmin gpadmin   144336 Feb  8 21:01 pg_test_timing\n-rwxr-xr-x  1 gpadmin gpadmin   606048 Feb  8 21:01 pg_upgrade\n-rwxr-xr-x  1 gpadmin gpadmin   454504 Feb  8 21:01 pg_waldump\n-rwxr-xr-x  1 gpadmin gpadmin 67633848 Feb  8 21:01 postgres\nlrwxrwxrwx  1 gpadmin gpadmin        8 Feb  8 21:01 postmaster -&gt; postgres\n-rwxr-xr-x  1 gpadmin gpadmin  1826136 Feb  8 21:01 psql\ndrwxr-xr-x  2 gpadmin gpadmin       35 Feb 28 14:52 __pycache__\n-rwxr-xr-x  1 gpadmin gpadmin   267224 Feb  8 21:01 reindexdb\ndrwxr-xr-x  2 gpadmin gpadmin       20 Feb 28 14:52 stream\n-rwxr-xr-x  1 gpadmin gpadmin   287832 Feb  8 21:01 vacuumdb\n<\/pre><\/div>\n\n\n<p>The Greenplum system will work across multiple nodes and all of them will host PostgreSQL instances (called segment and coordinator instances). To make this easier to set up, Greenplum comes with its own version of &#8220;initdb&#8221;, which is called &#8220;gpinitsystem&#8221;:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpinitsystem --version\ngpinitsystem 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source\n<\/pre><\/div>\n\n\n<p>Before the system can be initialized we need to create the Greenplum database configuration file. 
There is a template we can use as a starting point:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,3,4,5]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ echo $GPHOME\n\/usr\/local\/greenplum-db-7.1.0\n&#x5B;gpadmin@rocky9-gp7-master ~]$ mkdir \/home\/gpadmin\/gpconfigs\/\n&#x5B;gpadmin@rocky9-gp7-master ~]$ cp $GPHOME\/docs\/cli_help\/gpconfigs\/gpinitsystem_config \/home\/gpadmin\/gpconfigs\/gpinitsystem_config\n&#x5B;gpadmin@rocky9-gp7-master ~]$ vi \/home\/gpadmin\/gpconfigs\/gpinitsystem_config\n<\/pre><\/div>\n\n\n<p>For the scope of this demo system, all that needs to be adjusted is the data and mirror directories:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ egrep &quot;DATA_DIRECTORY|MIRROR_DATA_DIRECTORY|MIRROR_PORT_BASE&quot; \/home\/gpadmin\/gpconfigs\/gpinitsystem_config | egrep -v &quot;^#&quot;\ndeclare -a DATA_DIRECTORY=(\/data\/primary)\nMIRROR_PORT_BASE=7000\ndeclare -a MIRROR_DATA_DIRECTORY=(\/data\/mirror)\n<\/pre><\/div>\n\n\n<p>This config and the host file which contains the segment hosts need to be passed to &#8220;gpinitsystem&#8221;:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpinitsystem -c gpconfigs\/gpinitsystem_config -h hostfile_gpssh_segonly\n20240229:11:39:11:001290 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Checking configuration parameters, please wait...\n20240229:11:39:11:001290 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Reading Greenplum configuration file gpconfigs\/gpinitsystem_config\n20240229:11:39:11:001290 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Locale has not been set in gpconfigs\/gpinitsystem_config, will set to default value\n20240229:11:39:11:001290 
gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:-Coordinator hostname cdw does not match hostname output\n20240229:11:39:11:001290 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Checking to see if cdw can be resolved on this host\nssh: Could not resolve hostname cdw: Name or service not known\nssh: Could not resolve hostname cdw: Name or service not known\n20240229:11:39:20:001290 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;FATAL]:-Coordinator hostname in configuration file is cdw\n20240229:11:39:20:001290 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;FATAL]:-Operating system command returns rocky9-gp7-master.it.dbi-services.com\n20240229:11:39:20:001290 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;FATAL]:-Unable to resolve cdw on this host\n20240229:11:39:20:001290 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;FATAL]:-Coordinator hostname in gpinitsystem configuration file must be cdw Script Exiting!\n<\/pre><\/div>\n\n\n<p>It seems the hostname of the coordinator node needs to be &#8220;cdw&#8221;, so let&#8217;s add this to \/etc\/hosts on all nodes:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,2]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ sudo vi \/etc\/hosts\n&#x5B;gpadmin@rocky9-gp7-master ~]$ cat \/etc\/hosts\n127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n192.168.122.200 rocky9-gp7-master rocky9-gp7-master.it.dbi-services.com cdw cdw.it.dbi-services.com\n192.168.122.201 rocky9-gp7-segment1 rocky9-gp7-segment1.it.dbi-services.com\n192.168.122.202 rocky9-gp7-segment2 rocky9-gp7-segment2.it.dbi-services.com\n<\/pre><\/div>\n\n\n<p>Running it once more, it looks much better:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpinitsystem -c 
gpconfigs\/gpinitsystem_config -h hostfile_gpssh_segonly\n20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Checking configuration parameters, please wait...\n20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Reading Greenplum configuration file gpconfigs\/gpinitsystem_config\n20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Locale has not been set in gpconfigs\/gpinitsystem_config, will set to default value\n20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:-Coordinator hostname cdw does not match hostname output\n20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Checking to see if cdw can be resolved on this host\nThe authenticity of host &#039;cdw (192.168.122.200)&#039; can&#039;t be established.\nED25519 key fingerprint is SHA256:Tdo3AwqH109Mgc30keTbDcusFii8PSft0FXWTUS0Tb0.\nThis host key is known by the following other names\/addresses:\n    ~\/.ssh\/known_hosts:1: rocky9-gp7-segment1\n    ~\/.ssh\/known_hosts:4: rocky9-gp7-segment2\n    ~\/.ssh\/known_hosts:5: rocky9-gp7-master\nAre you sure you want to continue connecting (yes\/no\/&#x5B;fingerprint])? 
yes\nWarning: Permanently added &#039;cdw&#039; (ED25519) to the list of known hosts.\n20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Can resolve cdw to this host\n20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-No DATABASE_NAME set, will exit following template1 updates\n20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-COORDINATOR_MAX_CONNECT not set, will set to default value 250\n20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Checking configuration parameters, Completed\n20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Commencing multi-home checks, please wait...\n..\n20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Configuring build for standard array\n20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Commencing multi-home checks, Completed\n20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Building primary segment instance array, please wait...\n....\n20240229:11:45:32:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Building group mirror array type , please wait...\n....\n20240229:11:45:34:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Checking Coordinator host\n20240229:11:45:34:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Checking new segment hosts, please wait...\n........\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Checking new segment hosts, Completed\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Greenplum Database Creation Parameters\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:---------------------------------------\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator Configuration\n20240229:11:45:39:001611 
gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:---------------------------------------\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator hostname       = cdw\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator port           = 5432\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator instance dir   = \/data\/coordinator\/gpseg-1\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator LOCALE         = \n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Greenplum segment prefix   = gpseg\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator Database       = \n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator connections    = 250\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator buffers        = 128000kB\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Segment connections        = 750\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Segment buffers            = 128000kB\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Encoding                   = UNICODE\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Postgres param file        = Off\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Initdb to be used          = \/usr\/local\/greenplum-db-7.1.0\/bin\/initdb\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-GP_LIBRARY_PATH is         = \/usr\/local\/greenplum-db-7.1.0\/lib\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-HEAP_CHECKSUM is           = on\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-HBA_HOSTNAMES is           = 
0\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Ulimit check               = Passed\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Array host connect type    = Single hostname per node\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator IP address &#x5B;1]      = ::1\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator IP address &#x5B;2]      = 192.168.122.200\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator IP address &#x5B;3]      = fe80::5054:ff:fe5d:fef7\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Standby Coordinator             = Not Configured\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Number of primary segments = 2\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Total Database segments    = 4\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Trusted shell              = ssh\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Number segment hosts       = 2\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Mirror port base           = 7000\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Number of mirror segments  = 2\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Mirroring config           = ON\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Mirroring type             = Group\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:----------------------------------------\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Greenplum Primary Segment Configuration\n20240229:11:45:39:001611 
gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:----------------------------------------\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-rocky9-gp7-segment1.it.dbi-services.com     6000    rocky9-gp7-segment1     \/data\/primary\/gpseg0        2\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-rocky9-gp7-segment1.it.dbi-services.com     6001    rocky9-gp7-segment1     \/data\/primary\/gpseg1        3\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-rocky9-gp7-segment2.it.dbi-services.com     6000    rocky9-gp7-segment2     \/data\/primary\/gpseg2        4\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-rocky9-gp7-segment2.it.dbi-services.com     6001    rocky9-gp7-segment2     \/data\/primary\/gpseg3        5\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:---------------------------------------\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Greenplum Mirror Segment Configuration\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:---------------------------------------\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-rocky9-gp7-segment2.it.dbi-services.com     7000    rocky9-gp7-segment2     \/data\/mirror\/gpseg0 6\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-rocky9-gp7-segment2.it.dbi-services.com     7001    rocky9-gp7-segment2     \/data\/mirror\/gpseg1 7\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-rocky9-gp7-segment1.it.dbi-services.com     7000    rocky9-gp7-segment1     \/data\/mirror\/gpseg2 8\n20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-rocky9-gp7-segment1.it.dbi-services.com     7001    rocky9-gp7-segment1     \/data\/mirror\/gpseg3 9\n\nContinue with Greenplum creation Yy|Nn (default=N):\n<\/pre><\/div>\n\n\n<p>Confirming 
the question leads to this:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&gt; Y\n20240229:11:48:12:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Building the Coordinator instance database, please wait...\n20240229:11:48:14:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Starting the Coordinator in admin mode\n20240229:11:48:14:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Commencing parallel build of primary segment instances\n20240229:11:48:14:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Spawning parallel processes    batch &#x5B;1], please wait...\n....\n20240229:11:48:14:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Waiting for parallel processes batch &#x5B;1], please wait...\n.........\n20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:------------------------------------------------\n20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Parallel process exit status\n20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:------------------------------------------------\n20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Total processes marked as completed           = 4\n20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Total processes marked as killed              = 0\n20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Total processes marked as failed              = 0\n20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:------------------------------------------------\n20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Removing back out file\n20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-No errors generated from parallel processes\n20240229:11:48:23:001611 
gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Restarting the Greenplum instance in production mode\n20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Starting gpstop with args: -a -l \/home\/gpadmin\/gpAdminLogs -m -d \/data\/coordinator\/gpseg-1\n20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Gathering information and validating the environment...\n20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Obtaining Greenplum Coordinator catalog information\n20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Commencing Coordinator instance shutdown with mode=&#039;smart&#039;\n20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator segment instance directory=\/data\/coordinator\/gpseg-1\n20240229:11:48:24:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Stopping coordinator segment and waiting for user connections to finish ...\nserver shutting down\n20240229:11:48:25:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Attempting forceful termination of any leftover coordinator process\n20240229:11:48:25:006091 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Terminating processes for segment \/data\/coordinator\/gpseg-1\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Starting gpstart with args: -a -l \/home\/gpadmin\/gpAdminLogs -d \/data\/coordinator\/gpseg-1\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Gathering information and validating the environment...\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Greenplum Binary Version: &#039;postgres (Greenplum 
Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Greenplum Catalog Version: &#039;302307241&#039;\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Starting Coordinator instance in admin mode\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-CoordinatorStart pg_ctl cmd is env GPSESSID=0000000000 GPERA=None $GPHOME\/bin\/pg_ctl -D \/data\/coordinator\/gpseg-1 -l \/data\/coordinator\/gpseg-1\/log\/startup.log -w -t 600 -o &quot; -c gp_role=utility &quot; start\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Obtaining Greenplum Coordinator catalog information\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Setting new coordinator era\n20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator Started...\nThe authenticity of host &#039;rocky9-gp7-segment2.it.dbi-services.com (192.168.122.202)&#039; can&#039;t be established.\nED25519 key fingerprint is SHA256:Tdo3AwqH109Mgc30keTbDcusFii8PSft0FXWTUS0Tb0.\nThis host key is known by the following other names\/addresses:\n    ~\/.ssh\/known_hosts:1: rocky9-gp7-segment1\n    ~\/.ssh\/known_hosts:4: rocky9-gp7-segment2\n    ~\/.ssh\/known_hosts:5: rocky9-gp7-master\n    ~\/.ssh\/known_hosts:6: cdw\nThe authenticity of host &#039;rocky9-gp7-segment1.it.dbi-services.com (192.168.122.201)&#039; can&#039;t be established.\nED25519 key fingerprint is SHA256:Tdo3AwqH109Mgc30keTbDcusFii8PSft0FXWTUS0Tb0.\nThis host key is known by the following other names\/addresses:\n    ~\/.ssh\/known_hosts:1: rocky9-gp7-segment1\n    ~\/.ssh\/known_hosts:4: rocky9-gp7-segment2\n    ~\/.ssh\/known_hosts:5: rocky9-gp7-master\n    ~\/.ssh\/known_hosts:6: cdw\nAre you sure you want to continue connecting 
(yes\/no\/&#x5B;fingerprint])? yes\n\n20240229:11:51:06:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;WARNING]:-One or more hosts are not reachable via SSH.\n20240229:11:51:06:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;WARNING]:-Host rocky9-gp7-segment1.it.dbi-services.com is unreachable\n20240229:11:51:06:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;WARNING]:-Marking segment 2 down because rocky9-gp7-segment1.it.dbi-services.com is unreachable\n20240229:11:51:06:006330 gpstart:rocky9-gp7-master:gpadmin-&#x5B;CRITICAL]:-gpstart failed. (Reason=&#039;&#039;NoneType&#039; object has no attribute &#039;getSegmentHostName&#039;&#039;) exiting...\n20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:\n20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:-Failed to start Greenplum instance; review gpstart output to\n20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:- determine why gpstart failed and reinitialize cluster after resolving\n20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:- issues.  
Not all initialization tasks have completed so the cluster\n20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:- should not be used.\n20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:-gpinitsystem will now try to stop the cluster\n20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:\n20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Starting gpstop with args: -a -l \/home\/gpadmin\/gpAdminLogs -i -d \/data\/coordinator\/gpseg-1\n20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Gathering information and validating the environment...\n20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Obtaining Greenplum Coordinator catalog information\n20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Commencing Coordinator instance shutdown with mode=&#039;immediate&#039;\n20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Coordinator segment instance directory=\/data\/coordinator\/gpseg-1\n\n20240229:11:51:07:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Attempting forceful termination of any leftover coordinator process\n20240229:11:51:07:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Terminating processes for segment \/data\/coordinator\/gpseg-1\n20240229:11:51:08:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-No standby coordinator host configured\n20240229:11:51:08:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Targeting dbid &#x5B;2, 3, 4, 5] for shutdown\n20240229:11:51:08:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Commencing parallel segment 
instance shutdown, please wait...\n20240229:11:51:08:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-0.00% of jobs completed\n20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-100.00% of jobs completed\n20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-   Segments stopped successfully      = 4\n20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-   Segments with errors during stop   = 0\n20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Successfully shutdown 4 of 4 segment instances \n20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Database successfully shutdown with no errors reported\n20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;INFO]:-Successfully shutdown the Greenplum instance\n20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:\n20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:-Failed to start Greenplum instance; review gpstart output to\n20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:- determine why gpstart failed and reinitialize cluster after resolving\n20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:- issues.  
Not all initialization tasks have completed so the cluster\n20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:- should not be used.\n20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;WARN]:\n20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-&#x5B;FATAL]: starting new instance failed; Script Exiting!\n\n<\/pre><\/div>\n\n\n<p>This happens when you do not fully read the <a href=\"https:\/\/docs.vmware.com\/en\/VMware-Greenplum\/7\/greenplum-database\/install_guide-prep_os.html\" target=\"_blank\" rel=\"noreferrer noopener\">documentation<\/a>: &#8220;The Greenplum Database host naming convention for the coordinator host is\u00a0<code>cdw<\/code>\u00a0and for the standby coordinator host is\u00a0<code>scdw<\/code>.<\/p>\n\n\n\n<p>The segment host naming convention is sdwN where sdw is a prefix and N is an integer. For example, segment host names would be\u00a0<code>sdw1<\/code>,\u00a0<code>sdw2<\/code>\u00a0and so on. NIC bonding is recommended for hosts with multiple interfaces, but when the interfaces are not bonded, the convention is to append a dash (<code>-<\/code>) and number to the host name. 
For example,\u00a0<code>sdw1-1<\/code>\u00a0and\u00a0<code>sdw1-2<\/code>\u00a0are the two interface names for host\u00a0<code>sdw1<\/code>.&#8221; <\/p>\n\n\n\n<p>So, let&#8217;s fix this (and also change the hostname on each node):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,2,10,14]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ sudo vi \/etc\/hosts\n&#x5B;gpadmin@rocky9-gp7-master ~]$ cat \/etc\/hosts\n127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n192.168.122.200 cdw cdw.it.dbi-services.com\n192.168.122.201 sdw1 sdw1.it.dbi-services.com\n192.168.122.202 sdw2 sdw2.it.dbi-services.com\n\n&#x5B;gpadmin@rocky9-gp7-master ~]$ cat hostfile_gpssh_segonly \nsdw1\nsdw2\n\n&#x5B;gpadmin@rocky9-gp7-master ~]$ cat hostfile_exkeys \ncdw\nsdw1\nsdw2\n<\/pre><\/div>\n\n\n<p>Before initializing the system again, let&#8217;s clean up what was already created:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@rocky9-gp7-master ~]$ gpssh -f hostfile_gpssh_segonly -e &quot;rm -rf \/data\/primary\/*; rm -rf \/data\/mirror\/*; rm -rf \/data\/coordinator\/*&quot;\n&#x5B;sdw2] rm -rf \/data\/primary\/*; rm -rf \/data\/mirror\/*; rm -rf \/data\/coordinator\/*\n&#x5B;sdw1] rm -rf \/data\/primary\/*; rm -rf \/data\/mirror\/*; rm -rf \/data\/coordinator\/*\n&#x5B;gpadmin@rocky9-gp7-master ~]$ rm -rf \/data\/coordinator\/*\n<\/pre><\/div>\n\n\n<p>Next try:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpinitsystem -c gpconfigs\/gpinitsystem_config -h hostfile_gpssh_segonly\n20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Checking configuration parameters, please wait...\n20240229:12:51:03:013410 
gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Reading Greenplum configuration file gpconfigs\/gpinitsystem_config\n20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Locale has not been set in gpconfigs\/gpinitsystem_config, will set to default value\n20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-No DATABASE_NAME set, will exit following template1 updates\n20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-COORDINATOR_MAX_CONNECT not set, will set to default value 250\n20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Checking configuration parameters, Completed\n20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Commencing multi-home checks, please wait...\n..\n20240229:12:51:04:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Configuring build for standard array\n20240229:12:51:04:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Commencing multi-home checks, Completed\n20240229:12:51:04:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Building primary segment instance array, please wait...\n..\n20240229:12:51:04:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Building group mirror array type , please wait...\n..\n20240229:12:51:05:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Checking Coordinator host\n20240229:12:51:05:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Checking new segment hosts, please wait...\n....\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Checking new segment hosts, Completed\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Greenplum Database Creation Parameters\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:---------------------------------------\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator Configuration\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:---------------------------------------\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator hostname       = 
cdw\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator port           = 5432\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator instance dir   = \/data\/coordinator\/gpseg-1\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator LOCALE         = \n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Greenplum segment prefix   = gpseg\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator Database       = \n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator connections    = 250\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator buffers        = 128000kB\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Segment connections        = 750\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Segment buffers            = 128000kB\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Encoding                   = UNICODE\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Postgres param file        = Off\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Initdb to be used          = \/usr\/local\/greenplum-db-7.1.0\/bin\/initdb\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-GP_LIBRARY_PATH is         = \/usr\/local\/greenplum-db-7.1.0\/lib\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-HEAP_CHECKSUM is           = on\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-HBA_HOSTNAMES is           = 0\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Ulimit check               = Passed\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Array host connect type    = Single hostname per node\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator IP address &#x5B;1]      = ::1\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator IP address &#x5B;2]      = 
192.168.122.200\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Coordinator IP address &#x5B;3]      = fe80::5054:ff:fe5d:fef7\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Standby Coordinator             = Not Configured\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Number of primary segments = 1\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Total Database segments    = 2\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Trusted shell              = ssh\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Number segment hosts       = 2\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Mirror port base           = 7000\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Number of mirror segments  = 1\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Mirroring config           = ON\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Mirroring type             = Group\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:----------------------------------------\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Greenplum Primary Segment Configuration\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:----------------------------------------\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-sdw1  6000    sdw1    \/data\/primary\/gpseg0   2\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-sdw2  6000    sdw2    \/data\/primary\/gpseg1   3\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:---------------------------------------\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Greenplum Mirror Segment Configuration\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:---------------------------------------\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-sdw2  7000    sdw2    \/data\/mirror\/gpseg0    
4\n20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-sdw1  7000    sdw1    \/data\/mirror\/gpseg1    5\n\nContinue with Greenplum creation Yy|Nn (default=N):\n&gt; y\n20240229:12:51:12:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Building the Coordinator instance database, please wait...\n20240229:12:51:13:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Starting the Coordinator in admin mode\n20240229:12:51:13:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Commencing parallel build of primary segment instances\n20240229:12:51:13:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Spawning parallel processes    batch &#x5B;1], please wait...\n..\n20240229:12:51:13:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Waiting for parallel processes batch &#x5B;1], please wait...\n.......\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:------------------------------------------------\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Parallel process exit status\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:------------------------------------------------\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Total processes marked as completed           = 2\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Total processes marked as killed              = 0\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Total processes marked as failed              = 0\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:------------------------------------------------\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Removing back out file\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-No errors generated from parallel processes\n20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Restarting the Greenplum instance in production mode\n20240229:12:51:20:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Starting gpstop with args: -a -l \/home\/gpadmin\/gpAdminLogs -m -d 
\/data\/coordinator\/gpseg-1\n20240229:12:51:20:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Gathering information and validating the environment...\n20240229:12:51:20:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Obtaining Greenplum Coordinator catalog information\n20240229:12:51:20:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240229:12:51:20:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Greenplum Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240229:12:51:20:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Commencing Coordinator instance shutdown with mode=&#039;smart&#039;\n20240229:12:51:20:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Coordinator segment instance directory=\/data\/coordinator\/gpseg-1\n20240229:12:51:20:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Stopping coordinator segment and waiting for user connections to finish ...\nserver shutting down\n20240229:12:51:21:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Attempting forceful termination of any leftover coordinator process\n20240229:12:51:21:016290 gpstop:cdw:gpadmin-&#x5B;INFO]:-Terminating processes for segment \/data\/coordinator\/gpseg-1\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Starting gpstart with args: -a -l \/home\/gpadmin\/gpAdminLogs -d \/data\/coordinator\/gpseg-1\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Gathering information and validating the environment...\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Greenplum Binary Version: &#039;postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source&#039;\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Greenplum Catalog Version: &#039;302307241&#039;\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Starting Coordinator instance in admin mode\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-CoordinatorStart pg_ctl cmd is env GPSESSID=0000000000 
GPERA=None $GPHOME\/bin\/pg_ctl -D \/data\/coordinator\/gpseg-1 -l \/data\/coordinator\/gpseg-1\/log\/startup.log -w -t 600 -o &quot; -c gp_role=utility &quot; start\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Obtaining Greenplum Coordinator catalog information\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Obtaining Segment details from coordinator...\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Setting new coordinator era\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Coordinator Started...\n20240229:12:51:23:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Shutting down coordinator\n20240229:12:51:24:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Commencing parallel segment instance startup, please wait...\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Process results...\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-   Successful segment starts                                            = 2\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-   Failed segment starts                                                = 0\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Successfully started 2 of 2 segment instances \n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-----------------------------------------------------\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Starting Coordinator instance cdw directory \/data\/coordinator\/gpseg-1 \n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-CoordinatorStart pg_ctl cmd is env GPSESSID=0000000000 GPERA=b37f5ee82ead4186_240229125123 $GPHOME\/bin\/pg_ctl -D 
\/data\/coordinator\/gpseg-1 -l \/data\/coordinator\/gpseg-1\/log\/startup.log -w -t 600 -o &quot; -c gp_role=dispatch &quot; start\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Command pg_ctl reports Coordinator cdw instance active\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Connecting to db template1 on host localhost\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-No standby coordinator configured.  skipping...\n20240229:12:51:25:016530 gpstart:cdw:gpadmin-&#x5B;INFO]:-Database successfully started\n20240229:12:51:25:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Completed restart of Greenplum instance in production mode\n20240229:12:51:25:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Creating core GPDB extensions\n20240229:12:51:25:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Importing system collations\n20240229:12:51:27:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Commencing parallel build of mirror segment instances\n20240229:12:51:27:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Spawning parallel processes    batch &#x5B;1], please wait...\n..\n20240229:12:51:27:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Waiting for parallel processes batch &#x5B;1], please wait...\n......\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:------------------------------------------------\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Parallel process exit status\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:------------------------------------------------\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Total processes marked as completed           = 2\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Total processes marked as killed              = 0\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Total processes marked as failed              = 0\n20240229:12:51:34:013410 
gpinitsystem:cdw:gpadmin-&#x5B;INFO]:------------------------------------------------\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Scanning utility log file for any warning messages\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;WARN]:-*******************************************************\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;WARN]:-Scan of log file indicates that some warnings or errors\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;WARN]:-were generated during the array creation\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Please review contents of log file\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-\/home\/gpadmin\/gpAdminLogs\/gpinitsystem_20240229.log\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-To determine level of criticality\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-These messages could be from a previous run of the utility\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-that was called today!\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;WARN]:-*******************************************************\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Greenplum Database instance successfully created\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-------------------------------------------------------\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-To complete the environment configuration, please \n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-update gpadmin .bashrc file with the following\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-1. Ensure that the greenplum_path.sh file is sourced\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-2. 
Add &quot;export COORDINATOR_DATA_DIRECTORY=\/data\/coordinator\/gpseg-1&quot;\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-   to access the Greenplum scripts for this instance:\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-   or, use -d \/data\/coordinator\/gpseg-1 option for the Greenplum scripts\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-   Example gpstate -d \/data\/coordinator\/gpseg-1\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Script log file = \/home\/gpadmin\/gpAdminLogs\/gpinitsystem_20240229.log\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-To remove instance, run gpdeletesystem utility\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-To initialize a Standby Coordinator Segment for this Greenplum instance\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Review options for gpinitstandby\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-------------------------------------------------------\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-The Coordinator \/data\/coordinator\/gpseg-1\/pg_hba.conf post gpinitsystem\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-has been configured to allow all hosts within this new\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-array to intercommunicate. Any hosts external to this\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-new array must be explicitly added to this file\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-Refer to the Greenplum Admin support guide which is\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-located in the \/usr\/local\/greenplum-db-7.1.0\/docs directory\n20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-&#x5B;INFO]:-------------------------------------------------------\n<\/pre><\/div>\n\n\n<p>All fine. 
The last step from the documentation is to set the time zone with &#8220;gpconfig&#8221;. To list the time zone currently used by the system:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpconfig -s TimeZone\nValues on all segments are consistent\nGUC              : TimeZone\nCoordinator value: Europe\/Zurich\nSegment     value: Europe\/Zurich\n<\/pre><\/div>\n\n\n<p>To set the time zone (note that &#8220;gpconfig&#8221; fails if the COORDINATOR_DATA_DIRECTORY environment variable is not set, so we export it first):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [1,3,4]; title: ; notranslate\" title=\"\">\n&#x5B;gpadmin@cdw ~]$ gpconfig -c TimeZone -v &#039;Europe\/Zurich&#039;\nEnvironment Variable COORDINATOR_DATA_DIRECTORY not set!\n&#x5B;gpadmin@cdw ~]$ export COORDINATOR_DATA_DIRECTORY=\/data\/coordinator\/gpseg-1\/\n&#x5B;gpadmin@cdw ~]$ gpconfig -c TimeZone -v &#039;Europe\/Zurich&#039;\n20240229:13:01:03:017941 gpconfig:cdw:gpadmin-&#x5B;INFO]:-completed successfully with parameters &#039;-c TimeZone -v Europe\/Zurich&#039;\n<\/pre><\/div>\n\n\n<p>That&#8217;s it for the scope of this post. In the next post we&#8217;ll look in more detail at what was created and how the PostgreSQL instances interact with each other.<\/p>\n","protected":false,"excerpt":{"rendered":"<p>In the last post we&#8217;ve configured the operating system for Greenplum and completed the installation. In this post we&#8217;ll create the so called &#8220;Data Storage Areas&#8221; (which is just a mount point or directory) and initialize the cluster. 
All the work is performed on the &#8220;Coordinator Host&#8221; and &#8220;gpssh&#8221; is used to perform the work [&hellip;]<\/p>\n","protected":false},"author":29,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229,198],"tags":[3276,77],"type_dbi":[],"class_list":["post-31478","post","type-post","status-publish","format-standard","hentry","category-database-administration-monitoring","category-database-management","tag-greenplum","tag-postgresql"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Getting started with Greenplum \u2013 2 \u2013 Initializing and bringing up the cluster - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/getting-started-with-greenplum-2-initializing-and-bringing-up-the-cluster\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Getting started with Greenplum \u2013 2 \u2013 Initializing and bringing up the cluster\" \/>\n<meta property=\"og:description\" content=\"In the last post we&#8217;ve configured the operating system for Greenplum and completed the installation. In this post we&#8217;ll create the so called &#8220;Data Storage Areas&#8221; (which is just a mount point or directory) and initialize the cluster. 
Written by Daniel Westermann, published February 29, 2024. Categories: Database Administration & Monitoring, Database management. Tags: Greenplum, PostgreSQL.