In the last post we configured the operating system for Greenplum and completed the installation. In this post we’ll create the so-called “Data Storage Areas” (which are just mount points or directories) and initialize the cluster. All the work is performed on the “Coordinator Host”, and “gpssh” is used to run the commands on the remote systems.

For this playground environment we’ll just use a directory for the storage area. In a real setup you should of course use a dedicated, separate mount point. We start on the coordinator node:

[gpadmin@rocky9-gp7-master ~]$ sudo mkdir -p /data/coordinator
[gpadmin@rocky9-gp7-master ~]$ sudo chown gpadmin:gpadmin /data/coordinator/

Using “gpssh” we do the same on the two segment hosts:

[gpadmin@rocky9-gp7-master ~]$ gpssh -h rocky9-gp7-segment1 -e "sudo mkdir -p /data/coordinator"
[rocky9-gp7-segment1] sudo mkdir -p /data/coordinator
[gpadmin@rocky9-gp7-master ~]$ gpssh -h rocky9-gp7-segment2 -e "sudo mkdir -p /data/coordinator"
[rocky9-gp7-segment2] sudo mkdir -p /data/coordinator
[gpadmin@rocky9-gp7-master ~]$ gpssh -h rocky9-gp7-segment1 -e "sudo chown gpadmin:gpadmin /data/coordinator/"
[rocky9-gp7-segment1] sudo chown gpadmin:gpadmin /data/coordinator/
[gpadmin@rocky9-gp7-master ~]$ gpssh -h rocky9-gp7-segment2 -e "sudo chown gpadmin:gpadmin /data/coordinator/"
[rocky9-gp7-segment2] sudo chown gpadmin:gpadmin /data/coordinator/

This storage area is used to store system catalog tables and metadata. It is not used to store any user data.

The storage areas on the segment hosts will store user data, so they need to be bigger. Each segment node should provide a storage area for the so-called “primary segments”. Those segments are the active ones and are used by default for serving client requests. In addition, there should be a storage area for the so-called “mirror segments”. Those segments take over in case a primary segment becomes unavailable. For that reason a mirror segment must always be on a different host than its primary segment (more on that later).
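To illustrate the placement rule, here is a small sketch (not a Greenplum utility, just plain bash): with the default “group” mirroring, all mirrors of one host’s primaries are placed together on the next host in the list, wrapping around at the end.

```shell
#!/usr/bin/env bash
# Illustration only: with "group" mirroring, the mirrors of all primary
# segments on one host land together on the next host in the list.
hosts=(sdw1 sdw2)            # hypothetical segment host names
n=${#hosts[@]}
for ((i = 0; i < n; i++)); do
  echo "primaries on ${hosts[i]} -> mirrors on ${hosts[(i + 1) % n]}"
done
```

With two hosts this simply means each host mirrors the other one, which is exactly what the gpinitsystem segment configuration shows further down.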

Before we use “gpssh” to do this, let’s create a file which only contains the host names of the segment hosts:

[gpadmin@rocky9-gp7-master ~]$ echo "rocky9-gp7-segment1
rocky9-gp7-segment2" > ~/hostfile_gpssh_segonly

Having this in place we can easily create the directories on the segment nodes:

[gpadmin@rocky9-gp7-master ~]$ gpssh -f hostfile_gpssh_segonly -e 'sudo mkdir -p /data/primary'
[rocky9-gp7-segment1] sudo mkdir -p /data/primary
[rocky9-gp7-segment2] sudo mkdir -p /data/primary
[gpadmin@rocky9-gp7-master ~]$ gpssh -f hostfile_gpssh_segonly -e 'sudo mkdir -p /data/mirror'
[rocky9-gp7-segment1] sudo mkdir -p /data/mirror
[rocky9-gp7-segment2] sudo mkdir -p /data/mirror
[gpadmin@rocky9-gp7-master ~]$ gpssh -f hostfile_gpssh_segonly -e 'sudo chown gpadmin:gpadmin /data/*'
[rocky9-gp7-segment2] sudo chown gpadmin:gpadmin /data/*
[rocky9-gp7-segment1] sudo chown gpadmin:gpadmin /data/*
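If you prefer plain shell over “gpssh”, the same steps could be scripted as a loop over the hostfile. The sketch below only echoes the commands it would run (echo instead of ssh), so it is runnable anywhere:

```shell
#!/usr/bin/env bash
# Sketch: iterate over the hostfile and print the commands gpssh would
# run on each segment host (echo instead of ssh, so this runs anywhere).
printf '%s\n' rocky9-gp7-segment1 rocky9-gp7-segment2 > /tmp/hostfile_demo
while read -r host; do
  for dir in /data/primary /data/mirror; do
    echo "ssh ${host} sudo mkdir -p ${dir}"
  done
  echo "ssh ${host} sudo chown gpadmin:gpadmin /data/*"
done < /tmp/hostfile_demo
```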

Greenplum comes with a utility you can use to validate your systems when it comes to network, disk and memory performance. The utility is called “gpcheckperf”; this, e.g., runs a network performance test:

[gpadmin@rocky9-gp7-master ~]$ gpcheckperf -f hostfile_exkeys -r N -d /tmp
[INFO] --buffer-size value is not specified or invalid. Using default (8 kilobytes)
/usr/local/greenplum-db-7.1.0/bin/gpcheckperf -f hostfile_exkeys -r N -d /tmp
-------------------
--  NETPERF TEST
-------------------

====================
==  RESULT 2024-02-28T16:41:39.049314
====================
Netperf bisection bandwidth test
rocky9-gp7-master -> rocky9-gp7-segment1 = 1971.150000
rocky9-gp7-segment2 -> rocky9-gp7-master = 1688.660000
rocky9-gp7-segment1 -> rocky9-gp7-master = 1310.830000
rocky9-gp7-master -> rocky9-gp7-segment2 = 1377.070000

Summary:
sum = 6347.71 MB/sec
min = 1310.83 MB/sec
max = 1971.15 MB/sec
avg = 1586.93 MB/sec
median = 1688.66 MB/sec

[Warning] connection between rocky9-gp7-segment2 and rocky9-gp7-master is no good
[Warning] connection between rocky9-gp7-segment1 and rocky9-gp7-master is no good
[Warning] connection between rocky9-gp7-master and rocky9-gp7-segment2 is no good
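As a quick sanity check, the summary values can be recomputed from the four per-link results with a bit of awk. Note that for an even number of values gpcheckperf apparently reports the upper of the two middle values as the median, and it rounds the average (1586.9275) to 1586.93:

```shell
# Recompute the summary from the four measured bandwidths above.
printf '%s\n' 1971.15 1688.66 1310.83 1377.07 |
  sort -n |
  awk '{ v[NR] = $1; s += $1 }
       END { printf "sum=%.2f min=%.2f max=%.2f avg=%.4f median=%.2f\n",
             s, v[1], v[NR], s / NR, v[int(NR / 2) + 1] }'
# -> sum=6347.71 min=1310.83 max=1971.15 avg=1586.9275 median=1688.66
```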

I don’t care about these warnings because this is just a test; in a real setup you should, of course. Running a disk I/O test can be done like this (this runs dd tests on all the segment nodes):

[gpadmin@rocky9-gp7-master ~]$ gpcheckperf -f hostfile_gpssh_segonly -r ds -D -d /data/primary -d /data/mirror
[INFO] --buffer-size value is not specified or invalid. Using default (8 kilobytes)
/usr/local/greenplum-db-7.1.0/bin/gpcheckperf -f hostfile_gpssh_segonly -r ds -D -d /data/primary -d /data/mirror
[Warning] Using 7650140160 bytes for disk performance test. This might take some time
--------------------
--  DISK WRITE TEST
--------------------
--------------------
--  DISK READ TEST
--------------------
--------------------
--  STREAM TEST
--------------------

====================
==  RESULT 2024-02-28T16:49:58.607351
====================

 disk write avg time (sec): 109.30
 disk write tot bytes: 15300296704
 disk write tot bandwidth (MB/s): 133.51
 disk write min bandwidth (MB/s): 66.31 [rocky9-gp7-segment1]
 disk write max bandwidth (MB/s): 67.19 [rocky9-gp7-segment2]
 -- per host bandwidth --
    disk write bandwidth (MB/s): 66.31 [rocky9-gp7-segment1]
    disk write bandwidth (MB/s): 67.19 [rocky9-gp7-segment2]


 disk read avg time (sec): 58.48
 disk read tot bytes: 15300296704
 disk read tot bandwidth (MB/s): 250.04
 disk read min bandwidth (MB/s): 119.41 [rocky9-gp7-segment1]
 disk read max bandwidth (MB/s): 130.63 [rocky9-gp7-segment2]
 -- per host bandwidth --
    disk read bandwidth (MB/s): 130.63 [rocky9-gp7-segment2]
    disk read bandwidth (MB/s): 119.41 [rocky9-gp7-segment1]


 stream tot bandwidth (MB/s): 66240.30
 stream min bandwidth (MB/s): 32732.80 [rocky9-gp7-segment1]
 stream max bandwidth (MB/s): 33507.50 [rocky9-gp7-segment2]
 -- per host bandwidth --
    stream bandwidth (MB/s): 32732.80 [rocky9-gp7-segment1]
    stream bandwidth (MB/s): 33507.50 [rocky9-gp7-segment2]

Assuming that we’re happy with the performance statistics, we can proceed and initialize the cluster. With a community PostgreSQL installation you would do this with initdb, and in fact initdb and many other utilities you know from PostgreSQL are available on the system:

[gpadmin@rocky9-gp7-master ~]$ ls -la /usr/local/greenplum-db/bin/
total 81796
drwxr-xr-x  8 gpadmin gpadmin     4096 Feb 28 16:36 .
drwxr-xr-x 11 gpadmin gpadmin     4096 Feb 28 14:52 ..
-rwxr-xr-x  1 gpadmin gpadmin    66665 Feb  8 21:01 analyzedb
-rwxr-xr-x  1 gpadmin gpadmin   259104 Feb  8 21:01 clusterdb
-rwxr-xr-x  1 gpadmin gpadmin   254416 Feb  8 21:01 createdb
-rwxr-xr-x  1 gpadmin gpadmin   265176 Feb  8 21:01 createuser
-rwxr-xr-x  1 gpadmin gpadmin   238480 Feb  8 21:01 dropdb
-rwxr-xr-x  1 gpadmin gpadmin   238352 Feb  8 21:01 dropuser
-rwxr-xr-x  1 gpadmin gpadmin  2754648 Feb  8 21:01 ecpg
-rwxr-xr-x  1 gpadmin gpadmin    17248 Feb  8 21:01 gpactivatestandby
-rwxr-xr-x  1 gpadmin gpadmin      494 Feb  8 21:01 gpaddmirrors
-rwxr-xr-x  1 gpadmin gpadmin   137764 Feb  8 21:01 gpcheckcat
drwxr-xr-x  3 gpadmin gpadmin     4096 Feb 28 14:52 gpcheckcat_modules
-rwxr-xr-x  1 gpadmin gpadmin    29980 Feb  8 21:01 gpcheckperf
-rwxr-xr-x  1 gpadmin gpadmin     6682 Feb  8 21:01 gpcheckresgroupimpl
-rwxr-xr-x  1 gpadmin gpadmin     3230 Feb  8 21:01 gpcheckresgroupv2impl
-rwxr-xr-x  1 gpadmin gpadmin    23374 Feb  8 21:01 gpconfig
drwxr-xr-x  3 gpadmin gpadmin     4096 Feb 28 14:52 gpconfig_modules
-rwxr-xr-x  1 gpadmin gpadmin    13754 Feb  8 21:01 gpdeletesystem
-rwxr-xr-x  1 gpadmin gpadmin   114969 Feb  8 21:01 gpexpand
-rwxr-xr-x  1 gpadmin gpadmin   407208 Feb  8 21:01 gpfdist
-rwxr-xr-x  1 gpadmin gpadmin    34959 Feb  8 21:01 gpinitstandby
-rwxr-xr-x  1 gpadmin gpadmin    83564 Feb  8 21:01 gpinitsystem
-rwxr-xr-x  1 gpadmin gpadmin      189 Feb  8 21:01 gpload
-rw-r--r--  1 gpadmin gpadmin      202 Feb  8 21:01 gpload.bat
-rwxr-xr-x  1 gpadmin gpadmin   113900 Feb  8 21:01 gpload.py
-rwxr-xr-x  1 gpadmin gpadmin    21018 Feb  8 21:01 gplogfilter
-rwxr-xr-x  1 gpadmin gpadmin    15333 Feb  8 21:01 gpmemreport
-rwxr-xr-x  1 gpadmin gpadmin     8032 Feb  8 21:01 gpmemwatcher
-rwxr-xr-x  1 gpadmin gpadmin    21646 Feb  8 21:01 gpmovemirrors
-rwxr-xr-x  1 gpadmin gpadmin      548 Feb  8 21:01 gprecoverseg
-rwxr-xr-x  1 gpadmin gpadmin     1162 Feb  8 21:01 gpreload
-rwxr-xr-x  1 gpadmin gpadmin    10723 Feb  8 21:01 gpsd
-rwxr-xr-x  1 gpadmin gpadmin     9258 Feb  8 21:01 gpssh
-rwxr-xr-x  1 gpadmin gpadmin    32516 Feb  8 21:01 gpssh-exkeys
drwxr-xr-x  3 gpadmin gpadmin       70 Feb 28 14:52 gpssh_modules
-rwxr-xr-x  1 gpadmin gpadmin    37579 Feb  8 21:01 gpstart
-rwxr-xr-x  1 gpadmin gpadmin      422 Feb  8 21:01 gpstate
-rwxr-xr-x  1 gpadmin gpadmin    45588 Feb  8 21:01 gpstop
-rwxr-xr-x  1 gpadmin gpadmin     4074 Feb  8 21:01 gpsync
-rwxr-xr-x  1 gpadmin gpadmin   528656 Feb  8 21:01 initdb
drwxr-xr-x  4 gpadmin gpadmin     4096 Feb 28 14:52 lib
-rwxr-xr-x  1 gpadmin gpadmin    17611 Feb  8 21:01 minirepro
-rwxr-xr-x  1 gpadmin gpadmin   163568 Feb  8 21:01 pg_archivecleanup
-rwxr-xr-x  1 gpadmin gpadmin   459656 Feb  8 21:01 pg_basebackup
-rwxr-xr-x  1 gpadmin gpadmin   667784 Feb  8 21:01 pgbench
-rwxr-xr-x  1 gpadmin gpadmin   224176 Feb  8 21:01 pg_checksums
-rwxr-xr-x  1 gpadmin gpadmin   150736 Feb  8 21:01 pg_config
-rwxr-xr-x  1 gpadmin gpadmin   177072 Feb  8 21:01 pg_controldata
-rwxr-xr-x  1 gpadmin gpadmin   235296 Feb  8 21:01 pg_ctl
-rwxr-xr-x  1 gpadmin gpadmin  1591264 Feb  8 21:01 pg_dump
-rwxr-xr-x  1 gpadmin gpadmin   371784 Feb  8 21:01 pg_dumpall
-rwxr-xr-x  1 gpadmin gpadmin   239264 Feb  8 21:01 pg_isready
-rwxr-xr-x  1 gpadmin gpadmin   327200 Feb  8 21:01 pg_receivewal
-rwxr-xr-x  1 gpadmin gpadmin   331168 Feb  8 21:01 pg_recvlogical
-rwxr-xr-x  1 gpadmin gpadmin   211880 Feb  8 21:01 pg_resetwal
-rwxr-xr-x  1 gpadmin gpadmin   764392 Feb  8 21:01 pg_restore
-rwxr-xr-x  1 gpadmin gpadmin   480400 Feb  8 21:01 pg_rewind
-rwxr-xr-x  1 gpadmin gpadmin   171944 Feb  8 21:01 pg_test_fsync
-rwxr-xr-x  1 gpadmin gpadmin   144336 Feb  8 21:01 pg_test_timing
-rwxr-xr-x  1 gpadmin gpadmin   606048 Feb  8 21:01 pg_upgrade
-rwxr-xr-x  1 gpadmin gpadmin   454504 Feb  8 21:01 pg_waldump
-rwxr-xr-x  1 gpadmin gpadmin 67633848 Feb  8 21:01 postgres
lrwxrwxrwx  1 gpadmin gpadmin        8 Feb  8 21:01 postmaster -> postgres
-rwxr-xr-x  1 gpadmin gpadmin  1826136 Feb  8 21:01 psql
drwxr-xr-x  2 gpadmin gpadmin       35 Feb 28 14:52 __pycache__
-rwxr-xr-x  1 gpadmin gpadmin   267224 Feb  8 21:01 reindexdb
drwxr-xr-x  2 gpadmin gpadmin       20 Feb 28 14:52 stream
-rwxr-xr-x  1 gpadmin gpadmin   287832 Feb  8 21:01 vacuumdb

The Greenplum system works across multiple nodes, and all of them host PostgreSQL instances (called segment and coordinator instances). To make this easier to set up, Greenplum comes with its own version of “initdb”, called “gpinitsystem”:

[gpadmin@rocky9-gp7-master ~]$ gpinitsystem --version
gpinitsystem 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source

Before the system can be initialized we need to create the Greenplum database configuration file. There is a template we can use as a starting point:

[gpadmin@rocky9-gp7-master ~]$ echo $GPHOME
/usr/local/greenplum-db-7.1.0
[gpadmin@rocky9-gp7-master ~]$ mkdir /home/gpadmin/gpconfigs/
[gpadmin@rocky9-gp7-master ~]$ cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/gpinitsystem_config
[gpadmin@rocky9-gp7-master ~]$ vi /home/gpadmin/gpconfigs/gpinitsystem_config

For the scope of this demo system, all that needs to be adjusted are the data and mirror directories:

[gpadmin@rocky9-gp7-master ~]$ egrep "DATA_DIRECTORY|MIRROR_DATA_DIRECTORY|MIRROR_PORT_BASE" /home/gpadmin/gpconfigs/gpinitsystem_config | egrep -v "^#"
declare -a DATA_DIRECTORY=(/data/primary)
MIRROR_PORT_BASE=7000
declare -a MIRROR_DATA_DIRECTORY=(/data/mirror)
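These are plain bash arrays, and the number of entries in DATA_DIRECTORY determines how many primary segment instances are created on each segment host (MIRROR_DATA_DIRECTORY works the same way for mirrors). A minimal sketch of the array semantics, with a hypothetical two-entry array for two primaries per host:

```shell
#!/usr/bin/env bash
# The entry count of the DATA_DIRECTORY bash array equals the number of
# primary segment instances created per segment host.
# Hypothetical example: two primaries per host in the same directory.
declare -a DATA_DIRECTORY=(/data/primary /data/primary)
echo "primary segments per host: ${#DATA_DIRECTORY[@]}"
```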

This config and the hosts file containing the segment hosts need to be passed to “gpinitsystem”:

[gpadmin@rocky9-gp7-master ~]$ gpinitsystem -c gpconfigs/gpinitsystem_config -h hostfile_gpssh_segonly
20240229:11:39:11:001290 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20240229:11:39:11:001290 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Reading Greenplum configuration file gpconfigs/gpinitsystem_config
20240229:11:39:11:001290 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Locale has not been set in gpconfigs/gpinitsystem_config, will set to default value
20240229:11:39:11:001290 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:-Coordinator hostname cdw does not match hostname output
20240229:11:39:11:001290 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Checking to see if cdw can be resolved on this host
ssh: Could not resolve hostname cdw: Name or service not known
ssh: Could not resolve hostname cdw: Name or service not known
20240229:11:39:20:001290 gpinitsystem:rocky9-gp7-master:gpadmin-[FATAL]:-Coordinator hostname in configuration file is cdw
20240229:11:39:20:001290 gpinitsystem:rocky9-gp7-master:gpadmin-[FATAL]:-Operating system command returns rocky9-gp7-master.it.dbi-services.com
20240229:11:39:20:001290 gpinitsystem:rocky9-gp7-master:gpadmin-[FATAL]:-Unable to resolve cdw on this host
20240229:11:39:20:001290 gpinitsystem:rocky9-gp7-master:gpadmin-[FATAL]:-Coordinator hostname in gpinitsystem configuration file must be cdw Script Exiting!

It seems the hostname of the coordinator node needs to be “cdw”, so let’s add this to /etc/hosts on all nodes:

[gpadmin@rocky9-gp7-master ~]$ sudo vi /etc/hosts
[gpadmin@rocky9-gp7-master ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.122.200 rocky9-gp7-master rocky9-gp7-master.it.dbi-services.com cdw cdw.it.dbi-services.com
192.168.122.201 rocky9-gp7-segment1 rocky9-gp7-segment1.it.dbi-services.com
192.168.122.202 rocky9-gp7-segment2 rocky9-gp7-segment2.it.dbi-services.com

Running it once more, it looks much better:

[gpadmin@rocky9-gp7-master ~]$ gpinitsystem -c gpconfigs/gpinitsystem_config -h hostfile_gpssh_segonly
20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Reading Greenplum configuration file gpconfigs/gpinitsystem_config
20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Locale has not been set in gpconfigs/gpinitsystem_config, will set to default value
20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:-Coordinator hostname cdw does not match hostname output
20240229:11:45:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Checking to see if cdw can be resolved on this host
The authenticity of host 'cdw (192.168.122.200)' can't be established.
ED25519 key fingerprint is SHA256:Tdo3AwqH109Mgc30keTbDcusFii8PSft0FXWTUS0Tb0.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: rocky9-gp7-segment1
    ~/.ssh/known_hosts:4: rocky9-gp7-segment2
    ~/.ssh/known_hosts:5: rocky9-gp7-master
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'cdw' (ED25519) to the list of known hosts.
20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Can resolve cdw to this host
20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-COORDINATOR_MAX_CONNECT not set, will set to default value 250
20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Checking configuration parameters, Completed
20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
..
20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Configuring build for standard array
20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20240229:11:45:31:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Building primary segment instance array, please wait...
....
20240229:11:45:32:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Building group mirror array type , please wait...
....
20240229:11:45:34:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Checking Coordinator host
20240229:11:45:34:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Checking new segment hosts, please wait...
........
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Checking new segment hosts, Completed
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Greenplum Database Creation Parameters
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:---------------------------------------
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator Configuration
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:---------------------------------------
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator hostname       = cdw
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator port           = 5432
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator instance dir   = /data/coordinator/gpseg-1
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator LOCALE         = 
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Greenplum segment prefix   = gpseg
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator Database       = 
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator connections    = 250
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator buffers        = 128000kB
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Segment connections        = 750
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Segment buffers            = 128000kB
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Encoding                   = UNICODE
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Postgres param file        = Off
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Initdb to be used          = /usr/local/greenplum-db-7.1.0/bin/initdb
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-GP_LIBRARY_PATH is         = /usr/local/greenplum-db-7.1.0/lib
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-HEAP_CHECKSUM is           = on
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-HBA_HOSTNAMES is           = 0
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Ulimit check               = Passed
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Array host connect type    = Single hostname per node
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator IP address [1]      = ::1
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator IP address [2]      = 192.168.122.200
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator IP address [3]      = fe80::5054:ff:fe5d:fef7
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Standby Coordinator             = Not Configured
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Number of primary segments = 2
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Total Database segments    = 4
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Trusted shell              = ssh
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Number segment hosts       = 2
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Mirror port base           = 7000
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Number of mirror segments  = 2
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Mirroring config           = ON
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Mirroring type             = Group
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:----------------------------------------
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:----------------------------------------
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-rocky9-gp7-segment1.it.dbi-services.com     6000    rocky9-gp7-segment1     /data/primary/gpseg0        2
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-rocky9-gp7-segment1.it.dbi-services.com     6001    rocky9-gp7-segment1     /data/primary/gpseg1        3
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-rocky9-gp7-segment2.it.dbi-services.com     6000    rocky9-gp7-segment2     /data/primary/gpseg2        4
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-rocky9-gp7-segment2.it.dbi-services.com     6001    rocky9-gp7-segment2     /data/primary/gpseg3        5
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:---------------------------------------
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Greenplum Mirror Segment Configuration
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:---------------------------------------
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-rocky9-gp7-segment2.it.dbi-services.com     7000    rocky9-gp7-segment2     /data/mirror/gpseg0 6
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-rocky9-gp7-segment2.it.dbi-services.com     7001    rocky9-gp7-segment2     /data/mirror/gpseg1 7
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-rocky9-gp7-segment1.it.dbi-services.com     7000    rocky9-gp7-segment1     /data/mirror/gpseg2 8
20240229:11:45:39:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-rocky9-gp7-segment1.it.dbi-services.com     7001    rocky9-gp7-segment1     /data/mirror/gpseg3 9

Continue with Greenplum creation Yy|Nn (default=N):

Confirming the prompt with “Y” leads to this:

> Y
20240229:11:48:12:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Building the Coordinator instance database, please wait...
20240229:11:48:14:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Starting the Coordinator in admin mode
20240229:11:48:14:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20240229:11:48:14:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
....
20240229:11:48:14:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
.........
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:------------------------------------------------
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Parallel process exit status
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:------------------------------------------------
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Total processes marked as completed           = 4
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Total processes marked as killed              = 0
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Total processes marked as failed              = 0
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:------------------------------------------------
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Removing back out file
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-No errors generated from parallel processes
20240229:11:48:23:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -m -d /data/coordinator/gpseg-1
20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Gathering information and validating the environment...
20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Obtaining Greenplum Coordinator catalog information
20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Obtaining Segment details from coordinator...
20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source'
20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Commencing Coordinator instance shutdown with mode='smart'
20240229:11:48:23:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator segment instance directory=/data/coordinator/gpseg-1
20240229:11:48:24:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Stopping coordinator segment and waiting for user connections to finish ...
server shutting down
20240229:11:48:25:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Attempting forceful termination of any leftover coordinator process
20240229:11:48:25:006091 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Terminating processes for segment /data/coordinator/gpseg-1
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /data/coordinator/gpseg-1
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-Gathering information and validating the environment...
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source'
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-Greenplum Catalog Version: '302307241'
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-Starting Coordinator instance in admin mode
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-CoordinatorStart pg_ctl cmd is env GPSESSID=0000000000 GPERA=None $GPHOME/bin/pg_ctl -D /data/coordinator/gpseg-1 -l /data/coordinator/gpseg-1/log/startup.log -w -t 600 -o " -c gp_role=utility " start
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-Obtaining Greenplum Coordinator catalog information
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-Obtaining Segment details from coordinator...
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-Setting new coordinator era
20240229:11:48:26:006330 gpstart:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator Started...
The authenticity of host 'rocky9-gp7-segment2.it.dbi-services.com (192.168.122.202)' can't be established.
ED25519 key fingerprint is SHA256:Tdo3AwqH109Mgc30keTbDcusFii8PSft0FXWTUS0Tb0.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: rocky9-gp7-segment1
    ~/.ssh/known_hosts:4: rocky9-gp7-segment2
    ~/.ssh/known_hosts:5: rocky9-gp7-master
    ~/.ssh/known_hosts:6: cdw
The authenticity of host 'rocky9-gp7-segment1.it.dbi-services.com (192.168.122.201)' can't be established.
ED25519 key fingerprint is SHA256:Tdo3AwqH109Mgc30keTbDcusFii8PSft0FXWTUS0Tb0.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: rocky9-gp7-segment1
    ~/.ssh/known_hosts:4: rocky9-gp7-segment2
    ~/.ssh/known_hosts:5: rocky9-gp7-master
    ~/.ssh/known_hosts:6: cdw
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes

20240229:11:51:06:006330 gpstart:rocky9-gp7-master:gpadmin-[WARNING]:-One or more hosts are not reachable via SSH.
20240229:11:51:06:006330 gpstart:rocky9-gp7-master:gpadmin-[WARNING]:-Host rocky9-gp7-segment1.it.dbi-services.com is unreachable
20240229:11:51:06:006330 gpstart:rocky9-gp7-master:gpadmin-[WARNING]:-Marking segment 2 down because rocky9-gp7-segment1.it.dbi-services.com is unreachable
20240229:11:51:06:006330 gpstart:rocky9-gp7-master:gpadmin-[CRITICAL]:-gpstart failed. (Reason=''NoneType' object has no attribute 'getSegmentHostName'') exiting...
20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:
20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:-Failed to start Greenplum instance; review gpstart output to
20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:- determine why gpstart failed and reinitialize cluster after resolving
20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:- issues.  Not all initialization tasks have completed so the cluster
20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:- should not be used.
20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:-gpinitsystem will now try to stop the cluster
20240229:11:51:06:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:
20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -i -d /data/coordinator/gpseg-1
20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Gathering information and validating the environment...
20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Obtaining Greenplum Coordinator catalog information
20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Obtaining Segment details from coordinator...
20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source'
20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Commencing Coordinator instance shutdown with mode='immediate'
20240229:11:51:06:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Coordinator segment instance directory=/data/coordinator/gpseg-1

20240229:11:51:07:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Attempting forceful termination of any leftover coordinator process
20240229:11:51:07:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Terminating processes for segment /data/coordinator/gpseg-1
20240229:11:51:08:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-No standby coordinator host configured
20240229:11:51:08:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Targeting dbid [2, 3, 4, 5] for shutdown
20240229:11:51:08:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
20240229:11:51:08:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-0.00% of jobs completed
20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-100.00% of jobs completed
20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-----------------------------------------------------
20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-   Segments stopped successfully      = 4
20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-   Segments with errors during stop   = 0
20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-----------------------------------------------------
20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Successfully shutdown 4 of 4 segment instances 
20240229:11:51:09:006412 gpstop:rocky9-gp7-master:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[INFO]:-Successfully shutdown the Greenplum instance
20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:
20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:-Failed to start Greenplum instance; review gpstart output to
20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:- determine why gpstart failed and reinitialize cluster after resolving
20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:- issues.  Not all initialization tasks have completed so the cluster
20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:- should not be used.
20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[WARN]:
20240229:11:51:09:001611 gpinitsystem:rocky9-gp7-master:gpadmin-[FATAL]: starting new instance failed; Script Exiting!

This happens when you do not fully read the documentation: “The Greenplum Database host naming convention for the coordinator host is cdw and for the standby coordinator host is scdw.

The segment host naming convention is sdwN where sdw is a prefix and N is an integer. For example, segment host names would be sdw1, sdw2 and so on. NIC bonding is recommended for hosts with multiple interfaces, but when the interfaces are not bonded, the convention is to append a dash (-) and number to the host name. For example, sdw1-1 and sdw1-2 are the two interface names for host sdw1.”
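The convention quoted above is mechanical enough to sketch in a few lines. Purely as an illustration (this helper is not part of any Greenplum tooling), here is how the expected host names for a cluster could be generated:

```python
def greenplum_hostnames(num_segment_hosts, nics_per_host=1, standby=False):
    """Generate host names following the Greenplum naming convention:
    cdw for the coordinator, scdw for the standby coordinator,
    sdwN for segment hosts, and sdwN-M per NIC when interfaces
    are not bonded."""
    names = ["cdw"] + (["scdw"] if standby else [])
    for n in range(1, num_segment_hosts + 1):
        if nics_per_host == 1:
            names.append(f"sdw{n}")
        else:
            names.extend(f"sdw{n}-{m}" for m in range(1, nics_per_host + 1))
    return names

# The two-segment playground used in this post:
print(greenplum_hostnames(2))                    # ['cdw', 'sdw1', 'sdw2']
# Unbonded dual-NIC segment host:
print(greenplum_hostnames(1, nics_per_host=2))   # ['cdw', 'sdw1-1', 'sdw1-2']
```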

So, let’s fix this (and also change the hostname on each node):

[gpadmin@rocky9-gp7-master ~]$ sudo vi /etc/hosts
[gpadmin@rocky9-gp7-master ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.122.200 cdw cdw.it.dbi-services.com
192.168.122.201 sdw1 sdw1.it.dbi-services.com
192.168.122.202 sdw2 sdw2.it.dbi-services.com

[gpadmin@rocky9-gp7-master ~]$ cat hostfile_gpssh_segonly 
sdw1
sdw2

[gpadmin@rocky9-gp7-master ~]$ cat hostfile_exkeys 
cdw
sdw1
sdw2
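Renaming the hosts themselves is not shown in the transcript above. Assuming systemd (which Rocky Linux 9 ships), this would be done once per node with “hostnamectl”, for example:

```shell
# On the coordinator (formerly rocky9-gp7-master):
sudo hostnamectl set-hostname cdw
# On the first segment host (formerly rocky9-gp7-segment1):
sudo hostnamectl set-hostname sdw1
# On the second segment host (formerly rocky9-gp7-segment2):
sudo hostnamectl set-hostname sdw2
```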

Before initializing the system again, let’s clean up what was already created:

[gpadmin@rocky9-gp7-master ~]$ gpssh -f hostfile_gpssh_segonly -e "rm -rf /data/primary/*; rm -rf /data/mirror/*; rm -rf /data/coordinator/*"
[sdw2] rm -rf /data/primary/*; rm -rf /data/mirror/*; rm -rf /data/coordinator/*
[sdw1] rm -rf /data/primary/*; rm -rf /data/mirror/*; rm -rf /data/coordinator/*
[gpadmin@rocky9-gp7-master ~]$ rm -rf /data/coordinator/*

Next try:

[gpadmin@cdw ~]$ gpinitsystem -c gpconfigs/gpinitsystem_config -h hostfile_gpssh_segonly
20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Reading Greenplum configuration file gpconfigs/gpinitsystem_config
20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Locale has not been set in gpconfigs/gpinitsystem_config, will set to default value
20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-[INFO]:-COORDINATOR_MAX_CONNECT not set, will set to default value 250
20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Checking configuration parameters, Completed
20240229:12:51:03:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
..
20240229:12:51:04:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Configuring build for standard array
20240229:12:51:04:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20240229:12:51:04:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Building primary segment instance array, please wait...
..
20240229:12:51:04:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Building group mirror array type , please wait...
..
20240229:12:51:05:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Checking Coordinator host
20240229:12:51:05:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Checking new segment hosts, please wait...
....
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Checking new segment hosts, Completed
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Greenplum Database Creation Parameters
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:---------------------------------------
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator Configuration
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:---------------------------------------
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator hostname       = cdw
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator port           = 5432
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator instance dir   = /data/coordinator/gpseg-1
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator LOCALE         = 
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Greenplum segment prefix   = gpseg
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator Database       = 
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator connections    = 250
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator buffers        = 128000kB
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Segment connections        = 750
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Segment buffers            = 128000kB
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Encoding                   = UNICODE
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Postgres param file        = Off
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Initdb to be used          = /usr/local/greenplum-db-7.1.0/bin/initdb
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-GP_LIBRARY_PATH is         = /usr/local/greenplum-db-7.1.0/lib
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-HEAP_CHECKSUM is           = on
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-HBA_HOSTNAMES is           = 0
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Ulimit check               = Passed
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Array host connect type    = Single hostname per node
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator IP address [1]      = ::1
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator IP address [2]      = 192.168.122.200
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Coordinator IP address [3]      = fe80::5054:ff:fe5d:fef7
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Standby Coordinator             = Not Configured
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Number of primary segments = 1
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Total Database segments    = 2
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Trusted shell              = ssh
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Number segment hosts       = 2
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Mirror port base           = 7000
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Number of mirror segments  = 1
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Mirroring config           = ON
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Mirroring type             = Group
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:----------------------------------------
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:----------------------------------------
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-sdw1  6000    sdw1    /data/primary/gpseg0   2
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-sdw2  6000    sdw2    /data/primary/gpseg1   3
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:---------------------------------------
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Greenplum Mirror Segment Configuration
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:---------------------------------------
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-sdw2  7000    sdw2    /data/mirror/gpseg0    4
20240229:12:51:08:013410 gpinitsystem:cdw:gpadmin-[INFO]:-sdw1  7000    sdw1    /data/mirror/gpseg1    5

Continue with Greenplum creation Yy|Nn (default=N):
> y
20240229:12:51:12:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Building the Coordinator instance database, please wait...
20240229:12:51:13:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Starting the Coordinator in admin mode
20240229:12:51:13:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20240229:12:51:13:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
..
20240229:12:51:13:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
.......
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:------------------------------------------------
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Parallel process exit status
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:------------------------------------------------
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Total processes marked as completed           = 2
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Total processes marked as killed              = 0
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Total processes marked as failed              = 0
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:------------------------------------------------
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Removing back out file
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:-No errors generated from parallel processes
20240229:12:51:20:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20240229:12:51:20:016290 gpstop:cdw:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -m -d /data/coordinator/gpseg-1
20240229:12:51:20:016290 gpstop:cdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20240229:12:51:20:016290 gpstop:cdw:gpadmin-[INFO]:-Obtaining Greenplum Coordinator catalog information
20240229:12:51:20:016290 gpstop:cdw:gpadmin-[INFO]:-Obtaining Segment details from coordinator...
20240229:12:51:20:016290 gpstop:cdw:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source'
20240229:12:51:20:016290 gpstop:cdw:gpadmin-[INFO]:-Commencing Coordinator instance shutdown with mode='smart'
20240229:12:51:20:016290 gpstop:cdw:gpadmin-[INFO]:-Coordinator segment instance directory=/data/coordinator/gpseg-1
20240229:12:51:20:016290 gpstop:cdw:gpadmin-[INFO]:-Stopping coordinator segment and waiting for user connections to finish ...
server shutting down
20240229:12:51:21:016290 gpstop:cdw:gpadmin-[INFO]:-Attempting forceful termination of any leftover coordinator process
20240229:12:51:21:016290 gpstop:cdw:gpadmin-[INFO]:-Terminating processes for segment /data/coordinator/gpseg-1
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /data/coordinator/gpseg-1
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 7.1.0 build commit:e7c2b1f14bb42a1018ac57d14f4436880e0a0515 Open Source'
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Greenplum Catalog Version: '302307241'
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Starting Coordinator instance in admin mode
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-CoordinatorStart pg_ctl cmd is env GPSESSID=0000000000 GPERA=None $GPHOME/bin/pg_ctl -D /data/coordinator/gpseg-1 -l /data/coordinator/gpseg-1/log/startup.log -w -t 600 -o " -c gp_role=utility " start
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Obtaining Greenplum Coordinator catalog information
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Obtaining Segment details from coordinator...
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Setting new coordinator era
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Coordinator Started...
20240229:12:51:23:016530 gpstart:cdw:gpadmin-[INFO]:-Shutting down coordinator
20240229:12:51:24:016530 gpstart:cdw:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-Process results...
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-----------------------------------------------------
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-   Successful segment starts                                            = 2
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-----------------------------------------------------
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-Successfully started 2 of 2 segment instances 
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-----------------------------------------------------
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-Starting Coordinator instance cdw directory /data/coordinator/gpseg-1 
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-CoordinatorStart pg_ctl cmd is env GPSESSID=0000000000 GPERA=b37f5ee82ead4186_240229125123 $GPHOME/bin/pg_ctl -D /data/coordinator/gpseg-1 -l /data/coordinator/gpseg-1/log/startup.log -w -t 600 -o " -c gp_role=dispatch " start
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-Command pg_ctl reports Coordinator cdw instance active
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-Connecting to db template1 on host localhost
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-No standby coordinator configured.  skipping...
20240229:12:51:25:016530 gpstart:cdw:gpadmin-[INFO]:-Database successfully started
20240229:12:51:25:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20240229:12:51:25:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Creating core GPDB extensions
20240229:12:51:25:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Importing system collations
20240229:12:51:27:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20240229:12:51:27:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
..
20240229:12:51:27:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
......
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:------------------------------------------------
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Parallel process exit status
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:------------------------------------------------
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Total processes marked as completed           = 2
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Total processes marked as killed              = 0
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Total processes marked as failed              = 0
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:------------------------------------------------
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[WARN]:-*******************************************************
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[WARN]:-were generated during the array creation
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Please review contents of log file
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20240229.log
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-To determine level of criticality
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-These messages could be from a previous run of the utility
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-that was called today!
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[WARN]:-*******************************************************
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Greenplum Database instance successfully created
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-------------------------------------------------------
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-To complete the environment configuration, please 
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-2. Add "export COORDINATOR_DATA_DIRECTORY=/data/coordinator/gpseg-1"
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-   or, use -d /data/coordinator/gpseg-1 option for the Greenplum scripts
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-   Example gpstate -d /data/coordinator/gpseg-1
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20240229.log
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-To initialize a Standby Coordinator Segment for this Greenplum instance
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Review options for gpinitstandby
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-------------------------------------------------------
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-The Coordinator /data/coordinator/gpseg-1/pg_hba.conf post gpinitsystem
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-new array must be explicitly added to this file
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-located in the /usr/local/greenplum-db-7.1.0/docs directory
20240229:12:51:34:013410 gpinitsystem:cdw:gpadmin-[INFO]:-------------------------------------------------------
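The segment configuration in the output above illustrates group mirroring: all mirrors for the primaries of one host are placed as a group on the next host, so a mirror never shares a host with its primary (gpseg0’s primary is on sdw1 and its mirror on sdw2, and vice versa for gpseg1). A small sketch of the placement logic, illustrative only and not the actual gpinitsystem code:

```python
def group_mirror_placement(hosts, primaries_per_host=1):
    """Place each host's mirrors as a group on the next host (wrapping
    around), mimicking Greenplum's 'group' mirroring layout.
    Assumes at least two hosts, otherwise a mirror would land on the
    same host as its primary."""
    placement = []  # (content_id, primary_host, mirror_host)
    content = 0
    for i, host in enumerate(hosts):
        mirror_host = hosts[(i + 1) % len(hosts)]
        for _ in range(primaries_per_host):
            placement.append((content, host, mirror_host))
            content += 1
    return placement

# Matches the layout in the log above:
for seg, prim, mirr in group_mirror_placement(["sdw1", "sdw2"]):
    print(f"gpseg{seg}: primary={prim} mirror={mirr}")
```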

All fine. The last step from the documentation is to set the time zone with “gpconfig”. To list the time zone currently used by the system:

[gpadmin@cdw ~]$ gpconfig -s TimeZone
Values on all segments are consistent
GUC              : TimeZone
Coordinator value: Europe/Zurich
Segment     value: Europe/Zurich

To set the time zone (note that “gpconfig” requires the COORDINATOR_DATA_DIRECTORY environment variable to be set):

[gpadmin@cdw ~]$ gpconfig -c TimeZone -v 'Europe/Zurich'
Environment Variable COORDINATOR_DATA_DIRECTORY not set!
[gpadmin@cdw ~]$ export COORDINATOR_DATA_DIRECTORY=/data/coordinator/gpseg-1/
[gpadmin@cdw ~]$ gpconfig -c TimeZone -v 'Europe/Zurich'
20240229:13:01:03:017941 gpconfig:cdw:gpadmin-[INFO]:-completed successfully with parameters '-c TimeZone -v Europe/Zurich'
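To avoid exporting COORDINATOR_DATA_DIRECTORY in every session, persist it in the gpadmin “.bashrc”, as the gpinitsystem output already suggested. Also note that values changed with “gpconfig -c” are only written to the postgresql.conf files; a configuration reload with “gpstop -u” should be enough to activate reloadable parameters such as TimeZone (others require a full restart):

```shell
# Persist the coordinator data directory for future sessions
echo 'export COORDINATOR_DATA_DIRECTORY=/data/coordinator/gpseg-1' >> ~/.bashrc

# Reload the configuration across the cluster
gpstop -u
```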

That’s it for the scope of this post. In the next post we’ll take a more detailed look at what was created and how the PostgreSQL instances interact with each other.