{"id":7568,"date":"2016-04-18T05:51:23","date_gmt":"2016-04-18T03:51:23","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/"},"modified":"2016-04-18T05:51:23","modified_gmt":"2016-04-18T03:51:23","slug":"maintenance-scenarios-with-edb-failover-manager-2-primary-node","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/","title":{"rendered":"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node"},"content":{"rendered":"<p>In the <a href=\"http:\/\/dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-1-standby-node\/\" target=\"_blank\" rel=\"noopener\">last post<\/a> I looked at how you can do maintenance operations on the standby node when you are working in a PostgreSQL cluster protected by EDB Failover Manager. In this post I&#8217;ll look on how you can do maintenance on the primary node (better: the node where the primary instance currently runs on). This requires slightly more work and attention. Lets go.<\/p>\n<p><!--more--><\/p>\n<p>As a quick reminder this is the setup:<\/p>\n<table>\n<tr>\n<th>IP<\/th>\n<th>Description<\/th>\n<\/tr>\n<tr>\n<td>192.168.22.243<\/td>\n<td>Current PostgreSQL hot standby instance<\/td>\n<\/tr>\n<tr>\n<td>192.168.22.245<\/td>\n<td>Currernt PostgreSQL primary instance<\/td>\n<\/tr>\n<tr>\n<td>192.168.22.244<\/td>\n<td>EDB Failover Manager Witness Node + EDB BART<\/td>\n<\/tr>\n<tr>\n<td>192.168.22.250<\/td>\n<td>Virtual IP that is used for client connections to the master database<\/td>\n<\/tr>\n<\/table>\n<p>When we want to do maintenance on the current primary node this will require a fail over of the PostgreSQL instance. In addition the VIP shall fail over too to provide the clients the same address to connect as before the fail over. 
Let&#8217;s check the current status of the fail over cluster:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@edbbart:\/home\/postgres\/ [pg950] efmstat \nCluster Status: efm\nAutomatic failover is disabled.\n\n\tAgent Type  Address              Agent  DB       Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.245       UP     UP        \n\tWitness     192.168.22.244       UP     N\/A       \n\tStandby     192.168.22.243       UP     UP        \n\nAllowed node host list:\n\t192.168.22.244 192.168.22.245 192.168.22.243\n\nStandby priority host list:\n\t192.168.22.243\n\nPromote Status:\n\n\tDB Type     Address              XLog Loc         Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.245       0\/350000D0       \n\tStandby     192.168.22.243       0\/350000D0       \n\n\tStandby database(s) in sync with master. It is safe to promote.\n<\/pre>\n<p>The last line tells us that it is probably safe to promote. Consider these two cases:<\/p>\n<ul>\n<li>Case 1: You want to perform maintenance which requires a reboot of the whole node<\/li>\n<li>Case 2: You want to perform maintenance which requires only a restart of the PostgreSQL instance on that node<\/li>\n<\/ul>\n<p>It is important to distinguish the two cases because it impacts how you have to deal with EDB Failover Manager. Let&#8217;s see what happens in case one when we do a promote and then reboot the node. I&#8217;ll run the promote command on the witness node but it doesn&#8217;t really matter as you can execute it from any node in the fail over cluster:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n-bash-4.2$ \/usr\/efm-2.0\/bin\/efm promote efm\nPromote command accepted by local agent. Proceeding with promotion. Run the 'cluster-status' command for information about the new cluster state.\n<\/pre>\n<p>What happened? 
First, let&#8217;s check the cluster status:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n-bash-4.2$ \/usr\/efm-2.0\/bin\/efm cluster-status efm\nCluster Status: efm\nAutomatic failover is disabled.\n\n\tAgent Type  Address              Agent  DB       Info\n\t--------------------------------------------------------------\n\tIdle        192.168.22.245       UP     UNKNOWN   \n\tWitness     192.168.22.244       UP     N\/A       \n\tMaster      192.168.22.243       UP     UP        \n\nAllowed node host list:\n\t192.168.22.244 192.168.22.245 192.168.22.243\n\nStandby priority host list:\n\t(List is empty.)\n\nPromote Status:\n\n\tDB Type     Address              XLog Loc         Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.243       0\/350001E0       \n\n\tNo standby databases were found.\n\nIdle Node Status (idle nodes ignored in XLog location comparisons):\n\n\tAddress              XLog Loc         Info\n\t--------------------------------------------------------------\n\t192.168.22.245       0\/350000D0       DB is not in recovery.\n<\/pre>\n<p>The old master is gone (Idle\/UNKNOWN) and the old standby became the new master. 
You can double-check this by logging in to the new master and checking whether the instance is still in recovery mode:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@edbppas:\/home\/postgres\/ [PGSITE1] ip a\n1: lo:  mtu 65536 qdisc noqueue state UNKNOWN \n    link\/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n    inet 127.0.0.1\/8 scope host lo\n       valid_lft forever preferred_lft forever\n    inet6 ::1\/128 scope host \n       valid_lft forever preferred_lft forever\n2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP qlen 1000\n    link\/ether 08:00:27:6d:d8:b7 brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.15\/24 brd 10.0.2.255 scope global dynamic enp0s3\n       valid_lft 85491sec preferred_lft 85491sec\n    inet6 fe80::a00:27ff:fe6d:d8b7\/64 scope link \n       valid_lft forever preferred_lft forever\n3: enp0s8:  mtu 1500 qdisc pfifo_fast state UP qlen 1000\n    link\/ether 08:00:27:e4:19:ec brd ff:ff:ff:ff:ff:ff\n    inet 192.168.22.243\/24 brd 192.168.22.255 scope global enp0s8\n       valid_lft forever preferred_lft forever\n    inet 192.168.22.250\/24 brd 192.168.22.255 scope global secondary enp0s8:0\n       valid_lft forever preferred_lft forever\n    inet6 fe80::a00:27ff:fee4:19ec\/64 scope link \n       valid_lft forever preferred_lft forever\npostgres@edbppas:\/home\/postgres\/ [PGSITE1] sqh\npsql.bin (9.5.0.5)\nType \"help\" for help.\n\npostgres=# select * from pg_is_in_recovery();\n pg_is_in_recovery \n-------------------\n f\n(1 row)\n<\/pre>\n<p>The IP configuration output also shows that the VIP failed over to the new master node. 
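The two manual checks above (the VIP is configured, pg_is_in_recovery() returns "f") are easy to script. A minimal sketch, assuming the VIP 192.168.22.250 from the setup table; the helper names are hypothetical and not part of EDB Failover Manager:

```shell
#!/bin/sh
# Hypothetical helpers to verify a node after promotion: the VIP must be
# present on an interface and the instance must no longer be in recovery.
VIP="192.168.22.250"   # VIP from the setup table above

# True when the given `ip a` output contains the VIP
has_vip() {
  printf '%s\n' "$1" | grep -q "inet ${VIP}/"
}

# True when pg_is_in_recovery() returned "f", i.e. the instance is a master
is_master() {
  [ "$1" = "f" ]
}

# Example with captured values; in real life you would use:
#   ip_out=$(ip a); rec=$(psql -Atc 'select pg_is_in_recovery()')
ip_out="inet 192.168.22.250/24 brd 192.168.22.255 scope global secondary enp0s8:0"
rec="f"
if has_vip "$ip_out" && is_master "$rec"; then
  echo "node looks like the active master"
fi
```

Running such a check on both nodes right after a promote gives a quick sanity signal before any client traffic is redirected.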
What is the status of the old master instance?<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@ppasstandby:\/home\/postgres\/ [PGSITE2] ip a\n1: lo:  mtu 65536 qdisc noqueue state UNKNOWN \n    link\/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n    inet 127.0.0.1\/8 scope host lo\n       valid_lft forever preferred_lft forever\n    inet6 ::1\/128 scope host \n       valid_lft forever preferred_lft forever\n2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP qlen 1000\n    link\/ether 08:00:27:19:6f:0a brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.15\/24 brd 10.0.2.255 scope global dynamic enp0s3\n       valid_lft 85364sec preferred_lft 85364sec\n    inet6 fe80::a00:27ff:fe19:6f0a\/64 scope link \n       valid_lft forever preferred_lft forever\n3: enp0s8:  mtu 1500 qdisc pfifo_fast state UP qlen 1000\n    link\/ether 08:00:27:ba:c0:6a brd ff:ff:ff:ff:ff:ff\n    inet 192.168.22.245\/24 brd 192.168.22.255 scope global enp0s8\n       valid_lft forever preferred_lft forever\n    inet6 fe80::a00:27ff:feba:c06a\/64 scope link \n       valid_lft forever preferred_lft forever\npostgres@ppasstandby:\/home\/postgres\/ [PGSITE2] sqh\npsql.bin (9.5.0.5)\nType \"help\" for help.\n\npostgres=# select * from pg_is_in_recovery();                               \n pg_is_in_recovery \n-------------------\n f\n(1 row)\n<\/pre>\n<p>This is dangerous: you now have two masters. If any of your applications uses the local IP address to connect, it is still connected to the old master instance, which still accepts writes. 
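Because this two-master state is so dangerous, it is worth detecting automatically. A minimal sketch that estimates the number of writable instances from the `efm cluster-status` text shown above; the function name and the parsing heuristic are assumptions, and a robust check should query pg_is_in_recovery() on every node instead:

```shell
#!/bin/sh
# Hypothetical helper: count instances that accept writes based on the text
# of `efm cluster-status`. "Master" appears once in the agent table and once
# under "Promote Status", hence the division by two; every idle node reported
# as "DB is not in recovery." is an additional writable instance.
count_write_nodes() {
  masters=$(printf '%s\n' "$1" | grep -c 'Master')
  idle_rw=$(printf '%s\n' "$1" | grep -c 'DB is not in recovery')
  echo $(( masters / 2 + idle_rw ))
}

# Sample built from the output above (abbreviated)
status="$(printf 'Master      192.168.22.243  UP  UP\nMaster      192.168.22.243  0/350001E0\n192.168.22.245  0/350000D0  DB is not in recovery.')"
if [ "$(count_write_nodes "$status")" -gt 1 ]; then
  echo "WARNING: more than one writable instance"
fi
```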
Failover Manager created a new recovery.conf file though:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@ppasstandby:\/home\/postgres\/ [PGSITE2] cat $PGDATA\/recovery.conf\n# EDB Failover Manager\n# This generated recovery.conf file prevents the db server from accidentally\n# being restarted as a master since a failover or promotion has occurred\nstandby_mode = on\nrestore_command = 'echo 2&gt;\"recovery suspended on failed server node\"; exit 1'\n<\/pre>\n<p>But as long as you do not restart the old primary instance, it will keep accepting modifications. If you restart:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n2016-04-11 11:37:40.114 GMT - 8 - 3454 -  - @ LOCATION:  ReadRecord, xlog.c:3983\n2016-04-11 11:37:40.114 GMT - 6 - 3452 -  - @ LOG:  00000: database system is ready to accept read only connections\n<\/pre>\n<p>&#8230; it cannot do any harm anymore. But EDB Failover Manager does not perform this restart automatically. It is left to you to restart the old master instance immediately to prevent the two-master scenario. This should definitely be improved in EDB Failover Manager.<\/p>\n<p>Once maintenance is finished and the old master node is rebooted, what is the status of the cluster? 
Let&#8217;s check:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@edbbart ~] \/usr\/edb-efm\/bin\/efm cluster-status efm\nCluster Status: efm\nAutomatic failover is disabled.\n\n\tAgent Type  Address              Agent  DB       Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.243       UP     UP        \n\tWitness     192.168.22.244       UP     N\/A       \n\nAllowed node host list:\n\t192.168.22.244 192.168.22.243\n\nStandby priority host list:\n\t(List is empty.)\n\nPromote Status:\n\n\tDB Type     Address              XLog Loc         Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.243       0\/37021258       \n\n\tNo standby databases were found.\n<\/pre>\n<p>The information about the old master has completely disappeared. So how can we recover from that and bring back the old configuration? The first step is to add the old master node back to the allowed hosts of the cluster:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@edbbart ~] \/usr\/edb-efm\/bin\/efm add-node efm 192.168.22.245\nadd-node signal sent to local agent.\n[root@edbbart ~] \n<\/pre>\n<p>This should result in the following output:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@edbbart ~] \/usr\/edb-efm\/bin\/efm cluster-status efm | grep -A 2 \"Allowed\"\nAllowed node host list:\n\t192.168.22.244 192.168.22.243 192.168.22.245\n<\/pre>\n<p>Now we need to rebuild the old master as a new standby. This can be done in various ways; the two most common are to take a new base backup from the new master or to use <a href=\"http:\/\/www.postgresql.org\/docs\/current\/static\/app-pgrewind.html\" target=\"_blank\" rel=\"noopener\">pg_rewind<\/a>. I&#8217;ll use pg_rewind here. 
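Before running pg_rewind, keep in mind that it only works when the source cluster was initialized with data checksums or runs with wal_log_hints = on. A minimal sketch of that precondition check; the helper name is an assumption, and the two values would come from SHOW commands on the new master:

```shell
#!/bin/sh
# Hypothetical precondition check for pg_rewind: the source cluster must have
# been initialized with data checksums or run with wal_log_hints = on.
can_rewind() {
  wal_log_hints="$1"   # result of: SHOW wal_log_hints
  data_checksums="$2"  # result of: SHOW data_checksums
  [ "$wal_log_hints" = "on" ] || [ "$data_checksums" = "on" ]
}

# In real life the values would come from the new master, e.g.:
#   psql -h 192.168.22.243 -p 4445 -Atc 'show wal_log_hints'
if can_rewind on off; then
  echo "pg_rewind prerequisites look fine"
fi
```

If neither setting is enabled, fall back to a fresh base backup instead of pg_rewind.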
So to rebuild the old master as a new standby:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@ppasstandby:\/u02\/pgdata\/PGSITE2\/ [PGSITE2] pg_rewind -D \/u02\/pgdata\/PGSITE2\/ --source-server=\"port=4445 host=192.168.22.243 user=postgres dbname=postgres\"\nservers diverged at WAL position 0\/350000D0 on timeline 2\nrewinding from last common checkpoint at 0\/35000028 on timeline 2\nDone!\n<\/pre>\n<p>Make sure your recovery.conf matches your environment:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@ppasstandby:\/u02\/pgdata\/PGSITE2\/ [PGSITE2] cat recovery.conf\nstandby_mode = 'on'\nprimary_slot_name = 'standby1'\nprimary_conninfo = 'user=postgres password=admin123 host=192.168.22.243 port=4445 sslmode=prefer sslcompression=1'\nrecovery_target_timeline = 'latest'\ntrigger_file='\/u02\/pgdata\/PGSITE2\/trigger_file'\n<\/pre>\n<p>Start the new standby instance and check the log file:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n2016-04-17 11:21:40.021 GMT - 1 - 2440 -  - @ LOG:  database system was interrupted while in recovery at log time 2016-04-17 11:09:28 GMT\n2016-04-17 11:21:40.021 GMT - 2 - 2440 -  - @ HINT:  If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.\n2016-04-17 11:21:40.554 GMT - 3 - 2440 -  - @ LOG:  entering standby mode\n2016-04-17 11:21:40.608 GMT - 4 - 2440 -  - @ LOG:  redo starts at 0\/35000098\n2016-04-17 11:21:40.704 GMT - 5 - 2440 -  - @ LOG:  consistent recovery state reached at 0\/37047858\n2016-04-17 11:21:40.704 GMT - 6 - 2440 -  - @ LOG:  invalid record length at 0\/37047858\n2016-04-17 11:21:40.704 GMT - 4 - 2438 -  - @ LOG:  database system is ready to accept read only connections\n2016-04-17 11:21:40.836 GMT - 1 - 2444 -  - @ LOG:  started streaming WAL from primary at 0\/37000000 on timeline 3\n<\/pre>\n<p>So much for the database part. 
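The database-side steps above can be sketched as one ordered sequence. This is a hedged sketch with a hypothetical DRY_RUN switch so the commands are only printed for review; the paths, port and addresses are the ones from this setup:

```shell
#!/bin/sh
# Hedged sketch of the database-side rebuild sequence from above. DRY_RUN is
# a hypothetical safety switch: with "yes" the commands are only printed.
DRY_RUN=yes
run() {
  if [ "$DRY_RUN" = "yes" ]; then echo "would run: $*"; else "$@"; fi
}

run pg_rewind -D /u02/pgdata/PGSITE2/ --source-server="port=4445 host=192.168.22.243 user=postgres dbname=postgres"
# then make sure recovery.conf matches the environment (see above) and start:
run pg_ctl -D /u02/pgdata/PGSITE2/ start
```

Keeping such a reviewed sequence around matters because the order (rewind, fix recovery.conf, then start) is exactly what prevents the instance from coming up as a second master.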
To bring back the EDB Failover Manager configuration, we need to adjust the efm.nodes file on the new standby to include all the hosts in the configuration:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@ppasstandby efm-2.0] pwd\n\/etc\/efm-2.0\n[root@ppasstandby efm-2.0] cat efm.nodes \n# List of node address:port combinations separated by whitespace.\n192.168.22.244:9998 192.168.22.243:9998 192.168.22.245:9998\n<\/pre>\n<p>Once this is done, EDB Failover Manager can be restarted and the configuration is fine again:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@ppasstandby efm-2.0] systemctl start efm-2.0.service\n[root@ppasstandby efm-2.0] \/usr\/efm-2.0\/bin\/efm cluster-status efm\nCluster Status: efm\nAutomatic failover is disabled.\n\n\tAgent Type  Address              Agent  DB       Info\n\t--------------------------------------------------------------\n\tWitness     192.168.22.244       UP     N\/A       \n\tStandby     192.168.22.245       UP     UP        \n\tMaster      192.168.22.243       UP     UP        \n\nAllowed node host list:\n\t192.168.22.244 192.168.22.243 192.168.22.245\n\nStandby priority host list:\n\t192.168.22.245\n\nPromote Status:\n\n\tDB Type     Address              XLog Loc         Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.243       0\/37048540       \n\tStandby     192.168.22.245       0\/37048540       \n\n\tStandby database(s) in sync with master. It is safe to promote.\n<\/pre>\n<p>Coming to the second scenario: when you do not need to reboot the server but only need to take down the master database for maintenance, what are the steps to follow? As with scenario 1, you&#8217;ll have to promote to activate the standby database and immediately shut down the old master:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@edbbart ~]# \/usr\/edb-efm\/bin\/efm promote efm\nPromote command accepted by local agent. 
Proceeding with promotion. Run the 'cluster-status' command for information about the new cluster state.\n[root@edbbart ~]# \/usr\/edb-efm\/bin\/efm cluster-status efm \nCluster Status: efm\nAutomatic failover is disabled.\n\n\tAgent Type  Address              Agent  DB       Info\n\t--------------------------------------------------------------\n\tIdle        192.168.22.243       UP     UNKNOWN   \n\tWitness     192.168.22.244       UP     N\/A       \n\tMaster      192.168.22.245       UP     UP        \n\nAllowed node host list:\n\t192.168.22.244 192.168.22.243 192.168.22.245\n\nStandby priority host list:\n\t(List is empty.)\n\nPromote Status:\n\n\tDB Type     Address              XLog Loc         Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.245       0\/37048730       \n\n\tNo standby databases were found.\n\nIdle Node Status (idle nodes ignored in XLog location comparisons):\n\n\tAddress              XLog Loc         Info\n\t--------------------------------------------------------------\n\t192.168.22.243       UNKNOWN          Connection refused. 
Check that the hostname and port are correct and that the postmaster is accepting TCP\/IP connections.\n<\/pre>\n<p>Now you can do your maintenance operations and, once you have finished, rebuild the old master as a new standby:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\npostgres@edbppas:\/u02\/pgdata\/PGSITE1\/ [PGSITE1] pg_rewind -D \/u02\/pgdata\/PGSITE1\/ --source-server=\"port=4445 host=192.168.22.245 user=postgres dbname=postgres\"\nservers diverged at WAL position 0\/37048620 on timeline 3\nrewinding from last common checkpoint at 0\/37048578 on timeline 3\nDone!\n\npostgres@edbppas:\/u02\/pgdata\/PGSITE1\/ [PGSITE1] cat recovery.conf\nstandby_mode = 'on'\nprimary_slot_name = 'standby1'\nprimary_conninfo = 'user=postgres password=admin123 host=192.168.22.245 port=4445 sslmode=prefer sslcompression=1'\nrecovery_target_timeline = 'latest'\ntrigger_file='\/u02\/pgdata\/PGSITE1\/trigger_file'\n<\/pre>\n<p>Once you have restarted the instance, EDB Failover Manager shows:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@edbbart ~]# \/usr\/edb-efm\/bin\/efm cluster-status efm \nCluster Status: efm\nAutomatic failover is disabled.\n\n\tAgent Type  Address              Agent  DB       Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.245       UP     UP        \n\tWitness     192.168.22.244       UP     N\/A       \n\tIdle        192.168.22.243       UP     UNKNOWN   \n\nAllowed node host list:\n\t192.168.22.244 192.168.22.243 192.168.22.245\n\nStandby priority host list:\n\t(List is empty.)\n\nPromote Status:\n\n\tDB Type     Address              XLog Loc         Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.245       0\/37068EF0       \n\n\tNo standby databases were found.\n\nIdle Node Status (idle nodes ignored in XLog location comparisons):\n\n\tAddress              XLog Loc         
Info\n\t--------------------------------------------------------------\n\t192.168.22.243       0\/37068EF0       DB is in recovery.\n<\/pre>\n<p>The new standby is detected as &#8220;in recovery&#8221; but still shows &#8220;UNKNOWN&#8221;. To fix this, execute the &#8220;resume&#8221; command on the new standby and then check the cluster status again:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n[root@edbppas ~] \/usr\/edb-efm\/bin\/efm resume efm\nResume command successful on local agent.\n[root@edbppas ~] \/usr\/edb-efm\/bin\/efm cluster-status efm\nCluster Status: efm\nAutomatic failover is disabled.\n\n\tAgent Type  Address              Agent  DB       Info\n\t--------------------------------------------------------------\n\tStandby     192.168.22.243       UP     UP        \n\tWitness     192.168.22.244       UP     N\/A       \n\tMaster      192.168.22.245       UP     UP        \n\nAllowed node host list:\n\t192.168.22.244 192.168.22.243 192.168.22.245\n\nStandby priority host list:\n\t192.168.22.243\n\nPromote Status:\n\n\tDB Type     Address              XLog Loc         Info\n\t--------------------------------------------------------------\n\tMaster      192.168.22.245       0\/37068EF0       \n\tStandby     192.168.22.243       0\/37068EF0       \n\n\tStandby database(s) in sync with master. It is safe to promote.\n<\/pre>\n<p>Everything is back to normal operations.<br \/>\nConclusion: Which steps bring the fail over cluster configuration back to normal operations depends on what exactly you want to do. The steps themselves are easy; the main point is to perform them in the right order. Having that documented is a must as you&#8217;ll probably not do these kinds of tasks every day.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the last post I looked at how you can do maintenance operations on the standby node when you are working in a PostgreSQL cluster protected by EDB Failover Manager. 
In this post I&#8217;ll look on how you can do maintenance on the primary node (better: the node where the primary instance currently runs on). [&hellip;]<\/p>\n","protected":false},"author":29,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229],"tags":[713,464,77,238],"type_dbi":[],"class_list":["post-7568","post","type-post","status-publish","format-standard","hentry","category-database-administration-monitoring","tag-enterprisedb","tag-failover-cluster","tag-postgresql","tag-standby"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node\" \/>\n<meta property=\"og:description\" content=\"In the last post I looked at how you can do maintenance operations on the standby node when you are working in a PostgreSQL cluster protected by EDB Failover Manager. In this post I&#8217;ll look on how you can do maintenance on the primary node (better: the node where the primary instance currently runs on). 
[&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2016-04-18T03:51:23+00:00\" \/>\n<meta name=\"author\" content=\"Daniel Westermann\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@westermanndanie\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Westermann\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/\"},\"author\":{\"name\":\"Daniel Westermann\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"headline\":\"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node\",\"datePublished\":\"2016-04-18T03:51:23+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/\"},\"wordCount\":820,\"commentCount\":0,\"keywords\":[\"enterprisedb\",\"Failover cluster\",\"PostgreSQL\",\"Standby\"],\"articleSection\":[\"Database Administration &amp; 
Monitoring\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/\",\"name\":\"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"datePublished\":\"2016-04-18T03:51:23+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi 
Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66\",\"name\":\"Daniel Westermann\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"caption\":\"Daniel Westermann\"},\"description\":\"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. 
Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.\",\"sameAs\":[\"https:\/\/x.com\/westermanndanie\"],\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/","og_locale":"en_US","og_type":"article","og_title":"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node","og_description":"In the last post I looked at how you can do maintenance operations on the standby node when you are working in a PostgreSQL cluster protected by EDB Failover Manager. In this post I&#8217;ll look on how you can do maintenance on the primary node (better: the node where the primary instance currently runs on). 
[&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/","og_site_name":"dbi Blog","article_published_time":"2016-04-18T03:51:23+00:00","author":"Daniel Westermann","twitter_card":"summary_large_image","twitter_creator":"@westermanndanie","twitter_misc":{"Written by":"Daniel Westermann","Est. reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/"},"author":{"name":"Daniel Westermann","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"headline":"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node","datePublished":"2016-04-18T03:51:23+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/"},"wordCount":820,"commentCount":0,"keywords":["enterprisedb","Failover cluster","PostgreSQL","Standby"],"articleSection":["Database Administration &amp; Monitoring"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/","url":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/","name":"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node - dbi 
Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"datePublished":"2016-04-18T03:51:23+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/maintenance-scenarios-with-edb-failover-manager-2-primary-node\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Maintenance scenarios with EDB Failover Manager (2) \u2013 Primary node"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66","name":"Daniel Westermann","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","caption":"Daniel 
Westermann"},"description":"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). 
His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.","sameAs":["https:\/\/x.com\/westermanndanie"],"url":"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/7568","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/29"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=7568"}],"version-history":[{"count":0,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/7568\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=7568"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=7568"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=7568"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=7568"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}